Ling Li
High-performance p-type black phosphorus transistor with scandium contact
L Li, M Engel, DB Farmer, S Han, HSP Wong
ACS Nano 10 (4), 4672-4677, 2016
Cited by 136, 2016
Vertical and lateral copper transport through graphene layers
L Li, X Chen, CH Wang, J Cao, S Lee, A Tang, C Ahn, S Singha Roy, ...
ACS Nano 9 (8), 8361-8367, 2015
Cited by 48, 2015
BEOL compatible graphene/Cu with improved electromigration lifetime for future interconnects
L Li, Z Zhu, T Wang, JA Currivan-Incorvia, A Yoon, HSP Wong
2016 IEEE International Electron Devices Meeting (IEDM), 9.5.1-9.5.4, 2016
Cited by 36, 2016
In-situ Grown Graphene Enables Copper Interconnects with Improved Electromigration Reliability
L Li, Z Zhu, A Yoon, HSP Wong
IEEE Electron Device Letters, 2019
Cited by 32, 2019
Cu diffusion barrier: Graphene benchmarked to TaN for ultimate interconnect scaling
L Li, X Chen, CH Wang, S Lee, J Cao, SS Roy, MS Arnold, HSP Wong
2015 Symposium on VLSI Technology (VLSI Technology), T122-T123, 2015
Cited by 31, 2015
Integrating Graphene into Future Generations of Interconnect Wires
L Li, HSP Wong
2018 IEEE International Electron Devices Meeting (IEDM), 2018
Cited by 16, 2018
Griffin: Rethinking sparse optimization for deep learning architectures
JH Shin, A Shafiee, A Pedram, H Abdel-Aziz, L Li, J Hassoun
2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2022
Cited by 10, 2022
SaiT: Sparse vision transformers through adaptive token pruning
L Li, D Thorsley, J Hassoun
arXiv preprint arXiv:2210.05832, 2022
Cited by 8, 2022
Compact modeling for gate-all-around nanowire tunneling FETs (GAA NW-tFETs)
Z Yu, L Li, L Zhang, J Zhang
2012 IEEE 11th International Conference on Solid-State and Integrated Circuit Technology (ICSICT), 2012
Cited by 3, 2012
Design Space Exploration of Sparse Accelerators for Deep Neural Networks
JH Shin, A Shafiee, A Pedram, H Abdel-Aziz, L Li, J Hassoun
arXiv preprint arXiv:2107.12922, 2021
Cited by 2, 2021
Sub-5 nm gap formation for low power NEM switches
J Cao, L Li, K Kato, TJK Liu, HSP Wong
2015 Fourth Berkeley Symposium on Energy Efficient Electronic Systems (E3S), 1-3, 2015
Cited by 2, 2015
MaiT: Leverage Attention Masks for More Efficient Image Transformers
L Li, AS Ardestani, J Hassoun
arXiv preprint arXiv:2207.03006, 2022
Cited by 1, 2022
Algorithm/architecture solutions to improve beyond uniform quantization in embedded DNN accelerators
A Pedram, AS Ardestani, L Li, H Abdelaziz, J Fang, J Hassoun
Journal of Systems Architecture 126, 102454, 2022
Cited by 1, 2022
MaiT: Integrating spatial locality into image transformers with attention masks
L Li, A Shafiee, JH Hassoun
Cited by 1, 2021
Efficiency of vision transformers with adaptive token pruning
L Li, AS Ardestani
US Patent App. 17/978,959, 2023
2023
Efficient circuit for neural network processing
L Li, AS Ardestani, H Abdelaziz, JH Hassoun
US Patent App. 17/570,326, 2023
2023
Accelerate neural networks with compression at different levels
L Li, AS Ardestani
US Patent App. 17/578,428, 2023
2023
Integrating spatial locality into image transformers with masked attention
L Li, AS Ardestani, JH Hassoun
US Patent App. 17/573,630, 2023
2023
Partial sum compression
L Li, AS Ardestani
US Patent App. 17/407,150, 2022
2022