Zhenglun Kong
Title · Cited by · Year
SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning
Z Kong, P Dong, X Ma, X Meng, W Niu, M Sun, X Shen, G Yuan, B Ren, ...
ECCV 2022, 620-640, 2022
Cited by 158 · 2022
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
G Yuan, X Ma, W Niu, Z Li, Z Kong, N Liu, Y Gong, Z Zhan, C He, Q Jin, ...
NeurIPS 2021 34, 20838-20850, 2021
Cited by 88 · 2021
Efficient transformer-based large scale language representations using hardware-friendly block structured pruning
B Li*, Z Kong*, T Zhang, J Li, Z Li, H Liu, C Ding
Findings of EMNLP 2020, 2020
Cited by 64* · 2020
Accelerating framework of transformer by hardware design and model compression co-optimization
P Qi, EHM Sha, Q Zhuge, H Peng, S Huang, Z Kong, Y Song, B Li
ICCAD 2021, 1-9, 2021
Cited by 44 · 2021
Improving dnn fault tolerance using weight pruning and differential crossbar mapping for reram-based edge ai
G Yuan, Z Liao, X Ma, Y Cai, Z Kong, X Shen, J Fu, Z Li, C Zhang, H Peng, ...
ISQED 2021, 135-141, 2021
Cited by 35 · 2021
Automatic tissue image segmentation based on image processing and deep learning
Z Kong, T Li, J Luo, S Xu
Journal of Healthcare Engineering 2019 (1), 2912458, 2019
Cited by 32 · 2019
HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers
P Dong, M Sun, A Lu, Y Xie, K Liu, Z Kong, X Meng, Z Li, X Lin, Z Fang, ...
HPCA 2023, 442-455, 2023
Cited by 30 · 2023
NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration
Z Li, G Yuan, W Niu, P Zhao, Y Li, Y Cai, X Shen, Z Zhan, Z Kong, Q Jin, ...
CVPR 2021 Oral, 14255-14266, 2021
Cited by 29 · 2021
You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model
S Tang, Y Wang, Z Kong, T Zhang, Y Li, C Ding, Y Wang, Y Liang, D Xu
CVPR 2022, 2022
Cited by 24* · 2022
A Compression-Compilation Framework for On-mobile Real-time BERT Applications
W Niu*, Z Kong*, G Yuan, W Jiang, J Guan, C Ding, P Zhao, S Liu, B Ren, ...
IJCAI 2021, 2021
Cited by 22* · 2021
SS-Auto: A single-shot, automatic structured weight pruning framework of DNNs with ultra-high efficiency
Z Li, Y Gong, X Ma, S Liu, M Sun, Z Zhan, Z Kong, G Yuan, Y Wang
arXiv preprint arXiv:2001.08839, 2020
Cited by 20 · 2020
Hmc-tran: A tensor-core inspired hierarchical model compression for transformer-based dnns on gpu
S Huang, S Chen, H Peng, D Manu, Z Kong, G Yuan, L Yang, S Wang, ...
GLSVLSI 2021, 169-174, 2021
Cited by 18* · 2021
Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training
G Yuan, Y Li, S Li, Z Kong, S Tulyakov, X Tang, Y Wang, J Ren
NeurIPS 2022, 2022
Cited by 16* · 2022
Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training
Z Kong, H Ma, G Yuan, M Sun, Y Xie, P Dong, X Meng, X Shen, H Tang, ...
AAAI 2023 Oral, 2023
Cited by 15 · 2023
Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge
X Shen, P Dong, L Lu, Z Kong, Z Li, M Lin, C Wu, Y Wang
AAAI 2024, 2023
Cited by 9 · 2023
HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers
P Dong, M Sun, A Lu, Y Xie, K Liu, Z Kong, X Meng, Z Li, X Lin, Z Fang, ...
HPCA 2023, 442-455, 2022
Cited by 9 · 2022
Automatic tissue image segmentation based on image processing and deep learning
Z Kong, J Luo, S Xu, T Li
Neural Imaging and Sensing 2018 10481, 79-85, 2018
Cited by 9 · 2018
Data level lottery ticket hypothesis for vision transformers
X Shen, Z Kong, M Qin, P Dong, G Yuan, X Meng, H Tang, X Ma, Y Wang
IJCAI 2023 Oral, 2023
Cited by 6 · 2023
SS-Auto: A single-shot, automatic structured weight pruning framework of DNNs with ultra-high efficiency
Z Li, Y Gong, X Ma, S Liu, M Sun, Z Zhan, Z Kong, G Yuan, Y Wang
arXiv preprint arXiv:2001.08839, 2020
Cited by 5 · 2020
Automatical and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning
Z Kong, J Luo, S Xu, T Li
SPIE 2018 Optics and Biophotonics in Low-Resource Settings IV 10485, 24-30, 2018
Cited by 5 · 2018