Haoli Bai
Noah's Ark Lab, Huawei
Verified email at huawei.com - Homepage
Title
Cited by
Year
BinaryBERT: Pushing the limit of BERT quantization
H Bai, W Zhang, L Hou, L Shang, J Jin, X Jiang, Q Liu, M Lyu, I King
59th Annual Meeting of the Association for Computational Linguistics (ACL …, 2021
230 · 2021
Few-shot network compression via cross distillation
H Bai, J Wu, I King, M Lyu
Proceedings of the AAAI Conference on Artificial Intelligence, 3203-3210, 2020
66 · 2020
Towards efficient post-training quantization of pre-trained language models
H Bai, L Hou, L Shang, X Jiang, I King, MR Lyu
Advances in Neural Information Processing Systems, 2022
58 · 2022
Neural Relational Topic Models for Scientific Article Analysis
H Bai, Z Chen, MR Lyu, I King, Z Xu
Proceedings of the 27th ACM International Conference on Information and …, 2018
57 · 2018
DART: Domain-adversarial residual-transfer networks for unsupervised cross-domain image classification
X Fang, H Bai, Z Guo, B Shen, S Hoi, Z Xu
Neural Networks 127, 182-192, 2020
50 · 2020
Structured pruning of recurrent neural networks through neuron selection
L Wen, X Zhang, H Bai, Z Xu
Neural Networks 123, 134-141, 2020
46 · 2020
RTN: Reparameterized ternary network
Y Li, X Dong, SQ Zhang, H Bai, Y Chen, W Wang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 4780-4787, 2020
34 · 2020
Structured Inference for Recurrent Hidden Semi-Markov Model
H Liu, L He, H Bai, B Dai, K Bai, Z Xu
IJCAI, 2447-2453, 2018
33 · 2018
M-NAS: Meta neural architecture search
J Wang, J Wu, H Bai, J Cheng
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 6186-6193, 2020
31 · 2020
Revisiting Parameter Sharing for Automatic Neural Channel Number Search
J Wang*, H Bai*, J Wu, X Shi, J Huang, I King, M Lyu, J Cheng
Advances in Neural Information Processing Systems 33, 2020
30 · 2020
PocketFlow: An automated framework for compressing and accelerating deep neural networks
J Wu, Y Zhang, H Bai, H Zhong, J Hou, W Liu, W Huang, J Huang
29 · 2018
Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models
Y Zhang, H Bai, H Lin, J Zhao, L Hou, CV Cannistraci
The Twelfth International Conference on Learning Representations, 2024
27 · 2024
Structured pruning for efficient generative pre-trained language models
C Tao, L Hou, H Bai, J Wei, X Jiang, Q Liu, P Luo, N Wong
Findings of the Association for Computational Linguistics: ACL 2023, 10880-10895, 2023
27 · 2023
Efficient bitwidth search for practical mixed precision neural network
Y Li, W Wang, H Bai, R Gong, X Dong
arXiv preprint arXiv:2003.07577 3, 2020
23 · 2020
Bayesian automatic model compression
J Wang, H Bai, J Wu, J Cheng
IEEE Journal of Selected Topics in Signal Processing 14 (4), 727-736, 2020
21 · 2020
Dynamically pruning SegFormer for efficient semantic segmentation
H Bai, H Mao, D Nair
ICASSP 2022, 2021
16 · 2021
TransLiDER: Transfer ensemble learning from exploitation to exploration
K Zhong, Y Wei, C Yuan, H Bai, J Huang
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
14 · 2020
Variational random function model for network modeling
Z Xu, B Liu, S Zhe, H Bai, Z Wang, J Neville
IEEE Transactions on Neural Networks and Learning Systems 30 (1), 318-324, 2018
13 · 2018
IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact
R Liu, H Bai, H Lin, Y Li, H Gao, Z Xu, L Hou, J Yao, C Yuan
arXiv preprint arXiv:2403.01241, 2024
12 · 2024
Wukong-Reader: Multi-modal pre-training for fine-grained visual document understanding
H Bai, Z Liu, X Meng, W Li, S Liu, N Xie, R Zheng, L Wang, L Hou, J Wei, ...
arXiv preprint arXiv:2212.09621, 2022
11 · 2022
Articles 1–20