| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Loss-aware Binarization of Deep Networks | L Hou, Q Yao, JT Kwok | 5th International Conference on Learning Representations (ICLR-2017) | 218 | 2016 |
| FILIP: Fine-grained Interactive Language-Image Pre-training | L Yao*, R Huang*, L Hou*, G Lu, M Niu, H Xu, X Liang, Z Li, X Jiang, C Xu | 10th International Conference on Learning Representations (ICLR-2022) | 191 | 2022 |
| DynaBERT: Dynamic BERT with Adaptive Width and Depth | L Hou, Z Huang, L Shang, X Jiang, X Chen, Q Liu | Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS-2020) | 170 | 2020 |
| Loss-aware Weight Quantization of Deep Networks | L Hou, JT Kwok | 6th International Conference on Learning Representations (ICLR-2018) | 134 | 2018 |
| TernaryBERT: Distillation-aware Ultra-low Bit BERT | W Zhang*, L Hou*, Y Yin*, L Shang, X Chen, X Jiang, Q Liu | Conference on Empirical Methods in Natural Language Processing (EMNLP-2020) | 110 | 2020 |
| BinaryBERT: Pushing the Limit of BERT Quantization | H Bai, W Zhang, L Hou, L Shang, J Jin, X Jiang, Q Liu, M Lyu, I King | 59th Annual Meeting of the Association for Computational Linguistics (ACL-2021) | 100 | 2021 |
| Efficient Learning of Timeseries Shapelets | L Hou, JT Kwok, JM Zurada | Thirtieth AAAI Conference on Artificial Intelligence (AAAI-2016) | 93 | 2016 |
| Normalization Helps Training of Quantized LSTM | L Hou, J Zhu, JT Kwok, F Gao, T Qin, T Liu | Thirty-third Conference on Neural Information Processing Systems (NeurIPS-2019) | 39 | 2019 |
| Improved OOD Generalization via Adversarial Training and Pre-training | M Yi, L Hou, J Sun, L Shang, X Jiang, Q Liu, ZM Ma | Thirty-eighth International Conference on Machine Learning (ICML-2021) | 33 | 2021 |
| Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation | W Dai, L Hou, L Shang, X Jiang, Q Liu, P Fung | Findings of the Association for Computational Linguistics (ACL-IJCNLP 2022) | 29 | 2022 |
| Wukong: 100 Million Large-scale Chinese Cross-modal Pre-training Dataset and a Foundation Framework | J Gu, X Meng, G Lu, L Hou, M Niu, H Xu, X Liang, W Zhang, X Jiang, C Xu | arXiv preprint arXiv:2202.06767 | 29* | 2022 |
| Analysis of Quantized Models | L Hou, R Zhang, JT Kwok | 7th International Conference on Learning Representations (ICLR-2019) | 27 | 2019 |
| Compression of Generative Pre-trained Language Models via Quantization | C Tao, L Hou, W Zhang, L Shang, X Jiang, Q Liu, P Luo, N Wong | 60th Annual Meeting of the Association for Computational Linguistics (ACL-2022) | 24 | 2022 |
| GhostBERT: Generate More Features with Cheap Operations for BERT | Z Huang, L Hou, L Shang, X Jiang, X Chen, Q Liu | 59th Annual Meeting of the Association for Computational Linguistics (ACL …) | 16 | 2021 |
| Reweighting Augmented Samples by Minimizing the Maximal Expected Loss | M Yi, L Hou, L Shang, X Jiang, Q Liu, ZM Ma | 9th International Conference on Learning Representations (ICLR-2021) | 15 | 2021 |
| Towards Efficient Post-training Quantization of Pre-trained Language Models | H Bai, L Hou, L Shang, X Jiang, I King, MR Lyu | Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS-2022) | 10 | 2021 |
| Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning With Preferential Attachment | F Feng, L Hou, Q She, RHM Chan, JT Kwok | IEEE Transactions on Neural Networks and Learning Systems | 4* | 2022 |
| Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding | H Bai, Z Liu, X Meng, W Li, S Liu, N Xie, R Zheng, L Wang, L Hou, J Wei, ... | arXiv preprint arXiv:2212.09621 | 2 | 2022 |
| LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling | D Chen, C Tao, L Hou, L Shang, X Jiang, Q Liu | Conference on Empirical Methods in Natural Language Processing (EMNLP-2022) | 2 | 2022 |
| CTRL: Connect Tabular and Language Model for CTR Prediction | X Li, B Chen, L Hou, R Tang | arXiv preprint arXiv:2306.02841 | | 2023 |