Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-Tuned BERT. Q Zhong, L Ding, J Liu, B Du, D Tao. arXiv preprint arXiv:2302.10198, 2023. Cited by 268*.
Towards Making the Most of ChatGPT for Machine Translation. K Peng, L Ding, Q Zhong, L Shen, X Liu, M Zhang, Y Ouyang, D Tao. Findings of EMNLP 2023. Cited by 215.
Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-Based Sentiment Analysis. Q Zhong, L Ding, J Liu, B Du, H Jin, D Tao. IEEE TKDE, 2022. Cited by 92.
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-Based Sentiment Analysis. B Wang, L Ding, Q Zhong, X Li, D Tao. COLING 2022. Cited by 77.
Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models. Q Zhong, L Ding, L Shen, P Mi, J Liu, B Du, D Tao. Findings of EMNLP 2022. Cited by 48.
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation. Q Zhong, L Ding, J Liu, B Du, D Tao. IEEE TKDE, 2024. Cited by 40.
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE. Q Zhong, L Ding, Y Zhan, Y Qiao, Y Wen, L Shen, J Liu, B Yu, B Du, et al. arXiv preprint arXiv:2212.01853, 2022. Cited by 33.
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks. H Sun, L Shen, Q Zhong, L Ding, S Chen, J Sun, J Li, G Sun, D Tao. Neural Networks, 2023. Cited by 31.
SemiText: Scene Text Detection with Semi-Supervised Learning. J Liu, Q Zhong, Y Yuan, H Su, B Du. Neurocomputing 407, 343–353, 2020. Cited by 31.
Unified Instance and Knowledge Alignment Pretraining for Aspect-Based Sentiment Analysis. J Liu, Q Zhong, L Ding, H Jin, B Du, D Tao. IEEE TASLP, 2021 (co-first author). Cited by 28.
Token-Level Self-Evolution Training for Sequence-to-Sequence Learning. K Peng, L Ding, Q Zhong, Y Ouyang, W Rong, Z Xiong, D Tao. ACL 2023 Main, 841–850. Cited by 20.
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation. Q Zhong, L Ding, J Liu, B Du, D Tao. IEEE TKDE, 2022. Cited by 19.
Revisiting Token Dropping Strategy in Efficient BERT Pretraining. Q Zhong, L Ding, J Liu, X Liu, M Zhang, B Du, D Tao. ACL 2023 Main. Cited by 15.
ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding. Q Zhong, L Ding, J Liu, B Du, D Tao. Findings of ACL 2024. Cited by 9.
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE. Q Zhong, L Ding, K Peng, J Liu, B Du, L Shen, Y Zhan, D Tao. arXiv preprint arXiv:2302.09268, 2023. Cited by 9.
Self-Evolution Learning for Discriminative Language Model Pretraining. Q Zhong, L Ding, J Liu, B Du, D Tao. Findings of ACL 2023. Cited by 7.
Joint Image and Feature Adaptive Attention-Aware Networks for Cross-Modality Semantic Segmentation. Q Zhong, F Zeng, F Liao, J Liu, B Du, JS Shang. Neural Computing and Applications 35(5), 3665–3676, 2023. Cited by 7.
Revisiting Knowledge Distillation for Autoregressive Language Models. Q Zhong, L Ding, L Shen, J Liu, B Du, D Tao. ACL 2024 Main. Cited by 5.
Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models. M Zhu, Q Zhong, L Shen, L Ding, J Liu, B Du, D Tao. EMNLP 2023 (co-first author). Cited by 4.
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Perfect Reasoners. Q Zhong, K Wang, Z Xu, J Liu, L Ding, B Du, D Tao. arXiv preprint arXiv:2404.14963, 2024. Cited by 3.