Qihuang Zhong (钟起煌)
Verified email at whu.edu.cn - Homepage
Title | Cited by | Year
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT
Q Zhong, L Ding, J Liu, B Du, D Tao
Technical report. arXiv preprint arXiv:2302.10198, 2023
268* | 2023
Towards Making the Most of ChatGPT for Machine Translation
K Peng, L Ding, Q Zhong, L Shen, X Liu, M Zhang, Y Ouyang, D Tao
Findings of EMNLP2023, 2023
215 | 2023
Knowledge graph augmented network towards multiview representation learning for aspect-based sentiment analysis
Q Zhong, L Ding, J Liu, B Du, H Jin, D Tao
IEEE TKDE, 2022
92 | 2022
A contrastive cross-channel data augmentation framework for aspect-based sentiment analysis
B Wang, L Ding, Q Zhong, X Li, D Tao
COLING2022, 2022
77 | 2022
Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models
Q Zhong, L Ding, L Shen, P Mi, J Liu, B Du, D Tao
EMNLP2022 Findings, 2022
48 | 2022
PANDA: Prompt transfer meets knowledge distillation for efficient model adaptation
Q Zhong, L Ding, J Liu, B Du, D Tao
IEEE TKDE, 2024
40 | 2024
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
Q Zhong, L Ding, Y Zhan, Y Qiao, Y Wen, L Shen, J Liu, B Yu, B Du, ...
Technical report. arXiv preprint arXiv:2212.01853, 2022
33 | 2022
AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks
H Sun, L Shen, Q Zhong, L Ding, S Chen, J Sun, J Li, G Sun, D Tao
Neural Networks, 2023
31 | 2023
SemiText: Scene text detection with semi-supervised learning
J Liu, Q Zhong, Y Yuan, H Su, B Du
Neurocomputing 407, 343-353, 2020
31 | 2020
Unified instance and knowledge alignment pretraining for aspect-based sentiment analysis
J Liu, Q Zhong, L Ding, H Jin, B Du, D Tao
IEEE TASLP, Co-first author, 2021
28 | 2021
Token-Level Self-Evolution Training for Sequence-to-Sequence Learning
K Peng, L Ding, Q Zhong, Y Ouyang, W Rong, Z Xiong, D Tao
ACL2023 Main, 841-850, 2023
20 | 2023
E2S2: Encoding-enhanced sequence-to-sequence pretraining for language understanding and generation
Q Zhong, L Ding, J Liu, B Du, D Tao
IEEE TKDE, 2022
19 | 2022
Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Q Zhong, L Ding, J Liu, X Liu, M Zhang, B Du, D Tao
ACL2023 Main, 2023
15 | 2023
ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding
Q Zhong, L Ding, J Liu, B Du, D Tao
ACL2024 Findings, 2024
9 | 2024
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE
Q Zhong, L Ding, K Peng, J Liu, B Du, L Shen, Y Zhan, D Tao
Technical report. arXiv preprint arXiv:2302.09268, 2023
9 | 2023
Self-Evolution Learning for Discriminative Language Model Pretraining
Q Zhong, L Ding, J Liu, B Du, D Tao
ACL2023 Findings, 2023
7 | 2023
Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation
Q Zhong, F Zeng, F Liao, J Liu, B Du, JS Shang
Neural Computing and Applications 35 (5), 3665-3676, 2023
7 | 2023
Revisiting Knowledge Distillation for Autoregressive Language Models
Q Zhong, L Ding, L Shen, J Liu, B Du, D Tao
ACL2024 Main, 2024
5 | 2024
Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models
M Zhu, Q Zhong, L Shen, L Ding, J Liu, B Du, D Tao
EMNLP2023, Co-first author, 2023
4 | 2023
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Perfect Reasoners
Q Zhong, K Wang, Z Xu, J Liu, L Ding, B Du, D Tao
arXiv preprint arXiv:2404.14963, 2024
3 | 2024
Articles 1–20