Parameter-efficient fine-tuning of large-scale pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. Nature Machine Intelligence 5 (3), 220-235, 2023. Cited by 786*.
ToolLLM: Facilitating large language models to master 16000+ real-world APIs. Y Qin, S Liang, Y Ye, K Zhu, L Yan, Y Lu, Y Lin, X Cong, X Tang, B Qian, et al. ICLR 2024 (spotlight). Cited by 440.
Enhancing chat language models by scaling high-quality instructional conversations. N Ding, Y Chen, B Xu, Y Qin, Z Zheng, S Hu, Z Liu, M Sun, B Zhou. arXiv preprint arXiv:2305.14233, 2023. Cited by 302.
Tool learning with foundation models. Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui, Z Zeng, Y Huang, C Xiao, et al. arXiv preprint arXiv:2304.08354, 2023. Cited by 247*.
AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors. W Chen, Y Su, J Zuo, C Yang, C Yuan, CM Chan, H Yu, Y Lu, YH Hung, et al. The Twelfth International Conference on Learning Representations (ICLR 2024). Cited by 221*.
On Transferability of Prompt Tuning for Natural Language Understanding. Y Su, X Wang, Y Qin, CM Chan, Y Lin, Z Liu, P Li, J Li, L Hou, M Sun, et al. NAACL 2022. Cited by 142*.
ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. Y Qin, Y Lin, R Takanobu, Z Liu, P Li, H Ji, M Huang, M Sun, J Zhou. ACL 2021. Cited by 129.
CPM: A large-scale generative Chinese pre-trained language model. Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, et al. AI Open 2, 93-99, 2021. Cited by 120.
bert2BERT: Towards Reusable Pretrained Language Models. C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, et al. ACL 2022. Cited by 71.
WebCPM: Interactive web search for Chinese long-form question answering. Y Qin, Z Cai, D Jin, L Yan, S Liang, K Zhu, Y Lin, X Han, N Ding, H Wang, et al. arXiv preprint arXiv:2305.06849, 2023. Cited by 62.
CREATOR: Tool creation for disentangling abstract and concrete reasoning of large language models. C Qian, C Han, YR Fung, Y Qin, Z Liu, H Ji. arXiv preprint arXiv:2305.14318, 2023. Cited by 61*.
ELLE: Efficient Lifelong Pre-training for Emerging Data. Y Qin, J Zhang, Y Lin, Z Liu, P Li, M Sun, J Zhou. Findings of ACL 2022. Cited by 58.
Exploring Universal Intrinsic Task Subspace for Few-Shot Learning via Prompt Tuning. Y Qin, X Wang, Y Su, Y Lin, N Ding, J Yi, W Chen, Z Liu, J Li, L Hou, P Li, et al. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024. Cited by 57*.
Knowledge inheritance for pre-trained language models. Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, et al. NAACL 2022. Cited by 56.
Learning from Explanations with Neural Execution Tree. Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren. ICLR 2020. Cited by 45*.
DebugBench: Evaluating debugging capability of large language models. R Tian, Y Ye, Y Qin, X Cong, Y Lin, Z Liu, M Sun. ACL 2024. Cited by 36.
ProQA: Structural Prompt-based Pre-training for Unified Question Answering. W Zhong, Y Gao, N Ding, Y Qin, Z Liu, M Zhou, J Wang, J Yin, N Duan. NAACL 2022. Cited by 34.
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models. B Zhu, Y Qin, G Cui, Y Chen, W Zhao, C Fu, Y Deng, Z Liu, J Wang, W Wu, et al. NeurIPS 2022. Cited by 26.
Tell me more! Towards implicit user intention understanding of language model driven agents. C Qian, B He, Z Zhuang, J Deng, Y Qin, X Cong, Y Lin, Z Zhang, Z Liu, et al. ACL 2024. Cited by 25*.
Improving sequence modeling ability of recurrent neural networks via sememes. Y Qin, F Qi, S Ouyang, Z Liu, C Yang, Y Wang, Q Liu, M Sun. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2364-2373, 2020. Cited by 23.