CPM: A large-scale generative Chinese pre-trained language model. Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, et al. AI Open 2, 93-99, 2021. Cited by: 68.
ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. Y Qin, Y Lin, R Takanobu, Z Liu, P Li, H Ji, M Huang, M Sun, J Zhou. ACL 2021, 2020. Cited by: 67.
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. Nature Machine Intelligence, 2022. Cited by: 64.
Learning from Explanations with Neural Execution Tree. Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren. ICLR 2020, 2019. Cited by: 33.
On transferability of prompt tuning for natural language processing. Y Su, X Wang, Y Qin, CM Chan, Y Lin, H Wang, K Wen, Z Liu, P Li, J Li, et al. Proceedings of the 2022 Conference of the North American Chapter of the …, 2022. Cited by: 29.
Knowledge inheritance for pre-trained language models. Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, et al. NAACL 2022, 2021. Cited by: 20.
ELLE: Efficient Lifelong Pre-training for Emerging Data. Y Qin, J Zhang, Y Lin, Z Liu, P Li, M Sun, J Zhou. Findings of ACL 2022, 2022. Cited by: 19.
Exploring low-dimensional intrinsic task subspace via prompt tuning. Y Qin, X Wang, Y Su, Y Lin, N Ding, Z Liu, J Li, L Hou, P Li, M Sun, J Zhou. Previously accepted by Findings of ACL 2022 and EMNLP 2022, 2021. Cited by: 17.
Improving sequence modeling ability of recurrent neural networks via sememes. Y Qin, F Qi, S Ouyang, Z Liu, C Yang, Y Wang, Q Liu, M Sun. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2364-2373, 2020. Cited by: 15.
Tool learning with foundation models. Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui, Z Zeng, Y Huang, C Xiao, et al. arXiv preprint arXiv:2304.08354, 2023. Cited by: 14.
bert2BERT: Towards Reusable Pretrained Language Models. C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, et al. ACL 2022, 2021. Cited by: 14.
ProQA: Structural Prompt-based Pre-training for Unified Question Answering. W Zhong, Y Gao, N Ding, Y Qin, Z Liu, M Zhou, J Wang, J Yin, N Duan. NAACL 2022, 2022. Cited by: 13.
On Transferability of Prompt Tuning for Natural Language Understanding. Y Su, X Wang, Y Qin, CM Chan, Y Lin, Z Liu, P Li, J Li, L Hou, M Sun, et al. NAACL 2022, 2021. Cited by: 13.
Parameter-efficient fine-tuning of large-scale pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. Nature Machine Intelligence 5 (3), 220-235, 2023. Cited by: 7.
Enhancing recurrent neural networks with sememes. Y Qin, F Qi, S Ouyang, Z Liu, C Yang, Y Wang, Q Liu, M Sun. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2364-2373, 2020. Cited by: 6.
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. URL https://arxiv.org/abs/2203.06904, 2022. Cited by: 5.
Exploring Mode Connectivity for Pre-trained Language Models. Y Qin, C Qian, J Yi, W Chen, Y Lin, X Han, Z Liu, M Sun, J Zhou. EMNLP 2022, 2022. Cited by: 2.
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Parameter-Efficient Tuning. J Yi, W Chen, Y Qin, Y Lin, N Ding, X Han, Z Liu, M Sun, J Zhou. Findings of the Association for Computational Linguistics: EMNLP 2022, 3348-3366, 2022. Cited by: 1.
Pass off fish eyes for pearls: Attacking model selection of pre-trained models. B Zhu, Y Qin, F Qi, Y Deng, Z Liu, M Sun, M Gu. Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022. Cited by: 1.
Exploring Universal Intrinsic Task Subspace via Prompt Tuning. Y Qin, X Wang, Y Su, Y Lin, N Ding, J Yi, W Chen, Z Liu, J Li, L Hou, P Li, et al. arXiv e-prints, arXiv:2110.07867, 2021. Cited by: 1.