| Title, Authors, Venue | Cited by | Year |
| --- | --- | --- |
| InstructUIE: Multi-task instruction tuning for unified information extraction. X Wang, W Zhou, C Zu, H Xia, T Chen, Y Zhang, R Zheng, J Ye, Q Zhang, ... arXiv preprint arXiv:2304.08085, 2023 | 72* | 2023 |
| TRACE: A comprehensive benchmark for continual learning in large language models. X Wang, Y Zhang, T Chen, S Gao, S Jin, X Yang, Z Xi, R Zheng, Y Zou, ... arXiv preprint arXiv:2310.06762, 2023 | 19* | 2023 |
| CodeChameleon: Personalized encryption framework for jailbreaking large language models. H Lv, X Wang, Y Zhang, C Huang, S Dou, J Ye, T Gui, Q Zhang, X Huang. arXiv preprint arXiv:2402.16717, 2024 | 18 | 2024 |
| Connectivity patterns are task embeddings. Z Xi, R Zheng, Y Zhang, XJ Huang, Z Wei, M Peng, M Sun, Q Zhang, T Gui. Findings of the Association for Computational Linguistics: ACL 2023, 11993-12013, 2023 | 4 | 2023 |
| GumbelSoft: Diversified language model watermarking via the GumbelMax-trick. J Fu, X Zhao, R Yang, Y Zhang, J Chen, Y Xiao. arXiv preprint arXiv:2402.12948, 2024 | 3 | 2024 |
| P4: Plug-and-play discrete prompting for large language models personalization. Y Zhang, X Wang, T Chen, J Fu, T Gui, Q Zhang. Findings of the Association for Computational Linguistics: ACL 2024, 9129-9144, 2024 | | 2024 |
| RoCoIns: Enhancing robustness of large language models through code-style instructions. Y Zhang, X Wang, Z Xi, H Xia, T Gui, Q Zhang, X Huang. arXiv preprint arXiv:2402.16431, 2024 | | 2024 |