Weize Chen
Verified email at mails.tsinghua.edu.cn
Title
Cited by
Year
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
151 · 2023
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
137 · 2022
Tool learning with foundation models
Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui, Z Zeng, Y Huang, C Xiao, ...
arXiv preprint arXiv:2304.08354, 2023
125 · 2023
Communicative agents for software development
C Qian, X Cong, C Yang, W Chen, Y Su, J Xu, Z Liu, M Sun
arXiv preprint arXiv:2307.07924, 2023
103 · 2023
Chateval: Towards better llm-based evaluators through multi-agent debate
CM Chan, W Chen, Y Su, J Yu, W Xue, S Zhang, J Fu, Z Liu
arXiv preprint arXiv:2308.07201, 2023
75 · 2023
Fully hyperbolic neural networks
W Chen, X Han, Y Lin, H Zhao, Z Liu, P Li, M Sun, J Zhou
arXiv preprint arXiv:2105.14686, 2021
63 · 2021
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents
W Chen, Y Su, J Zuo, C Yang, C Yuan, C Qian, CM Chan, Y Qin, Y Lu, ...
arXiv preprint arXiv:2308.10848, 2023
49 · 2023
Exploring low-dimensional intrinsic task subspace via prompt tuning
Y Qin, X Wang, Y Su, Y Lin, N Ding, Z Liu, J Li, L Hou, P Li, M Sun, J Zhou
arXiv preprint arXiv:2110.07867, 2021
29 · 2021
GACT: Activation compressed training for generic network architectures
X Liu, L Zheng, D Wang, Y Cen, W Chen, X Han, J Chen, Z Liu, J Tang, ...
International Conference on Machine Learning, 14139-14152, 2022
16 · 2022
Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors
W Chen, Y Su, J Zuo, C Yang, C Yuan, CM Chan, H Yu, Y Lu, YH Hung, ...
The Twelfth International Conference on Learning Representations, 2023
9 · 2023
Quantifying similarity between relations with fact distribution
W Chen, H Zhu, X Han, Z Liu, M Sun
arXiv preprint arXiv:1907.08937, 2019
9 · 2019
Exploring mode connectivity for pre-trained language models
Y Qin, C Qian, J Yi, W Chen, Y Lin, X Han, Z Liu, M Sun, J Zhou
arXiv preprint arXiv:2210.14102, 2022
8 · 2022
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models (CoRR, abs/2203.06904, doi: 10.48550)
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
7 · 2022
Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Y Qin, X Wang, Y Su, Y Lin, N Ding, J Yi, W Chen, Z Liu, J Li, L Hou, P Li, ...
arXiv preprint arXiv:2110.07867, 2021
6 · 2021
Cross-Lingual Contrastive Learning for Fine-Grained Entity Typing for Low-Resource Languages
X Han, Y Luo, W Chen, Z Liu, M Sun, Z Botong, H Fei, S Zheng
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
5 · 2022
Gact: Activation compressed training for general architectures
X Liu, L Zheng, D Wang, Y Cen, W Chen, X Han, J Chen, Z Liu, J Tang, ...
arXiv preprint arXiv:2206.11357, 2022
3 · 2022
D-bot: Database diagnosis system using large language models
X Zhou, G Li, Z Sun, Z Liu, W Chen, J Wu, J Liu, R Feng, G Zeng
arXiv preprint arXiv:2312.01454, 2023
2 · 2023
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Parameter-Efficient Tuning
J Yi, W Chen, Y Qin, Y Lin, N Ding, X Han, Z Liu, M Sun, J Zhou
Findings of the Association for Computational Linguistics: EMNLP 2022, 3348-3366, 2022
2 · 2022
Experiential Co-Learning of Software-Developing Agents
C Qian, Y Dang, J Li, W Liu, W Chen, C Yang, Z Liu, M Sun
arXiv preprint arXiv:2312.17025, 2023
2023
Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models
W Chen, X Xu, X Han, Y Lin, R Xie, Z Liu, M Sun, J Zhou
arXiv preprint arXiv:2310.12818, 2023
2023
Articles 1–20