Xiaozhi Wang
Other names: 王晓智
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding*, Y Qin*, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
Cited by 776* · 2023
KEPLER: A unified model for knowledge embedding and pre-trained language representation
X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li, J Tang
Transactions of the Association for Computational Linguistics 9, 176-194, 2021
Cited by 721 · 2021
MAVEN: A Massive General Domain Event Detection Dataset
X Wang, Z Wang, X Han, W Jiang, R Han, Z Liu, J Li, P Li, Y Lin, J Zhou
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by 193 · 2020
Adversarial training for weakly supervised event detection
X Wang*, X Han*, Z Liu, M Sun, P Li
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
Cited by 173 · 2019
On transferability of prompt tuning for natural language processing
Y Su*, X Wang*, Y Qin, CM Chan, Y Lin, H Wang, K Wen, Z Liu, P Li, J Li, ...
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
Cited by 143 · 2022
CPM: A large-scale generative Chinese pre-trained language model
Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ...
AI Open 2, 93-99, 2021
Cited by 117 · 2021
CLEVE: Contrastive pre-training for event extraction
Z Wang*, X Wang*, X Han, Y Lin, L Hou, Z Liu, P Li, J Li, J Zhou
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by 113 · 2021
HMEAE: Hierarchical modular event argument extraction
X Wang*, Z Wang*, X Han, Z Liu, J Li, P Li, M Sun, J Zhou, X Ren
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 105 · 2019
Benchmarking Foundation Models with Language-Model-as-an-Examiner
Y Bai*, J Ying*, Y Cao, X Lv, Y He, X Wang, J Yu, K Zeng, Y Xiao, H Lyu, ...
Advances in Neural Information Processing Systems 36, 78142-78167, 2023
Cited by 102 · 2023
KoLA: Carefully Benchmarking World Knowledge of Large Language Models
J Yu*, X Wang*, S Tu, S Cao, D Zhang-Li, X Lv, H Peng, Z Yao, X Zhang, ...
arXiv preprint arXiv:2306.09296, 2023
Cited by 102 · 2023
Finding Skill Neurons in Pre-trained Transformer-based Language Models
X Wang*, K Wen*, Z Zhang, L Hou, Z Liu, J Li
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 74 · 2022
Train No Evil: Selective Masking for Task-guided Pre-training
Y Gu, Z Zhang, X Wang, Z Liu, M Sun
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by 68 · 2020
LEVEN: A Large-Scale Chinese Legal Event Detection Dataset
F Yao*, C Xiao*, X Wang, Z Liu, L Hou, C Tu, J Li, Y Liu, W Shen, M Sun
Findings of the Association for Computational Linguistics: ACL 2022, 183-201, 2022
Cited by 63 · 2022
Exploring Universal Intrinsic Task Subspace for Few-Shot Learning via Prompt Tuning
Y Qin*, X Wang*, Y Su, Y Lin, N Ding, J Yi, W Chen, Z Liu, J Li, L Hou, P Li, ...
IEEE/ACM Transactions on Audio, Speech, and Language Processing 32, 3631-3643, 2024
Cited by 58* · 2024
Adversarial multi-lingual neural relation extraction
X Wang*, X Han*, Y Lin, Z Liu, M Sun
Proceedings of the 27th International Conference on Computational …, 2018
Cited by 48 · 2018
MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction
X Wang*, Y Chen*, N Ding, H Peng, Z Wang, Y Lin, X Han, L Hou, J Li, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 41 · 2022
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
H Peng*, X Wang*, S Hu, H Jin, L Hou, J Li, Z Liu, Q Liu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 39 · 2022
Emergent Modularity in Pre-trained Transformers
Z Zhang*, Z Zeng*, Y Lin, C Xiao, X Wang, X Han, Z Liu, R Xie, M Sun, ...
Findings of the Association for Computational Linguistics: ACL 2023, 4066-4083, 2023
Cited by 21 · 2023
Sub-character tokenization for Chinese pretrained language models
C Si*, Z Zhang*, Y Chen*, F Qi, X Wang, Z Liu, Y Wang, Q Liu, M Sun
Transactions of the Association for Computational Linguistics 11, 469-487, 2023
Cited by 20* · 2023
When Does In-Context Learning Fall Short and Why? A Study on Specification-Heavy Tasks
H Peng, X Wang, J Chen, W Li, Y Qi, Z Wang, Z Wu, K Zeng, B Xu, L Hou, ...
arXiv preprint arXiv:2311.08993, 2023
Cited by 19 · 2023
Articles 1–20