Xiao Liu
Tsinghua University, Department of Computer Science and Technology
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
Self-supervised learning: Generative or contrastive
X Liu, F Zhang, Z Hou, L Mian, Z Wang, J Zhang, J Tang
IEEE Transactions on Knowledge and Data Engineering, 2021
Cited by 384 · 2021
GPT Understands, Too
X Liu*, Y Zheng*, Z Du, M Ding, Y Qian, Z Yang, J Tang
arXiv preprint arXiv:2103.10385, 2021
Cited by 217* · 2021
Pre-trained models: Past, present and future
X Han*, Z Zhang*, N Ding*, Y Gu*, X Liu*, Y Huo*, J Qiu, L Zhang, W Han, ...
AI Open, 2021
Cited by 77 · 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
X Liu*, K Ji*, Y Fu*, Z Du, Z Yang, J Tang
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2021
Cited by 67* · 2021
Oag: Toward linking large-scale heterogeneous entity graphs
F Zhang, X Liu, J Tang, Y Dong, P Yao, J Zhang, X Gu, Y Wang, B Shao, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 60 · 2019
Language models are open knowledge graphs
C Wang, X Liu, D Song
arXiv preprint arXiv:2010.11967, 2020
Cited by 43 · 2020
GLM: General Language Model Pretraining with Autoregressive Blank Infilling
Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
Cited by 36* · 2022
OAG-BERT: Towards A Unified Backbone Language Model For Academic Knowledge Services
X Liu, D Yin, J Zheng, X Zhang, P Zhang, H Yang, Y Dong, J Tang
Cited by 11* · 2022
Wudaocorpora: A super large-scale chinese corpora for pre-training language models
S Yuan, H Zhao, Z Du, M Ding, X Liu, Y Cen, X Zou, Z Yang, J Tang
AI Open 2, 65-68, 2021
Cited by 7 · 2021
Zero-Shot Information Extraction as a Unified Text-to-Triple Translation
C Wang, X Liu, Z Chen, H Hong, J Tang, D Song
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021
Cited by 5 · 2021
GraphMAE: Self-Supervised Masked Graph Autoencoders
Z Hou, X Liu, Y Dong, C Wang, J Tang
arXiv preprint arXiv:2205.10803, 2022
Cited by 4 · 2022
SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs
X Liu*, H Hong*, X Wang, Z Chen, E Kharlamov, Y Dong, J Tang
Proceedings of the ACM Web Conference 2022
Cited by 4* · 2022
Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries
X Liu, S Zhao, K Su, Y Cen, J Qiu, M Zhang, W Wu, Y Dong, J Tang
KDD, 2022
Cited by 1 · 2022
OAG_know: Self-supervised Learning for Linking Knowledge Graphs
X Liu, L Mian, Y Dong, F Zhang, J Zhang, J Tang, P Zhang, J Gong, ...
IEEE Transactions on Knowledge and Data Engineering, 2021
Cited by 1 · 2021
Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers
WL Tam*, X Liu*, K Ji, L Xue, X Zhang, Y Dong, J Liu, M Hu, J Tang
arXiv preprint arXiv:2207.07087, 2022
2022
DeepStruct: Pretraining of Language Models for Structure Prediction
C Wang, X Liu, Z Chen, H Hong, J Tang, D Song
Findings of ACL 2022, 2022
2022
The International Workshop on Pretraining: Algorithms, Architectures, and Applications (Pretrain@KDD 2021)
M Ding, Y Dong, X Liu, J Qiu, J Tang, Z Yang
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021
2021
Articles 1–17