| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| SimCSE: Simple contrastive learning of sentence embeddings | T Gao, X Yao, D Chen | arXiv preprint arXiv:2104.08821 | 1622 | 2021 |
| Making pre-trained language models better few-shot learners | T Gao, A Fisch, D Chen | arXiv preprint arXiv:2012.15723 | 1014 | 2020 |
| KEPLER: A unified model for knowledge embedding and pre-trained language representation | X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li, J Tang | Transactions of the Association for Computational Linguistics 9, 176-194 | 426 | 2021 |
| Hybrid attention-based prototypical networks for noisy few-shot relation classification | T Gao, X Han, Z Liu, M Sun | Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 6407-6414 | 315 | 2019 |
| FewRel 2.0: Towards More Challenging Few-Shot Relation Classification | T Gao, X Han, H Zhu, Z Liu, P Li, M Sun, J Zhou | Proceedings of the 2019 Conference on Empirical Methods in Natural Language … | 209 | 2019 |
| OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction | X Han, T Gao, Y Yao, D Ye, Z Liu, M Sun | Proceedings of the 2019 Conference on Empirical Methods in Natural Language … | 138 | 2019 |
| Learning from context or names? An empirical study on neural relation extraction | H Peng, T Gao, X Han, Y Lin, P Li, Z Liu, M Sun, J Zhou | arXiv preprint arXiv:2010.01923 | 137 | 2020 |
| More data, more relations, more context and more openness: A review and outlook for relation extraction | X Han, T Gao, Y Lin, H Peng, Y Yang, C Xiao, Z Liu, P Li, M Sun, J Zhou | arXiv preprint arXiv:2004.03186 | 108 | 2020 |
| Few-shot relation extraction via Bayesian meta-learning on relation graphs | M Qu, T Gao, LP Xhonneux, J Tang | International Conference on Machine Learning, 7867-7876 | 92 | 2020 |
| Should you mask 15% in masked language modeling? | A Wettig, T Gao, Z Zhong, D Chen | arXiv preprint arXiv:2202.08005 | 63 | 2022 |
| Neural snowball for few-shot relation learning | T Gao, X Han, R Xie, Z Liu, F Lin, L Lin, M Sun | Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 7772-7779 | 62 | 2020 |
| Continual relation learning via episodic memory activation and reconsolidation | X Han, Y Dai, T Gao, Y Lin, Z Liu, P Li, M Sun, J Zhou | Proceedings of the 58th Annual Meeting of the Association for Computational … | 58 | 2020 |
| Meta-information guided meta-learning for few-shot relation classification | B Dong, Y Yao, R Xie, T Gao, X Han, Z Liu, F Lin, L Lin, M Sun | Proceedings of the 28th International Conference on Computational … | 26 | 2020 |
| Manual evaluation matters: Reviewing test protocols of distantly supervised relation extraction | T Gao, X Han, K Qiu, Y Bai, Z Xie, Y Lin, Z Liu, P Li, M Sun, J Zhou | arXiv preprint arXiv:2105.09543 | 16 | 2021 |
| Recovering private text in federated learning of language models | S Gupta, Y Huang, Z Zhong, T Gao, K Li, D Chen | Advances in Neural Information Processing Systems 35, 8130-8143 | 14 | 2022 |
| Ditch the gold standard: Re-evaluating conversational question answering | H Li, T Gao, M Goenka, D Chen | arXiv preprint arXiv:2112.08812 | 12 | 2021 |
| Fine-Tuning Language Models with Just Forward Passes | S Malladi, T Gao, E Nichani, A Damian, JD Lee, D Chen, S Arora | arXiv preprint arXiv:2305.17333 | 9 | 2023 |
| Enabling Large Language Models to Generate Text with Citations | T Gao, H Yen, J Yu, D Chen | arXiv preprint arXiv:2305.14627 | 9 | 2023 |
| Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models | Z Yu, T Gao, Z Zhang, Y Lin, Z Liu, M Sun, J Zhou | arXiv preprint arXiv:2209.09401 | 1 | 2022 |
| MoQA: Benchmarking Multi-Type Open-Domain Question Answering | H Yen, T Gao, J Lee, D Chen | Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and … | | 2023 |