Tao Ge
Microsoft Research
Verified email at microsoft.com - Homepage
Title
Cited by
Year
BERT loses patience: Fast and robust inference with early exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
Advances in Neural Information Processing Systems 33, 18330-18341, 2020
272 · 2020
BERT-of-Theseus: Compressing BERT by progressive module replacing
C Xu, W Zhou, T Ge, F Wei, M Zhou
arXiv preprint arXiv:2002.02925, 2020
205 · 2020
Max-margin tensor neural network for Chinese word segmentation
W Pei, T Ge, B Chang
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
198 · 2014
Towards time-aware knowledge graph completion
T Jiang, T Liu, T Ge, L Sha, B Chang, S Li, Z Sui
Proceedings of COLING 2016, the 26th International Conference on …, 2016
160 · 2016
Fluency boost learning and inference for neural grammatical error correction
T Ge, F Wei, M Zhou
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
138 · 2018
Encoding temporal information for time-aware link prediction
T Jiang, T Liu, T Ge, L Sha, S Li, B Chang, Z Sui
Proceedings of the 2016 conference on empirical methods in natural language …, 2016
125 · 2016
BERT-based lexical substitution
W Zhou, T Ge, K Xu, F Wei, M Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
110 · 2019
Parallel data augmentation for formality style transfer
Y Zhang, T Ge, X Sun
arXiv preprint arXiv:2005.07522, 2020
101* · 2020
Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration
Z Wang, S Mao, W Wu, T Ge, F Wei, H Ji
arXiv preprint arXiv:2307.05300, 2023
92 · 2023
Reaching human-level performance in automatic grammatical error correction: An empirical study
T Ge, F Wei, M Zhou
arXiv preprint arXiv:1807.01270, 2018
85 · 2018
Exploiting task-oriented resources to learn word embeddings for clinical abbreviation expansion
Y Liu, T Ge, KS Mathews, H Ji, DL McGuinness
arXiv preprint arXiv:1804.04225, 2018
83 · 2018
An effective neural network model for graph-based dependency parsing
W Pei, T Ge, B Chang
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
76 · 2015
Improving the efficiency of grammatical error correction with erroneous span detection and correction
M Chen, T Ge, X Zhang, F Wei, M Zhou
arXiv preprint arXiv:2010.03260, 2020
49 · 2020
Instantaneous grammatical error correction with shallow aggressive decoding
X Sun, T Ge, F Wei, H Wang
arXiv preprint arXiv:2106.04970, 2021
45 · 2021
Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
arXiv preprint arXiv:2109.03228, 2021
42 · 2021
In-context autoencoder for context compression in a large language model
T Ge, J Hu, X Wang, SQ Chen, F Wei
arXiv preprint arXiv:2307.06945, 2023
40 · 2023
Formality style transfer with hybrid textual annotations
R Xu, T Ge, F Wei
arXiv preprint arXiv:1903.06353, 2019
39 · 2019
Scheduled DropHead: A regularization method for transformer models
W Zhou, T Ge, K Xu, F Wei, M Zhou
arXiv preprint arXiv:2004.13342, 2020
37 · 2020
Inference with reference: Lossless acceleration of large language models
N Yang, T Ge, L Wang, B Jiao, D Jiang, L Yang, R Majumder, F Wei
arXiv preprint arXiv:2304.04487, 2023
33 · 2023
Improving grammatical error correction with machine translation pairs
W Zhou, T Ge, C Mu, K Xu, F Wei, M Zhou
arXiv preprint arXiv:1911.02825, 2019
32 · 2019
Articles 1–20