Tao Ge
Microsoft Research
Verified email at microsoft.com - Homepage
Title · Cited by · Year
BERT loses patience: Fast and robust inference with early exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
Advances in Neural Information Processing Systems 33, 18330-18341, 2020
284 · 2020
BERT-of-Theseus: Compressing BERT by progressive module replacing
C Xu, W Zhou, T Ge, F Wei, M Zhou
arXiv preprint arXiv:2002.02925, 2020
206 · 2020
Max-margin tensor neural network for Chinese word segmentation
W Pei, T Ge, B Chang
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
203 · 2014
Towards time-aware knowledge graph completion
T Jiang, T Liu, T Ge, L Sha, B Chang, S Li, Z Sui
Proceedings of COLING 2016, the 26th International Conference on …, 2016
166 · 2016
Fluency boost learning and inference for neural grammatical error correction
T Ge, F Wei, M Zhou
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
139 · 2018
Encoding temporal information for time-aware link prediction
T Jiang, T Liu, T Ge, L Sha, S Li, B Chang, Z Sui
Proceedings of the 2016 conference on empirical methods in natural language …, 2016
127 · 2016
BERT-based lexical substitution
W Zhou, T Ge, K Xu, F Wei, M Zhou
Proceedings of the 57th annual meeting of the association for computational …, 2019
115 · 2019
Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration
Z Wang, S Mao, W Wu, T Ge, F Wei, H Ji
arXiv preprint arXiv:2307.05300, 2023
105 · 2023
Parallel data augmentation for formality style transfer
Y Zhang, T Ge, X Sun
arXiv preprint arXiv:2005.07522, 2020
103* · 2020
Reaching human-level performance in automatic grammatical error correction: An empirical study
T Ge, F Wei, M Zhou
arXiv preprint arXiv:1807.01270, 2018
86 · 2018
Exploiting task-oriented resources to learn word embeddings for clinical abbreviation expansion
Y Liu, T Ge, KS Mathews, H Ji, DL McGuinness
arXiv preprint arXiv:1804.04225, 2018
82 · 2018
An effective neural network model for graph-based dependency parsing
W Pei, T Ge, B Chang
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
78 · 2015
In-context autoencoder for context compression in a large language model
T Ge, J Hu, L Wang, X Wang, SQ Chen, F Wei
arXiv preprint arXiv:2307.06945, 2023
50 · 2023
Improving the efficiency of grammatical error correction with erroneous span detection and correction
M Chen, T Ge, X Zhang, F Wei, M Zhou
arXiv preprint arXiv:2010.03260, 2020
49 · 2020
Instantaneous grammatical error correction with shallow aggressive decoding
X Sun, T Ge, F Wei, H Wang
arXiv preprint arXiv:2106.04970, 2021
47 · 2021
Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
arXiv preprint arXiv:2109.03228, 2021
42 · 2021
Formality style transfer with hybrid textual annotations
R Xu, T Ge, F Wei
arXiv preprint arXiv:1903.06353, 2019
40 · 2019
Scheduled DropHead: A regularization method for transformer models
W Zhou, T Ge, K Xu, F Wei, M Zhou
arXiv preprint arXiv:2004.13342, 2020
39 · 2020
Inference with reference: Lossless acceleration of large language models
N Yang, T Ge, L Wang, B Jiao, D Jiang, L Yang, R Majumder, F Wei
arXiv preprint arXiv:2304.04487, 2023
37 · 2023
Low-code LLM: Visual programming over LLMs
Y Cai, S Mao, W Wu, Z Wang, Y Liang, T Ge, C Wu, W You, T Song, Y Xia, ...
arXiv preprint arXiv:2304.08103, 2023
34 · 2023
Articles 1–20