Tao Ge
Microsoft Research
Verified email at microsoft.com - Homepage
Title · Cited by · Year
BERT loses patience: Fast and robust inference with early exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
Advances in Neural Information Processing Systems 33, 18330-18341, 2020
Cited by 307 · 2020
BERT-of-Theseus: Compressing BERT by progressive module replacing
C Xu, W Zhou, T Ge, F Wei, M Zhou
arXiv preprint arXiv:2002.02925, 2020
Cited by 214 · 2020
Max-margin tensor neural network for Chinese word segmentation
W Pei, T Ge, B Chang
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
Cited by 204 · 2014
Towards time-aware knowledge graph completion
T Jiang, T Liu, T Ge, L Sha, B Chang, S Li, Z Sui
Proceedings of COLING 2016, the 26th International Conference on …, 2016
Cited by 176 · 2016
Fluency boost learning and inference for neural grammatical error correction
T Ge, F Wei, M Zhou
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 144 · 2018
Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration
Z Wang, S Mao, W Wu, T Ge, F Wei, H Ji
arXiv preprint arXiv:2307.05300, 2023
Cited by 135 · 2023
Encoding temporal information for time-aware link prediction
T Jiang, T Liu, T Ge, L Sha, S Li, B Chang, Z Sui
Proceedings of the 2016 conference on empirical methods in natural language …, 2016
Cited by 135 · 2016
BERT-based lexical substitution
W Zhou, T Ge, K Xu, F Wei, M Zhou
Proceedings of the 57th annual meeting of the association for computational …, 2019
Cited by 121 · 2019
Parallel data augmentation for formality style transfer
Y Zhang, T Ge, X Sun
arXiv preprint arXiv:2005.07522, 2020
Cited by 103* · 2020
Reaching human-level performance in automatic grammatical error correction: An empirical study
T Ge, F Wei, M Zhou
arXiv preprint arXiv:1807.01270, 2018
Cited by 87 · 2018
Exploiting task-oriented resources to learn word embeddings for clinical abbreviation expansion
Y Liu, T Ge, KS Mathews, H Ji, DL McGuinness
arXiv preprint arXiv:1804.04225, 2018
Cited by 85 · 2018
An effective neural network model for graph-based dependency parsing
W Pei, T Ge, B Chang
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
Cited by 80 · 2015
In-context autoencoder for context compression in a large language model
T Ge, J Hu, L Wang, X Wang, SQ Chen, F Wei
arXiv preprint arXiv:2307.06945, 2023
Cited by 70 · 2023
Improving the efficiency of grammatical error correction with erroneous span detection and correction
M Chen, T Ge, X Zhang, F Wei, M Zhou
arXiv preprint arXiv:2010.03260, 2020
Cited by 52 · 2020
Instantaneous grammatical error correction with shallow aggressive decoding
X Sun, T Ge, F Wei, H Wang
arXiv preprint arXiv:2106.04970, 2021
Cited by 50 · 2021
Inference with reference: Lossless acceleration of large language models
N Yang, T Ge, L Wang, B Jiao, D Jiang, L Yang, R Majumder, F Wei
arXiv preprint arXiv:2304.04487, 2023
Cited by 46 · 2023
Low-code LLM: Visual programming over LLMs
Y Cai, S Mao, W Wu, Z Wang, Y Liang, T Ge, C Wu, W You, T Song, Y Xia, ...
arXiv preprint arXiv:2304.08103, 2023
Cited by 45 · 2023
Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation
H Xia, T Ge, P Wang, SQ Chen, F Wei, Z Sui
Findings of the Association for Computational Linguistics: EMNLP 2023, 3909-3925, 2023
Cited by 44* · 2023
Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
arXiv preprint arXiv:2109.03228, 2021
Cited by 44 · 2021
Formality style transfer with hybrid textual annotations
R Xu, T Ge, F Wei
arXiv preprint arXiv:1903.06353, 2019
Cited by 40 · 2019
Articles 1–20