Tu Vu
Research Scientist, Google DeepMind; Assistant Professor, Virginia Tech
Verified email at google.com - Homepage
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
2144 · 2023
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
ICML, 2023
601 · 2023
SPoT: Better frozen model adaptation through soft prompt transfer
T Vu, B Lester, N Constant, R Al-Rfou, D Cer
ACL, 2022
269 · 2022
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
261* · 2023
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
219* · 2023
Exploring and predicting transferability across NLP tasks
T Vu, T Wang, T Munkhdalai, A Sordoni, A Trischler, A Mattarella-Micke, ...
EMNLP, 2020
165 · 2020
FreshLLMs: Refreshing large language models with search engine augmentation
T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ...
ACL, 2024
123 · 2024
JAIST: Combining Multiple Features for Answer Selection in Community Question Answering
Q Tran, V Tran, T Vu, M Nguyen, S Pham
SemEval@NAACL, 2015
98 · 2015
Sentence Simplification with Memory-Augmented Neural Networks
T Vu, B Hu, T Munkhdalai, H Yu
NAACL, 2018
86 · 2018
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
61 · 2024
Overcoming catastrophic forgetting in zero-shot cross-lingual generation
T Vu, A Barua, B Lester, D Cer, M Iyyer, N Constant
EMNLP, 2022
59 · 2022
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
T Vu, MT Luong, QV Le, G Simon, M Iyyer
EMNLP, 2021
58 · 2021
Flan-MoE: Scaling Instruction-Finetuned Language Models with Sparse Mixture of Experts
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
26 · 2024
Self-evaluation improves selective generation in large language models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
ICBINB@NeurIPS, 2023
25 · 2023
Learning to simplify children's stories with limited data
T Vu, G Tran, S Pham
ACIIDS, 2014
22 · 2014
Foundational autoraters: Taming large language models for better automatic evaluation
T Vu, K Krishna, S Alzubi, C Tar, M Faruqui, YH Sung
EMNLP, 2024
18 · 2024
Dialect-robust Evaluation of Generated Text
J Sun, T Sellam, E Clark, T Vu, T Dozat, D Garrette, A Siddhant, ...
ACL, 2023
17 · 2023
Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment
T Vu, V Shwartz
*SEM@NAACL, 2018
15 · 2018
Leveraging QA Datasets to Improve Generative Data Augmentation
D Mekala, T Vu, T Schick, J Shang
EMNLP, 2022
14 · 2022
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
arXiv preprint arXiv:2301.13688, 2023
13* · 2023
Articles 1–20