Yi Tay
Research Scientist, Google Brain
Verified email at google.com - Homepage
Title | Cited by | Year
Palm: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
Cited by 5079 · 2023
Deep learning based recommender system: A survey and new perspectives
S Zhang, L Yao, A Sun, Y Tay
ACM Computing Surveys, 2017
Cited by 3630 · 2017
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by 2918 · 2024
Emergent abilities of large language models
J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ...
Transactions of Machine Learning Research (TMLR), 2022
Cited by 2870* · 2022
Palm 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 1448 · 2023
Efficient Transformers: A Survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys, 2022
Cited by 1346* · 2022
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
ICLR 2021, 2020
Cited by 653 · 2020
Quaternion Knowledge Graph Embedding
S Zhang*, Y Tay*, L Yao, Q Liu
NeurIPS 2019, 2019
Cited by 600 · 2019
The flan collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
International Conference on Machine Learning, 22631-22648, 2023
Cited by 581 · 2023
Challenging big-bench tasks and whether chain-of-thought can solve them
M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ...
arXiv preprint arXiv:2210.09261, 2022
Cited by 535 · 2022
Scaling vision transformers to 22 billion parameters
M Dehghani, J Djolonga, B Mustafa, P Padlewski, J Heek, J Gilmer, ...
International Conference on Machine Learning, 7480-7512, 2023
Cited by 445 · 2023
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR 2023, 2022
Cited by 405* · 2022
Synthesizer: Rethinking self-attention in transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
ICML 2021, 2020
Cited by 387 · 2020
Multi-Pointer Co-Attention Networks for Recommendation
Y Tay, LA Tuan, SC Hui
KDD 2018, 2018
Cited by 357 · 2018
Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
Y Tay, LA Tuan, SC Hui
Proceedings of WWW 2018, 2018
Cited by 351 · 2018
Sparse Sinkhorn Attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
ICML 2020, 2020
Cited by 336 · 2020
Larger language models do in-context learning differently
J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ...
arXiv preprint arXiv:2303.03846, 2023
Cited by 266 · 2023
Next item recommendation with self-attention
S Zhang, Y Tay, L Yao, A Sun
arXiv preprint arXiv:1808.06414, 2018
Cited by 265* · 2018
Dive into Deep Learning: Recommender Systems
S Zhang, A Zhang, Y Tay
Cited by 249* · 2019
Language models are multilingual chain-of-thought reasoners
F Shi, M Suzgun, M Freitag, X Wang, S Srivats, S Vosoughi, HW Chung, ...
arXiv preprint arXiv:2210.03057, 2022
Cited by 223 · 2022
Articles 1–20