Yi Tay
Research Scientist, Google Brain
Cited by
Palm: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
Deep learning based recommender system: A survey and new perspectives
S Zhang, L Yao, A Sun, Y Tay
ACM Computing Surveys, 2017
Emergent abilities of large language models
J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ...
Transactions of Machine Learning Research (TMLR), 2022
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Efficient Transformers: A Survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys, 2022
Palm 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
ICLR 2021
Quaternion Knowledge Graph Embedding
S Zhang*, Y Tay*, L Yao, Q Liu
NeurIPS 2019
The flan collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
International Conference on Machine Learning, 22631-22648, 2023
Challenging big-bench tasks and whether chain-of-thought can solve them
M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ...
arXiv preprint arXiv:2210.09261, 2022
Synthesizer: Rethinking self-attention in transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
ICML 2021
Multi-Pointer Co-Attention Networks for Recommendation
Y Tay, LA Tuan, SC Hui
KDD 2018
Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
Y Tay, LA Tuan, SC Hui
WWW 2018
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR 2023
Scaling vision transformers to 22 billion parameters
M Dehghani, J Djolonga, B Mustafa, P Padlewski, J Heek, J Gilmer, ...
International Conference on Machine Learning, 7480-7512, 2023
Sparse Sinkhorn Attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
ICML 2020
Next item recommendation with self-attention
S Zhang, Y Tay, L Yao, A Sun
arXiv preprint arXiv:1808.06414, 2018
Dive into Deep Learning: Recommender Systems
S Zhang, A Zhang, Y Tay
Learning to Attend via Word-Aspect Associative Fusion for Aspect-based Sentiment Analysis
Y Tay, AT Luu, SC Hui
AAAI 2018
Reasoning with Sarcasm by Reading In-between
Y Tay, LA Tuan, SC Hui, J Su
ACL 2018
Articles 1–20