Yanqi Zhou
Verified email at google.com - Homepage
Title | Cited by | Year
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
Journal of machine learning research 21 (140), 1-67, 2020
18112 | 2020
Lamda: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
arXiv preprint arXiv:2201.08239, 2022
1412 | 2022
Deep learning scaling is predictable, empirically
J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ...
arXiv preprint arXiv:1712.00409, 2017
722 | 2017
Deep Voice 2: Multi-Speaker Neural Text-to-Speech
S Arik, G Diamos, A Gibiansky, J Miller, K Peng, W Ping, J Raiman, Y Zhou
Neural Information Processing Systems (NIPS), 2017
642* | 2017
Glam: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
International Conference on Machine Learning, 5547-5569, 2022
474 | 2022
Neural voice cloning with a few samples
S Arik, J Chen, K Peng, W Ping, Y Zhou
Advances in neural information processing systems 31, 2018
452 | 2018
OpenPiton: An open source manycore research framework
J Balkind, M McKeown, Y Fu, T Nguyen, Y Zhou, A Lavrov, M Shahrad, ...
ACM SIGPLAN Notices 51 (4), 217-232, 2016
274 | 2016
Mixture-of-experts with expert choice routing
Y Zhou, T Lei, H Liu, N Du, Y Huang, V Zhao, AM Dai, QV Le, J Laudon
Advances in Neural Information Processing Systems 35, 7103-7114, 2022
196 | 2022
Glam: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
arXiv preprint arXiv:2112.06905, 2021
143 | 2021
Atomic In-place Updates for Non-volatile Main Memories with Kamino-Tx
A Memaripour, A Badam, A Phanishayee, Y Zhou, R Alagappan, ...
EuroSys '17 Proceedings of the Twelfth European Conference on Computer …, 2017
128 | 2017
Lamda: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
arXiv preprint arXiv:2201.08239, 2022
97 | 2022
Do transformer modifications transfer across implementations and applications?
S Narang, HW Chung, Y Tay, W Fedus, T Fevry, M Matena, K Malkan, ...
arXiv preprint arXiv:2102.11972, 2021
94 | 2021
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
arXiv preprint arXiv:1910.10683, 2019
87 | 2019
Resource-efficient neural architect
Y Zhou, S Ebrahimi, SÖ Arık, H Yu, H Liu, G Diamos
arXiv preprint arXiv:1806.07912, 2018
78 | 2018
A learned performance model for tensor processing units
S Kaufman, P Phothilimthana, Y Zhou, C Mendis, S Roy, A Sabne, ...
Proceedings of Machine Learning and Systems 3, 387-400, 2021
76 | 2021
MITTS: Memory inter-arrival time traffic shaping
Y Zhou, D Wentzlaff
ACM SIGARCH Computer Architecture News 44 (3), 532-544, 2016
63 | 2016
Transferable graph optimizers for ml compilers
Y Zhou, S Roy, A Abdolrashidi, D Wong, P Ma, Q Xu, H Liu, ...
Advances in Neural Information Processing Systems 33, 13844-13855, 2020
56 | 2020
Power and Energy Characterization of an Open Source 25-Core Manycore Processor.
M McKeown, A Lavrov, M Shahrad, PJ Jackson, Y Fu, J Balkind, ...
HPCA, 762-775, 2018
56 | 2018
Deep learning scaling is predictable, empirically
J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ...
arXiv preprint arXiv:1712.00409, 2017
55 | 2017
Exploring the limits of transfer learning with a unified text-to-text transformer
A Roberts, C Raffel, K Lee, M Matena, N Shazeer, PJ Liu, S Narang, W Li, ...
Google, Tech. Rep., 2019
49 | 2019
Articles 1–20