Alethea Power
Member of Technical Staff, OpenAI
Verified email at openai.com - Homepage
Title · Cited by · Year
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 2011 · Year 2021
Gpt-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
Cited by 875 · Year 2023
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 743 · Year 2022
Grokking: Generalization beyond overfitting on small algorithmic datasets
A Power, Y Burda, H Edwards, I Babuschkin, V Misra
arXiv preprint arXiv:2201.02177, 2022
Cited by 208 · Year 2022
Evaluating large language models trained on code. arXiv 2021
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 43 · Year 2021
Articles 1–5