Sara Hooker
Head of Cohere For AI
Verified email at cohere.com
Title
Cited by
Year
A benchmark for interpretability methods in deep neural networks
S Hooker, D Erhan, PJ Kindermans, B Kim
Advances in neural information processing systems 32, 2019
882* · 2019
The state of sparsity in deep neural networks
T Gale, E Elsen, S Hooker
arXiv preprint arXiv:1902.09574, 2019
855 · 2019
The (un)reliability of saliency methods
PJ Kindermans, S Hooker, J Adebayo, M Alber, KT Schütt, S Dähne, ...
Explainable AI: Interpreting, explaining and visualizing deep learning, 267-280, 2019
816 · 2019
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
433 · 2020
The hardware lottery
S Hooker
Communications of the ACM 64 (12), 58-65, 2021
242 · 2021
What do compressed deep neural networks forget?
S Hooker, A Courville, G Clark, Y Dauphin, A Frome
WHI ICML 2019, 2019
217* · 2019
Moving beyond “algorithmic bias is a data problem”
S Hooker
Patterns 2 (4), 2021
208 · 2021
Characterising bias in compressed models
S Hooker, N Moorosi, G Clark, S Bengio, E Denton
arXiv preprint arXiv:2010.03058, 2020
198 · 2020
Frontier AI regulation: Managing emerging risks to public safety
M Anderljung, J Barnhart, A Korinek, J Leung, C O'Keefe, J Whittlestone, ...
arXiv preprint arXiv:2307.03718, 2023
115 · 2023
Estimating example difficulty using variance of gradients
C Agarwal, D D'souza, S Hooker
IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) 2022, 2020
111 · 2020
Evaluating the social impact of generative AI systems in systems and society
I Solaiman, Z Talat, W Agnew, L Ahmad, D Baker, SL Blodgett, C Chen, ...
arXiv preprint arXiv:2306.05949, 2023
108 · 2023
Efficient methods for natural language processing: A survey
M Treviso, JU Lee, T Ji, B Aken, Q Cao, MR Ciosici, M Hassid, K Heafield, ...
Transactions of the Association for Computational Linguistics 11, 826-860, 2023
103 · 2023
Aya model: An instruction finetuned open-access multilingual language model
A Üstün, V Aryabumi, ZX Yong, WY Ko, D D'souza, G Onilude, N Bhandari, ...
arXiv preprint arXiv:2402.07827, 2024
92 · 2024
Randomness in neural network training: Characterizing the impact of tooling
D Zhuang, X Zhang, S Song, S Hooker
Proceedings of Machine Learning and Systems 4, 316-336, 2022
85 · 2022
The goldilocks of pragmatic understanding: Fine-tuning strategy matters for implicature resolution by LLMs
L Ruis, A Khan, S Biderman, S Hooker, T Rocktäschel, E Grefenstette
Advances in Neural Information Processing Systems 36, 2024
69* · 2024
Pushing mixture of experts to the limit: Extremely parameter efficient MoE for instruction tuning
T Zadouri, A Üstün, A Ahmadian, B Ermiş, A Locatelli, S Hooker
arXiv preprint arXiv:2309.05444, 2023
67 · 2023
When less is more: Investigating data pruning for pretraining LLMs at scale
M Marion, A Üstün, L Pozzobon, A Wang, M Fadaee, S Hooker
arXiv preprint arXiv:2309.04564, 2023
62 · 2023
Back to basics: Revisiting REINFORCE-style optimization for learning from human feedback in LLMs
A Ahmadian, C Cremer, M Gallé, M Fadaee, J Kreutzer, O Pietquin, ...
arXiv preprint arXiv:2402.14740, 2024
54 · 2024
Aya dataset: An open-access collection for multilingual instruction tuning
S Singh, F Vargus, D Dsouza, BF Karlsson, A Mahendiran, WY Ko, ...
arXiv preprint arXiv:2402.06619, 2024
53 · 2024
The data provenance initiative: A large scale audit of dataset licensing & attribution in AI
S Longpre, R Mahari, A Chen, N Obeng-Marnu, D Sileo, W Brannon, ...
arXiv preprint arXiv:2310.16787, 2023
49* · 2023