Niklas Muennighoff
Title
Cited by
Year
BLOOM: A 176B-parameter open-access multilingual language model
BigScience Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
JMLR 2023, 2022
Cited by 1544* · 2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022
Cited by 1042 · 2022
StarCoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
TMLR 2023, 2023
Cited by 729* · 2023
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
Cited by 561 · 2022
A framework for few-shot language model evaluation
L Gao, J Tow, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, ...
GitHub, 2021
Cited by 523* · 2021
MTEB: Massive text embedding benchmark
N Muennighoff, N Tazi, L Magne, N Reimers
EACL 2023, 2022
Cited by 334 · 2022
C-Pack: Packaged resources to advance general Chinese embedding
S Xiao, Z Liu, P Zhang, N Muennighoff
SIGIR 2024, 2023
Cited by 217 · 2023
SantaCoder: don't reach for the stars!
LB Allal, R Li, D Kocetkov, C Mou, C Akiki, CM Ferrandis, N Muennighoff, ...
ICLR 2023 DL4C Workshop, Best Paper Award, 2023
Cited by 195* · 2023
SGPT: GPT sentence embeddings for semantic search
N Muennighoff
arXiv, 2022
Cited by 162 · 2022
Scaling Data-Constrained Language Models
N Muennighoff, AM Rush, B Barak, TL Scao, A Piktus, N Tazi, S Pyysalo, ...
NeurIPS 2023 Oral, Outstanding Paper Runner-Up Award, 2023
Cited by 150 · 2023
KTO: Model alignment as prospect theoretic optimization
K Ethayarajh, W Xu, N Muennighoff, D Jurafsky, D Kiela
ICML 2024 Spotlight, 2024
Cited by 137 · 2024
OctoPack: Instruction tuning code large language models
N Muennighoff, Q Liu, A Zebaze, Q Zheng, B Hui, TY Zhuo, S Singh, ...
ICLR 2024 Spotlight, NeurIPS 2023 Instruction Workshop, 2023
Cited by 111 · 2023
OLMo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
ACL 2024, Best Theme Paper Award, 2024
Cited by 102* · 2024
What Language Model to Train if You Have One Million GPU Hours?
TL Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, S Biderman, ...
EMNLP 2022 Findings, 2022
Cited by 93 · 2022
StarCoder 2 and The Stack v2: The next generation
A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier, N Tazi, A Tang, ...
arXiv:2402.19173, 2024
Cited by 91 · 2024
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
ACL 2024, Best Resource Paper Award, 2024
Cited by 81* · 2024
NL-Augmenter: A framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
NEJLT 2023, 2021
Cited by 72 · 2021
Vilio: State-of-the-art visio-linguistic models applied to hateful memes
N Muennighoff
NeurIPS 2020 Competitions, 2020
Cited by 66 · 2020
The hateful memes challenge: Competition report
D Kiela, H Firooz, A Mohan, V Goswami, A Singh, CA Fitzpatrick, P Bull, ...
NeurIPS 2020 Competitions, 2021
Cited by 64 · 2021
Aya model: An instruction finetuned open-access multilingual language model
A Üstün, V Aryabumi, ZX Yong, WY Ko, D D'souza, G Onilude, N Bhandari, ...
ACL 2024, Best Paper Award, 2024
Cited by 56 · 2024
Articles 1–20