Chenglei Si
Verified email at stanford.edu - Homepage
Title
Cited by
Year
Bloom: A 176b-parameter open-access multilingual language model
BS Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
arXiv preprint arXiv:2211.05100, 2022
981
2022
Prompting gpt-3 to be reliable
C Si, Z Gan, Z Yang, S Wang, J Wang, J Boyd-Graber, L Wang
arXiv preprint arXiv:2210.09150, 2022
128
2022
CharBERT: character-aware pre-trained language model
W Ma, Y Cui, C Si, T Liu, S Wang, G Hu
arXiv preprint arXiv:2011.01513, 2020
89
2020
Between words and characters: a brief history of open-vocabulary modeling and tokenization in nlp
SJ Mielke, Z Alyafeai, E Salesky, C Raffel, M Dey, M Gallé, A Raja, C Si, ...
arXiv preprint arXiv:2112.10508, 2021
88*
2021
Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning
C Si, Z Zhang, F Qi, Z Liu, Y Wang, Q Liu, M Sun
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
75*
2021
What does bert learn from multiple-choice reading comprehension datasets?
C Si, S Wang, MY Kan, J Jiang
arXiv preprint arXiv:1910.12391, 2019
45
2019
Benchmarking robustness of machine reading comprehension models
C Si, Z Yang, Y Cui, W Ma, T Liu, S Wang
arXiv preprint arXiv:2004.14004, 2020
31
2020
Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
C Si, D Friedman, N Joshi, S Feng, D Chen, H He
arXiv preprint arXiv:2305.13299, 2023
19*
2023
What's in a Name? Answer Equivalence For Open-Domain Question Answering
C Si, C Zhao, J Boyd-Graber
arXiv preprint arXiv:2109.05289, 2021
15
2021
Sentiment aware neural machine translation
C Si, K Wu, A Aw, MY Kan
Proceedings of the 6th Workshop on Asian Translation, 200-206, 2019
13
2019
Re-Examining Calibration: The Case of Question Answering
C Si, C Zhao, S Min, J Boyd-Graber
Findings of the Association for Computational Linguistics: EMNLP 2022, 2814-2829, 2022
12*
2022
Dataset mention extraction and classification
A Prasad, C Si, MY Kan
Proceedings of the Workshop on Extracting Structured Knowledge from …, 2019
10
2019
Sub-character tokenization for chinese pretrained language models
C Si, Z Zhang, Y Chen, F Qi, X Wang, Z Liu, Y Wang, Q Liu, M Sun
Transactions of the Association for Computational Linguistics 11, 469-487, 2023
9*
2023
Getting more out of mixture of language model reasoning experts
C Si, W Shi, C Zhao, L Zettlemoyer, J Boyd-Graber
Findings of the Association for Computational Linguistics: EMNLP 2023, 8234-8249, 2023
4*
2023
Large Language Models Help Humans Verify Truthfulness--Except When They Are Convincingly Wrong
C Si, N Goyal, ST Wu, C Zhao, S Feng, H Daumé III, J Boyd-Graber
arXiv preprint arXiv:2310.12558, 2023
3
2023
Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
S Schulhoff, J Pinto, A Khan, LF Bouchard, C Si, S Anati, V Tagliabue, ...
arXiv preprint arXiv:2311.16119, 2023
2
2023
Articles 1–16