Nayeon Lee
Research Scientist, NVIDIA
Verified email at nvidia.com
Title · Cited by · Year
Survey of hallucination in natural language generation
Z Ji, N Lee, R Frieske, T Yu, D Su, Y Xu, E Ishii, Y Bang, A Madotto, ...
ACM Computing Surveys, 2022
Cited by 1634* (2022)
A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity
Y Bang, S Cahyawijaya, N Lee, W Dai, D Su, B Wilie, H Lovenia, Z Ji, ...
AACL 2023, 2023
Cited by 810 (2023)
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2022, 2022
Cited by 725 (2022)
Towards Few-Shot Fact-Checking via Perplexity
N Lee, Y Bang, A Madotto, M Khabsa, P Fung
NAACL 2021, 2021
Cited by 101* (2021)
Factuality Enhanced Language Models for Open-Ended Text Generation
N Lee, W Ping, P Xu, M Patwary, P Fung, M Shoeybi, B Catanzaro
NeurIPS 2022, 2022
Cited by 87 (2022)
Language Models as Fact Checkers?
N Lee, BZ Li, S Wang, W Yih, H Ma, M Khabsa
Proceedings of the 3rd Workshop on Fact Extraction and Verification (FEVER), 2020
Cited by 73 (2020)
Exploring social bias in chatbots using stereotype knowledge
N Lee, A Madotto, P Fung
Proceedings of the 2019 Workshop on Widening NLP, 177-180, 2019
Cited by 45 (2019)
Improving large-scale fact-checking using decomposable attention models and lexical tagging
N Lee, CS Wu, P Fung
EMNLP 2018, 1133-1138, 2018
Cited by 30 (2018)
Team yeon-zi at SemEval-2019 Task 4: Hyperpartisan news detection by de-noising weakly-labeled data
N Lee, Z Liu, P Fung
Proceedings of the 13th International Workshop on Semantic Evaluation, 1052-1056, 2019
Cited by 28 (2019)
NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias
N Lee, Y Bang, T Yu, A Madotto, P Fung
NAACL 2022, 3131-3148, 2022
Cited by 20* (2022)
Assessing Political Prudence of Open-domain Chatbots
Y Bang, N Lee, E Ishii, A Madotto, P Fung
SIGDIAL 2021 - Safety for E2E Conversational AI, 2021
Cited by 20 (2021)
On Unifying Misinformation Detection
N Lee, BZ Li, S Wang, P Fung, H Ma, W Yih, M Khabsa
NAACL 2021, 2021
Cited by 20 (2021)
Towards mitigating LLM hallucination via self reflection
Z Ji, T Yu, Y Xu, N Lee, E Ishii, P Fung
Findings of the Association for Computational Linguistics: EMNLP 2023, 1827-1843, 2023
Cited by 19* (2023)
RHO (ρ): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding
Z Ji, Z Liu, N Lee, T Yu, B Wilie, M Zeng, P Fung
ACL 2023 Findings, 2022
Cited by 19 (2022)
AiSocrates: Towards Answering Ethical Quandary Questions
Y Bang, N Lee, T Yu, L Khalatbari, Y Xu, D Su, EJ Barezi, A Madotto, ...
AI for Social Good Workshop @AAAI2023, 2022
Cited by 7* (2022)
Understanding the Shades of Sexism in Popular TV Series.
N Lee, Y Bang, J Shin, P Fung
WNLP@ACL, 122-125, 2019
Cited by 7* (2019)
Evaluating Parameter Efficient Learning for Generation
P Xu, M Patwary, S Prabhumoye, V Adams, RJ Prenger, W Ping, N Lee, ...
EMNLP 2022, 2022
Cited by 2 (2022)
Dynamically addressing unseen rumor via continual learning
N Lee, A Madotto, Y Bang, P Fung
arXiv preprint arXiv:2104.08775, 2021
Cited by 2 (2021)
Mitigating Framing Bias with Polarity Minimization Loss
Y Bang, N Lee, P Fung
EMNLP2023 Findings, 2023
Cited by 1 (2023)
Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
Y Bang, D Chen, N Lee, P Fung
arXiv preprint arXiv:2403.18932, 2024
2024