Yejin Bang
Ph.D. Candidate, HKUST
Survey of hallucination in natural language generation
Z Ji, N Lee, R Frieske, T Yu, D Su, Y Xu, E Ishii, YJ Bang, A Madotto, ...
ACM Computing Surveys 55 (12), 1-38, 2023
A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity
Y Bang, S Cahyawijaya, N Lee, W Dai, D Su, B Wilie, H Lovenia, Z Ji, ...
AACL 2023, 2023
Towards Few-Shot Fact-Checking via Perplexity
N Lee, Y Bang, A Madotto, M Khabsa, P Fung
NAACL 2021, 2021
XPersona: Evaluating multilingual personalized chatbot
Z Lin, Z Liu, GI Winata, S Cahyawijaya, A Madotto, ...
Workshop on NLP4ConvAI @ ACL 2021, 2020
Model Generalization on COVID-19 Fake News Detection
Y Bang, E Ishii, S Cahyawijaya, Z Ji, P Fung
CONSTRAINT Workshop, Co-located with AAAI 2021, 2021
The Adapter-Bot: All-In-One Controllable Conversational Model
A Madotto, Z Lin, Y Bang, P Fung
AAAI-2021 Demo Track, 2020
NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias
N Lee, Y Bang, T Yu, A Madotto, P Fung
NAACL 2022, 2022
Assessing Political Prudence of Open-domain Chatbots
Y Bang, N Lee, E Ishii, A Madotto, P Fung
SIGDIAL 2021 - Safety for E2E Conversational AI, 2021
Weakly-supervised multi-task learning for multimodal affect recognition
W Dai, S Cahyawijaya, Y Bang, P Fung
arXiv preprint arXiv:2104.11560, 2021
Towards Answering Open-ended Ethical Quandary Questions
Y Bang, N Lee, T Yu, L Khalatbari, Y Xu, S Cahyawijaya, D Su, B Wilie, ...
AI for Social Good Workshop @AAAI 2023, 2023
Casual conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness
C Hazirbas, Y Bang, T Yu, P Assar, B Porgali, V Albiero, S Hermanek, ...
arXiv preprint arXiv:2211.05809, 2022
Understanding the shades of sexism in popular TV series
N Lee, Y Bang, J Shin, P Fung
Proceedings of the 2019 Workshop on Widening NLP, 122-125, 2019
Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values
Y Bang, T Yu, A Madotto, Z Lin, M Diab, P Fung
TrustNLP Workshop @ ACL 2023, 2022
Dynamically addressing unseen rumor via continual learning
N Lee, A Madotto, Y Bang, P Fung
arXiv preprint arXiv:2104.08775, 2021
Mitigating Framing Bias with Polarity Minimization Loss
Y Bang, N Lee, P Fung
EMNLP 2023, 2023
Survey of Social Bias in Vision-Language Models
N Lee, Y Bang, H Lovenia, S Cahyawijaya, W Dai, P Fung
arXiv preprint arXiv:2309.14381, 2023
The Pyramid of Captions
D Chen, S Cahyawijaya, E Ishii, HS Chan, Y Bang, P Fung
arXiv preprint arXiv:2405.00485, 2024
High-Dimension Human Value Representation in Large Language Models
S Cahyawijaya, D Chen, Y Bang, L Khalatbari, B Wilie, Z Ji, E Ishii, ...
arXiv preprint arXiv:2404.07900, 2024
Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
Y Bang, D Chen, N Lee, P Fung
arXiv preprint arXiv:2403.18932, 2024
Learn What NOT to Learn: Towards Generative Safety in Chatbots
L Khalatbari, Y Bang, D Su, W Chung, S Ghadimi, H Sameti, P Fung
arXiv preprint arXiv:2304.11220, 2023