Doyoung Kim
Verified email at kaist.ac.kr
Title | Cited by | Year
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
International Conference on Machine Learning, 14702-14729, 2023
Cited by 34 | 2023
FLASK: Fine-grained language model evaluation based on alignment skill sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
arXiv preprint arXiv:2307.10928, 2023
Cited by 25 | 2023
SelFee: Iterative self-revising LLM empowered by self-feedback generation
S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo
Blog post, 2023
Cited by 24 | 2023
The CoT Collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
arXiv preprint arXiv:2305.14045, 2023
Cited by 22 | 2023
Guess the instruction! Flipped learning makes language models stronger zero-shot learners
S Ye, D Kim, J Jang, J Shin, M Seo
arXiv preprint arXiv:2210.02969, 2022
Cited by 15 | 2022
Guess the instruction! Making language models stronger zero-shot learners
S Ye, D Kim, J Jang, J Shin, M Seo
arXiv preprint arXiv:2210.02969, 2022
Cited by 12 | 2022
Retrieval of soft prompt enhances zero-shot task generalization
S Ye, J Jang, D Kim, Y Jo, M Seo
arXiv preprint arXiv:2210.03029, 2022
Cited by 10 | 2022
How Well Do Large Language Models Truly Ground?
H Lee, S Joo, C Kim, J Jang, D Kim, KW On, M Seo
arXiv preprint arXiv:2311.09069, 2023
Cited by 2 | 2023
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
arXiv preprint arXiv:2404.10346, 2024
2024
Semiparametric Token-Sequence Co-Supervision
H Lee, D Kim, J Jun, S Joo, J Jang, KW On, M Seo
arXiv preprint arXiv:2403.09024, 2024
2024
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
Findings of the Association for Computational Linguistics: EMNLP 2023, 12288 …, 2023
2023