Kang Min Yoo
NAVER Hyperscale AI & AI Lab
Verified email at navercorp.com
Title · Cited by · Year
Learning to compose task-specific tree structures
J Choi, KM Yoo, S Lee
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
220* · 2018
Self-guided contrastive learning for BERT sentence representations
T Kim, KM Yoo, S Lee
arXiv preprint arXiv:2106.07345, 2021
164 · 2021
GPT3Mix: Leveraging large-scale language models for text augmentation
KM Yoo, D Park, J Kang, SW Lee, W Park
arXiv preprint arXiv:2104.08826, 2021
130 · 2021
TaleBrush: Sketching stories with generative pretrained language models
JJY Chung, W Kim, KM Yoo, H Lee, E Adar, M Chang
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022
127* · 2022
What changes can large-scale language models bring? Intensive study on HyperCLOVA: billions-scale Korean generative pretrained transformers
B Kim, HS Kim, SW Lee, G Lee, D Kwak, DH Jeon, S Park, S Kim, S Kim, ...
arXiv preprint arXiv:2109.04650, 2021
86 · 2021
Data augmentation for spoken language understanding via joint variational generation
KM Yoo, Y Shin, S Lee
Proceedings of the AAAI conference on artificial intelligence 33 (01), 7402-7409, 2019
80 · 2019
DialogBERT: Discourse-aware response generation via learning to recover and rank utterances
X Gu, KM Yoo, JW Ha
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12911 …, 2021
74 · 2021
Ground-truth labels matter: A deeper look into input-label demonstrations
KM Yoo, J Kim, HJ Kim, H Cho, H Jo, SW Lee, S Lee, T Kim
arXiv preprint arXiv:2205.12685, 2022
53 · 2022
Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization
J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee
Advances in Neural Information Processing Systems 36, 2024
26 · 2024
Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator
HJ Kim, H Cho, J Kim, T Kim, KM Yoo, S Lee
arXiv preprint arXiv:2206.08082, 2022
22 · 2022
AlphaTuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models
SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ...
arXiv preprint arXiv:2210.03858, 2022
21 · 2022
Response generation with context-aware prompt learning
X Gu, KM Yoo, SW Lee
arXiv preprint arXiv:2111.02643, 2021
20 · 2021
Leveraging class hierarchy in fashion classification
H Cho, C Ahn, KM Yoo, J Seol, S Lee
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
19 · 2019
Aligning Large Language Models through Synthetic Feedback
S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo
arXiv preprint arXiv:2305.13735, 2023
18 · 2023
Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners
H Cho, HJ Kim, J Kim, SW Lee, S Lee, KM Yoo, T Kim
Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12709 …, 2023
13 · 2023
Variational hierarchical dialog autoencoder for dialog state tracking data augmentation
KM Yoo, H Lee, F Dernoncourt, T Bui, W Chang, S Lee
arXiv preprint arXiv:2001.08604, 2020
13 · 2020
Utterance generation with variational auto-encoder for slot filling in spoken language understanding
Y Shin, KM Yoo, SG Lee
IEEE Signal Processing Letters 26 (3), 505-509, 2019
12 · 2019
Mutual information divergence: A unified metric for multimodal generative models
JH Kim, Y Kim, J Lee, KM Yoo, SW Lee
Advances in Neural Information Processing Systems 35, 35072-35086, 2022
10 · 2022
Generating Information-Seeking Conversations from Unlabeled Documents
G Kim, S Kim, KM Yoo, J Kang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
10* · 2022
Slot Filling with Delexicalized Sentence Generation.
Y Shin, KM Yoo, S Lee
INTERSPEECH, 2082-2086, 2018
10 · 2018
Articles 1–20