Jongwoo Ko
Verified email at kaist.ac.kr - Homepage
Title · Cited by · Year
Fine samples for learning with noisy labels
T Kim*, J Ko*, S Cho, JH Choi, SY Yun
Advances in Neural Information Processing Systems 34, 24137-24149, 2021
83 · 2021
CUDA: Curriculum of Data Augmentation for Long-tailed Recognition
S Ahn*, J Ko*, SY Yun
International Conference on Learning Representations 11, 2023
17 · 2023
Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study
KY Son*, J Ko*, E Kim, SY Lee, MJ Kim, J Han, E Shin, TY Chung, DH Lim
Ophthalmology Science 2 (2), 100147, 2022
16 · 2022
Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding
S Bae*, J Ko*, H Song, SY Yun
arXiv preprint arXiv:2310.05424, 2023
13 · 2023
Self-Contrastive Learning
S Bae*, S Kim*, J Ko, G Lee, S Noh, SY Yun
arXiv preprint arXiv:2106.15499, 2021
7* · 2021
A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise
J Ko*, B Yi*, SY Yun
Proceedings of the AAAI Conference on Artificial Intelligence 37 (7), 8325-8333, 2023
3* · 2023
Revisiting intermediate layer distillation for compressing language models: An overfitting perspective
J Ko, S Park, M Jeong, S Hong, E Ahn, DS Chang, SY Yun
arXiv preprint arXiv:2302.01530, 2023
3 · 2023
Deep Gaussian process models for integrating multifidelity experiments with nonstationary relationships
J Ko, H Kim
IISE Transactions 54 (7), 686-698, 2022
3 · 2022
DistiLLM: Towards Streamlined Distillation for Large Language Models
J Ko, S Kim, T Chen, SY Yun
arXiv preprint arXiv:2402.03898, 2024
2 · 2024
Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks
J Oh*, J Ko*, SY Yun
Empirical Methods in Natural Language Processing 2022, 6747-6754, 2022
2 · 2022
NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models
J Ko*, S Park*, Y Kim, S Ahn, DS Chang, E Ahn, SY Yun
arXiv preprint arXiv:2310.10054, 2023
1 · 2023
Efficient Utilization of Pre-trained Model for Learning with Noisy Labels
J Ko*, S Ahn*, SY Yun
ICLR 2023 Workshop on Pitfalls of limited data and computation for …
1*
Improving Adaptability and Generalizability of Efficient Transfer Learning for Vision-Language Models
Y Yang*, J Ko*, SY Yun
arXiv preprint arXiv:2311.15569, 2023
2023
Fine-tuning Pre-trained Models for Robustness Under Noisy Labels
S Ahn, S Kim, J Ko, SY Yun
arXiv preprint arXiv:2310.17668, 2023
2023
Improving Generalization in Reinforcement Learning via Distribution-Aware Batch Normalization
J Ko, S Kim, J Kim, S Park, S Bae, SY Yun
Korean Institute of Information Scientists and Engineers (KIISE) Conference Proceedings, 795-797, 2022
2022
Client Sampling Algorithm in Federated Learning via Combinatorial Averaging and Multi-Armed Bandits
S Bae, T Kim, S Ahn, S Kim, J Ko, SY Yun
Korean Institute of Information Scientists and Engineers (KIISE) Conference Proceedings, 1088-1090, 2022
2022