Cheng-Yu Hsieh
Verified email at cs.washington.edu - Homepage

Title
Cited by
Year
On the (In)fidelity and Sensitivity of Explanations
CK Yeh, CY Hsieh, A Suggala, DI Inouye, PK Ravikumar
Advances in Neural Information Processing Systems, 10965-10976, 2019
495 · 2019
Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes
CY Hsieh, CL Li, CK Yeh, H Nakhost, Y Fujii, A Ratner, R Krishna, CY Lee, ...
arXiv preprint arXiv:2305.02301, 2023
375 · 2023
Sugarcrepe: Fixing hackable benchmarks for vision-language compositionality
CY Hsieh, J Zhang, Z Ma, A Kembhavi, R Krishna
Advances in neural information processing systems 36, 2024
92 · 2024
A survey on programmatic weak supervision
J Zhang, CY Hsieh, Y Yu, C Zhang, A Ratner
arXiv preprint arXiv:2202.05433, 2022
90 · 2022
Evaluations and Methods for Explanation through Robustness Analysis
CY Hsieh, CK Yeh, X Liu, P Ravikumar, S Kim, S Kumar, CJ Hsieh
International Conference on Learning Representations, 2021
63 · 2021
Automatic bridge bidding using deep reinforcement learning
CK Yeh, CY Hsieh, HT Lin
IEEE Transactions on Games 10 (4), 365-377, 2018
58 · 2018
Tool documentation enables zero-shot tool-usage with large language models
CY Hsieh, SA Chen, CL Li, Y Fujii, A Ratner, CY Lee, R Krishna, T Pfister
arXiv preprint arXiv:2308.00675, 2023
55 · 2023
Outlier weighed layerwise sparsity (owl): A missing secret sauce for pruning llms to high sparsity
L Yin, Y Wu, Z Zhang, CY Hsieh, Y Wang, Y Jia, G Li, A Jaiswal, ...
arXiv preprint arXiv:2310.05175, 2023
42 · 2023
Datacomp-lm: In search of the next generation of training sets for language models
J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ...
arXiv preprint arXiv:2406.11794, 2024
26 · 2024
How sensitive are sensitivity-based explanations?
CK Yeh, CY Hsieh, AS Suggala, D Inouye, P Ravikumar
arXiv preprint arXiv:1901.09392, 52, 2019
24 · 2019
A deep model with local surrogate loss for general cost-sensitive multi-label learning
CY Hsieh, YA Lin, HT Lin
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
19 · 2018
Nemo: Guiding and contextualizing weak supervision for interactive data programming
CY Hsieh, J Zhang, A Ratner
Proceedings of the VLDB Endowment 15 (13), 4093-4105, 2022
16 · 2022
Understanding Programmatic Weak Supervision via Source-aware Influence Function
J Zhang, H Wang, CY Hsieh, A Ratner
Advances in Neural Information Processing Systems, 2022
13 · 2022
A pseudo-label method for coarse-to-fine multi-label learning with limited supervision
CY Hsieh, M Xu, G Niu, HT Lin, M Sugiyama
Learning from Limited Labeled Data Workshop @ ICLR '19, 2019
9 · 2019
Lookback lens: Detecting and mitigating contextual hallucinations in large language models using only attention maps
YS Chuang, L Qiu, CY Hsieh, R Krishna, Y Kim, J Glass
arXiv preprint arXiv:2407.07071, 2024
7 · 2024
Found in the middle: Calibrating positional attention bias improves long context utilization
CY Hsieh, YS Chuang, CL Li, Z Wang, LT Le, A Kumar, J Glass, A Ratner, ...
arXiv preprint arXiv:2406.16008, 2024
5 · 2024
Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning
A Bandari, L Yin, CY Hsieh, AK Jaiswal, T Chen, L Shen, R Krishna, S Liu
arXiv preprint arXiv:2410.07461, 2024
1 · 2024
The hard positive truth about vision-language compositionality
A Kamath, CY Hsieh, KW Chang, R Krishna
arXiv preprint arXiv:2409.17958, 2024
1 · 2024
Graph-Based Captioning: Enhancing Visual Descriptions by Interconnecting Region Captions
YG Hsieh, CY Hsieh, SY Yeh, L Béthune, HP Ansari, PKA Vasu, CL Li, ...
arXiv preprint arXiv:2407.06723, 2024
1 · 2024