X Li, F Yin, Z Sun, X Li, A Yuan, D Chai, M Zhou, J Li. Entity-relation extraction as multi-turn question answering. arXiv preprint arXiv:1905.05529, 2019. Cited by 391.

Y Meng, W Wu, F Wang, X Li, P Nie, F Yin, M Li, Q Han, X Sun, J Li. Glyce: Glyph-vectors for Chinese character representations. Advances in Neural Information Processing Systems 32, 2019. Cited by 207.

F Yin, Q Long, T Meng, KW Chang. On the robustness of language encoders against grammatical errors. arXiv preprint arXiv:2005.05683, 2020. Cited by 33.

F Yin, Z Shi, CJ Hsieh, KW Chang. On the sensitivity and stability of model interpretations in NLP. arXiv preprint arXiv:2104.08782, 2021. Cited by 30.

Z Shi, Y Wang, F Yin, X Chen, KW Chang, CJ Hsieh. Red teaming language model detectors with language models. Transactions of the Association for Computational Linguistics 12, 174-189, 2024. Cited by 28.

F Yin, J Vig, P Laban, S Joty, C Xiong, CSJ Wu. Did you read the instructions? Rethinking the effectiveness of task definitions in instruction learning. ACL 2023. Cited by 23.

H Bansal, N Singhi, Y Yang, F Yin, A Grover, KW Chang. CleanCLIP: Mitigating data poisoning attacks in multimodal contrastive learning. Proceedings of the IEEE/CVF International Conference on Computer Vision, 112-123, 2023. Cited by 20.

D Yin, X Liu, F Yin, M Zhong, H Bansal, J Han, KW Chang. Dynosaur: A dynamic growth paradigm for instruction-tuning data curation. arXiv preprint arXiv:2305.14327, 2023. Cited by 19.

PN Kung, F Yin, D Wu, KW Chang, N Peng. Active instruction tuning: Improving cross-task generalization by training on prompt sensitive tasks. arXiv preprint arXiv:2311.00288, 2023. Cited by 10.

C Zheng, F Yin, H Zhou, F Meng, J Zhou, KW Chang, M Huang, N Peng. Prompt-driven LLM safeguarding via directed representation optimization. arXiv preprint arXiv:2401.18018, 2024. Cited by 7.

F Yin, Y Li, CJ Hsieh, KW Chang. ADDMU: Detection of far-boundary adversarial examples with data and model uncertainty estimation. arXiv preprint arXiv:2210.12396, 2022. Cited by 5.

F Yin, J Srinivasa, KW Chang. Characterizing truthfulness in large language model generations with local intrinsic dimension. arXiv preprint arXiv:2402.18048, 2024. Cited by 1.

T Yan, F Wang, JY Huang, W Zhou, F Yin, A Galstyan, W Yin, M Chen. Contrastive instruction tuning. arXiv preprint arXiv:2402.11138, 2024. Cited by 1.

C Yang, F Yin, H He, KW Chang, X Ma, B Xiang. Efficient Shapley values estimation by amortization for text classification. arXiv preprint arXiv:2305.19998, 2023. Cited by 1.

C Zheng, F Yin, H Zhou, F Meng, J Zhou, KW Chang, M Huang, N Peng. On prompt-driven safeguarding for large language models. ICLR 2024 Workshop on Secure and Trustworthy Large Language Models, 2024.