| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Complexity-based prompting for multi-step reasoning | Y Fu, H Peng, A Sabharwal, P Clark, T Khot | The Eleventh International Conference on Learning Representations | 190 | 2022 |
| Decomposed prompting: A modular approach for solving complex tasks | T Khot, H Trivedi, M Finlayson, Y Fu, K Richardson, P Clark, A Sabharwal | arXiv preprint arXiv:2210.02406 | 184 | 2022 |
| C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models | Y Huang, Y Bai, Z Zhu, J Zhang, J Zhang, T Su, J Liu, C Lv, Y Zhang, Y Fu, ... | Advances in Neural Information Processing Systems 36 | 168* | 2024 |
| Specializing smaller language models towards multi-step reasoning | Y Fu, H Peng, L Ou, A Sabharwal, T Khot | International Conference on Machine Learning, 10421-10430 | 96 | 2023 |
| Paraphrase generation with latent bag of words | Y Fu, Y Feng, JP Cunningham | Advances in Neural Information Processing Systems 32 | 87 | 2019 |
| MAmmoTH: Building math generalist models through hybrid instruction tuning | X Yue, X Qu, G Zhang, Y Fu, W Huang, H Sun, Y Su, W Chen | arXiv preprint arXiv:2309.05653 | 83 | 2023 |
| Improving language model negotiation with self-play and in-context learning from AI feedback | Y Fu, H Peng, T Khot, M Lapata | arXiv preprint arXiv:2305.10142 | 71 | 2023 |
| Scheduling and routing models for food rescue and delivery operations | DJ Nair, H Grzybowska, Y Fu, VV Dixit | Socio-Economic Planning Sciences 63, 18-32 | 57 | 2018 |
| Noisy-labeled NER with confidence estimation | K Liu, Y Fu, C Tan, M Chen, N Zhang, S Huang, S Gao | arXiv preprint arXiv:2104.04318 | 55 | 2021 |
| Prototypical representation learning for relation extraction | N Ding, X Wang, Y Fu, G Xu, R Wang, P Xie, Y Shen, F Huang, HT Zheng, ... | arXiv preprint arXiv:2103.11647 | 48 | 2021 |
| Nested named entity recognition with partially-observed TreeCRFs | Y Fu, C Tan, M Chen, S Huang, F Huang | Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12839 … | 37 | 2021 |
| Probing BERT in hyperbolic spaces | B Chen, Y Fu, G Xu, P Xie, C Tan, M Chen, L Jing | arXiv preprint arXiv:2104.03869 | 36 | 2021 |
| Natural answer generation with heterogeneous memory | Y Fu, Y Feng | Proceedings of the 2018 Conference of the North American Chapter of the … | 36 | 2018 |
| How does GPT obtain its ability? Tracing emergent abilities of language models to their sources | Y Fu, H Peng, T Khot | https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent … | 32 | 2022 |
| Chain-of-Thought Hub: A continuous effort to measure large language models' reasoning performance | Y Fu, L Ou, M Chen, Y Wan, H Peng, T Khot | arXiv preprint arXiv:2305.17306 | 30* | 2023 |
| To repeat or not to repeat: Insights from scaling LLM under token-crisis | F Xue, Y Fu, W Zhou, Z Zheng, Y You | Advances in Neural Information Processing Systems 36 | 23 | 2024 |
| Data-to-text generation with variational sequential planning | R Puduppully, Y Fu, M Lapata | Transactions of the Association for Computational Linguistics 10, 697-715 | 21 | 2022 |
| Rethinking text attribute transfer: A lexical analysis | Y Fu, H Zhou, J Chen, L Li | arXiv preprint arXiv:1909.12335 | 18 | 2019 |
| Latent template induction with Gumbel-CRFs | Y Fu, C Tan, B Bi, M Chen, Y Feng, A Rush | Advances in Neural Information Processing Systems 33, 20259-20271 | 16 | 2020 |
| OpenMoE: An early effort on open mixture-of-experts language models | F Xue, Z Zheng, Y Fu, J Ni, Z Zheng, W Zhou, Y You | arXiv preprint arXiv:2402.01739 | 9 | 2024 |