DiT: Self-supervised pre-training for document image transformer. J Li, Y Xu, T Lv, L Cui, C Zhang, F Wei. Proceedings of the 30th ACM International Conference on Multimedia, 3530-3539, 2022. Cited by 145.

MarkupLM: Pre-training of text and markup language for visually-rich document understanding. J Li, Y Xu, L Cui, F Wei. arXiv preprint arXiv:2110.08518, 2021. Cited by 53.

Generative judge for evaluating alignment. J Li, S Sun, W Yuan, RZ Fan, H Zhao, P Liu. arXiv preprint arXiv:2310.05470, 2023. Cited by 51.

Self-prompting large language models for zero-shot open-domain QA. J Li, Z Zhang, H Zhao. arXiv preprint arXiv:2212.08635, 2022. Cited by 47.

Task-specific objectives of pre-trained language models for dialogue adaptation. J Li, Z Zhang, H Zhao, X Zhou, X Zhou. arXiv preprint arXiv:2009.04984, 2020. Cited by 32*.

Multi-turn dialogue reading comprehension with pivot turns and knowledge. Z Zhang, J Li, H Zhao. IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1161-1173, 2021. Cited by 21.

Self-prompted chain-of-thought on large language models for open-domain multi-hop reasoning. J Wang, J Li, H Zhao. arXiv preprint arXiv:2310.13552, 2023. Cited by 14.

Reformatted alignment. RZ Fan, X Li, H Zou, J Li, S He, E Chern, J Hu, P Liu. arXiv preprint arXiv:2402.12219, 2024. Cited by 9.

The critique of critique. S Sun, J Li, W Yuan, R Yuan, W Li, P Liu. arXiv preprint arXiv:2401.04518, 2024. Cited by 6.

Extending LLMs' context window with 100 samples. Y Zhang, J Li, P Liu. arXiv preprint arXiv:2401.07004, 2024. Cited by 5.

Dissecting human and LLM preferences. J Li, F Zhou, S Sun, Y Zhang, H Zhao, P Liu. arXiv preprint arXiv:2402.11296, 2024. Cited by 4.

Programming every example: Lifting pre-training data quality like experts at scale. F Zhou, Z Wang, Q Liu, J Li, P Liu. arXiv preprint arXiv:2409.17115, 2024.