Junlong Li
Verified email at sjtu.edu.cn - Homepage
Title · Cited by · Year
Dit: Self-supervised pre-training for document image transformer
J Li, Y Xu, T Lv, L Cui, C Zhang, F Wei
Proceedings of the 30th ACM International Conference on Multimedia, 3530-3539, 2022
Cited by 169 · 2022
Generative judge for evaluating alignment
J Li, S Sun, W Yuan, RZ Fan, H Zhao, P Liu
arXiv preprint arXiv:2310.05470, 2023
Cited by 87 · 2023
Markuplm: Pre-training of text and markup language for visually-rich document understanding
J Li, Y Xu, L Cui, F Wei
arXiv preprint arXiv:2110.08518, 2021
Cited by 62 · 2021
Self-prompting large language models for zero-shot open-domain QA
J Li, J Wang, Z Zhang, H Zhao
arXiv preprint arXiv:2212.08635, 2022
Cited by 61 · 2022
Task-specific objectives of pre-trained language models for dialogue adaptation
J Li, Z Zhang, H Zhao, X Zhou, X Zhou
arXiv preprint arXiv:2009.04984, 2020
Cited by 32* · 2020
Deepseek-v3 technical report
A Liu, B Feng, B Xue, B Wang, B Wu, C Lu, C Zhao, C Deng, C Zhang, ...
arXiv preprint arXiv:2412.19437, 2024
Cited by 28 · 2024
Multi-turn dialogue reading comprehension with pivot turns and knowledge
Z Zhang, J Li, H Zhao
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1161-1173, 2021
Cited by 26 · 2021
Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning
D Guo, D Yang, H Zhang, J Song, R Zhang, R Xu, Q Zhu, S Ma, P Wang, ...
arXiv preprint arXiv:2501.12948, 2025
Cited by 21 · 2025
Self-prompted chain-of-thought on large language models for open-domain multi-hop reasoning
J Wang, J Li, H Zhao
arXiv preprint arXiv:2310.13552, 2023
Cited by 17 · 2023
Reformatted alignment
RZ Fan, X Li, H Zou, J Li, S He, E Chern, J Hu, P Liu
arXiv preprint arXiv:2402.12219, 2024
Cited by 14 · 2024
The critique of critique
S Sun, J Li, W Yuan, R Yuan, W Li, P Liu
arXiv preprint arXiv:2401.04518, 2024
Cited by 11 · 2024
Extending LLMs' Context Window with 100 Samples
Y Zhang, J Li, P Liu
arXiv preprint arXiv:2401.07004, 2024
Cited by 9 · 2024
Dissecting Human and LLM Preferences
J Li, F Zhou, S Sun, Y Zhang, H Zhao, P Liu
arXiv preprint arXiv:2402.11296, 2024
Cited by 5 · 2024
Programming every example: Lifting pre-training data quality like experts at scale
F Zhou, Z Wang, Q Liu, J Li, P Liu
arXiv preprint arXiv:2409.17115, 2024
Cited by 3 · 2024
Diving into Self-Evolving Training for Multimodal Reasoning
W Liu, J Li, X Zhang, F Zhou, Y Cheng, J He
arXiv preprint arXiv:2412.17451, 2024
Cited by 2 · 2024
Articles 1–15