Junlong Li
Verified email at sjtu.edu.cn - Homepage
Title · Cited by · Year
DiT: Self-supervised pre-training for document image transformer
J Li, Y Xu, T Lv, L Cui, C Zhang, F Wei
Proceedings of the 30th ACM International Conference on Multimedia, 3530-3539, 2022
145 · 2022
MarkupLM: Pre-training of text and markup language for visually-rich document understanding
J Li, Y Xu, L Cui, F Wei
arXiv preprint arXiv:2110.08518, 2021
53 · 2021
Generative judge for evaluating alignment
J Li, S Sun, W Yuan, RZ Fan, H Zhao, P Liu
arXiv preprint arXiv:2310.05470, 2023
51 · 2023
Self-prompting large language models for zero-shot open-domain QA
J Li, Z Zhang, H Zhao
arXiv preprint arXiv:2212.08635, 2022
47 · 2022
Task-specific objectives of pre-trained language models for dialogue adaptation
J Li, Z Zhang, H Zhao, X Zhou, X Zhou
arXiv preprint arXiv:2009.04984, 2020
32* · 2020
Multi-turn dialogue reading comprehension with pivot turns and knowledge
Z Zhang, J Li, H Zhao
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1161-1173, 2021
21 · 2021
Self-prompted chain-of-thought on large language models for open-domain multi-hop reasoning
J Wang, J Li, H Zhao
arXiv preprint arXiv:2310.13552, 2023
14 · 2023
Reformatted alignment
RZ Fan, X Li, H Zou, J Li, S He, E Chern, J Hu, P Liu
arXiv preprint arXiv:2402.12219, 2024
9 · 2024
The critique of critique
S Sun, J Li, W Yuan, R Yuan, W Li, P Liu
arXiv preprint arXiv:2401.04518, 2024
6 · 2024
Extending LLMs' Context Window with 100 Samples
Y Zhang, J Li, P Liu
arXiv preprint arXiv:2401.07004, 2024
5 · 2024
Dissecting Human and LLM Preferences
J Li, F Zhou, S Sun, Y Zhang, H Zhao, P Liu
arXiv preprint arXiv:2402.11296, 2024
4 · 2024
Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale
F Zhou, Z Wang, Q Liu, J Li, P Liu
arXiv preprint arXiv:2409.17115, 2024
· 2024