jiahao liu
Meituan NLP
Verified email at meituan.com
Title
Cited by
Year
Topic-to-essay generation with neural networks
X Feng, M Liu, J Liu, B Qin, Y Sun, T Liu
IJCAI 2018, 4078-4084, 2018
79 · 2018
VECO: Variable and flexible cross-lingual pre-training for language understanding and generation
F Luo, W Wang, J Liu, Y Liu, B Bi, S Huang, F Huang, L Si
ACL 2021 (long paper), 2020
60* · 2020
RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank
J Liu, J Liu, Q Wang, J Wang, W Wu, Y Xian, D Zhao, K Chen, R Yan
ACL 2023 (long paper, oral), 13785–13802, 2023
15 · 2023
VIRT: Improving Representation-based Text Matching via Virtual Interaction
D Li, Y Yang, H Tang, J Liu, Q Wang, J Wang, T Xu, W Wu, E Chen
EMNLP 2022, 914-925, 2022
13* · 2022
A practical semi-parametric contextual bandit
Y Peng, M Xie, J Liu, X Meng, N Li, C Yang, T Yao, R Jin
IJCAI 2019, 3246-3252, 2019
11 · 2019
A planning based framework for essay generation
B Qin, D Tang, X Geng, D Ning, J Liu, T Liu
arXiv preprint arXiv:1512.05919, 2015
8 · 2015
mCL-NER: Cross-Lingual Named Entity Recognition via Multi-view Contrastive Learning
Y Mo, J Yang, J Liu, Q Wang, R Chen, J Wang, Z Li
AAAI 2024, 2023
5 · 2023
Multi-Task Transformer with Relation-Attention and Type-Attention for Named Entity Recognition
Y Mo, H Tang, J Liu, Q Wang, Z Xu, J Wang, W Wu, Z Li
ICASSP 2023, 2023
5 · 2023
Lifting the Curse of Capacity Gap in Distilling Language Models
C Zhang, Y Yang, J Liu, J Wang, Y Xian, B Wang, D Song
ACL 2023 (long paper), 4535–4553, 2023
5 · 2023
PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models
Z Gong, J Liu, Q Wang, Y Yang, J Wang, W Wu, Y Xian, D Zhao, R Yan
Findings of ACL 2023, 8065–8079, 2023
4 · 2023
Minimal Distillation Schedule for Extreme Language Model Compression
C Zhang, Y Yang, Q Wang, J Liu, J Wang, W Wu, D Song
Findings of EACL 2024, 1378-1394, 2024
2* · 2024
GNN-encoder: Learning a Dual-encoder Architecture via Graph Neural Networks for Dense Passage Retrieval
J Liu, J Liu, Y Yang, J Wang, W Wu, D Zhao, R Yan
Findings of EMNLP 2022, 2022
1 · 2022
Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration
P Wu, J Liu, Z Gong, Q Wang, J Li, J Wang, X Cai, D Zhao
arXiv preprint arXiv:2404.12022, 2024
2024
What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of Perturbation
Z Gong, J Liu, J Wang, X Cai, D Zhao, R Yan
AAAI 2024, 2024
2024
Improving Input-label Mapping with Demonstration Replay for In-context Learning
Z Gong, J Liu, Q Wang, J Wang, X Cai, D Zhao, R Yan
Findings of EMNLP 2023, 2023
2023
Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression
J Liu, J Liu, Q Wang, J Wang, X Cai, D Zhao, RL Wang, R Yan
Findings of EMNLP 2023, 2023
2023
Deep Learning Based Document Theme Analysis for Composition Generation
J Liu, C Sun, B Qin
Chinese Computational Linguistics and Natural Language Processing Based on …, 2017
2017
Articles 1–17