Junyang Lin
Qwen Team, Alibaba Group & Peking University
Verified email at alibaba-inc.com - Homepage
Title · Cited by · Year
Qwen technical report
J Bai, S Bai, Y Chu, Z Cui, K Dang, X Deng, Y Fan, W Ge, Y Han, F Huang, ...
arXiv preprint arXiv:2309.16609, 2023
Cited by 1512 · 2023
Qwen-vl: A frontier large vision-language model with versatile abilities
J Bai, S Bai, S Yang, S Wang, S Tan, P Wang, J Lin, C Zhou, J Zhou
arXiv preprint arXiv:2308.12966, 2023
Cited by 1247* · 2023
OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework
P Wang, A Yang, R Men, J Lin, S Bai, Z Li, J Ma, C Zhou, J Zhou, H Yang
ICML 2022, 2022
Cited by 1079 · 2022
Cogview: Mastering text-to-image generation via transformers
M Ding, Z Yang, W Hong, W Zheng, C Zhou, D Yin, J Lin, X Zou, Z Shao, ...
Advances in neural information processing systems 34, 19822-19835, 2021
Cited by 748 · 2021
Qwen2 technical report
A Yang, B Yang, B Hui, B Zheng, B Yu, C Zhou, C Li, C Li, D Liu, F Huang, ...
arXiv preprint arXiv:2407.10671, 2024
Cited by 529 · 2024
Understanding and improving layer normalization
J Xu, X Sun, Z Zhang, G Zhao, J Lin
Advances in neural information processing systems 32, 2019
Cited by 393 · 2019
Towards knowledge-based recommender dialog system
Q Chen, J Lin, Y Zhang, M Ding, Y Cen, H Yang, J Tang
arXiv preprint arXiv:1908.05391, 2019
Cited by 268 · 2019
Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation
J Xu, X Ren, J Lin, X Sun
Proceedings of the 2018 conference on empirical methods in natural language …, 2018
Cited by 255* · 2018
Global Encoding for Abstractive Summarization
J Lin, X Sun, S Ma, Q Su
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 195 · 2018
M6: A chinese multimodal pretrainer
J Lin, R Men, A Yang, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ...
arXiv preprint arXiv:2103.00823, 2021
Cited by 178* · 2021
Explicit sparse transformer: Concentrated attention through explicit selection
G Zhao, J Lin, Z Zhang, X Ren, Q Su, X Sun
arXiv preprint arXiv:1912.11637, 2019
Cited by 137 · 2019
Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese
A Yang*, J Pan*, J Lin*, R Men, Y Zhang, J Zhou, C Zhou
arXiv preprint arXiv:2211.01335, 2022
Cited by 115 · 2022
Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution
P Wang, S Bai, S Tan, S Wang, Z Fan, J Bai, K Chen, X Liu, J Wang, W Ge, ...
arXiv preprint arXiv:2409.12191, 2024
Cited by 107 · 2024
One-peace: Exploring one general representation model toward unlimited modalities
P Wang, S Wang, J Lin, S Bai, X Zhou, J Zhou, X Wang, C Zhou
arXiv preprint arXiv:2305.11172, 2023
Cited by 106 · 2023
Imitation learning for non-autoregressive neural machine translation
B Wei, M Wang, H Zhou, J Lin, J Xie, X Sun
arXiv preprint arXiv:1906.02041, 2019
Cited by 100 · 2019
Towards knowledge-based personalized product description generation in e-commerce
Q Chen, J Lin, Y Zhang, H Yang, J Zhou, J Tang
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 98 · 2019
Expertprompting: Instructing large language models to be distinguished experts
B Xu, A Yang, J Lin, Q Wang, C Zhou, Y Zhang, Z Mao
arXiv preprint arXiv:2305.14688, 2023
Cited by 97 · 2023
Interbert: Vision-and-language interaction for multi-modal pretraining
J Lin, A Yang, Y Zhang, J Liu, J Zhou, H Yang
arXiv preprint arXiv:2003.13198, 2020
Cited by 95 · 2020
Modality competition: What makes joint training of multi-modal network fail in deep learning? (Provably)
Y Huang, J Lin, C Zhou, H Yang, L Huang
International conference on machine learning, 9226-9259, 2022
Cited by 90 · 2022
A deep reinforced sequence-to-set model for multi-label classification
P Yang, F Luo, S Ma, J Lin, X Sun
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 87* · 2019
Articles 1–20