OPT: Omni-Perception Pre-Trainer for Cross-Modal Understanding and Generation. J Liu, X Zhu, F Liu, L Guo, Z Zhao, M Sun, W Wang, H Lu, S Zhou, J Zhang, et al. arXiv preprint arXiv:2107.00249, 2021.
VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset. S Chen, H Li, Q Wang, Z Zhao, M Sun, X Zhu, J Liu. Advances in Neural Information Processing Systems 36, 2024.
ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst. Z Zhao, L Guo, T Yue, S Chen, S Shao, X Zhu, Z Yuan, J Liu. arXiv preprint arXiv:2305.16103, 2023.
VL-Mamba: Exploring State Space Models for Multimodal Learning. Y Qiao, Z Yu, L Guo, S Chen, Z Zhao, M Sun, Q Wu, J Liu. arXiv preprint arXiv:2403.13600, 2024.
MM21 Pre-Training for Video Understanding Challenge: Video Captioning with Pretraining Techniques. S Chen, X Zhu, D Hao, W Liu, J Liu, Z Zhao, L Guo, J Liu. Proceedings of the 29th ACM International Conference on Multimedia, 4853-4857, 2021.
MAMO: Fine-Grained Vision-Language Representations Learning with Masked Multimodal Modeling. Z Zhao, L Guo, X He, S Shao, Z Yuan, J Liu. Proceedings of the 46th International ACM SIGIR Conference on Research and …, 2023.
MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning. Z Zhao, L Guo, X He, S Shao, Z Yuan, J Liu. arXiv preprint arXiv:2210.04183, 2022.
SC-Tune: Unleashing Self-Consistent Referential Comprehension in Large Vision Language Models. T Yue, J Cheng, L Guo, X Dai, Z Zhao, X He, G Xiong, Y Lv, J Liu. arXiv preprint arXiv:2403.13263, 2024.