Ronghao Dang
DAMO NLP Lab
Verified email at tongji.edu.cn
Title · Cited by · Year
Unbiased directed object attention graph for object navigation
R Dang, Z Shi, L Wang, Z He, C Liu, Q Chen
Proceedings of the 30th ACM International Conference on Multimedia, 3617-3627, 2022
Cited by 29 · 2022
RES-StS: Referring expression speaker via self-training with scorer for goal-oriented vision-language navigation
L Wang, Z He, R Dang, H Chen, C Liu, Q Chen
IEEE Transactions on Circuits and Systems for Video Technology 33 (7), 3441-3454, 2023
Cited by 23 · 2023
Search for or navigate to? Dual adaptive thinking for object navigation
R Dang, L Wang, Z He, S Su, J Tang, C Liu, Q Chen
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 20 · 2023
Vision-and-language navigation via causal learning
L Wang, Z He, R Dang, M Shen, C Liu, Q Chen
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
Cited by 18 · 2024
A dual semantic-aware recurrent global-adaptive network for vision-and-language navigation
L Wang, Z He, J Tang, R Dang, N Wang, C Liu, Q Chen
arXiv preprint arXiv:2305.03602, 2023
Cited by 18 · 2023
Chain of ideas: Revolutionizing research via novel idea development with llm agents
L Li, W Xu, J Guo, R Zhao, X Li, Y Yuan, B Zhang, Y Jiang, Y Xin, R Dang, ...
arXiv preprint arXiv:2410.13185, 2024
Cited by 14 · 2024
InstructDET: Diversifying referring object detection with generalized instructions
R Dang, J Feng, H Zhang, C Ge, L Song, L Gong, C Liu, Q Chen, F Zhu, ...
arXiv preprint arXiv:2310.05136, 2023
Cited by 12 · 2023
Fine-grained spatiotemporal motion alignment for contrastive video representation learning
M Zhu, X Lin, R Dang, C Liu, Q Chen
Proceedings of the 31st ACM International Conference on Multimedia, 4725-4736, 2023
Cited by 10 · 2023
Multiple thinking achieving meta-ability decoupling for object navigation
R Dang, L Chen, L Wang, Z He, C Liu, Q Chen
International Conference on Machine Learning, 6855-6872, 2023
Cited by 10 · 2023
CLIPose: Category-level object pose estimation with pre-trained vision-language knowledge
X Lin, M Zhu, R Dang, G Zhou, S Shu, F Lin, C Liu, Q Chen
IEEE Transactions on Circuits and Systems for Video Technology, 2024
Cited by 8 · 2024
Learning depth representation from rgb-d videos by time-aware contrastive pre-training
Z He, L Wang, R Dang, S Li, Q Yan, C Liu, Q Chen
IEEE Transactions on Circuits and Systems for Video Technology 34 (6), 4143-4158, 2023
Cited by 8 · 2023
Channel attention and multi-scale graph neural networks for skeleton-based action recognition
R Dang, C Liu, M Liu, Q Chen
AI Communications 35 (3), 187-205, 2022
Cited by 8 · 2022
Bionic body wave control for an eel-like robot based on segmented soft actuator array
R Dang, H Gong, Y Wang, T Huang, Z Shi, X Zhang, Y Wu, Y Sun, P Qi
2021 40th Chinese Control Conference (CCC), 4261-4266, 2021
Cited by 8 · 2021
MoTE: Reconciling generalization with specialization for visual-language to video knowledge transfer
M Zhu, Z Wang, M Hu, R Dang, X Lin, X Zhou, C Liu, Q Chen
arXiv preprint arXiv:2410.10589, 2024
Cited by 2 · 2024
Rotation-equivariant correspondence matching based on a dual-activation mixer
S Su, R Dang, R Fan, C Liu, Q Chen
Neurocomputing 568, 127053, 2024
Cited by 1 · 2024
Contrastive Feedback Vision-Language for 3D Skeleton-Based Action Recognition
Q Zeng, R Dang, X Zhou, C Liu, Q Chen
IEEE Transactions on Multimedia, 2025
2025
ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark
R Dang, Y Yuan, W Zhang, Y Xin, B Zhang, L Li, L Wang, Q Zeng, X Li, ...
arXiv preprint arXiv:2501.05031, 2025
2025
Enhanced Language-guided Robot Navigation with Panoramic Semantic Depth Perception and Cross-modal Fusion
L Wang, J Tang, Z He, R Dang, C Liu, Q Chen
2024 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2024
2024
Causality-based Cross-Modal Representation Learning for Vision-and-Language Navigation
L Wang, Z He, R Dang, H Chen, C Liu, Q Chen
arXiv preprint arXiv:2403.03405, 2024
2024
DL-PCN: Differential learning and parallel convolutional network for action recognition
Q Zeng, R Dang, Q Fang, C Liu, Q Chen
AI Communications 36 (3), 235-249, 2023
2023