Jiafei Lyu
PhD in Control Science and Engineering, Tsinghua Shenzhen International Graduate School
Verified email at mails.tsinghua.edu.cn
Title · Cited by · Year
Mildly conservative Q-learning for offline reinforcement learning
J Lyu, X Ma, X Li, Z Lu
NeurIPS 2022 (Spotlight), 2022
80 · 2022
Nuclear power plants with artificial intelligence in industry 4.0 era: Top-level design and current applications—A systemic review
C Lu, J Lyu, L Zhang, A Gong, Y Fan, J Yan, X Li
IEEE Access 8, 194315-194332, 2020
55 · 2020
Efficient Continuous Control with Double Actors and Regularized Critics
J Lyu, X Ma, J Yan, X Li
Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI-22, Oral), 2021
46 · 2021
Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination
J Lyu, X Li, Z Lu
NeurIPS 2022 (Spotlight), 2022
16 · 2022
Uncertainty-driven Trajectory Truncation for Model-based Offline Reinforcement Learning
J Zhang, J Lyu, X Ma, J Yan, J Yang, L Wan, X Li
ECAI 2023 (Oral); ICRA 2023 L-DOD Workshop, 2023
13* · 2023
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
K Yang, J Tao, J Lyu, C Ge, J Chen, Q Li, W Shen, X Zhu, X Li
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024), 2023
12 · 2023
Bias-reduced multi-step hindsight experience replay for efficient multi-goal reinforcement learning
R Yang, J Lyu, Y Yang, J Yan, F Luo, D Luo, L Li, X Li
arXiv preprint arXiv:2102.12962, 2021
8* · 2021
Normalization Enhances Generalization in Visual Reinforcement Learning
L Li, J Lyu, G Ma, Z Wang, Z Yang, X Li, Z Li
AAMAS 2024 (Oral); Generalization in Planning Workshop@NeurIPS 2023, 2023
6 · 2023
Value Activation for Bias Alleviation: Generalized-activated Deep Double Deterministic Policy Gradients
J Lyu, Y Yang, J Yan, X Li
Neurocomputing, 2021
5 · 2021
State Advantage Weighting for Offline RL
J Lyu, A Gong, L Wan, Z Lu, X Li
ICLR 2023 Tiny Paper; 3rd Offline Reinforcement Learning Workshop at NeurIPS 2022, 2022
4 · 2022
Exploration and Anti-Exploration with Distributional Random Network Distillation
K Yang, J Tao, J Lyu, X Li
International Conference on Machine Learning (ICML 2024), 2024
3 · 2024
PEARL: Zero-shot Cross-task Preference Alignment and Robust Reward Learning for Robotic Manipulation
R Liu, Y Du, F Bai, J Lyu, X Li
International Conference on Machine Learning (ICML 2024), 2024
3* · 2024
Understanding what affects generalization gap in visual reinforcement learning: Theory and empirical evidence
J Lyu, L Wan, X Li, Z Lu
Journal of Artificial Intelligence Research, 2024
2 · 2024
PRAG: Periodic regularized action gradient for efficient continuous control
X Li, Z Qiao, A Gong, J Lyu, C Yu, J Yan, X Li
Pacific Rim International Conference on Artificial Intelligence, 106-119, 2022
2 · 2022
Cross-Domain Policy Adaptation by Capturing Representation Mismatch
J Lyu, C Bai, J Yang, Z Lu, X Li
International Conference on Machine Learning (ICML 2024), 2024
1 · 2024
Towards understanding how to reduce generalization gap in visual reinforcement learning
J Lyu, L Wan, X Li, Z Lu
Proceedings of the 23rd International Conference on Autonomous Agents and …, 2024
1 · 2024
The primacy bias in Model-based RL
Z Qiao, J Lyu, X Li
arXiv preprint arXiv:2310.15017, 2023
1 · 2023
Multi-Step Hindsight Experience Replay with Bias Reduction for Efficient Multi-Goal Reinforcement Learning
Y Yang, R Yang, J Lyu, J Yan, F Luo, D Luo, X Li, L Li
2023 International Conference on Frontiers of Robotics and Software …, 2023
1 · 2023
Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse
J Lyu, L Wan, Z Lu, X Li
Information Sciences, 2023
1 · 2023
A two-stage reinforcement learning-based approach for multi-entity task allocation
A Gong, K Yang, J Lyu, X Li
Engineering Applications of Artificial Intelligence 136, 108906, 2024
2024