| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| A survey on model-based reinforcement learning | FM Luo, T Xu, H Lai, XH Chen, W Zhang, Y Yu | Science China Information Sciences 67 (2), 121101 | 118* | 2024 |
| Error bounds of imitating policies and environments | T Xu, Z Li, Y Yu | Advances in Neural Information Processing Systems 33, 15737-15749 | 108 | 2020 |
| Error bounds of imitating policies and environments for reinforcement learning | T Xu, Z Li, Y Yu | IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (10), 6968 … | 40 | 2021 |
| ReMax: A simple, effective, and efficient reinforcement learning method for aligning large language models | Z Li, T Xu, Y Zhang, Z Lin, Y Yu, R Sun, ZQ Luo | Forty-first International Conference on Machine Learning | 33* | 2024 |
| Rethinking ValueDice: Does it really improve performance? | Z Li, T Xu, Y Yu, ZQ Luo | arXiv preprint arXiv:2202.02468 | 14 | 2022 |
| Reward-consistent dynamics models are strongly generalizable for offline reinforcement learning | FM Luo, T Xu, X Cao, Y Yu | arXiv preprint arXiv:2310.05422 | 12 | 2023 |
| Provably efficient adversarial imitation learning with unknown transitions | T Xu, Z Li, Y Yu, ZQ Luo | Uncertainty in Artificial Intelligence, 2367-2378 | 9 | 2023 |
| Policy optimization in RLHF: The impact of out-of-preference data | Z Li, T Xu, Y Yu | arXiv preprint arXiv:2312.10584 | 8 | 2023 |
| Imitation learning from imperfection: Theoretical justifications and algorithms | Z Li, T Xu, Z Qin, Y Yu, ZQ Luo | Advances in Neural Information Processing Systems 36 | 7 | 2024 |
| Understanding adversarial imitation learning in small sample regime: A stage-coupled analysis | T Xu, Z Li, Y Yu, ZQ Luo | arXiv preprint arXiv:2208.01899 | 6 | 2022 |
| On generalization of adversarial imitation learning and beyond | T Xu, Z Li, Y Yu, ZQ Luo | arXiv preprint arXiv:2106.10424 | 5 | 2021 |
| Model gradient: Unified model and policy learning in model-based reinforcement learning | C Jia, F Zhang, T Xu, JC Pang, Z Zhang, Y Yu | Frontiers of Computer Science 18 (4), 184339 | 4 | 2024 |
| Policy rehearsing: Training generalizable policies for reinforcement learning | C Jia, C Gao, H Yin, F Zhang, XH Chen, T Xu, L Yuan, Z Zhang, ZH Zhou, ... | The Twelfth International Conference on Learning Representations | 4 | 2024 |
| Theoretical analysis of offline imitation with supplementary dataset | Z Li, T Xu, Y Yu, ZQ Luo | arXiv preprint arXiv:2301.11687 | 2 | 2023 |
| Nearly minimax optimal adversarial imitation learning with known and unknown transitions | T Xu, Z Li, Y Yu | arXiv preprint arXiv:2106.10424 | 2 | 2021 |
| Offline imitation learning without auxiliary high-quality behavior data | JJ Shao, HS Shi, T Xu, LZ Guo, Y Yu, YF Li | | 2 | |
| A note on target Q-learning for solving finite MDPs with a generative oracle | Z Li, T Xu, Y Yu | arXiv preprint arXiv:2203.11489 | 1 | 2022 |
| Sparsity prior regularized Q-learning for sparse action tasks | JC Pang, T Xu, SY Jiang, YR Liu, Y Yu | arXiv preprint arXiv:2105.08666 | 1 | 2021 |
| When is RL better than DPO in RLHF? A representation and optimization perspective | Z Li, T Xu, Y Yu | The Second Tiny Papers Track at ICLR 2024 | 1 | 2024 |
| Provably and practically efficient adversarial imitation learning with general function approximation | T Xu, Z Zhang, R Chen, Y Sun, Y Yu | arXiv preprint arXiv:2411.00610 | | 2024 |