Benjamin Eysenbach
CMU, Google
Verified email at google.com - Homepage
Title · Cited by · Year
Diversity is all you need: Learning skills without a reward function
B Eysenbach, A Gupta, J Ibarz, S Levine
International Conference on Learning Representations, 2019
305 · 2019
Clustervision: Visual supervision of unsupervised clustering
BC Kwon, B Eysenbach, J Verma, K Ng, C De Filippi, WF Stewart, A Perer
IEEE Transactions on Visualization and Computer Graphics 24 (1), 142-151, 2017
85 · 2017
Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings
JD Co-Reyes, YX Liu, A Gupta, B Eysenbach, P Abbeel, S Levine
International Conference on Machine Learning, 2018
74 · 2018
Search on the replay buffer: Bridging planning and reinforcement learning
B Eysenbach, RR Salakhutdinov, S Levine
Advances in Neural Information Processing Systems, 15246-15257, 2019
62 · 2019
Unsupervised meta-learning for reinforcement learning
A Gupta, B Eysenbach, C Finn, S Levine
arXiv preprint arXiv:1806.04640, 2018
61 · 2018
Leave No Trace: Learning to reset for safe and autonomous reinforcement learning
B Eysenbach, S Gu, J Ibarz, S Levine
International Conference on Learning Representations, 2018
61 · 2018
Efficient exploration via state marginal matching
L Lee, B Eysenbach, E Parisotto, E Xing, S Levine, R Salakhutdinov
arXiv preprint arXiv:1906.05274, 2019
60 · 2019
Unsupervised curricula for visual meta-reinforcement learning
A Jabri, K Hsu, A Gupta, B Eysenbach, S Levine, C Finn
Advances in Neural Information Processing Systems, 2019
26 · 2019
If MaxEnt RL is the Answer, What is the Question?
B Eysenbach, S Levine
arXiv preprint arXiv:1910.01913, 2019
17 · 2019
Learning to reach goals without reinforcement learning
D Ghosh, A Gupta, J Fu, A Reddy, C Devin, B Eysenbach, S Levine
arXiv preprint arXiv:1912.06088, 2019
14* · 2019
Rewriting history with inverse RL: Hindsight inference for policy improvement
B Eysenbach, X Geng, S Levine, R Salakhutdinov
arXiv preprint arXiv:2002.11089, 2020
9 · 2020
Who is mistaken?
B Eysenbach, C Vondrick, A Torralba
arXiv preprint arXiv:1612.01175, 2016
6 · 2016
Learning to be Safe: Deep RL with a Safety Critic
K Srinivasan, B Eysenbach, S Ha, J Tan, C Finn
arXiv preprint arXiv:2010.14603, 2020
4 · 2020
C-Learning: Learning to Achieve Goals via Recursive Classification
B Eysenbach, R Salakhutdinov, S Levine
arXiv preprint arXiv:2011.08909, 2020
3 · 2020
F-IRL: Inverse reinforcement learning via state marginal matching
T Ni, H Sikchi, Y Wang, T Gupta, L Lee, B Eysenbach
arXiv preprint arXiv:2011.04709, 2020
3 · 2020
Weakly-supervised reinforcement learning for controllable behavior
L Lee, B Eysenbach, R Salakhutdinov, SS Gu, C Finn
arXiv preprint arXiv:2004.02860, 2020
3 · 2020
Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
B Eysenbach, S Levine, R Salakhutdinov
arXiv preprint arXiv:2103.12656, 2021
1 · 2021
Maximum Entropy RL (Provably) Solves Some Robust RL Problems
B Eysenbach, S Levine
arXiv preprint arXiv:2103.06257, 2021
1 · 2021
ViNG: Learning Open-World Navigation with Visual Goals
D Shah, B Eysenbach, G Kahn, N Rhinehart, S Levine
arXiv preprint arXiv:2012.09812, 2020
1 · 2020
Interactive Visualization for Debugging RL
S Deshpande, B Eysenbach, J Schneider
arXiv preprint arXiv:2008.07331, 2020
1 · 2020
Articles 1–20