John Schulman
Research Scientist, OpenAI
Verified email at openai.com - Homepage
Title
Cited by
Year
Proximal policy optimization algorithms
J Schulman, F Wolski, P Dhariwal, A Radford, O Klimov
arXiv preprint arXiv:1707.06347, 2017
Cited by 9480 · 2017
Trust region policy optimization
J Schulman, S Levine, P Abbeel, M Jordan, P Moritz
International conference on machine learning, 1889-1897, 2015
Cited by 5654 · 2015
OpenAI Gym
G Brockman, V Cheung, L Pettersson, J Schneider, J Schulman, J Tang, ...
arXiv preprint arXiv:1606.01540, 2016
Cited by 4798 · 2016
InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets
X Chen, Y Duan, R Houthooft, J Schulman, I Sutskever, P Abbeel
Advances in neural information processing systems 29, 2016
Cited by 4085 · 2016
High-dimensional continuous control using generalized advantage estimation
J Schulman, P Moritz, S Levine, M Jordan, P Abbeel
arXiv preprint arXiv:1506.02438, 2015
Cited by 2298 · 2015
On first-order meta-learning algorithms
A Nichol, J Achiam, J Schulman
arXiv preprint arXiv:1803.02999, 2018
Cited by 1778* · 2018
Concrete problems in AI safety
D Amodei, C Olah, J Steinhardt, P Christiano, J Schulman, D Mané
arXiv preprint arXiv:1606.06565, 2016
Cited by 1754 · 2016
Benchmarking deep reinforcement learning for continuous control
Y Duan, X Chen, R Houthooft, J Schulman, P Abbeel
International conference on machine learning, 1329-1338, 2016
Cited by 1611 · 2016
OpenAI Baselines
P Dhariwal, C Hesse, M Plappert, A Radford, J Schulman, S Sidor, Y Wu
Cited by 884 · 2017
Theano: A Python framework for fast computation of mathematical expressions
R Al-Rfou, G Alain, A Almahairi, C Angermueller, D Bahdanau, N Ballas, ...
arXiv preprint arXiv:1605.02688, 2016
Cited by 817 · 2016
RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning
Y Duan, J Schulman, X Chen, PL Bartlett, I Sutskever, P Abbeel
arXiv preprint arXiv:1611.02779, 2016
Cited by 814 · 2016
VIME: Variational information maximizing exploration
R Houthooft, X Chen, Y Duan, J Schulman, F De Turck, P Abbeel
Advances in neural information processing systems 29, 2016
Cited by 736 · 2016
Stable Baselines
A Hill, A Raffin, M Ernestus, A Gleave, A Kanervisto, R Traore, P Dhariwal, ...
Cited by 696 · 2018
Variational lossy autoencoder
X Chen, DP Kingma, T Salimans, Y Duan, P Dhariwal, J Schulman, ...
arXiv preprint arXiv:1611.02731, 2016
Cited by 636 · 2016
Motion planning with sequential convex optimization and convex collision checking
J Schulman, Y Duan, J Ho, A Lee, I Awwal, H Bradlow, J Pan, S Patil, ...
The International Journal of Robotics Research 33 (9), 1251-1270, 2014
Cited by 629 · 2014
Spike sorting for large, dense electrode arrays
C Rossant, SN Kadir, DFM Goodman, J Schulman, MLD Hunter, ...
Nature neuroscience 19 (4), 634-641, 2016
Cited by 623 · 2016
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
H Tang, R Houthooft, D Foote, A Stooke, OAIX Chen, Y Duan, J Schulman, ...
Advances in Neural Information Processing Systems, 2750-2759, 2017
Cited by 621 · 2017
Learning complex dexterous manipulation with deep reinforcement learning and demonstrations
A Rajeswaran, V Kumar, A Gupta, G Vezzani, J Schulman, E Todorov, ...
arXiv preprint arXiv:1709.10087, 2017
Cited by 616 · 2017
Proceedings of the 32nd International Conference on Neural Information Processing Systems
S Bengio, HM Wallach, H Larochelle, K Grauman, N Cesa-Bianchi
Curran Associates Inc., 2018
Cited by 577* · 2018
Finding locally optimal, collision-free trajectories with sequential convex optimization
J Schulman, J Ho, AX Lee, I Awwal, H Bradlow, P Abbeel
Robotics: science and systems 9 (1), 1-10, 2013
Cited by 499 · 2013
Articles 1–20