Joey Hong
UC Berkeley, Google Research
Verified email at berkeley.edu
Title
Cited by
Year
Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions
J Hong, B Sapp, J Philbin
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
274 · 2019
When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?
A Kumar, J Hong, A Singh, S Levine
International Conference on Learning Representations, 2021
120* · 2021
Trajectory prediction on top-down scenes
XJ Hong, BJ Sapp
US Patent 11,169,531, 2021
69 · 2021
Latent bandits revisited
J Hong, B Kveton, M Zaheer, Y Chow, A Ahmed, C Boutilier
Advances in Neural Information Processing Systems 33, 13423-13433, 2020
49 · 2020
Hierarchical Bayesian bandits
J Hong, B Kveton, M Zaheer, M Ghavamzadeh
International Conference on Artificial Intelligence and Statistics, 7724-7741, 2022
39 · 2022
Trajectory prediction on top-down scenes and associated model
XJ Hong, BJ Sapp, JWV Philbin, KZ Wang
US Patent 11,195,418, 2021
32 · 2021
On the sensitivity of reward inference to misspecified human models
J Hong, K Bhatia, A Dragan
arXiv preprint arXiv:2212.04717, 2022
19 · 2022
Latent programmer: Discrete latent codes for program synthesis
J Hong, D Dohan, R Singh, C Sutton, M Zaheer
International Conference on Machine Learning, 4308-4318, 2021
19 · 2021
Confidence-conditioned value functions for offline reinforcement learning
J Hong, A Kumar, S Levine
arXiv preprint arXiv:2212.04607, 2022
18 · 2022
Deep Hierarchy in Bandits
J Hong, B Kveton, S Katariya, M Zaheer, M Ghavamzadeh
International Conference on Machine Learning, 2022
17 · 2022
Thompson sampling with a mixture prior
J Hong, B Kveton, M Zaheer, M Ghavamzadeh, C Boutilier
International Conference on Artificial Intelligence and Statistics, 7565-7586, 2022
16 · 2022
LMRL Gym: Benchmarks for multi-turn reinforcement learning with language models
M Abdulhai, I White, C Snell, C Sun, J Hong, Y Zhai, K Xu, S Levine
arXiv preprint arXiv:2311.18232, 2023
15 · 2023
Learning to influence human behavior with offline reinforcement learning
J Hong, S Levine, A Dragan
Advances in Neural Information Processing Systems 36, 2024
11 · 2024
Zero-shot goal-directed dialogue via RL on imagined conversations
J Hong, S Levine, A Dragan
arXiv preprint arXiv:2311.05584, 2023
10 · 2023
Non-stationary latent bandits
J Hong, B Kveton, M Zaheer, Y Chow, A Ahmed, M Ghavamzadeh, ...
arXiv preprint arXiv:2012.00386, 2020
10 · 2020
Non-stationary off-policy optimization
J Hong, B Kveton, M Zaheer, Y Chow, A Ahmed
International Conference on Artificial Intelligence and Statistics, 2494-2502, 2021
8 · 2021
Ensemble maximum entropy classification and linear regression for author age prediction
J Hong, C Mattmann, P Ramirez
2017 IEEE International Conference on Information Reuse and Integration (IRI), 2017
7 · 2017
ExeDec: Execution decomposition for compositional generalization in neural program synthesis
K Shi, J Hong, M Zaheer, P Yin, C Sutton
arXiv preprint arXiv:2307.13883, 2023
6 · 2023
Trajectory prediction on top-down scenes and associated model
XJ Hong, BJ Sapp, JWV Philbin, KZ Wang
US Patent App. 17/542,880, 2022
6 · 2022
Multi-task off-policy learning from bandit feedback
J Hong, B Kveton, M Zaheer, S Katariya, M Ghavamzadeh
International Conference on Machine Learning, 13157-13173, 2023
5 · 2023
Articles 1–20