Justin Fu
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
Offline reinforcement learning: Tutorial, review, and perspectives on open problems
S Levine, A Kumar, G Tucker, J Fu
arXiv preprint arXiv:2005.01643, 2020
Cited by 2019 · Year 2020
D4RL: Datasets for deep data-driven reinforcement learning
J Fu, A Kumar, O Nachum, G Tucker, S Levine
arXiv preprint arXiv:2004.07219, 2020
Cited by 1196 · Year 2020
Stabilizing off-policy q-learning via bootstrapping error reduction
A Kumar, J Fu, M Soh, G Tucker, S Levine
Advances in Neural Information Processing Systems, 11761-11771, 2019
Cited by 1124 · Year 2019
Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
J Fu, K Luo, S Levine
International Conference on Learning Representations (ICLR), 2017
Cited by 1091 · Year 2017
When to trust your model: Model-based policy optimization
M Janner, J Fu, M Zhang, S Levine
Advances in Neural Information Processing Systems, 12498-12509, 2019
Cited by 1043 · Year 2019
Offline reinforcement learning: Tutorial, review, and perspectives on open problems
S Levine, A Kumar, G Tucker, J Fu
5, 2020
Cited by 225 · Year 2020
One-shot learning of manipulation skills with online dynamics adaptation and neural network priors
J Fu, S Levine, P Abbeel
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS …, 2016
Cited by 189* · Year 2016
EX2: Exploration with exemplar models for deep reinforcement learning
J Fu, J Co-Reyes, S Levine
Advances in Neural Information Processing Systems, 2577-2587, 2017
Cited by 186 · Year 2017
Learning to reach goals via iterated supervised learning
D Ghosh, A Gupta, A Reddy, J Fu, C Devin, B Eysenbach, S Levine
arXiv preprint arXiv:1912.06088, 2019
Cited by 180 · Year 2019
Diagnosing Bottlenecks in Deep Q-learning Algorithms
J Fu, A Kumar, M Soh, S Levine
International Conference on Machine Learning (ICML), 2019
Cited by 162 · Year 2019
Variational inverse control with events: A general framework for data-driven reward definition
J Fu, A Singh, D Ghosh, L Yang, S Levine
Advances in Neural Information Processing Systems, 8538-8547, 2018
Cited by 153 · Year 2018
From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following
J Fu, A Korattikara, S Levine, S Guadarrama
International Conference on Learning Representations (ICLR), 2019
Cited by 139 · Year 2019
Benchmarks for deep off-policy evaluation
J Fu, M Norouzi, O Nachum, G Tucker, Z Wang, A Novikov, M Yang, ...
arXiv preprint arXiv:2103.16596, 2021
Cited by 99 · Year 2021
Generalizing Skills with Semi-Supervised Reinforcement Learning
C Finn, T Yu, J Fu, P Abbeel, S Levine
International Conference on Learning Representations (ICLR), 2017
Cited by 88 · Year 2017
Imitation is not enough: Robustifying imitation with reinforcement learning for challenging driving scenarios
Y Lu, J Fu, G Tucker, X Pan, E Bronstein, R Roelofs, B Sapp, B White, ...
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2023
Cited by 69 · Year 2023
Waymax: An accelerated, data-driven simulator for large-scale autonomous driving research
C Gulino, J Fu, W Luo, G Tucker, E Bronstein, Y Lu, J Harb, X Pan, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 67 · Year 2024
Offline model-based optimization via normalized maximum likelihood estimation
J Fu, S Levine
arXiv preprint arXiv:2102.07970, 2021
Cited by 58 · Year 2021
Hierarchical model-based imitation learning for planning in autonomous driving
E Bronstein, M Palatucci, D Notz, B White, A Kuefler, Y Lu, S Paul, ...
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2022
Cited by 48 · Year 2022
CHAI: A chatbot AI for task-oriented dialogue with offline reinforcement learning
S Verma, J Fu, M Yang, S Levine
arXiv preprint arXiv:2204.08426, 2022
Cited by 43 · Year 2022
Learning to reach goals without reinforcement learning
D Ghosh, A Gupta, J Fu, A Reddy, C Devin, B Eysenbach, S Levine
Cited by 28 · Year 2019
Articles 1–20