Jack Parker-Holder
Google DeepMind, UCL
Verified email at google.com - Homepage
Title · Cited by · Year
Effective diversity in population based reinforcement learning
J Parker-Holder, A Pacchiano, KM Choromanski, SJ Roberts
Advances in Neural Information Processing Systems 33, 18050-18062, 2020
Cited by 166 · 2020
Evolving curricula with regret-based environment design
J Parker-Holder, M Jiang, M Dennis, M Samvelyan, J Foerster, ...
International Conference on Machine Learning, 17473-17498, 2022
Cited by 118 · 2022
Human-timescale adaptation in an open-ended task space
AA Team, J Bauer, K Baumli, S Baveja, F Behbahani, A Bhoopchand, ...
ICML 2023 (Oral), 2023
Cited by 107* · 2023
Replay-guided adversarial environment design
M Jiang, M Dennis, J Parker-Holder, J Foerster, E Grefenstette, ...
Advances in Neural Information Processing Systems 34, 1884-1897, 2021
Cited by 99 · 2021
Automated reinforcement learning (AutoRL): A survey and open problems
J Parker-Holder, R Rajan, X Song, A Biedenkapp, Y Miao, T Eimer, ...
Journal of Artificial Intelligence Research 74, 517-568, 2022
Cited by 98 · 2022
MiniHack the Planet: A sandbox for open-ended reinforcement learning research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
arXiv preprint arXiv:2109.13202, 2021
Cited by 92 · 2021
Provably efficient online hyperparameter optimization with population-based bandits
J Parker-Holder, V Nguyen, SJ Roberts
Advances in Neural Information Processing Systems 33, 17200-17211, 2020
Cited by 85 · 2020
Genie: Generative Interactive Environments
J Bruce, M Dennis, A Edwards, J Parker-Holder, Y Shi, E Hughes, M Lai, ...
ICML 2024 (Best Paper Award), 2024
Cited by 78 · 2024
Revisiting Design Choices in Offline Model-Based Reinforcement Learning
C Lu, PJ Ball, J Parker-Holder, MA Osborne, SJ Roberts
arXiv preprint arXiv:2110.04135, 2021
Cited by 65* · 2021
Tactical optimism and pessimism for deep reinforcement learning
T Moskovitz, J Parker-Holder, A Pacchiano, M Arbel, M Jordan
Advances in Neural Information Processing Systems 34, 12849-12863, 2021
Cited by 57 · 2021
Ready policy one: World building through active learning
P Ball, J Parker-Holder, A Pacchiano, K Choromanski, S Roberts
International Conference on Machine Learning, 591-601, 2020
Cited by 56 · 2020
From complexity to simplicity: Adaptive es-active subspaces for blackbox optimization
KM Choromanski, A Pacchiano, J Parker-Holder, Y Tang, V Sindhwani
Advances in Neural Information Processing Systems 32, 2019
Cited by 53 · 2019
Same state, different task: Continual reinforcement learning without interference
S Kessler, J Parker-Holder, P Ball, S Zohren, SJ Roberts
Proceedings of the AAAI Conference on Artificial Intelligence 36 (7), 7143-7151, 2022
Cited by 51* · 2022
Synthetic Experience Replay
C Lu, PJ Ball, YW Teh, J Parker-Holder
NeurIPS 2023, 2023
Cited by 47 · 2023
Learning to score behaviors for guided policy optimization
A Pacchiano, J Parker-Holder, Y Tang, K Choromanski, A Choromanska, ...
International Conference on Machine Learning, 7445-7454, 2020
Cited by 45 · 2020
Provably robust blackbox optimization for reinforcement learning
K Choromanski, A Pacchiano, J Parker-Holder, Y Tang, D Jain, Y Yang, ...
Conference on Robot Learning, 683-696, 2020
Cited by 43 · 2020
Challenges and opportunities in offline reinforcement learning from visual observations
C Lu, PJ Ball, TGJ Rudner, J Parker-Holder, MA Osborne, YW Teh
arXiv preprint arXiv:2206.04779, 2022
Cited by 42 · 2022
Towards tractable optimism in model-based reinforcement learning
A Pacchiano, P Ball, J Parker-Holder, K Choromanski, S Roberts
Uncertainty in Artificial Intelligence, 1413-1423, 2021
Cited by 40* · 2021
Augmented world models facilitate zero-shot dynamics generalization from a single offline environment
PJ Ball, C Lu, J Parker-Holder, S Roberts
International Conference on Machine Learning, 619-629, 2021
Cited by 39 · 2021
Ridge rider: Finding diverse solutions by following eigenvectors of the hessian
J Parker-Holder, L Metz, C Resnick, H Hu, A Lerer, A Letcher, ...
Advances in Neural Information Processing Systems 33, 753-765, 2020
Cited by 32 · 2020
Articles 1–20