Shixiang Shane Gu
Other names: Shane Gu, Shixiang Gu
Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
Categorical reparameterization with gumbel-softmax
E Jang, S Gu, B Poole
arXiv preprint arXiv:1611.01144, 2016
Cited by 6508 · 2016
Gpt-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
Cited by 5759 · 2023
Large language models are zero-shot reasoners
T Kojima, SS Gu, M Reid, Y Matsuo, Y Iwasawa
Advances in neural information processing systems 35, 22199-22213, 2022
Cited by 3418 · 2022
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by 2874 · 2024
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2084 · 2023
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
S Gu, E Holly, T Lillicrap, S Levine
2017 IEEE international conference on robotics and automation (ICRA), 3389-3396, 2017
Cited by 1980 · 2017
Continuous deep q-learning with model-based acceleration
S Gu, T Lillicrap, I Sutskever, S Levine
International conference on machine learning, 2829-2838, 2016
Cited by 1326 · 2016
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 1143 · 2022
Data-efficient hierarchical reinforcement learning
O Nachum, SS Gu, H Lee, S Levine
Advances in neural information processing systems 31, 2018
Cited by 1048 · 2018
Towards deep neural network architectures robust to adversarial examples
S Gu, L Rigazio
arXiv preprint arXiv:1412.5068, 2014
Cited by 1047 · 2014
A minimalist approach to offline reinforcement learning
S Fujimoto, SS Gu
Advances in neural information processing systems 34, 20132-20145, 2021
Cited by 785 · 2021
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 617 · 2024
Dynamics-aware unsupervised discovery of skills
A Sharma, S Gu, S Levine, V Kumar, K Hausman
arXiv preprint arXiv:1907.01657, 2019
Cited by 468 · 2019
Human-centric dialog training via offline reinforcement learning
N Jaques, JH Shen, A Ghandeharioun, C Ferguson, A Lapedriza, ...
arXiv preprint arXiv:2010.05848, 2020
Cited by 425* · 2020
Large language models can self-improve
J Huang, SS Gu, L Hou, Y Wu, X Wang, H Yu, J Han
arXiv preprint arXiv:2210.11610, 2022
Cited by 424 · 2022
Q-prop: Sample-efficient policy gradient with an off-policy critic
S Gu, T Lillicrap, Z Ghahramani, RE Turner, S Levine
arXiv preprint arXiv:1611.02247, 2016
Cited by 415 · 2016
A divergence minimization perspective on imitation learning methods
SKS Ghasemipour, R Zemel, S Gu
Conference on robot learning, 1259-1277, 2020
Cited by 308 · 2020
Temporal difference models: Model-free deep rl for model-based control
V Pong, S Gu, M Dalal, S Levine
arXiv preprint arXiv:1802.09081, 2018
Cited by 301 · 2018
Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control
N Jaques, S Gu, D Bahdanau, JM Hernández-Lobato, RE Turner, D Eck
International Conference on Machine Learning, 1645-1654, 2017
Cited by 272* · 2017
Language as an abstraction for hierarchical deep reinforcement learning
Y Jiang, SS Gu, KP Murphy, C Finn
Advances in Neural Information Processing Systems 32, 2019
Cited by 251 · 2019
Articles 1–20