Andrew Dai
Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
PaLM: Scaling language modeling with Pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
Cited by 5810 · 2023
Finetuned language models are zero-shot learners
J Wei, M Bosma, VY Zhao, K Guu, AW Yu, B Lester, N Du, AM Dai, QV Le
arXiv preprint arXiv:2109.01652, 2021
Cited by 3616 · 2021
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by 3468 · 2024
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 3417 · 2023
Natural questions: a benchmark for question answering research
T Kwiatkowski, J Palomaki, O Redfield, M Collins, A Parikh, C Alberti, ...
Transactions of the Association for Computational Linguistics 7, 453-466, 2019
Cited by 3163 · 2019
Generating sentences from a continuous space
SR Bowman, L Vilnis, O Vinyals, AM Dai, R Jozefowicz, S Bengio
Proceedings of the 20th SIGNLL Conference on Computational Natural Language …, 2016
Cited by 2955 · 2016
Scalable and accurate deep learning with electronic health records
A Rajkomar, E Oren, K Chen, AM Dai, N Hajaj, M Hardt, PJ Liu, X Liu, ...
NPJ digital medicine 1 (1), 18, 2018
Cited by 2593 · 2018
HyperNetworks
D Ha, A Dai, QV Le
Proceedings of the International Conference on Learning Representations, 2017
Cited by 1898 · 2017
Semi-supervised sequence learning
AM Dai, QV Le
Advances in neural information processing systems 28, 2015
Cited by 1710 · 2015
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 1619 · 2023
Adversarial Training Methods for Semi-Supervised Text Classification
T Miyato, AM Dai, I Goodfellow
Proceedings of the International Conference on Learning Representations, 2017
Cited by 1397 · 2017
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 1393 · 2022
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 1305 · 2024
GLaM: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
International conference on machine learning, 5547-5569, 2022
Cited by 660 · 2022
MaskGAN: Better text generation via filling in the ______
W Fedus, I Goodfellow, AM Dai
arXiv preprint arXiv:1801.07736, 2018
Cited by 654 · 2018
Document embedding with paragraph vectors
AM Dai, C Olah, QV Le
NIPS 2014 Deep learning workshop, 2015
Cited by 582 · 2015
Mixture-of-experts with expert choice routing
Y Zhou, T Lei, H Liu, N Du, Y Huang, V Zhao, AM Dai, QV Le, J Laudon
Advances in Neural Information Processing Systems 35, 7103-7114, 2022
Cited by 295 · 2022
Many paths to equilibrium: GANs do not need to decrease a divergence at every step
W Fedus, M Rosca, B Lakshminarayanan, AM Dai, S Mohamed, ...
arXiv preprint arXiv:1710.08446, 2017
Cited by 264 · 2017
Who said what: Modeling individual labelers improves classification
M Guan, V Gulshan, A Dai, G Hinton
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
Cited by 260 · 2018
Gmail smart compose: Real-time assisted writing
MX Chen, BN Lee, G Bansal, Y Cao, S Zhang, J Lu, J Tsay, Y Wang, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 259 · 2019
Articles 1–20