Xuezhi Wang
Research Scientist, Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
Chain of thought prompting elicits reasoning in large language models
J Wei, X Wang, D Schuurmans, M Bosma, E Chi, Q Le, D Zhou
Neural Information Processing Systems (NeurIPS), 2022
Cited by 9853* · 2022
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research (JMLR), 2023
Cited by 5165 · 2023
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
JMLR, 2024
Cited by 3044 · 2024
Self-consistency improves chain of thought reasoning in language models
X Wang, J Wei, D Schuurmans, Q Le, E Chi, S Narang, A Chowdhery, ...
ICLR, 2023
Cited by 2387* · 2023
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2084 · 2023
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 1412 · 2023
Least-to-most prompting enables complex reasoning in large language models
D Zhou, N Schärli, L Hou, J Wei, N Scales, X Wang, D Schuurmans, ...
ICLR, 2023
Cited by 1067 · 2023
Underspecification presents challenges for credibility in modern machine learning
A D'Amour, K Heller, D Moldovan, B Adlam, B Alipanahi, A Beutel, ...
Journal of Machine Learning Research 23 (226), 1-61, 2022
Cited by 815 · 2022
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 617 · 2024
Large language models as optimizers
C Yang, X Wang, Y Lu, H Liu, QV Le, D Zhou, X Chen
ICLR, 2024
Cited by 512* · 2024
Large language models can self-improve
J Huang, SS Gu, L Hou, Y Wu, X Wang, H Yu, J Han
EMNLP, 2023
Cited by 424 · 2023
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, EH Chi
34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Cited by 361 · 2020
ToTTo: A Controlled Table-To-Text Generation Dataset
AP Parikh, X Wang, S Gehrmann, M Faruqui, B Dhingra, D Yang, D Das
EMNLP, 2020
Cited by 358 · 2020
Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR, 2023
Cited by 274* · 2023
ESCAPES: evacuation simulation with children, authorities, parents, emotions, and social comparison.
J Tsai, N Fridman, E Bowring, M Brown, S Epstein, GA Kaminka, ...
AAMAS 11, 457–464, 2011
Cited by 256 · 2011
Language models are multilingual chain-of-thought reasoners
F Shi, M Suzgun, M Freitag, X Wang, S Srivats, S Vosoughi, HW Chung, ...
ICLR, 2023
Cited by 236 · 2023
Measuring and reducing gendered correlations in pre-trained models
K Webster, X Wang, I Tenney, A Beutel, E Pitler, E Pavlick, J Chen, E Chi, ...
arXiv preprint arXiv:2010.06032, 2020
Cited by 152 · 2020
Large language models as tool makers
T Cai, X Wang, T Ma, X Chen, D Zhou
ICLR, 2024
Cited by 137 · 2024
Measure and Improve Robustness in NLP Models: A Survey
X Wang, H Wang, D Yang
NAACL, 2022
Cited by 122 · 2022
FreshLLMs: Refreshing large language models with search engine augmentation
T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ...
Findings of ACL, 2024
Cited by 120 · 2024
Articles 1–20