Yao Zhao
Google Brain
Verified email at google.com

Title | Cited by | Year
PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization
J Zhang, Y Zhao, M Saleh, P Liu
International conference on machine learning, 11328-11339, 2020
2224 | 2020
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
2084 | 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
617 | 2024
Adversarial attacks and defences competition
A Kurakin, I Goodfellow, S Bengio, Y Dong, F Liao, M Liang, T Pang, ...
The NIPS'17 Competition: Building Intelligent Systems, 195-231, 2018
361 | 2018
Paragraph-level neural question generation with maxout pointer and gated self-attention networks
Y Zhao, X Ni, Y Ding, Q Ke
Proceedings of the 2018 conference on empirical methods in natural language …, 2018
335 | 2018
The tethering of chromatin to the nuclear envelope supports nuclear mechanics
SM Schreiner, PK Koo, Y Zhao, SGJ Mochrie, MC King
Nature communications 6 (1), 7159, 2015
228 | 2015
SLiC-HF: Sequence likelihood calibration with human feedback
Y Zhao, R Joshi, T Liu, M Khalman, M Saleh, PJ Liu
arXiv preprint arXiv:2305.10425, 2023
203 | 2023
TALM: Tool augmented language models
A Parisi, Y Zhao, N Fiedel
arXiv preprint arXiv:2205.12255, 2022
171 | 2022
Statistical rejection sampling improves preference optimization
T Liu, Y Zhao, R Joshi, M Khalman, M Saleh, PJ Liu, J Liu
arXiv preprint arXiv:2309.06657, 2023
142 | 2023
Planning with learned entity prompts for abstractive summarization
S Narayan, Y Zhao, J Maynez, G Simões, V Nikolaev, R McDonald
Transactions of the Association for Computational Linguistics 9, 1475-1492, 2021
127* | 2021
Calibrating sequence likelihood improves conditional language generation
Y Zhao, M Khalman, R Joshi, S Narayan, M Saleh, PJ Liu
arXiv preprint arXiv:2210.00045, 2022
108 | 2022
Direct language model alignment from online AI feedback
S Guo, B Zhang, T Liu, T Liu, M Khalman, F Llinares, A Rame, T Mesnard, ...
arXiv preprint arXiv:2402.04792, 2024
76 | 2024
Out-of-distribution detection and selective generation for conditional language models
J Ren, J Luo, Y Zhao, K Krishna, M Saleh, B Lakshminarayanan, PJ Liu
The Eleventh International Conference on Learning Representations, 2022
69 | 2022
Investigating efficiently extending transformers for long input summarization
J Phang, Y Zhao, PJ Liu
arXiv preprint arXiv:2208.04347, 2022
59 | 2022
A well-composed text is half done! composition sampling for diverse conditional generation
S Narayan, G Simões, Y Zhao, J Maynez, D Das, M Collins, M Lapata
arXiv preprint arXiv:2203.15108, 2022
31 | 2022
LiPO: Listwise preference optimization through learning-to-rank
T Liu, Z Qin, J Wu, J Shen, M Khalman, R Joshi, Y Zhao, M Saleh, ...
arXiv preprint arXiv:2402.01878, 2024
29 | 2024
SEAL: Segment-wise extractive-abstractive long-form text summarization
Y Zhao, M Saleh, PJ Liu
arXiv preprint arXiv:2006.10213, 2020
28 | 2020
Self-evaluation improves selective generation in large language models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
Proceedings on, 49-64, 2023
25 | 2023
ForumSum: A multi-speaker conversation summarization dataset
M Khalman, Y Zhao, M Saleh
Findings of the Association for Computational Linguistics: EMNLP 2021, 4592-4599, 2021
22 | 2021
SMART: Sentences as basic units for text evaluation
RK Amplayo, PJ Liu, Y Zhao, S Narayan
arXiv preprint arXiv:2208.01030, 2022
20 | 2022
Articles 1–20