Vinh Q. Tran
Research Scientist, Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR 2023, 2022
Cited by 459 · 2022
Transformer memory as a differentiable search index
Y Tay, VQ Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
NeurIPS 2022, 2022
Cited by 273 · 2022
Confident adaptive language modeling
T Schuster, A Fisch, J Gupta, M Dehghani, D Bahri, VQ Tran, Y Tay, ...
NeurIPS 2022, 2022
Cited by 214 · 2022
ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning
V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng, SV Mehta, H Zhuang, ...
ICLR 2022, 2021
Cited by 214 · 2021
A new generation of perspective api: Efficient multilingual character-level transformers
A Lees, VQ Tran, Y Tay, J Sorensen, J Gupta, D Metzler, L Vasserman
KDD'22 ADS, 2022
Cited by 203 · 2022
Charformer: Fast character transformers via gradient-based subword tokenization
Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri, Z Qin, ...
ICLR 2022, 2021
Cited by 155 · 2021
Recommender Systems with Generative Retrieval
S Rajput, N Mehta, A Singh, RH Keshavan, T Vu, L Heldt, L Hong, Y Tay, ...
NeurIPS 2023, 2023
Cited by 152 · 2023
Attributed question answering: Evaluation and modeling for attributed large language models
B Bohnet, VQ Tran, P Verga, R Aharoni, D Andor, LB Soares, M Ciaramita, ...
arXiv preprint arXiv:2212.08037, 2022
Cited by 120 · 2022
Scaling laws vs model architectures: How does inductive bias influence scaling?
Y Tay, M Dehghani, S Abnar, HW Chung, W Fedus, J Rao, S Narang, ...
EMNLP 2023 Findings, 2022
Cited by 93 · 2022
Transcending scaling laws with 0.1% extra compute
Y Tay, J Wei, HW Chung, VQ Tran, DR So, S Shakeri, X Garcia, HS Zheng, ...
EMNLP 2023, 2022
Cited by 83 · 2022
Making the case for Query-by-Voice with EchoQuery
G Lyons, V Tran, C Binnig, U Cetintemel, T Kraska
SIGMOD 2016, 2129-2132, 2016
Cited by 63 · 2016
How Does Generative Retrieval Scale to Millions of Passages?
R Pradeep, K Hui, J Gupta, AD Lelkes, H Zhuang, J Lin, D Metzler, ...
EMNLP 2023, 2023
Cited by 58 · 2023
Quiz-Style Question Generation for News Stories
AD Lelkes, VQ Tran, C Yu
WWW '21: Proceedings of the Web Conference 2021, Pages 2501–2511, 2021
Cited by 56 · 2021
DSI++: Updating Transformer Memory with New Documents
SV Mehta, J Gupta, Y Tay, M Dehghani, VQ Tran, J Rao, M Najork, ...
EMNLP 2023, 2022
Cited by 51 · 2022
Smaller, weaker, yet better: Training llm reasoners via compute-optimal sampling
H Bansal, A Hosseini, R Agarwal, VQ Tran, M Kazemi
arXiv preprint arXiv:2408.16737, 2024
Cited by 29 · 2024
AgreeSum: Agreement-Oriented Multi-Document Summarization
RY Pang, AD Lelkes, VQ Tran, C Yu
ACL-IJCNLP 2021 Findings, 3377–3391, 2021
Cited by 18 · 2021
Fractal Patterns May Illuminate the Success of Next-Token Prediction
I Alabdulmohsin, VQ Tran, M Dehghani
NeurIPS 2024, 2024
Cited by 5* · 2024
BIG-Bench Extra Hard
M Kazemi, B Fatemi, H Bansal, J Palowitch, C Anastasiou, SV Mehta, ...
arXiv preprint arXiv:2502.19187, 2025
2025
Character-level attention neural networks
Y Tay, D Bahri, DA Metzler, HW Chung, JP Gupta, SN Ruder, ...
US Patent App. 18/564,859, 2024
2024
Efficient Decoding of Output Sequences Using Adaptive Early Exiting
T Schuster, AJ Fisch, JP Gupta, M Dehghani, D Bahri, VQ Tran, Y Tay, ...
US Patent App. 18/222,395, 2024
2024
Articles 1–20