Niki Parmar
Co-Founder at Essential AI
Verified email at essential.ai
Title
Cited by
Year
Attention is all you need
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, ...
Advances in neural information processing systems 30, 2017
170969 · 2017
Conformer: Convolution-augmented transformer for speech recognition
A Gulati, J Qin, CC Chiu, N Parmar, Y Zhang, J Yu, W Han, S Wang, ...
arXiv preprint arXiv:2005.08100, 2020
3654 · 2020
Image transformer
N Parmar, A Vaswani, J Uszkoreit, L Kaiser, N Shazeer, A Ku, D Tran
International conference on machine learning, 4055-4064, 2018
2191 · 2018
Stand-alone self-attention in vision models
P Ramachandran, N Parmar, A Vaswani, I Bello, A Levskaya, J Shlens
Advances in neural information processing systems 32, 2019
1448 · 2019
Bottleneck transformers for visual recognition
A Srinivas, TY Lin, N Parmar, J Shlens, P Abbeel, A Vaswani
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021
1409 · 2021
Advances in neural information processing systems 30
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, ...
Curran Associates Inc, 2017
1073 · 2017
Attention is all you need
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, Ł Kaiser, I Polosukhin
Advances in neural information processing systems 30, 1-11, 2017
791 · 2017
Tensor2tensor for neural machine translation
A Vaswani, S Bengio, E Brevdo, F Chollet, AN Gomez, S Gouws, L Jones, ...
arXiv preprint arXiv:1803.07416, 2018
674 · 2018
The best of both worlds: Combining recent advances in neural machine translation
MX Chen, O Firat, A Bapna, M Johnson, W Macherey, G Foster, L Jones, ...
arXiv preprint arXiv:1804.09849, 2018
546 · 2018
Scaling local self-attention for parameter efficient visual backbones
A Vaswani, P Ramachandran, A Srinivas, N Parmar, B Hechtman, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021
510 · 2021
Attention is all you need. arXiv 2023
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, ...
arXiv preprint arXiv:1706.03762, 2023
449* · 2023
Mesh-tensorflow: Deep learning for supercomputers
N Shazeer, Y Cheng, N Parmar, D Tran, A Vaswani, P Koanantakool, ...
Advances in neural information processing systems 31, 2018
437 · 2018
One model to learn them all
L Kaiser, AN Gomez, N Shazeer, A Vaswani, N Parmar, L Jones, ...
arXiv preprint arXiv:1706.05137, 2017
407 · 2017
Attention is all you need. CoRR abs/1706.03762 (2017)
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, ...
347 · 2017
Attention is all you need, 2023
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, ...
arXiv preprint arXiv:1706.03762, 2023
319* · 2023
Purity homophily in social networks.
M Dehghani, K Johnson, J Hoover, E Sagi, J Garten, NJ Parmar, S Vaisey, ...
Journal of Experimental Psychology: General 145 (3), 366, 2016
235 · 2016
Attention is all you need
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A Gomez, ...
NIPS, 2017
209 · 2017
Attention is all you need
A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, AN Gomez, ...
Advances in neural information processing systems 30, 2017
189 · 2017
Stand-alone self-attention in vision models
N Parmar, P Ramachandran, A Vaswani, I Bello, A Levskaya, J Shlens
169 · 2019
Corpora generation for grammatical error correction
J Lichtarge, C Alberti, S Kumar, N Shazeer, N Parmar, S Tong
arXiv preprint arXiv:1904.05780, 2019
166 · 2019
Articles 1–20