| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Llama 2: Open foundation and fine-tuned chat models | H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ... | arXiv preprint arXiv:2307.09288 | 10612 | 2023 |
| Dense passage retrieval for open-domain question answering | V Karpukhin, B Oğuz, S Min, P Lewis, L Wu, S Edunov, D Chen, W Yih | arXiv preprint arXiv:2004.04906 | 3314 | 2020 |
| fairseq: A fast, extensible toolkit for sequence modeling | M Ott | arXiv preprint arXiv:1904.01038 | 3262 | 2019 |
| Multilingual denoising pre-training for neural machine translation | Y Liu | arXiv preprint arXiv:2001.08210 | 1860 | 2020 |
| Understanding back-translation at scale | S Edunov | arXiv preprint arXiv:1808.09381 | 1367 | 2018 |
| The Llama 3 herd of models | A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ... | arXiv preprint arXiv:2407.21783 | 1109 | 2024 |
| Beyond English-centric multilingual machine translation | A Fan, S Bhosale, H Schwenk, Z Ma, A El-Kishky, S Goyal, M Baines, ... | Journal of Machine Learning Research 22 (107), 1-48 | 815 | 2021 |
| No Language Left Behind: Scaling human-centered machine translation | NLLB Team, MR Costa-jussà, J Cross, O Çelebi, M Elbayad, K Heafield, ... | arXiv preprint arXiv:2207.04672 | 715* | 2022 |
| Scaling neural machine translation | M Ott, S Edunov, D Grangier, M Auli | arXiv preprint arXiv:1806.00187 | 674 | 2018 |
| One trillion edges: Graph processing at Facebook-scale | A Ching, S Edunov, M Kabiljo, D Logothetis, S Muthukrishnan | Proceedings of the VLDB Endowment 8 (12), 1804-1815 | 579 | 2015 |
| Facebook FAIR's WMT19 news translation task submission | N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov | arXiv preprint arXiv:1907.06616 | 432 | 2019 |
| Cloze-driven pretraining of self-attention networks | A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli | arXiv preprint arXiv:1903.07785 | 275 | 2019 |
| CCMatrix: Mining billions of high-quality parallel sentences on the web | H Schwenk, G Wenzek, S Edunov, E Grave, A Joulin | arXiv preprint arXiv:1911.04944 | 230 | 2019 |
| Classical structured prediction losses for sequence to sequence learning | S Edunov, M Ott, M Auli, D Grangier, MA Ranzato | arXiv preprint arXiv:1711.04956 | 210 | 2017 |
| Pre-trained language model representations for language generation | S Edunov, A Baevski, M Auli | arXiv preprint arXiv:1903.09722 | 170 | 2019 |
| Three and a half degrees of separation | S Edunov, C Diuk, IO Filiz, S Bhagat, M Burke | Research at Facebook 694 | 167* | 2016 |
| Playing the lottery with rewards and multiple languages: Lottery tickets in RL and NLP | H Yu, S Edunov, Y Tian, AS Morcos | arXiv preprint arXiv:1906.02768 | 144 | 2019 |
| Effective long-context scaling of foundation models | W Xiong, J Liu, I Molybog, H Zhang, P Bhargava, R Hou, L Martin, ... | arXiv preprint arXiv:2309.16039 | 137 | 2023 |
| On the evaluation of machine translation systems trained with back-translation | S Edunov, M Ott, MA Ranzato, M Auli | arXiv preprint arXiv:1908.05204 | 101 | 2019 |
| Facebook AI WMT21 news translation task submission | C Tran, S Bhosale, J Cross, P Koehn, S Edunov, A Fan | arXiv preprint arXiv:2108.03265 | 97 | 2021 |