| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| RoBERTa: A robustly optimized BERT pretraining approach | Y Liu | arXiv preprint arXiv:1907.11692 | 29566* | 2019 |
| BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension | M Lewis | arXiv preprint arXiv:1910.13461 | 11382 | 2019 |
| Retrieval-augmented generation for knowledge-intensive NLP tasks | P Lewis, E Perez, A Piktus, F Petroni, V Karpukhin, N Goyal, H Küttler, ... | Advances in Neural Information Processing Systems 33, 9459-9474 | 4688 | 2020 |
| Multilingual denoising pre-training for neural machine translation | Y Liu | arXiv preprint arXiv:2001.08210 | 1854 | 2020 |
| Hierarchical neural story generation | A Fan, M Lewis, Y Dauphin | arXiv preprint arXiv:1805.04833 | 1738 | 2018 |
| The Llama 3 herd of models | A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ... | arXiv preprint arXiv:2407.21783 | 1388* | 2024 |
| End-to-end neural coreference resolution | K Lee, L He, M Lewis, L Zettlemoyer | arXiv preprint arXiv:1707.07045 | 1187 | 2017 |
| Rethinking the role of demonstrations: What makes in-context learning work? | S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer | arXiv preprint arXiv:2202.12837 | 1167 | 2022 |
| GPT3.int8(): 8-bit matrix multiplication for transformers at scale | T Dettmers, M Lewis, Y Belkada, L Zettlemoyer | Advances in Neural Information Processing Systems 35, 30318-30332 | 822 | 2022 |
| Generalization through memorization: Nearest neighbor language models | U Khandelwal, O Levy, D Jurafsky, L Zettlemoyer, M Lewis | arXiv preprint arXiv:1911.00172 | 794 | 2019 |
| LIMA: Less is more for alignment | C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ... | Advances in Neural Information Processing Systems 36 | 779 | 2024 |
| Deep semantic role labeling: What works and what’s next | L He, K Lee, M Lewis, L Zettlemoyer | Proceedings of the 55th Annual Meeting of the Association for Computational … | 586 | 2017 |
| InCoder: A generative model for code infilling and synthesis | D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ... | arXiv preprint arXiv:2204.05999 | 568 | 2022 |
| Train short, test long: Attention with linear biases enables input length extrapolation | O Press, NA Smith, M Lewis | arXiv preprint arXiv:2108.12409 | 561 | 2021 |
| Deal or no deal? End-to-end learning for negotiation dialogues | M Lewis, D Yarats, YN Dauphin, D Parikh, D Batra | arXiv preprint arXiv:1706.05125 | 516 | 2017 |
| Asking and answering questions to evaluate the factual consistency of summaries | A Wang, K Cho, M Lewis | arXiv preprint arXiv:2004.04228 | 445 | 2020 |
| MetaICL: Learning to learn in context | S Min, M Lewis, L Zettlemoyer, H Hajishirzi | arXiv preprint arXiv:2110.15943 | 409 | 2021 |
| FActScore: Fine-grained atomic evaluation of factual precision in long form text generation | S Min, K Krishna, X Lyu, M Lewis, W Yih, PW Koh, M Iyyer, L Zettlemoyer, ... | arXiv preprint arXiv:2305.14251 | 397 | 2023 |
| Measuring and narrowing the compositionality gap in language models | O Press, M Zhang, S Min, L Schmidt, NA Smith, M Lewis | arXiv preprint arXiv:2210.03350 | 386* | 2022 |
| REPLUG: Retrieval-augmented black-box language models | W Shi, S Min, M Yasunaga, M Seo, R James, M Lewis, L Zettlemoyer, ... | arXiv preprint arXiv:2301.12652 | 338* | 2023 |