Linformer: Self-Attention with Linear Complexity. S Wang, B Li, M Khabsa, H Fang, H Ma. arXiv preprint arXiv:2006.04768, 2020. Cited by 1925.
Implicit representations of meaning in neural language models. BZ Li, M Nye, J Andreas. arXiv preprint arXiv:2106.00737, 2021. Cited by 184.
Efficient One-Pass End-to-End Entity Linking for Questions. BZ Li, S Min, S Iyer, Y Mehdad, W Yih. Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020. Cited by 152.
Inspecting and editing knowledge representations in language models. E Hernandez, BZ Li, J Andreas. arXiv preprint arXiv:2304.00740, 2023. Cited by 117.
Language Models as Fact Checkers? N Lee, BZ Li, S Wang, W Yih, H Ma, M Khabsa. Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER …, 2020. Cited by 92.
Eliciting human preferences with language models. BZ Li, A Tamkin, N Goodman, J Andreas. arXiv preprint arXiv:2310.11589, 2023. Cited by 53.
Active Learning for Coreference Resolution using Discrete Annotation. BZ Li, G Stanovsky, L Zettlemoyer. Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020. Cited by 32.
On unifying misinformation detection. N Lee, BZ Li, S Wang, P Fung, H Ma, W Yih, M Khabsa. arXiv preprint arXiv:2104.05243, 2021. Cited by 26.
Bayesian preference elicitation with language models. K Handa, Y Gal, E Pavlick, N Goodman, J Andreas, A Tamkin, BZ Li. arXiv preprint arXiv:2403.05534, 2024. Cited by 15.
Quantifying adaptability in pre-trained language models with 500 tasks. BZ Li, J Yu, M Khabsa, L Zettlemoyer, A Halevy, J Andreas. arXiv preprint arXiv:2112.03204, 2021. Cited by 15.
Learning with language-guided state abstractions. A Peng, I Sucholutsky, BZ Li, TR Sumers, TL Griffiths, J Andreas, JA Shah. arXiv preprint arXiv:2402.18759, 2024. Cited by 12.
On the influence of masking policies in intermediate pre-training. Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa. arXiv preprint arXiv:2104.08840, 2021. Cited by 12.
Studying strategically: Learning to mask for closed-book QA. Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa. arXiv preprint arXiv:2012.15856, 2020. Cited by 10.
Language modeling with latent situations. BZ Li, M Nye, J Andreas. arXiv preprint arXiv:2212.10012, 2022. Cited by 9.
Preference-conditioned language-guided abstraction. A Peng, A Bobu, BZ Li, TR Sumers, I Sucholutsky, N Kumar, TL Griffiths, ... Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot …, 2024. Cited by 8.
LaMPP: language models as probabilistic priors for perception and action. BZ Li, W Chen, P Sharma, J Andreas. arXiv preprint arXiv:2302.02801, 2023. Cited by 7.
Adaptive language-guided abstraction from contrastive explanations. A Peng, BZ Li, I Sucholutsky, N Kumar, JA Shah, J Andreas, A Bobu. arXiv preprint arXiv:2409.08212, 2024. Cited by 4.
Language Modeling with Editable External Knowledge. BZ Li, E Liu, A Ross, A Zeitoun, G Neubig, J Andreas. arXiv preprint arXiv:2406.11830, 2024. Cited by 2.
Toward Interactive Dictation. BZ Li, J Eisner, A Pauls, S Thomson. arXiv preprint arXiv:2307.04008, 2023. Cited by 2.