On the opportunities and risks of foundation models. R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, et al. arXiv preprint arXiv:2108.07258, 2021. Cited by 4909.
Prefix-tuning: Optimizing continuous prompts for generation. X. L. Li, P. Liang. arXiv preprint arXiv:2101.00190, 2021. Cited by 4357.
Diffusion-LM improves controllable text generation. X. Li, J. Thickstun, I. Gulrajani, P. S. Liang, T. B. Hashimoto. Advances in Neural Information Processing Systems 35, 4328–4343, 2022. Cited by 754.
Contrastive decoding: Open-ended text generation as optimization. X. L. Li, A. Holtzman, D. Fried, P. Liang, J. Eisner, T. Hashimoto, L. Zettlemoyer, et al. arXiv preprint arXiv:2210.15097, 2022. Cited by 291.
Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP. O. Khattab, K. Santhanam, X. L. Li, D. Hall, P. Liang, C. Potts, M. Zaharia. arXiv preprint arXiv:2212.14024, 2022. Cited by 231.
Learning to compress prompts with gist tokens. J. Mu, X. Li, N. Goodman. Advances in Neural Information Processing Systems 36, 19327–19352, 2023. Cited by 175.
Evaluating human-language model interaction. M. Lee, M. Srivastava, A. Hardy, J. Thickstun, E. Durmus, A. Paranjape, et al. arXiv preprint arXiv:2212.09746, 2022. Cited by 125.
Specializing word embeddings (for parsing) by information bottleneck. X. L. Li, J. Eisner. arXiv preprint arXiv:1910.00163, 2019. Cited by 79.
On the learnability of watermarks for language models. C. Gu, X. L. Li, P. Liang, T. Hashimoto. arXiv preprint arXiv:2312.04469, 2023. Cited by 39.
s1: Simple test-time scaling. N. Muennighoff, Z. Yang, W. Shi, X. L. Li, L. Fei-Fei, H. Hajishirzi, L. Zettlemoyer, et al. arXiv preprint arXiv:2501.19393, 2025. Cited by 35.
Decoding methods for neural narrative generation. A. DeLucia, A. Mueller, X. L. Li, J. Sedoc. arXiv preprint arXiv:2010.07375, 2020. Cited by 31.
Posterior control of blackbox generation. X. L. Li, A. M. Rush. arXiv preprint arXiv:2005.04560, 2020. Cited by 28.
Benchmarking and improving generator-validator consistency of language models. X. L. Li, V. Shrivastava, S. Li, T. Hashimoto, P. Liang. arXiv preprint arXiv:2310.01846, 2023. Cited by 27.
Ensembles and cocktails: Robust finetuning for natural language generation. J. Hewitt, X. L. Li, S. M. Xie, B. Newman, P. Liang. 2021. Cited by 10.
AutoBencher: Creating salient, novel, difficult datasets for language models. X. L. Li, E. Z. Liu, P. Liang, T. Hashimoto. arXiv preprint arXiv:2407.08351, 2024. Cited by 7.
TempLM: Distilling language models into template-based generators. T. Zhang, M. Lee, L. Li, E. Shen, T. B. Hashimoto. arXiv preprint arXiv:2205.11055, 2022. Cited by 6.
A generative model for punctuation in dependency trees. X. L. Li, D. Wang, J. Eisner. Transactions of the Association for Computational Linguistics 7, 357–373, 2019. Cited by 6.
Few-shot recalibration of language models. X. L. Li, U. Khandelwal, K. Guu. arXiv preprint arXiv:2403.18286, 2024. Cited by 5.
Eliciting language model behaviors with investigator agents. X. L. Li, N. Chowdhury, D. D. Johnson, T. Hashimoto, P. Liang, S. Schwettmann, et al. arXiv preprint arXiv:2502.01236, 2025. Cited by 2.
Auditing prompt caching in language model APIs. C. Gu, X. L. Li, R. Kuditipudi, P. Liang, T. Hashimoto. arXiv preprint arXiv:2502.07776, 2025.