Xiang Lisa Li
Verified email at stanford.edu - Homepage
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
4909 · 2021
Prefix-tuning: Optimizing continuous prompts for generation
XL Li, P Liang
arXiv preprint arXiv:2101.00190, 2021
4357 · 2021
Diffusion-LM improves controllable text generation
X Li, J Thickstun, I Gulrajani, PS Liang, TB Hashimoto
Advances in neural information processing systems 35, 4328-4343, 2022
754 · 2022
Contrastive decoding: Open-ended text generation as optimization
XL Li, A Holtzman, D Fried, P Liang, J Eisner, T Hashimoto, L Zettlemoyer, ...
arXiv preprint arXiv:2210.15097, 2022
291 · 2022
Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP
O Khattab, K Santhanam, XL Li, D Hall, P Liang, C Potts, M Zaharia
arXiv preprint arXiv:2212.14024, 2022
231 · 2022
Learning to compress prompts with gist tokens
J Mu, X Li, N Goodman
Advances in Neural Information Processing Systems 36, 19327-19352, 2023
175 · 2023
Evaluating human-language model interaction
M Lee, M Srivastava, A Hardy, J Thickstun, E Durmus, A Paranjape, ...
arXiv preprint arXiv:2212.09746, 2022
125 · 2022
Specializing word embeddings (for parsing) by information bottleneck
XL Li, J Eisner
arXiv preprint arXiv:1910.00163, 2019
79 · 2019
On the learnability of watermarks for language models
C Gu, XL Li, P Liang, T Hashimoto
arXiv preprint arXiv:2312.04469, 2023
39 · 2023
s1: Simple test-time scaling
N Muennighoff, Z Yang, W Shi, XL Li, L Fei-Fei, H Hajishirzi, L Zettlemoyer, ...
arXiv preprint arXiv:2501.19393, 2025
35 · 2025
Decoding methods for neural narrative generation
A DeLucia, A Mueller, XL Li, J Sedoc
arXiv preprint arXiv:2010.07375, 2020
31 · 2020
Posterior control of blackbox generation
XL Li, AM Rush
arXiv preprint arXiv:2005.04560, 2020
28 · 2020
Benchmarking and improving generator-validator consistency of language models
XL Li, V Shrivastava, S Li, T Hashimoto, P Liang
arXiv preprint arXiv:2310.01846, 2023
27 · 2023
Ensembles and cocktails: Robust finetuning for natural language generation
J Hewitt, XL Li, SM Xie, B Newman, P Liang
10 · 2021
AutoBencher: Creating salient, novel, difficult datasets for language models
XL Li, EZ Liu, P Liang, T Hashimoto
arXiv preprint arXiv:2407.08351, 2024
7 · 2024
TempLM: Distilling language models into template-based generators
T Zhang, M Lee, L Li, E Shen, TB Hashimoto
arXiv preprint arXiv:2205.11055, 2022
6 · 2022
A generative model for punctuation in dependency trees
XL Li, D Wang, J Eisner
Transactions of the Association for Computational Linguistics 7, 357-373, 2019
6 · 2019
Few-shot recalibration of language models
XL Li, U Khandelwal, K Guu
arXiv preprint arXiv:2403.18286, 2024
5 · 2024
Eliciting language model behaviors with investigator agents
XL Li, N Chowdhury, DD Johnson, T Hashimoto, P Liang, S Schwettmann, ...
arXiv preprint arXiv:2502.01236, 2025
2 · 2025
Auditing Prompt Caching in Language Model APIs
C Gu, XL Li, R Kuditipudi, P Liang, T Hashimoto
arXiv preprint arXiv:2502.07776, 2025
· 2025