Katherine Lee
Researcher, Google DeepMind
Verified email at google.com
Title · Cited by · Year
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
The Journal of Machine Learning Research 21 (1), 5485-5551, 2020
21934 · 2020
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
5867 · 2023
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
3405 · 2023
Extracting Training Data from Large Language Models.
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security Symposium 6, 2021
2052 · 2021
PaLM 2 Technical Report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
1617 · 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
1255 · 2024
Gemma: Open models based on gemini research and technology
G Team, T Mesnard, C Hardin, R Dadashi, S Bhupatiraju, S Pathak, ...
arXiv preprint arXiv:2403.08295, 2024
1100 · 2024
Quantifying memorization across neural language models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramer, C Zhang
The Eleventh International Conference on Learning Representations, 2022
715 · 2022
Deduplicating training data makes language models better
K Lee, D Ippolito, A Nystrom, C Zhang, D Eck, C Callison-Burch, N Carlini
arXiv preprint arXiv:2107.06499, 2021
624 · 2021
Scalable Extraction of Training Data from (Production) Language Models
M Nasr, N Carlini, J Hayase, M Jagielski, AF Cooper, D Ippolito, ...
arXiv preprint arXiv:2311.17035, 2023
321 · 2023
Are aligned neural networks adversarially aligned?
N Carlini, M Nasr, CA Choquette-Choo, M Jagielski, I Gao, PWW Koh, ...
Advances in Neural Information Processing Systems 36, 61478-61500, 2023
286 · 2023
What Does it Mean for a Language Model to Preserve Privacy?
H Brown, K Lee, F Mireshghallah, R Shokri, F Tramèr
2022 ACM Conference on Fairness, Accountability, and Transparency, 2280-2292, 2022
259 · 2022
WT5?! Training Text-to-Text Models to Explain their Predictions
S Narang, C Raffel, K Lee, A Roberts, N Fiedel, K Malkan
arXiv preprint arXiv:2004.14546, 2020
211 · 2020
Counterfactual memorization in neural language models
C Zhang, D Ippolito, K Lee, M Jagielski, F Tramèr, N Carlini
Advances in Neural Information Processing Systems 36, 39321-39362, 2023
159 · 2023
Hallucinations in neural machine translation
K Lee, O Firat, A Agarwal, C Fannjiang, D Sussillo
157 · 2018
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy
D Ippolito, F Tramèr, M Nasr, C Zhang, M Jagielski, K Lee, ...
arXiv preprint arXiv:2210.17546, 2022
151 · 2022
A pretrainer’s guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity
S Longpre, G Yauney, E Reif, K Lee, A Roberts, B Zoph, D Zhou, J Wei, ...
Proceedings of the 2024 Conference of the North American Chapter of the …, 2024
146 · 2024
PaLM: Scaling language modeling with pathways, 2022
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
arXiv preprint arXiv:2204.02311, 2022
137 · 2022
MADLAD-400: A multilingual and document-level large audited dataset
S Kudugunta, I Caswell, B Zhang, X Garcia, D Xin, A Kusupati, R Stella, ...
Advances in Neural Information Processing Systems 36, 67284-67296, 2023
111 · 2023
Measuring Forgetting of Memorized Training Examples
M Jagielski, O Thakkar, F Tramèr, D Ippolito, K Lee, N Carlini, E Wallace, ...
arXiv preprint arXiv:2207.00099, 2022
106 · 2022
Articles 1–20