On the opportunities and risks of foundation models. R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, et al. arXiv preprint arXiv:2108.07258, 2021. Cited by 4190.
Certified defenses against adversarial examples. A Raghunathan, J Steinhardt, P Liang. arXiv preprint arXiv:1801.09344, 2018. Cited by 1129.
Unlabeled data improves adversarial robustness. Y Carmon, A Raghunathan, L Schmidt, JC Duchi, PS Liang. Advances in Neural Information Processing Systems 32, 2019. Cited by 816.
Fine-tuning can distort pretrained features and underperform out-of-distribution. A Kumar, A Raghunathan, R Jones, T Ma, P Liang. arXiv preprint arXiv:2202.10054, 2022. Cited by 661.
An explanation of in-context learning as implicit Bayesian inference. SM Xie, A Raghunathan, P Liang, T Ma. arXiv preprint arXiv:2111.02080, 2021. Cited by 625.
Just train twice: Improving group robustness without training group information. EZ Liu, B Haghgoo, AS Chen, A Raghunathan, PW Koh, S Sagawa, et al. International Conference on Machine Learning, 6781-6792, 2021. Cited by 513.
Semidefinite relaxations for certifying robustness to adversarial examples. A Raghunathan, J Steinhardt, PS Liang. Advances in Neural Information Processing Systems 31, 2018. Cited by 506.
An investigation of why overparameterization exacerbates spurious correlations. S Sagawa, A Raghunathan, PW Koh, P Liang. International Conference on Machine Learning, 8346-8356, 2020. Cited by 387.
The pitfalls of simplicity bias in neural networks. H Shah, K Tamuly, A Raghunathan, P Jain, P Netrapalli. Advances in Neural Information Processing Systems 33, 9573-9585, 2020. Cited by 384.
Certified robustness to adversarial word substitutions. R Jia, A Raghunathan, K Göksel, P Liang. arXiv preprint arXiv:1909.00986, 2019. Cited by 327.
Accuracy on the line: On the strong correlation between out-of-distribution and in-distribution generalization. JP Miller, R Taori, A Raghunathan, S Sagawa, PW Koh, V Shankar, et al. International Conference on Machine Learning, 7721-7735, 2021. Cited by 290.
Adversarial training can hurt generalization. A Raghunathan, SM Xie, F Yang, JC Duchi, P Liang. arXiv preprint arXiv:1906.06032, 2019. Cited by 277.
Understanding and mitigating the tradeoff between robustness and accuracy. A Raghunathan, SM Xie, F Yang, J Duchi, P Liang. arXiv preprint arXiv:2002.10716, 2020. Cited by 260.
DROCC: Deep robust one-class classification. S Goyal, A Raghunathan, M Jain, HV Simhadri, P Jain. International Conference on Machine Learning, 3711-3721, 2020. Cited by 190.
Finetune like you pretrain: Improved finetuning of zero-shot vision models. S Goyal, A Kumar, S Garg, Z Kolter, A Raghunathan. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023. Cited by 123.
Automatically auditing large language models via discrete optimization. E Jones, A Dragan, A Raghunathan, J Steinhardt. International Conference on Machine Learning, 15307-15329, 2023. Cited by 118.
Robust encodings: A framework for combating adversarial typos. E Jones, R Jia, A Raghunathan, P Liang. arXiv preprint arXiv:2005.01229, 2020. Cited by 118.
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming. S Dathathri, K Dvijotham, A Kurakin, A Raghunathan, J Uesato, RR Bunel, et al. Advances in Neural Information Processing Systems 33, 5318-5331, 2020. Cited by 118.
Test-time adaptation via conjugate pseudo-labels. S Goyal, M Sun, A Raghunathan, JZ Kolter. Advances in Neural Information Processing Systems 35, 6204-6218, 2022. Cited by 81.