Jacob Steinhardt
Verified email at cs.stanford.edu - Homepage
Title · Cited by · Year
Concrete problems in AI safety
D Amodei, C Olah, J Steinhardt, P Christiano, J Schulman, D Mané
arXiv preprint arXiv:1606.06565, 2016
Cited by 2990 · 2016
Measuring massive multitask language understanding
D Hendrycks, C Burns, S Basart, A Zou, M Mazeika, D Song, J Steinhardt
arXiv preprint arXiv:2009.03300, 2020
Cited by 2503 · 2020
The many faces of robustness: A critical analysis of out-of-distribution generalization
D Hendrycks, S Basart, N Mu, S Kadavath, F Wang, E Dorundo, R Desai, ...
Proceedings of the IEEE/CVF international conference on computer vision …, 2021
Cited by 1650 · 2021
Natural adversarial examples
D Hendrycks, K Zhao, S Basart, J Steinhardt, D Song
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021
Cited by 1491 · 2021
The malicious use of artificial intelligence: Forecasting, prevention, and mitigation
M Brundage, S Avin, J Clark, H Toner, P Eckersley, B Garfinkel, A Dafoe, ...
arXiv preprint arXiv:1802.07228, 2018
Cited by 1134 · 2018
Certified defenses against adversarial examples
A Raghunathan, J Steinhardt, P Liang
arXiv preprint arXiv:1801.09344, 2018
Cited by 1129 · 2018
Measuring mathematical problem solving with the MATH dataset
D Hendrycks, C Burns, S Kadavath, A Arora, S Basart, E Tang, D Song, ...
arXiv preprint arXiv:2103.03874, 2021
Cited by 1025 · 2021
Certified defenses for data poisoning attacks
J Steinhardt, PWW Koh, PS Liang
Advances in neural information processing systems 30, 2017
Cited by 906 · 2017
Jailbroken: How does LLM safety training fail?
A Wei, N Haghtalab, J Steinhardt
Advances in Neural Information Processing Systems 36, 2024
Cited by 622 · 2024
Semidefinite relaxations for certifying robustness to adversarial examples
A Raghunathan, J Steinhardt, PS Liang
Advances in neural information processing systems 31, 2018
Cited by 506 · 2018
Measuring coding challenge competence with APPS
D Hendrycks, S Basart, S Kadavath, M Mazeika, A Arora, E Guo, C Burns, ...
arXiv preprint arXiv:2105.09938, 2021
Cited by 482 · 2021
Scaling out-of-distribution detection for real-world settings
D Hendrycks, S Basart, M Mazeika, A Zou, J Kwon, M Mostajabi, ...
arXiv preprint arXiv:1911.11132, 2019
Cited by 432 · 2019
Aligning AI with shared human values
D Hendrycks, C Burns, S Basart, A Critch, J Li, D Song, J Steinhardt
arXiv preprint arXiv:2008.02275, 2020
Cited by 430 · 2020
Troubling Trends in Machine Learning Scholarship: Some ML papers suffer from flaws that could mislead the public and stymie future research.
ZC Lipton, J Steinhardt
Queue 17 (1), 45-77, 2019
Cited by 381 · 2019
SONYC: A system for monitoring, analyzing, and mitigating urban noise pollution
JP Bello, C Silva, O Nov, RL Dubois, A Arora, J Salamon, C Mydlarz, ...
Communications of the ACM 62 (2), 68-77, 2019
Cited by 341 · 2019
Learning from untrusted data
M Charikar, J Steinhardt, G Valiant
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing …, 2017
Cited by 339 · 2017
Sever: A robust meta-algorithm for stochastic optimization
I Diakonikolas, G Kamath, D Kane, J Li, J Steinhardt, A Stewart
International Conference on Machine Learning, 1596-1606, 2019
Cited by 337 · 2019
Interpretability in the wild: a circuit for indirect object identification in GPT-2 small
K Wang, A Variengien, A Conmy, B Shlegeris, J Steinhardt
arXiv preprint arXiv:2211.00593, 2022
Cited by 328 · 2022
Unsolved problems in ML safety
D Hendrycks, N Carlini, J Schulman, J Steinhardt
arXiv preprint arXiv:2109.13916, 2021
Cited by 316 · 2021
Progress measures for grokking via mechanistic interpretability
N Nanda, L Chan, T Lieberum, J Smith, J Steinhardt
arXiv preprint arXiv:2301.05217, 2023
Cited by 284 · 2023
Articles 1–20