Jerry Li
Verified email at cs.washington.edu - Homepage
Title
Cited by
Year
QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
D Alistarh, D Grubic, J Li, R Tomioka, M Vojnovic
Advances in Neural Information Processing Systems, 1707-1718, 2017
Cited by 2000 · 2017
Spectral signatures in backdoor attacks
B Tran, J Li, A Madry
Advances in neural information processing systems 31, 2018
Cited by 878 · 2018
Provably robust deep learning via adversarially trained smoothed classifiers
H Salman, J Li, I Razenshteyn, P Zhang, H Zhang, S Bubeck, G Yang
Advances in neural information processing systems 32, 2019
Cited by 603 · 2019
Robust estimators in high-dimensions without the computational intractability
I Diakonikolas, G Kamath, D Kane, J Li, A Moitra, A Stewart
SIAM Journal on Computing 48 (2), 742-864, 2019
Cited by 542 · 2019
Quantum advantage in learning from experiments
HY Huang, M Broughton, J Cotler, S Chen, J Li, M Mohseni, H Neven, ...
Science 376 (6598), 1182-1186, 2022
Cited by 509 · 2022
Aligning AI with shared human values
D Hendrycks, C Burns, S Basart, A Critch, J Li, D Song, J Steinhardt
arXiv preprint arXiv:2008.02275, 2020
Cited by 431 · 2020
Byzantine stochastic gradient descent
D Alistarh, Z Allen-Zhu, J Li
Advances in neural information processing systems 31, 2018
Cited by 346 · 2018
Sever: A robust meta-algorithm for stochastic optimization
I Diakonikolas, G Kamath, D Kane, J Li, J Steinhardt, A Stewart
International Conference on Machine Learning, 1596-1606, 2019
Cited by 337 · 2019
Being robust (in high dimensions) can be practical
I Diakonikolas, G Kamath, DM Kane, J Li, A Moitra, A Stewart
International Conference on Machine Learning, 999-1008, 2017
Cited by 273 · 2017
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
S Chen, S Chewi, J Li, Y Li, A Salim, AR Zhang
arXiv preprint arXiv:2209.11215, 2022
Cited by 256 · 2022
ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning
H Zhang, J Li, K Kara, D Alistarh, J Liu, C Zhang
International Conference on Machine Learning, 4035-4043, 2017
Cited by 252* · 2017
Randomized smoothing of all shapes and sizes
G Yang, T Duan, JE Hu, H Salman, I Razenshteyn, J Li
International Conference on Machine Learning, 10693-10705, 2020
Cited by 225 · 2020
Automatic prompt optimization with "gradient descent" and beam search
R Pryzant, D Iter, J Li, YT Lee, C Zhu, M Zeng
arXiv preprint arXiv:2305.03495, 2023
Cited by 215 · 2023
Mixture models, robustness, and sum of squares proofs
SB Hopkins, J Li
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing …, 2018
Cited by 201 · 2018
Privately learning high-dimensional distributions
G Kamath, J Li, V Singhal, J Ullman
Conference on Learning Theory, 1853-1902, 2019
Cited by 164 · 2019
The SprayList: A scalable relaxed priority queue
D Alistarh, J Kopinsky, J Li, N Shavit
Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of …, 2015
Cited by 151 · 2015
Robustly learning a Gaussian: Getting optimal error, efficiently
I Diakonikolas, G Kamath, DM Kane, J Li, A Moitra, A Stewart
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete …, 2018
Cited by 147 · 2018
Computationally efficient robust sparse estimation in high dimensions
S Balakrishnan, SS Du, J Li, A Singh
Conference on Learning Theory, 169-212, 2017
Cited by 141 · 2017
Exponential separations between learning with and without quantum memory
S Chen, J Cotler, HY Huang, J Li
2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS …, 2022
Cited by 129 · 2022
Quantum entropy scoring for fast robust mean estimation and improved outlier detection
Y Dong, S Hopkins, J Li
Advances in Neural Information Processing Systems 32, 2019
Cited by 112 · 2019
Articles 1–20