Articles with public access mandates - Jason D. Lee
Publicly available: 61
Gradient descent finds global minima of deep neural networks
SS Du, JD Lee, H Li, L Wang, X Zhai
arXiv preprint arXiv:1811.03804, 2018
Mandates: US Department of Defense, National Natural Science Foundation of China, UK …
Exact post-selection inference, with application to the lasso
JD Lee, DL Sun, Y Sun, JE Taylor
The Annals of Statistics 44 (3), 907-927, 2016
Mandates: US National Science Foundation, US National Institutes of Health
Gradient descent only converges to minimizers
JD Lee, M Simchowitz, MI Jordan, B Recht
Conference on learning theory, 1246-1257, 2016
Mandates: US National Science Foundation, US Department of Energy
On the theory of policy gradient methods: Optimality, approximation, and distribution shift
A Agarwal, SM Kakade, JD Lee, G Mahajan
Journal of Machine Learning Research 22 (98), 1-76, 2021
Mandates: US Department of Defense, UK Engineering and Physical Sciences Research Council
Matrix completion and low-rank SVD via fast alternating least squares
T Hastie, R Mazumder, J Lee, R Zadeh
Journal of Machine Learning Research, 2014
Mandates: US National Institutes of Health
A kernelized Stein discrepancy for goodness-of-fit tests
Q Liu, J Lee, M Jordan
International conference on machine learning, 276-284, 2016
Mandates: US National Science Foundation
Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
M Soltanolkotabi, A Javanmard, JD Lee
IEEE Transactions on Information Theory 65 (2), 742-769, 2018
Mandates: US National Science Foundation, US Department of Defense
Characterizing implicit bias in terms of optimization geometry
S Gunasekar, J Lee, D Soudry, N Srebro
International Conference on Machine Learning, 1832-1841, 2018
Mandates: US National Science Foundation
Proximal Newton-type methods for minimizing composite functions
JD Lee, Y Sun, MA Saunders
SIAM Journal on Optimization 24 (3), 1420-1443, 2014
Mandates: US National Institutes of Health
Kernel and rich regimes in overparametrized models
B Woodworth, S Gunasekar, JD Lee, E Moroshko, P Savarese, I Golan, ...
Conference on Learning Theory, 3635-3673, 2020
Mandates: US National Science Foundation
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, A Singh, B Poczos
Advances in neural information processing systems 30, 2017
Mandates: US National Science Foundation, US Department of Energy, US Department of …
On the power of over-parametrization in neural networks with quadratic activation
S Du, J Lee
International conference on machine learning, 1329-1338, 2018
Mandates: US National Science Foundation, US Department of Defense, UK Engineering and …
Stochastic subgradient method converges on tame functions
D Davis, D Drusvyatskiy, S Kakade, JD Lee
Foundations of computational mathematics 20 (1), 119-154, 2020
Mandates: US National Science Foundation, US Department of Defense
Gradient descent learns one-hidden-layer CNN: Don't be afraid of spurious local minima
S Du, J Lee, Y Tian, A Singh, B Poczos
International Conference on Machine Learning, 1339-1348, 2018
Mandates: US National Science Foundation, US Department of Defense, UK Engineering and …
Learning the structure of mixed graphical models
JD Lee, TJ Hastie
Journal of Computational and Graphical Statistics 24 (1), 230-253, 2015
Mandates: US National Institutes of Health
Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced
SS Du, W Hu, JD Lee
Advances in neural information processing systems 31, 2018
Mandates: US Department of Defense
Regularization matters: Generalization and optimization of neural nets vs their induced kernel
C Wei, JD Lee, Q Liu, T Ma
Advances in Neural Information Processing Systems, 9712-9724, 2019
Mandates: US National Science Foundation, US Department of Defense, UK Engineering and …
Predicting what you already know helps: Provable self-supervised learning
JD Lee, Q Lei, N Saunshi, J Zhuo
Advances in Neural Information Processing Systems 34, 309-323, 2021
Mandates: US National Science Foundation, US Department of Defense
On the convergence and robustness of training gans with regularized optimal transport
M Sanjabi, J Ba, M Razaviyayn, JD Lee
Advances in Neural Information Processing Systems 31, 2018
Mandates: US Department of Defense
Convergence of gradient descent on separable data
MS Nacson, J Lee, S Gunasekar, PHP Savarese, N Srebro, D Soudry
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Mandates: US National Science Foundation
Publication and funding information is determined automatically by a computer program.