Articles with public access - Simon Shaolei Du
Available somewhere: 63
Gradient descent finds global minima of deep neural networks
SS Du, JD Lee, H Li, L Wang, X Zhai
International Conference on Machine Learning 2019, 2018
Mandates: US Department of Defense, National Natural Science Foundation of China, UK …
Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks
S Arora, SS Du, W Hu, Z Li, R Wang
International Conference on Machine Learning 2019, 2019
Mandates: US National Science Foundation, US Department of Defense
On exact computation with an infinitely wide neural net
S Arora, SS Du, W Hu, Z Li, RR Salakhutdinov, R Wang
Advances in neural information processing systems 32, 2019
Mandates: US National Science Foundation, US Department of Defense
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, A Singh, B Poczos
Advances in neural information processing systems 30, 2017
Mandates: US National Science Foundation, US Department of Energy, US Department of …
Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels
SS Du, K Hou, B Póczos, R Salakhutdinov, R Wang, K Xu
Advances in Neural Information Processing Systems 2019, 2019
Mandates: US National Science Foundation, US Department of Defense
Understanding the acceleration phenomenon via high-resolution differential equations
B Shi, SS Du, MI Jordan, WJ Su
Mathematical Programming, 1-70, 2022
Mandates: US National Science Foundation, US Department of Defense
On the power of over-parametrization in neural networks with quadratic activation
SS Du, JD Lee
International Conference on Machine Learning 2018, 2018
Mandates: US National Science Foundation, US Department of Defense, UK Engineering and …
Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced
SS Du, W Hu, JD Lee
Advances in neural information processing systems 31, 2018
Mandates: US Department of Defense
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
SS Du, JD Lee, Y Tian, B Poczos, A Singh
International Conference on Machine Learning 2018, 2017
Mandates: US National Science Foundation, US Department of Defense, UK Engineering and …
Computationally efficient robust estimation of sparse functionals
SS Du, S Balakrishnan, A Singh
Conference on Learning Theory, 2017, 2017
Mandates: US National Science Foundation, US Department of Energy
Linear convergence of the primal-dual gradient method for convex-concave saddle point problems without strong convexity
SS Du, W Hu
International Conference on Artificial Intelligence and Statistics 2019, 2018
Mandates: US National Science Foundation, US Department of Defense
Stochastic zeroth-order optimization in high dimensions
Y Wang, S Du, S Balakrishnan, A Singh
International Conference on Artificial Intelligence and Statistics 2018, 2017
Mandates: US National Science Foundation, US Department of Defense
On reward-free reinforcement learning with linear function approximation
R Wang, SS Du, L Yang, RR Salakhutdinov
Advances in neural information processing systems 33, 17816-17826, 2020
Mandates: US National Science Foundation, US Department of Defense
Width provably matters in optimization for deep linear neural networks
SS Du, W Hu
International Conference on Machine Learning 2019, 2019
Mandates: US National Science Foundation, US Department of Defense
How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
SS Du, Y Wang, X Zhai, S Balakrishnan, R Salakhutdinov, A Singh
Advances in Neural Information Processing Systems 2018, 2018
Mandates: US Department of Defense
Adaloss: A computationally-efficient and provably convergent adaptive gradient method
X Wu, Y Xie, SS Du, R Ward
Proceedings of the AAAI Conference on Artificial Intelligence 36 (8), 8691-8699, 2022
Mandates: US National Science Foundation, US Department of Defense
Provable Representation Learning for Imitation Learning via Bi-level Optimization
S Arora, SS Du, S Kakade, Y Luo, N Saunshi
International Conference on Machine Learning 2020, 2020
Mandates: US National Science Foundation, US Department of Defense
Hypothesis Transfer Learning via Transformation Functions
SS Du, J Koushik, A Singh, B Poczos
Advances in Neural Information Processing Systems, 2017, 2016
Mandates: US National Science Foundation, US Department of Energy
Is long horizon RL more difficult than short horizon RL?
R Wang, SS Du, L Yang, S Kakade
Advances in Neural Information Processing Systems 33, 9075-9085, 2020
Mandates: US National Science Foundation, US Department of Defense
Impact of representation learning in linear bandits
J Yang, W Hu, JD Lee, SS Du
International Conference on Learning Representations, 2021
Mandates: US National Science Foundation, US Department of Defense
Publication and funding information is determined automatically by a computer program.