Haihao (Sean) Lu
Assistant Professor, MIT
Verified email at mit.edu - Homepage
Title
Cited by
Year
Relatively smooth convex optimization by first-order methods, and applications
H Lu, RM Freund, Y Nesterov
SIAM Journal on Optimization 28 (1), 333-354, 2018
399 · 2018
The best of many worlds: Dual mirror descent for online allocation problems
SR Balseiro, H Lu, V Mirrokni
Operations Research 71 (1), 101-119, 2023
155* · 2023
Depth creates no bad local minima
H Lu, K Kawaguchi
arXiv preprint arXiv:1702.08580, 2017
129 · 2017
“Relative continuity” for non-Lipschitz nonsmooth convex optimization using stochastic (or deterministic) mirror descent
H Lu
INFORMS Journal on Optimization 1 (4), 288-303, 2019
80 · 2019
Practical large-scale linear programming using primal-dual hybrid gradient
D Applegate, M Díaz, O Hinder, H Lu, M Lubin, B O'Donoghue, W Schudy
Advances in Neural Information Processing Systems 34, 20243-20257, 2021
77 · 2021
Ordered SGD: A new stochastic optimization framework for empirical risk minimization
K Kawaguchi, H Lu
International Conference on Artificial Intelligence and Statistics, 669-679, 2020
73 · 2020
Regularized online allocation problems: Fairness and beyond
S Balseiro, H Lu, V Mirrokni
International Conference on Machine Learning, 630-639, 2021
54 · 2021
Faster first-order primal-dual methods for linear programming using restarts and sharpness
D Applegate, O Hinder, H Lu, M Lubin
Mathematical Programming 201 (1), 133-184, 2023
53 · 2023
Accelerating gradient boosting machines
H Lu, SP Karimireddy, N Ponomareva, V Mirrokni
International Conference on Artificial Intelligence and Statistics, 516-526, 2020
47 · 2020
The landscape of the proximal point method for nonconvex–nonconcave minimax optimization
B Grimmer, H Lu, P Worah, V Mirrokni
Mathematical Programming 201 (1), 373-407, 2023
43* · 2023
Randomized gradient boosting machine
H Lu, R Mazumder
SIAM Journal on Optimization 30 (4), 2780-2808, 2020
42 · 2020
Accelerating Greedy Coordinate Descent Methods
H Lu, R Freund, V Mirrokni
International Conference on Machine Learning, 3257-3266, 2018
37 · 2018
An O(s^r)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to the Linear Convergence of Minimax Problems
H Lu
Mathematical Programming 194, 1061-1112, 2022
35* · 2022
Generalized stochastic frank–wolfe algorithm with stochastic “substitute” gradient for structured convex optimization
H Lu, RM Freund
Mathematical Programming 187 (1), 317-349, 2021
35 · 2021
New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
RM Freund, H Lu
Mathematical Programming 170, 445-477, 2018
35 · 2018
Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
S Wang, W Zhou, H Lu, A Maleki, V Mirrokni
International Conference on Machine Learning, 5228-5237, 2018
33 · 2018
Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming
D Applegate, M Díaz, H Lu, M Lubin
SIAM Journal on Optimization 34 (1), 459-484, 2024
26 · 2024
Approximate leave-one-out for high-dimensional non-differentiable learning problems
S Wang, W Zhou, A Maleki, H Lu, V Mirrokni
arXiv preprint arXiv:1810.02716, 2018
21 · 2018
cuPDLP.jl: A GPU implementation of restarted primal-dual hybrid gradient for linear programming in Julia
H Lu, J Yang
arXiv preprint arXiv:2311.12180, 2023
12 · 2023
On the Infimal Sub-differential Size of Primal-Dual Hybrid Gradient Method and Beyond
H Lu, J Yang
arXiv preprint arXiv:2206.12061, 2022
12 · 2022