Optimal rates for multi-pass stochastic gradient methods
J Lin, L Rosasco
Journal of Machine Learning Research 18 (97), 1-47, 2017
Cited by 140*

Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
J Lin, A Rudi, L Rosasco, V Cevher
Applied and Computational Harmonic Analysis 48 (3), 868-890, 2020
Cited by 108

Generalization properties and implicit regularization for multiple passes SGM
J Lin, R Camoriano, L Rosasco
International Conference on Machine Learning, 2340-2348, 2016
Cited by 84

Optimal convergence for distributed learning with stochastic gradient methods and spectral algorithms
J Lin, V Cevher
Journal of Machine Learning Research 21 (1), 5852-5914, 2020
Cited by 74*

New bounds for restricted isometry constants with coherent tight frames
J Lin, S Li, Y Shen
IEEE Transactions on Signal Processing 61 (3), 611-621, 2013
Cited by 54

Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO
J Lin, S Li
Applied and Computational Harmonic Analysis 37 (1), 126-139, 2014
Cited by 50

Iterative regularization for learning with convex loss functions
J Lin, L Rosasco, DX Zhou
Journal of Machine Learning Research 17 (1), 2718-2755, 2016
Cited by 44

Block sparse recovery via mixed l2/l1 minimization
JH Lin, S Li
Acta Mathematica Sinica, English Series 29 (7), 1401-1412, 2013
Cited by 42

Online learning algorithms can converge comparably fast as batch learning
J Lin, DX Zhou
IEEE Transactions on Neural Networks and Learning Systems 29 (6), 2367-2378, 2017
Cited by 41

Compressed sensing with coherent tight frame via lq minimization
S Li, J Lin
Inverse Problems and Imaging 8 (3), 761-777, 2014
Cited by 40*

Learning theory of randomized Kaczmarz algorithm
J Lin, DX Zhou
Journal of Machine Learning Research 16 (1), 3341-3365, 2015
Cited by 39

Compressed data separation with redundant dictionaries
J Lin, S Li, Y Shen
IEEE Transactions on Information Theory 59 (7), 4309-4315, 2013
Cited by 26

Restricted q-Isometry Properties Adapted to Frames for Nonconvex lq-Analysis
J Lin, S Li
IEEE Transactions on Information Theory 62 (8), 4733-4747, 2016
Cited by 24

Online pairwise learning algorithms with convex loss functions
J Lin, Y Lei, B Zhang, DX Zhou
Information Sciences 406, 57-70, 2017
Cited by 23

Nonuniform support recovery from noisy random measurements by orthogonal matching pursuit
J Lin, S Li
Journal of Approximation Theory 165 (1), 20-40, 2013
Cited by 16

Convergences of regularized algorithms and stochastic gradient methods with random projections
J Lin, V Cevher
Journal of Machine Learning Research 21, 1-44, 2020
Cited by 15

Convergence of projected Landweber iteration for matrix rank minimization
J Lin, S Li
Applied and Computational Harmonic Analysis 36 (2), 316-325, 2014
Cited by 14

Optimal Rates for Learning with Nyström Stochastic Gradient Methods
J Lin, L Rosasco
arXiv preprint arXiv:1710.07797, 2017
Cited by 13

Revisiting Convergence of AdaGrad with Relaxed Assumptions
Y Hong, J Lin
Uncertainty in Artificial Intelligence, 2024
Cited by 12

Iterative hard thresholding for compressed data separation
S Li, J Lin, D Liu, W Sun
Journal of Complexity 59, 101469, 2020
Cited by 12