Articles with public access mandates - Konstantin Mishchenko
Available somewhere: 7
Stochastic distributed learning with gradient quantization and double-variance reduction
S Horváth, D Kovalev, K Mishchenko, P Richtárik, S Stich
Optimization Methods and Software, 1-16, 2022
Mandates: Swiss National Science Foundation, Helmholtz Association
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
K Mishchenko, G Malinovsky, S Stich, P Richtárik
International Conference on Machine Learning, 15750-15769, 2022
Mandates: Agence Nationale de la Recherche
Asynchronous SGD Beats Minibatch SGD under Arbitrary Delays
K Mishchenko, F Bach, M Even, B Woodworth
Advances in Neural Information Processing Systems 35, 420-433, 2022
Mandates: European Commission, Agence Nationale de la Recherche
Super-universal regularized Newton method
N Doikov, K Mishchenko, Y Nesterov
SIAM Journal on Optimization 34 (1), 27-56, 2024
Mandates: European Commission, Agence Nationale de la Recherche
DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate
S Soori, K Mishchenko, A Mokhtari, MM Dehnavi, M Gurbuzbalaban
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Mandates: US National Science Foundation, Natural Sciences and Engineering Research Council of Canada
Adaptive proximal gradient method for convex optimization
Y Malitsky, K Mishchenko
Advances in Neural Information Processing Systems 37, 2024
Mandates: Austrian Science Fund
Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
B Woodworth, K Mishchenko, F Bach
International Conference on Machine Learning, 2023
Mandates: European Commission, Agence Nationale de la Recherche