Mark Schmidt
Professor of Computer Science, University of British Columbia
Verified email at cs.ubc.ca - Homepage
Title
Cited by
Year
Minimizing finite sums with the stochastic average gradient
M Schmidt, N Le Roux, F Bach
Mathematical Programming (MAPR), 2017
1515 · 2017
Linear Convergence of Gradient and Proximal-Gradient Methods under the Polyak-Łojasiewicz Condition
H Karimi, J Nutini, M Schmidt
European Conference on Machine Learning (ECML), 2016
1443 · 2016
A stochastic gradient method with an exponential convergence rate for finite training sets
N Le Roux, M Schmidt, FR Bach
Advances in Neural Information Processing Systems (NeurIPS), 2012
1115 · 2012
Convergence rates of inexact proximal-gradient methods for convex optimization
M Schmidt, N Le Roux, FR Bach
Advances in Neural Information Processing Systems (NeurIPS), 2011
702 · 2011
Fast optimization methods for l1 regularization: A comparative study and two new approaches
M Schmidt, G Fung, R Rosales
European Conference on Machine Learning (ECML), 2007
483 · 2007
Hybrid deterministic-stochastic methods for data fitting
MP Friedlander, M Schmidt
SIAM Journal on Scientific Computing (SISC), 2012
479 · 2012
Fast patch-based style transfer of arbitrary style
TQ Chen, M Schmidt
NeurIPS Workshop on Constructive Machine Learning, 2016
463 · 2016
Block-coordinate Frank-Wolfe optimization for structural SVMs
S Lacoste-Julien, M Jaggi, M Schmidt, P Pletscher
International Conference on Machine Learning (ICML), 2013
458 · 2013
Accelerated training of conditional random fields with stochastic gradient methods
SVN Vishwanathan, NN Schraudolph, MW Schmidt, KP Murphy
International Conference on Machine Learning (ICML), 2006
417 · 2006
Convex optimization for big data: Scalable, randomized, and parallel algorithms for big data analytics
V Cevher, S Becker, M Schmidt
IEEE Signal Processing Magazine, 2014
375 · 2014
Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron
S Vaswani, F Bach, M Schmidt
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
372 · 2019
Optimizing costly functions with simple constraints: A limited-memory projected quasi-newton algorithm
MW Schmidt, E Berg, MP Friedlander, KP Murphy
International Conference on Artificial Intelligence and Statistics (AISTATS), 2009
344 · 2009
A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
S Lacoste-Julien, M Schmidt, F Bach
arXiv preprint arXiv:1212.2002, 2012
297 · 2012
Online Learning Rate Adaptation with Hypergradient Descent
AG Baydin, R Cornish, DM Rubio, M Schmidt, F Wood
International Conference on Learning Representations (ICLR), 2018
296 · 2018
Learning graphical model structure using L1-regularization paths
M Schmidt, A Niculescu-Mizil, K Murphy
National Conference on Artificial Intelligence (AAAI), 2007
296 · 2007
Coordinate Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection
J Nutini, M Schmidt, IH Laradji, M Friedlander, H Koepke
International Conference on Machine Learning (ICML), 2015
282 · 2015
minFunc: unconstrained differentiable multivariate optimization in Matlab
M Schmidt
http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html, 2005
269* · 2005
Modeling annotator expertise: Learning when everybody knows a bit of something
Y Yan, R Rosales, G Fung, MW Schmidt, GH Valadez, L Bogoni, L Moy, ...
International Conference on Artificial Intelligence and Statistics (AISTATS), 2010
259 · 2010
Least squares optimization with l1-norm regularization
M Schmidt
CPSC 542B Course Project Report, 2005
253 · 2005
Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
S Vaswani, A Mishkin, I Laradji, M Schmidt, G Gidel, S Lacoste-Julien
Advances in Neural Information Processing Systems (NeurIPS), 2019
251 · 2019
Articles 1–20