Atsushi Nitanda
CFAR, A*STAR / Nanyang Technological University
Verified email at cfar.a-star.edu.sg - Homepage
Title
Cited by
Year
Stochastic proximal gradient descent with acceleration techniques
A Nitanda
Advances in Neural Information Processing Systems 27, 2014
322 · 2014
Data cleansing for models trained with SGD
S Hara, A Nitanda, T Maehara
Advances in Neural Information Processing Systems 32 (NeurIPS2019), 4213-4222, 2019
97 · 2019
Stochastic particle gradient descent for infinite ensembles
A Nitanda, T Suzuki
arXiv preprint arXiv:1712.05438, 2017
91 · 2017
Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space
T Suzuki, A Nitanda
Advances in Neural Information Processing Systems 34, 3609-3621, 2021
83 · 2021
Convex Analysis of the Mean Field Langevin Dynamics
A Nitanda, D Wu, T Suzuki
International Conference on Artificial Intelligence and Statistics, 2022
75 · 2022
Gradient descent can learn less over-parameterized two-layer neural networks on classification problems
A Nitanda, G Chinot, T Suzuki
arXiv preprint arXiv:1905.09870, 2019
54* · 2019
Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime
A Nitanda, T Suzuki
International Conference on Learning Representations, 2020
53 · 2020
When Does Preconditioning Help or Hurt Generalization?
S Amari, J Ba, R Grosse, X Li, A Nitanda, T Suzuki, D Wu, J Xu
International Conference on Learning Representations, 2020
49 · 2020
Accelerated Stochastic Gradient Descent for Minimizing Finite Sums
A Nitanda
Proceedings of International Conference on Artificial Intelligence and …, 2015
38 · 2015
Particle dual averaging: Optimization of mean field neural network with global convergence rate analysis
A Nitanda, D Wu, T Suzuki
Advances in Neural Information Processing Systems 34, 19608-19621, 2021
36 · 2021
Stochastic difference of convex algorithm and its application to training deep Boltzmann machines
A Nitanda, T Suzuki
Proceedings of International Conference on Artificial Intelligence and …, 2017
36 · 2017
Functional gradient boosting based on residual network perception
A Nitanda, T Suzuki
International Conference on Machine Learning, 3819-3828, 2018
32 · 2018
Mean-field Langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction
T Suzuki, D Wu, A Nitanda
NeurIPS, 2023
24* · 2023
A novel global spatial attention mechanism in convolutional neural network for medical image classification
L Xu, J Huang, A Nitanda, R Asaoka, K Yamanishi
arXiv preprint arXiv:2007.15897, 2020
21 · 2020
Uniform-in-time propagation of chaos for the mean-field gradient Langevin dynamics
T Suzuki, A Nitanda, D Wu
The Eleventh International Conference on Learning Representations, 2023
17 · 2023
Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond
T Suzuki, D Wu, K Oko, A Nitanda
Advances in Neural Information Processing Systems 36, 34536-34556, 2023
15 · 2023
Particle stochastic dual coordinate ascent: Exponential convergent algorithm for mean field neural network optimization
K Oko, T Suzuki, A Nitanda, D Wu
International Conference on Learning Representations, 2022
15 · 2022
Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models
A Nitanda, T Suzuki
Proceedings of International Conference on Artificial Intelligence and …, 2018
14 · 2018
Generalization error bound for hyperbolic ordinal embedding
A Suzuki, A Nitanda, J Wang, L Xu, K Yamanishi, M Cavazza
International Conference on Machine Learning, 10011-10021, 2021
13 · 2021
Generalization bounds for graph embedding using negative sampling: Linear vs hyperbolic
A Suzuki, A Nitanda, L Xu, K Yamanishi, M Cavazza
Advances in Neural Information Processing Systems 34, 1243-1255, 2021
12 · 2021
Articles 1–20