| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Cyclical learning rates for training neural networks | LN Smith | 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), 464-472 | 3787 | 2017 |
| Super-convergence: Very fast training of neural networks using large learning rates | LN Smith, N Topin | Artificial Intelligence and Machine Learning for Multi-Domain Operations … | 1790 | 2019 |
| A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay | LN Smith | arXiv preprint arXiv:1803.09820 | 1477 | 2018 |
| A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay (arXiv 2018 version) | LN Smith | arXiv preprint arXiv:1803.09820 | 209 | 2018 |
| Super-convergence: Very fast training of residual networks using large learning rates | LN Smith, N Topin | arXiv preprint arXiv:1708.07120 | 189 | 2017 |
| Improving dictionary learning: Multiple dictionary updates and coefficient reuse | LN Smith, M Elad | IEEE Signal Processing Letters 20 (1), 79-82 | 162 | 2012 |
| Rotational compound state resonances for an argon and methane scattering system | LN Smith, DJ Malik, D Secrest | The Journal of Chemical Physics 71 (11), 4502-4514 | 105 | 1979 |
| Deep convolutional neural network design patterns | LN Smith, N Topin | arXiv preprint arXiv:1611.00847 | 83 | 2016 |
| Close-coupling and coupled state calculations of argon scattering from normal methane | LN Smith, D Secrest | The Journal of Chemical Physics 74 (7), 3882-3897 | 61 | 1981 |
| Super-convergence: Very fast training of neural networks using large learning rates | LN Smith, N Topin | arXiv preprint arXiv:1708.07120 | 57 | 2017 |
| An approach to explainable deep learning using fuzzy inference | D Bonanno, K Nock, L Smith, P Elmore, F Petry | Next-Generation Analyst V 10207, 132-136 | 50 | 2017 |
| A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay | LN Smith | arXiv preprint arXiv:1803.09820 | 39 | 2018 |
| Gradual DropIn of layers to train very deep neural networks | LN Smith, EM Hand, T Doster | Proceedings of the IEEE Conference on Computer Vision and Pattern … | 38 | 2016 |
| Restoration of turbulence degraded underwater images | AV Kanaev, W Hou, S Woods, LN Smith | Optical Engineering 51 (5), 057007 | 38 | 2012 |
| Exploring loss function topology with cyclical learning rates | LN Smith, N Topin | arXiv preprint arXiv:1702.04283 | 30 | 2017 |
| Disambiguation protocols based on risk simulation | DE Fishkind, CE Priebe, KE Giles, LN Smith, V Aksakalli | IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and … | 28 | 2007 |
| Cyclical focal loss | LN Smith | arXiv preprint arXiv:2202.08978 | 21 | 2022 |
| Selecting subgoals using deep learning in Minecraft: A preliminary report | D Bonanno, M Roberts, L Smith, DW Aha | IJCAI Workshop on Deep Learning for Artificial Intelligence 32 | 17 | 2016 |
| General cyclical training of neural networks | LN Smith | arXiv preprint arXiv:2202.08835 | 11 | 2022 |
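
The most-cited entry above introduces cyclical learning rates. For illustration, here is a minimal Python sketch of the triangular schedule from the WACV 2017 paper; the `base_lr`, `max_lr`, and `step_size` values are placeholder assumptions, not settings from the paper. PyTorch's `torch.optim.lr_scheduler.CyclicLR` ships the same schedule.

```python
import math

def triangular_clr(iteration: int, step_size: int,
                   base_lr: float = 1e-4, max_lr: float = 1e-2) -> float:
    """Triangular cyclical learning rate (Smith, WACV 2017).

    The rate rises linearly from base_lr to max_lr over step_size
    iterations, then falls back, repeating every 2 * step_size steps.
    """
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# One full cycle with step_size=1000: low -> high -> low.
for it in (0, 500, 1000, 1500, 2000):
    print(it, round(triangular_clr(it, step_size=1000), 5))
```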
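The super-convergence entries describe training with a single large learning-rate cycle, popularized as the "1cycle" policy. PyTorch's `torch.optim.lr_scheduler.OneCycleLR` follows this policy; the sketch below shows typical wiring, with the model, `max_lr`, and epoch counts as hypothetical placeholders rather than values from the papers.

```python
import torch
from torch import nn

# Stand-in model and optimizer; all hyper-parameters are placeholders.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

epochs, steps_per_epoch = 5, 100
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1.0, epochs=epochs, steps_per_epoch=steps_per_epoch)

for _ in range(epochs):
    for _ in range(steps_per_epoch):
        optimizer.zero_grad()
        loss = model(torch.randn(8, 10)).sum()  # dummy loss for the sketch
        loss.backward()
        optimizer.step()
        scheduler.step()  # advance the one-cycle schedule once per batch
```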