Jihun Yun
KRAFTON, Researcher
Verified email at kaist.ac.kr - Homepage
Title · Cited by · Year
Trimming the Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning
J Yun, P Zheng, E Yang, A Lozano, A Aravkin
International Conference on Machine Learning, 7242-7251, 2019
Cited by 31 · 2019
Adaptive proximal gradient methods for structured neural networks
J Yun, AC Lozano, E Yang
Advances in Neural Information Processing Systems 34, 24365-24378, 2021
Cited by 25 · 2021
A general family of stochastic proximal gradient methods for deep learning
J Yun, AC Lozano, E Yang
arXiv preprint arXiv:2007.07484, 2020
Cited by 15 · 2020
Cluster-promoting quantization with bit-drop for minimizing network quantization loss
JH Lee, J Yun, SJ Hwang, E Yang
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021
Cited by 14 · 2021
Lantern: Accelerating visual autoregressive models with relaxed speculative decoding
D Jang, S Park, JY Yang, Y Jung, J Yun, S Kundu, SY Kim, E Yang
arXiv preprint arXiv:2410.03355, 2024
Cited by 7 · 2024
Riemannian SAM: sharpness-aware minimization on Riemannian manifolds
J Yun, E Yang
Advances in Neural Information Processing Systems 36, 65784-65800, 2023
Cited by 6 · 2023
Adablock: SGD with practical block diagonal matrix adaptation for deep learning
J Yun, A Lozano, E Yang
International Conference on Artificial Intelligence and Statistics, 2574-2606, 2022
Cited by 4 · 2022
Stochastic gradient methods with block diagonal matrix adaptation
J Yun, AC Lozano, E Yang
arXiv preprint arXiv:1905.10757, 2019
Cited by 4 · 2019
M-estimation with the trimmed l1 penalty
J Yun, P Zheng, E Yang, A Lozano, A Aravkin
arXiv preprint arXiv:1805.07495, 2018
Cited by 3 · 2018
TEDDY: Trimming edges with degree-based discrimination strategy
H Seo, J Yun, E Yang
arXiv preprint arXiv:2402.01261, 2024
Cited by 1 · 2024
Trimming the l-1 Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning
J Yun, P Zheng, E Yang, A Lozano, A Aravkin
Thirty-sixth International Conference on Machine Learning, 2019
Cited by 1 · 2019
Unraveling Zeroth-Order Optimization through the Lens of Low-Dimensional Structured Perturbations
S Park, J Yun, SY Kim, S Kundu, E Yang
arXiv preprint arXiv:2501.19099, 2025
2025
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization
JH Lee, J Yun, SJ Hwang, E Yang
2019
MeZO-Adam: Memory-efficient Zeroth-order Adam with Adaptivity Adjustments for Fine-tuning LLMs
S Park, J Yun, SY Kim, JY Yang, Y Jung, S Kundu, K Kim, E Yang
Revised NTK Analysis of Optimization and Generalization with Its Extensions to Arbitrary Initialization
J Yun, K Kim, E Yang
GradientMix: A Simple yet Effective Regularization for Large Batch Training
J Yun, JH Lee, E Yang
Articles 1–16