Alexandra Peste
Verified email at ist.ac.at
Title · Cited by · Year
Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks
T Hoefler, D Alistarh, T Ben-Nun, N Dryden, A Peste
The Journal of Machine Learning Research 22 (1), 10882-11005, 2021
Cited by 906 · 2021
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
A Peste, E Iofinova, A Vladu, D Alistarh
Advances in Neural Information Processing Systems 34, 8557-8570, 2021
Cited by 75 · 2021
How Well Do Sparse ImageNet Models Transfer?
E Iofinova, A Peste, M Kurtz, D Alistarh
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 51 · 2022
SSSE: Efficiently Erasing Samples from Trained Machine Learning Models
A Peste, D Alistarh, CH Lampert
arXiv preprint arXiv:2107.03860, 2021
Cited by 26 · 2021
Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures
E Iofinova, A Peste, D Alistarh
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 14 · 2023
CrAM: A Compression-Aware Minimizer
A Peste, A Vladu, E Kurtic, CH Lampert, D Alistarh
ICLR 2023, 2022
Cited by 10 · 2022
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
D Kuznedelev, E Kurtic, E Iofinova, E Frantar, A Peste, D Alistarh
arXiv preprint arXiv:2308.02060, 2023
Cited by 8 · 2023
Knowledge Distillation Performs Partial Variance Reduction
M Safaryan, A Peste, D Alistarh
Advances in Neural Information Processing Systems (NeurIPS) 2023, 2023
Cited by 4 · 2023
Learning in Variational Autoencoders with Kullback-Leibler and Renyi Integral Bounds
S Sârbu, R Volpi, A Peşte, L Malagò
arXiv preprint arXiv:1807.01889, 2018
Cited by 2 · 2018
An Explanatory Analysis of the Geometry of Latent Variables Learned by Variational Auto-Encoders
A Peste, L Malagò, S Sârbu
NIPS, Bayesian Deep Learning Workshop, 2017
Cited by 2 · 2017
ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment
P Halvachi, A Peste, D Alistarh, CH Lampert
arXiv preprint arXiv:2312.06872, 2023
2023
Efficiency and generalization of sparse neural networks
EA Peste
2023