Nathan Hubens
PhD, Faculty of Engineering of Mons | Télécom SudParis
Verified email at umons.ac.be - Homepage
Deep inside: Autoencoders
N Hubens
Towards Data Science, https://medium.com/p/7e41f319999f, 2019
Cited by 33*
An experimental study of the impact of pre-training on the pruning of a convolutional neural network
N Hubens, M Mancas, M Decombas, M Preda, T Zaharia, B Gosselin, ...
Proceedings of the 3rd International Conference on Applications of …, 2020
Cited by 11
One-Cycle Pruning: Pruning Convnets With Tight Training Budget
N Hubens, M Mancas, B Gosselin, M Preda, T Zaharia
2022 IEEE International Conference on Image Processing (ICIP), 4128-4132, 2022
Cited by 9
Towards lightweight neural animation: Exploration of neural network pruning in mixture of experts-based animation models
A Maiorca, N Hubens, S Laraba, T Dutoit
arXiv preprint arXiv:2201.04042, 2022
Cited by 4
FasterAI: a library to make smaller and faster neural networks
N Hubens
GitHub, https://github.com/nathanhubens/fasterai, 2020
Cited by 4*
Fake-buster: A lightweight solution for deepfake detection
N Hubens, M Mancas, B Gosselin, M Preda, T Zaharia
Applications of Digital Image Processing XLIV 11842, 146-154, 2021
Cited by 3
Build a simple Image Retrieval System with an Autoencoder
N Hubens
Medium, 2018
Cited by 3
Improve convolutional neural network pruning by maximizing filter variety
N Hubens, M Mancas, B Gosselin, M Preda, T Zaharia
International Conference on Image Analysis and Processing, 379-390, 2022
Cited by 2
FasterAI: A Lightweight Library for Neural Networks Compression
N Hubens, M Mancas, B Gosselin, M Preda, T Zaharia
Electronics 11 (22), 3789, 2022
Cited by 1
Where is my mind (Looking at)? A study of the EEG–visual attention relationship
V Delvigne, N Tits, L La Fisca, N Hubens, A Maiorca, H Wannous, T Dutoit, ...
Informatics 9 (1), 26, 2022
Cited by 1
Modulated self-attention convolutional network for VQA
JB Delbrouck, A Maiorca, N Hubens, S Dupont
arXiv preprint arXiv:1910.03343, 2019
Cited by 1
A Recipe for Efficient SBIR Models: Combining Relative Triplet Loss with Batch Normalization and Knowledge Distillation
O Seddati, N Hubens, S Dupont, T Dutoit
arXiv preprint arXiv:2305.18988, 2023
Induced Feature Selection by Structured Pruning
N Hubens, V Delvigne, M Mancas, B Gosselin, M Preda, T Zaharia
arXiv preprint arXiv:2303.10999, 2023
Towards lighter and faster deep neural networks with parameter pruning
N Hubens
Institut Polytechnique de Paris; Université de Mons, 2022
FasterAI: A Lightweight Library for Creating Sparse Neural Networks
N Hubens
arXiv preprint arXiv:2207.01088, 2022
Winning the Lottery with fastai
N Hubens
https://nathanhubens.github.io/posts/deep%20learning/2022/02/16/Lottery.html, 2022
Which Pruning Schedule Should I Use?
N Hubens
https://nathanhubens.github.io/posts/deep%20learning/2021/06/15/OneCycle.html, 2021
FasterAI
N Hubens
https://nathanhubens.github.io/posts/deep%20learning/2020/08/17/FasterAI.html, 2020