Thomas Fel
Kempner Institute, Harvard
Verified email at seas.harvard.edu - Homepage
Title
Cited by
Year
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
T Fel, J Colin, R Cadène, T Serre
NeurIPS 2022, Advances in Neural Information Processing Systems, 2021
Cited by 96 · 2021
Craft: Concept recursive activation factorization for explainability
T Fel, A Picard, L Bethune, T Boissin, D Vigouroux, J Colin, R Cadène, ...
CVPR 2023, Proceedings of the IEEE/CVF Conference on Computer Vision and …, 2023
Cited by 77 · 2023
Harmonizing the object recognition strategies of deep neural networks with humans
T Fel, I Felipe, D Linsley, T Serre
NeurIPS 2022, Advances in Neural Information Processing Systems, 2022
Cited by 70 · 2022
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
T Fel, R Cadene, M Chalvidal, M Cord, D Vigouroux, T Serre
NeurIPS 2021, Advances in Neural Information Processing Systems, 2021
Cited by 69 · 2021
How good is your explanation? algorithmic stability measures to assess the quality of explanations for deep neural networks
T Fel, D Vigouroux, R Cadène, T Serre
WACV 2022, Proceedings of the IEEE/CVF Winter Conference on Applications of …, 2022
Cited by 67* · 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
T Fel, M Ducoffe, D Vigouroux, R Cadène, M Capelle, C Nicodème, ...
CVPR 2023, Proceedings of the IEEE/CVF Conference on Computer Vision and …, 2022
Cited by 37 · 2022
Xplique: A Deep Learning Explainability Toolbox
T Fel, L Hervier, D Vigouroux, A Poche, J Plakoo, R Cadene, M Chalvidal, ...
CVPR 2022, Workshop on Explainable Artificial Intelligence for Computer …, 2022
Cited by 34 · 2022
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
T Fel, V Boutin, M Moayeri, R Cadène, L Bethune, M Chalvidal, T Serre
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
Cited by 25 · 2023
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
P Novello, T Fel, D Vigouroux
NeurIPS 2022, Advances in Neural Information Processing Systems, 2022
Cited by 24 · 2022
On the Foundations of Shortcut Learning
KL Hermann, H Mobahi, T Fel, MC Mozer
ICLR 2024, International Conference on Learning Representations, 2023
Cited by 20 · 2023
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
D Linsley, IF Rodriguez, T Fel, M Arcaro, S Sharma, M Livingstone, ...
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
Cited by 15 · 2023
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective
M Serrurier, F Mamalet, T Fel, L Béthune, T Boissin
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
Cited by 14* · 2023
Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
T Fel, T Boissin, V Boutin, A Picard, P Novello, J Colin, D Linsley, ...
NeurIPS 2023, Advances in Neural Information Processing Systems, 2023
Cited by 10 · 2023
Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?
V Boutin, T Fel, L Singhal, R Mukherji, A Nagaraj, J Colin, T Serre
ICML 2023, Proceedings of the International Conference on Machine Learning, 2023
Cited by 10 · 2023
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks
F Jourdan, A Picard, T Fel, L Risser, JM Loubes, N Asher
ACL 2023, Proceedings of the Annual Meeting of the Association for …, 2023
Cited by 9 · 2023
Can we reconcile safety objectives with machine learning performances?
L Alecu, H Bonnin, T Fel, L Gardes, S Gerchinovitz, L Ponsolle, F Mamalet, ...
ERTS 2022, 2022
Cited by 9 · 2022
Influenciæ: A library for tracing the influence back to the data-points
A Picard, L Hervier, T Fel, D Vigouroux
World Conference on Explainable Artificial Intelligence, 193-204, 2024
Cited by 7 · 2024
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception
D Linsley, P Feng, T Boissin, AK Ashok, T Fel, S Olaiya, T Serre
arXiv preprint arXiv:2306.03229, 2023
Cited by 4 · 2023
Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling
L Andéol, T Fel, F De Grancey, L Mossina
Conformal and Probabilistic Prediction with Applications, 36-55, 2023
Cited by 3 · 2023
Conviformers: Convolutionally guided vision transformer
M Vaishnav, T Fel, IF Rodríguez, T Serre
arXiv preprint arXiv:2208.08900, 2022
Cited by 3 · 2022
Articles 1–20