Interpretable Machine Learning - A Guide for Making Black Box Models Explainable. C Molnar. https://christophm.github.io/interpretable-ml-book/, 2018. Cited by 7532*.
Interpretable Machine Learning - A Brief History, State-of-the-Art and Challenges. C Molnar, G Casalicchio, B Bischl. ECML PKDD 2020 Workshops, 417-431, 2020. Cited by 681.
iml: An R package for Interpretable Machine Learning. C Molnar, G Casalicchio, B Bischl. Journal of Open Source Software 3, 786, 2018. Cited by 411.
Explainable AI methods - a brief overview. A Holzinger, A Saranti, C Molnar, P Biecek, W Samek. International workshop on extending explainable AI beyond deep models and …, 2022. Cited by 374.
Multi-objective counterfactual explanations. S Dandl, C Molnar, M Binder, B Bischl. In: Bäck T. et al. (eds) Parallel Problem Solving from Nature – PPSN XVI. PPSN …, 2020. Cited by 349.
TNF blockers inhibit spinal radiographic progression in ankylosing spondylitis by reducing disease activity: results from the Swiss Clinical Quality Management cohort. C Molnar, A Scherer, X Baraliakos, M de Hooge, R Micheroli, P Exer, ... Annals of the Rheumatic Diseases 77 (1), 63-69, 2018. Cited by 288.
Visualizing the feature importance for black box models. G Casalicchio, C Molnar, B Bischl. Machine Learning and Knowledge Discovery in Databases: European Conference …, 2019. Cited by 281.
General pitfalls of model-agnostic interpretation methods for machine learning models. C Molnar, G König, J Herbinger, T Freiesleben, S Dandl, CA Scholbeck, ... International Workshop on Extending Explainable AI Beyond Deep Models and …, 2020. Cited by 268*.
Model-agnostic feature importance and effects with dependent features: a conditional subgroup approach. C Molnar, G König, B Bischl, G Casalicchio. Data Mining and Knowledge Discovery 38 (5), 2903-2941, 2024. Cited by 103.
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability. C Molnar, G Casalicchio, B Bischl. Joint European Conference on Machine Learning and Knowledge Discovery in …, 2019. Cited by 103*.
Relating the partial dependence plot and permutation feature importance to the data generating process. C Molnar, T Freiesleben, G König, J Herbinger, T Reisinger, ... World Conference on Explainable Artificial Intelligence, 456-479, 2023. Cited by 86.
Relative Feature Importance. G König, C Molnar, B Bischl, M Grosse-Wentrup. 2020 25th International Conference on Pattern Recognition (ICPR), 9318-9325, 2021. Cited by 78.
Estimation of voter transitions based on ecological inference: An empirical assessment of different approaches. A Klima, PW Thurner, C Molnar, T Schlesinger, H Küchenhoff. AStA Advances in Statistical Analysis 100, 133-159, 2016. Cited by 49.
Sampling, intervention, prediction, aggregation: a generalized framework for model-agnostic interpretations. CA Scholbeck, C Molnar, C Heumann, B Bischl, G Casalicchio. Machine Learning and Knowledge Discovery in Databases: International …, 2020. Cited by 46.
Beyond prediction: methods for interpreting complex models of soil variation. AMJC Wadoux, C Molnar. Geoderma 422, 115953, 2022. Cited by 41.
Errors in palliative care: kinds, causes, and consequences: a pilot survey of experiences and attitudes of palliative care professionals. I Dietz, GD Borasio, C Molnar, C Müller-Busch, A Plog, G Schneider, ... Journal of Palliative Medicine 16 (1), 74-81, 2013. Cited by 40.
Scientific inference with interpretable machine learning: Analyzing models to learn about real-world phenomena. T Freiesleben, G König, C Molnar, A Tejero-Cantero. arXiv preprint arXiv:2206.05487, 2022. Cited by 21.
Marginal effects for non-linear prediction functions. CA Scholbeck, G Casalicchio, C Molnar, B Bischl, C Heumann. Data Mining and Knowledge Discovery, 1-46, 2024. Cited by 14.
Interpreting Machine Learning Models with SHAP: A Guide with Python Examples and Theory on Shapley Values. C Molnar, 2023. Cited by 11.
Recursive partitioning by conditional inference. C Molnar. Department of Statistics, University of Munich, Munich, Germany, 2013. Cited by 9.