A unified approach to interpreting model predictions S Lundberg, SI Lee arXiv preprint arXiv:1705.07874, 2017 | 28309 | 2017 |
From local explanations to global understanding with explainable AI for trees SM Lundberg, G Erion, H Chen, A DeGrave, JM Prutkin, B Nair, R Katz, ... Nature Machine Intelligence 2 (1), 56-67, 2020 | 5168 | 2020 |
Sparks of artificial general intelligence: Early experiments with GPT-4 S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ... arXiv preprint arXiv:2303.12712, 2023 | 3175 | 2023 |
Consistent individualized feature attribution for tree ensembles SM Lundberg, GG Erion, SI Lee arXiv preprint arXiv:1802.03888, 2018 | 2117 | 2018 |
Explainable machine-learning predictions for the prevention of hypoxaemia during surgery SM Lundberg, B Nair, MS Vavilala, M Horibe, MJ Eisses, T Adams, ... Nature biomedical engineering 2 (10), 749-760, 2018 | 1593 | 2018 |
Explainable AI for trees: From local explanations to global understanding SM Lundberg, G Erion, H Chen, A DeGrave, JM Prutkin, B Nair, R Katz, ... arXiv preprint arXiv:1905.04610, 2019 | 391 | 2019 |
Understanding global feature contributions with additive importance measures I Covert, SM Lundberg, SI Lee Advances in Neural Information Processing Systems 33, 17212-17223, 2020 | 366 | 2020 |
A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia SI Lee, S Celik, BA Logsdon, SM Lundberg, TJ Martins, VG Oehler, ... Nature communications 9 (1), 42, 2018 | 364 | 2018 |
Sparks of artificial general intelligence: Early experiments with GPT-4 S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ... arXiv preprint arXiv:2303.12712, 2023 | 359 | 2023 |
Explaining by removing: A unified framework for model explanation I Covert, S Lundberg, SI Lee Journal of Machine Learning Research 22 (209), 1-90, 2021 | 279 | 2021 |
Improving performance of deep learning models with axiomatic attribution priors and expected gradients G Erion, JD Janizek, P Sturmfels, SM Lundberg, SI Lee Nature Machine Intelligence 3 (7), 620-631, 2021 | 240 | 2021 |
Visualizing the impact of feature attribution baselines P Sturmfels, S Lundberg, SI Lee Distill 5 (1), e22, 2020 | 236 | 2020 |
Consistent feature attribution for tree ensembles SM Lundberg, SI Lee arXiv preprint arXiv:1706.06060, 2017 | 185 | 2017 |
True to the model or true to the data? H Chen, JD Janizek, S Lundberg, SI Lee arXiv preprint arXiv:2006.16234, 2020 | 179 | 2020 |
ART: Automatic multi-step reasoning and tool-use for large language models B Paranjape, S Lundberg, S Singh, H Hajishirzi, L Zettlemoyer, ... arXiv preprint arXiv:2303.09014, 2023 | 172 | 2023 |
An unexpected unity among methods for interpreting model predictions S Lundberg, SI Lee arXiv preprint arXiv:1611.07478, 2016 | 165 | 2016 |
Algorithms to estimate Shapley value feature attributions H Chen, IC Covert, SM Lundberg, SI Lee Nature Machine Intelligence 5 (6), 590-601, 2023 | 162 | 2023 |
Explaining models by propagating Shapley values of local components H Chen, S Lundberg, SI Lee Explainable AI in Healthcare and Medicine: Building a Culture of …, 2021 | 135 | 2021 |
Consistent individualized feature attribution for tree ensembles SM Lundberg, GG Erion, SI Lee arXiv preprint arXiv:1802.03888, 2018 | 132 | 2018 |
Shapley flow: A graph-based approach to interpreting model predictions J Wang, J Wiens, S Lundberg International Conference on Artificial Intelligence and Statistics, 721-729, 2021 | 117 | 2021 |