Freiesleben, Timo (2023): What does explainable AI explain? Dissertation, LMU München: Graduate School of Systemic Neurosciences (GSN)
License: Creative Commons Attribution 4.0 (CC-BY)
Freiesleben_Timo.pdf (12 MB)
Abstract
Machine Learning (ML) models are increasingly used in industry, as well as in scientific research and social contexts. Unfortunately, ML models provide only partial solutions to real-world problems, focusing on predictive performance in static environments. Problem aspects beyond prediction, such as robustness in deployment, knowledge generation in science, or providing recourse recommendations to end-users, cannot be directly tackled with ML models. Explainable Artificial Intelligence (XAI) aims to solve, or at least highlight, problem aspects beyond predictive performance through explanations. However, the field is still in its infancy, as fundamental questions such as "What are explanations?", "What constitutes a good explanation?", or "How do explanation and understanding relate?" remain open. In this dissertation, I combine philosophical conceptual analysis and mathematical formalization to clarify a prerequisite of these difficult questions, namely what XAI explains: I point out that XAI explanations are either associative or causal, and either aim to explain the ML model or the modeled phenomenon.

The thesis is a collection of five individual research papers that all aim to clarify how different problems in XAI relate to these different "whats". In Paper I, my co-authors and I illustrate how to construct XAI methods for inferring associational relationships in the modeled phenomenon. Paper II directly builds on the first: we formally show how to quantify the uncertainty of such scientific inferences for two XAI methods, partial dependence plots (PDP) and permutation feature importance (PFI). Paper III discusses the relationship between counterfactual explanations and adversarial examples; I argue that adversarial examples can be described as counterfactual explanations that alter the prediction but not the underlying target variable. In Paper IV, my co-authors and I argue that algorithmic recourse recommendations should help data subjects improve their qualifications rather than game the predictor. In Paper V, we address general problems with model-agnostic XAI methods and identify possible solutions.
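For readers unfamiliar with the two XAI methods named in connection with Paper II, the following is a minimal sketch of PDP and PFI using scikit-learn on synthetic data. It is not code from the dissertation; the toy regression setup and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance

# Toy regression problem standing in for a "modeled phenomenon" (assumption).
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Partial dependence plot (PDP): the model's average prediction as one
# feature is varied over a grid while the others are kept at observed values.
pdp = partial_dependence(model, X, features=[0])
print(pdp["average"].shape)  # one averaged curve over the grid for feature 0

# Permutation feature importance (PFI): the drop in model performance when a
# feature's values are shuffled; repeated shuffles give a crude spread, which
# gestures at Paper II's concern with the uncertainty of such estimates.
pfi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(pfi.importances_mean, pfi.importances_std)
```

Both calls treat the model as a black box, which is why methods like these can, under the assumptions discussed in Papers I and II, be used to infer associational relationships in the phenomenon rather than merely to describe the model.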
| Document type: | Dissertations (Dissertation, LMU München) |
|---|---|
| Keywords: | Explainable Artificial Intelligence, Interpretable Machine Learning, Explanandum |
| Subject areas: | 000 General works, computer science, information science > 004 Computer science |
| Faculties: | Graduate School of Systemic Neurosciences (GSN) |
| Language of the thesis: | English |
| Date of oral examination: | 30 May 2023 |
| First referee: | Hartmann, Stephan |
| MD5 checksum of the PDF file: | c89e3978321a3e0bb78f4adfa8a972ad |
| Shelf mark of the printed copy: | 0001/UMC 29662 |
| ID code: | 31933 |
| Deposited on: | 19 Jun 2023 11:58 |
| Last modified: | 19 Jun 2023 11:58 |