Towards explainable automated machine learning
Automated machine learning (AutoML) aims to increase the efficiency and accessibility of machine learning (ML) techniques by automating complex tasks such as model and hyperparameter selection through optimization. While AutoML systems and hyperparameter optimization (HPO) frameworks have demonstrated efficiency gains compared to manual, expert-driven processes, the requirement for explainability is currently hardly satisfied. This thesis connects the two emerging fields of AutoML and explainable AI in a systematic manner and comprises diverse methodological approaches towards increased explainability in the context of AutoML. Throughout the thesis, three levels of explainability requirements are distinguished: (1) explainability of the final model, (2) explainability of the learning algorithm that found the model, and (3) explainability of the AutoML system that configured the learning algorithm that found the model. The first part of the thesis addresses the explainability of models returned by AutoML systems. The first contributing article reviews multi-objective hyperparameter optimization in general and motivates interpretability and sparsity as further objectives of HPO. In the second contributing article, we develop an efficient multi-objective HPO method that optimizes for both predictive performance and model sparsity, one criterion of model explainability. The second part of the thesis deals with the explainability of the learning algorithms, or inducing mechanisms, experimented with during an AutoML process. A third contributing article presents a novel method that extends partial dependence plots to generate insights into the effects of hyperparameters of learning algorithms on performance. The technique accounts, in a post-hoc manner, for the sampling bias typically present in experimental data generated by AutoML systems.
The fourth contributing article introduces a method that addresses the problem of sampling bias ex ante by adapting the search strategy of an optimizer to account for the reliability of explanations of hyperparameter effects. The final part of the thesis deals with the explainability of the mechanisms and components of AutoML systems themselves. The aim is to explain why specific optimizers and their components used within AutoML systems work better than others. A fifth contributing article presents a thorough evaluation of multi-fidelity HPO algorithms: an expressive and flexible framework of multi-fidelity HPO algorithms is introduced and automatically configured through optimization, and the outcomes are analyzed through an ablation analysis. This work has been facilitated by an efficient multi-fidelity benchmarking suite (YAHPO gym), which is the sixth contribution of this thesis.
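To illustrate the kind of analysis the abstract describes, the following is a minimal, hedged sketch of a plain partial dependence curve of one hyperparameter on validation error, estimated from HPO trial data. It is not the thesis's actual method (which additionally corrects for sampling bias); the trial log, the toy error surface, and the `predict` surrogate are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical trial log: each trial records a learning rate, a tree depth,
# and the resulting validation error (a toy surface, illustrative only).
trials = [{"lr": random.uniform(0.01, 0.3), "depth": random.randint(2, 10)}
          for _ in range(50)]
for t in trials:
    t["error"] = (t["lr"] - 0.1) ** 2 + 0.01 * t["depth"]

def partial_dependence(trials, grid, predict):
    """For each grid value of lr, average the predicted error over the
    observed values of the other hyperparameter (depth)."""
    curve = []
    for lr in grid:
        avg = sum(predict(lr, t["depth"]) for t in trials) / len(trials)
        curve.append((lr, avg))
    return curve

# In practice a surrogate model fitted to the trials would supply `predict`;
# here we reuse the toy formula so the sketch is self-contained.
predict = lambda lr, depth: (lr - 0.1) ** 2 + 0.01 * depth
grid = [0.01 + i * 0.03 for i in range(10)]
pdp = partial_dependence(trials, grid, predict)
```

The resulting curve shows the marginal effect of the learning rate on error, averaged over the other hyperparameter; on biased experimental data (as generated by an optimizer that concentrates on promising regions), such a naive average can mislead, which is the problem the third and fourth contributing articles address.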
Moosbauer, Julia
2023
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Moosbauer, Julia (2023): Towards explainable automated machine learning. Dissertation, LMU München: Fakultät für Mathematik, Informatik und Statistik
PDF: Moosbauer_Julia.pdf (8MB)
