Eugster, Manuel J. A. (2011): Benchmark Experiments: A Tool for Analyzing Statistical Learning Algorithms. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics 

PDF
Eugster_Manuel_J_A.pdf 4MB 
Abstract
Benchmark experiments are nowadays the method of choice for evaluating learning algorithms in most research fields with applications related to statistical learning. Benchmark experiments are an empirical tool for analyzing statistical learning algorithms on one or more data sets: to compare a set of algorithms, to find the best hyperparameters for an algorithm, or to perform a sensitivity analysis of an algorithm. In its main part, this dissertation focuses on the comparison of candidate algorithms and introduces a comprehensive toolbox for analyzing such benchmark experiments. A systematic approach is introduced, from exploratory analyses with specialized visualizations (static and interactive), via formal investigations and their interpretation as preference relations, through to a consensus order of the algorithms, based on one or more performance measures and data sets. It is common knowledge that the performance of learning algorithms is determined by data set characteristics; the concrete relationship between characteristics and algorithms, however, is not exactly known. A formal framework on top of benchmark experiments is presented for investigating this relationship. Furthermore, benchmark experiments are commonly treated as fixed-sample experiments, but their nature is sequential. First thoughts on a sequential framework are presented and its advantages are discussed. Finally, this main part of the dissertation concludes with a discussion of future research topics in the field of benchmark experiments. The second part of the dissertation is concerned with archetypal analysis. Archetypal analysis aims to represent the observations in a data set as convex combinations of a few extremal points. This is used as an analysis approach for benchmark experiments: the identification and interpretation of the extreme performances of candidate algorithms.
In turn, benchmark experiments are used to analyze the general framework for archetypal analysis worked out in this second part of the dissertation. Exploiting its generality, the weighted and robust archetypal problems are introduced and solved; in the outlook, a generalization towards prototypes is discussed. The two freely available R packages, benchmark and archetypes, make the introduced methods generally applicable.
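The convex-combination idea at the core of archetypal analysis can be illustrated with a minimal sketch. This is Python with NumPy/SciPy standing in for the dissertation's R packages; the archetypes `Z` are fixed by hand rather than estimated, and the soft sum-to-one penalty row is an assumption of this sketch, not the algorithm of the archetypes package (which estimates archetypes by alternating least squares):

```python
import numpy as np
from scipy.optimize import nnls

def convex_weights(x, Z, penalty=200.0):
    """Find weights alpha >= 0 with sum(alpha) ~= 1 and x ~= alpha @ Z.

    The simplex constraint is enforced softly by appending a heavily
    weighted row of ones to the non-negative least-squares system.
    """
    A = np.vstack([Z.T, penalty * np.ones(Z.shape[0])])
    b = np.append(x, penalty)
    alpha, _ = nnls(A, b)
    return alpha

# Toy example: three hand-picked archetypes in 2-D and one observation
# lying inside their convex hull.
Z = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
x = np.array([0.25, 0.25])
alpha = convex_weights(x, Z)   # weights reconstructing x from the archetypes
```

For an observation inside the convex hull of the archetypes, the recovered weights reconstruct it exactly; observations outside the hull are projected onto it, which is what motivates the robust variant discussed above.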
Item Type:  Theses (Dissertation, LMU Munich) 

Subjects:  500 Natural sciences and mathematics > 510 Mathematics 
Faculties:  Faculty of Mathematics, Computer Science and Statistics 
Language:  English 
Date of oral examination:  16. March 2011 
1. Referee:  Leisch, Friedrich 
MD5 Checksum of the PDF file:  7f2221598e5c20c9db3ae26bea24baed 
Signature of the printed copy:  0001/UMC 19424 
ID Code:  12990 
Deposited On:  09. May 2011 07:30 
Last Modified:  24. Oct 2020 03:55 