Knollmüller, Jakob (2020): Metric Gaussian variational inference. Dissertation, LMU München: Fakultät für Physik
Abstract
One main result of this dissertation is the development of Metric Gaussian Variational Inference (MGVI), a method to perform approximate inference in extremely high dimensions and for complex probabilistic models. The problem with high-dimensional and complex models is twofold. First, to capture the true posterior distribution accurately, a sufficiently rich approximation for it is required. Second, the number of parameters needed to express this richness scales dramatically with the number of model parameters. For example, explicitly expressing the correlations between all model parameters requires a number of correlation coefficients equal to the square of the parameter count. In settings with millions of model parameters, this is infeasible. MGVI overcomes this limitation by replacing the explicit covariance with an implicit approximation, which does not have to be stored and is accessed via samples. This procedure scales linearly with the problem size and makes it possible to account for the full correlations even in extremely large problems. It thereby also applies to significantly more complex setups. MGVI enabled a series of ambitious signal reconstructions by me and others, which will be showcased. These include a time- and frequency-resolved reconstruction of the shadow around the black hole M87* using data provided by the Event Horizon Telescope Collaboration, a three-dimensional tomographic reconstruction of interstellar dust within 300 pc of the Sun from Gaia starlight-absorption and parallax data, novel medical imaging methods for computed tomography, an all-sky Faraday rotation map combining distinct data sources, and simultaneous calibration and imaging with a radio interferometer. The second main result is an approach to use several independently trained deep neural networks to reason about complex tasks. Deep learning can capture abstract concepts by extracting them from large amounts of training data, which alleviates the need for an explicit mathematical formulation.
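The abstract does not spell out the sampling mechanism, but the core idea of accessing an implicit covariance via samples can be illustrated. The sketch below is an assumption-laden toy, not the dissertation's implementation: it assumes a linearized measurement with a dense stand-in Jacobian `J` and diagonal Gaussian noise, applies the metric `M = I + Jᵀ N⁻¹ J` matrix-free, and draws a sample with covariance `M⁻¹` by solving `M x = y` with a hand-rolled conjugate gradient, so the full covariance matrix is never stored.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_data = 1000, 500              # toy sizes; real problems reach millions
J = rng.normal(size=(n_data, n_params)) / np.sqrt(n_params)  # hypothetical Jacobian
noise_var = 0.1                           # assumed diagonal noise: N = noise_var * I

# The metric M = I + J^T N^{-1} J is only ever *applied*, never stored:
# memory stays O(n_params) instead of O(n_params^2).
def apply_metric(x):
    return x + J.T @ (J @ x) / noise_var

def conjugate_gradient(apply_A, b, tol=1e-8, maxiter=500):
    """Solve A x = b for symmetric positive definite A, given only x -> A x."""
    x = np.zeros_like(b)
    r = b.copy()          # residual b - A x
    p = r.copy()          # search direction
    rr = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Draw y ~ N(0, M) using the square-root structure of M, then solve M x = y.
# The solution x is distributed as N(0, M^{-1}): a sample that carries the
# full correlation structure without the covariance ever being built.
xi = rng.normal(size=n_params)
zeta = rng.normal(size=n_data)
y = xi + J.T @ zeta / np.sqrt(noise_var)
x = conjugate_gradient(apply_metric, y)
```

Every operation here costs one or two applications of `J`, so the work per sample grows linearly with the problem size, which is the scaling property the abstract claims.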
Here, a generative neural network is used as a prior distribution, and certain properties are imposed via classification and regression networks. The inference is then performed in terms of the latent variables of the generator, using MGVI and other methods. This makes it possible to answer novel questions flexibly, without retraining any neural network, and to arrive at novel answers through Bayesian reasoning. This approach of Bayesian reasoning with neural networks can also be combined with conventional measurement data.
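The structure of this latent-variable inference can be sketched with stand-ins. Everything below is hypothetical: `W_g` plays the role of a trained generator, `w_c` the role of a classifier, and a simple MAP optimization via numerical gradient descent replaces MGVI, which the dissertation actually uses. The point is only the shape of the problem: a Gaussian prior on latents plus network-defined terms, optimized over the latents while all networks stay fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins for trained networks (assumptions, not the paper's models):
W_g = rng.normal(size=(10, 4))        # "generator": x = tanh(W_g @ z)
w_c = rng.normal(size=10)             # "classifier": logit = w_c @ x

def neg_log_posterior(z):
    x = np.tanh(W_g @ z)              # generate from the latent variables
    # standard-normal prior on z, plus a likelihood term that rewards
    # a high classifier logit (the "imposed property")
    return 0.5 * z @ z - w_c @ x

# MAP estimate over the latents by plain gradient descent with numerical
# gradients; no network is retrained, only z moves.
z = np.zeros(4)
eps, lr = 1e-5, 0.01
for _ in range(2000):
    grad = np.array([
        (neg_log_posterior(z + eps * e) - neg_log_posterior(z - eps * e)) / (2 * eps)
        for e in np.eye(4)
    ])
    z -= lr * grad
```

Swapping in a different classifier or an extra regression network changes only `neg_log_posterior`, which is why new questions can be posed without retraining, as the abstract states.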
| Document type | Dissertation (LMU München) |
|---|---|
| Subject areas | 500 Natural sciences and mathematics > 530 Physics |
| Faculties | Fakultät für Physik |
| Language of thesis | English |
| Date of oral examination | 26 October 2020 |
| First referee | Enßlin, Torsten |
| MD5 checksum of the PDF file | 7bde6c7ffbdc99166905ea448a78f048 |
| Call number of the printed edition | 0001/UMC 27700 |
| ID code | 27154 |
| Deposited on | 18 Feb 2021 09:55 |
| Last modified | 18 Feb 2021 09:55 |