Dufter, Philipp (2021): Distributed representations for multilingual language processing. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics
Licence: Creative Commons: Attribution 4.0 (CC-BY)
Abstract
Distributed representations are a central element in natural language processing. Units of text such as words, n-grams, or characters are mapped to real-valued vectors so that they can be processed by computational models. Representations trained on large amounts of text, called static word embeddings, have been found to work well across a variety of tasks such as sentiment analysis or named entity recognition. More recently, pretrained language models have been used as contextualized representations and have been found to yield even better task performance. Multilingual representations that are invariant with respect to language are useful for several reasons. Models using such representations would require training data in only one language and still generalize across multiple languages. This is especially useful for languages with little available data. Further, machine translation models can benefit from source and target representations that live in the same space. Last, knowledge extraction models could access not only English data but data in any natural language and thus exploit a richer source of knowledge. Given that several thousand languages exist in the world, the need for multilingual language processing seems evident. However, it is not immediately clear which properties multilingual embeddings should exhibit, how current multilingual representations work, and how they could be improved. This thesis investigates some of these questions. In the first publication, we explore the boundaries of multilingual representation learning by creating an embedding space across more than one thousand languages. We analyze existing methods and propose concept-based embedding learning methods. The second paper investigates the differences between creating representations for one thousand languages with little data and considering a few languages with abundant data. In the third publication, we refine a method to obtain interpretable subspaces of embeddings; this method can be used to investigate the workings of multilingual representations. The fourth publication finds that multilingual pretrained language models exhibit a high degree of multilinguality in the sense that high-quality word alignments can easily be extracted from them. The fifth paper investigates why multilingual pretrained language models are multilingual despite lacking any kind of crosslingual supervision during training. Based on our findings, we propose a training scheme that leads to improved multilinguality. Finally, the sixth paper investigates the use of multilingual pretrained language models as multilingual knowledge bases.
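To illustrate the kind of word-alignment extraction described above, the following is a minimal sketch of the general idea: contextual subword embeddings from a multilingual pretrained language model are compared across a translated sentence pair, and mutually most-similar positions are taken as alignments. The model name, the choice of layer, and the mutual-argmax heuristic are illustrative assumptions for this sketch, not the exact procedure used in the thesis.

```python
# Sketch: word alignment from a multilingual pretrained language model.
# Assumptions: bert-base-multilingual-cased, layer 8, mutual-argmax matching.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual model
LAYER = 8                                    # assumed intermediate layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentence):
    """Return subword tokens and their contextual embeddings (special tokens removed)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[LAYER][0]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    keep = [i for i, t in enumerate(tokens) if t not in tokenizer.all_special_tokens]
    return [tokens[i] for i in keep], hidden[keep]

src_tokens, src_vecs = embed("Das Haus ist klein.")  # German source sentence
tgt_tokens, tgt_vecs = embed("The house is small.")  # English target sentence

# Cosine similarity between every source and target subword embedding.
sim = torch.nn.functional.normalize(src_vecs, dim=-1) @ \
      torch.nn.functional.normalize(tgt_vecs, dim=-1).T

# Keep pairs that are mutual nearest neighbours (argmax in both directions).
fwd = sim.argmax(dim=1)  # best target position for each source subword
bwd = sim.argmax(dim=0)  # best source position for each target subword
alignments = [(i, j.item()) for i, j in enumerate(fwd) if bwd[j].item() == i]

for i, j in alignments:
    print(f"{src_tokens[i]} -> {tgt_tokens[j]}")
```

Running this prints subword-level pairs such as "Haus -> house"; aggregating subwords back to words and choosing a different similarity or matching scheme are natural refinements.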
| Item Type: | Theses (Dissertation, LMU Munich) |
|---|---|
| Keywords: | natural language processing, representation learning, multilinguality, machine learning, word embeddings |
| Subjects: | 000 Computers, Information and General Reference; 000 Computers, Information and General Reference > 004 Data processing computer science |
| Faculties: | Faculty of Mathematics, Computer Science and Statistics |
| Language: | English |
| Date of oral examination: | 28 April 2021 |
| 1. Referee: | Schütze, Hinrich |
| MD5 Checksum of the PDF-file: | 1a1b6e2a6a72c53d76b717e2cef4a6d7 |
| Signature of the printed copy: | 0001/UMC 27933 |
| ID Code: | 28014 |
| Deposited On: | 27 May 2021 10:08 |
| Last Modified: | 27 May 2021 10:08 |