Machine learning systems for human emotional states
Machine Learning, Ubiquitous Computing, Automotive, Emotions, Affective Computing, Representation Learning, HCI
Bethge, David
2023
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Bethge, David (2023): Machine learning systems for human emotional states. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics.
PDF: Bethge_David.pdf (50MB)

Abstract

Human emotions play a vital role in our everyday lives and influence our communication, perception, and experience of the real world. However, detecting, modeling, and predicting human emotional states remains challenging: emotions are subjective, ambiguous, and volatile, and current sensing modalities lack robustness in realistic environments and are potentially intrusive. The central question of this thesis is: How can we design emotion-aware systems that operate under these constraints while utilizing recent advances in machine learning? We present novel inputs for human emotion recognition based on multimodal and contextual data, and we propose machine learning frameworks that provide more explanatory, generalizable, privacy-preserving, and multi-sensory emotion-state predictions.

We introduce two ubiquitous mobile sensing applications for human emotional states [P1, P6]. The applications are designed to be minimally intrusive and require no body contact, inferring emotions from contextual data alone. We show that reformulating the classification problem as indirect context-emotion pattern recognition outperforms existing facial-expression baseline approaches. Our findings have major implications for future emotional state recognition systems, because ubiquitous devices can easily acquire contextual data and thereby act as smart emotion-estimation sensors.

In the second pillar of this thesis, we focus on machine learning architectures for human emotional states. We present three deep learning architectures for learning emotion representations from multivariate time-series data [P2, P3, P4]. We show how the representations can be designed to be both emotion-predictive and participant-invariant [P2], and we propose an architecture that learns a shared emotional representation from multiple noisy datasets with different inherent sensing characteristics [P3, P4]. The proposed approaches are tailored to capture detailed semantics in noisy input data and include measures that make predictions more interpretable, domain-invariant, and scalable. Furthermore, we present a novel transformer-based deep learning architecture for emotion prediction that derives explanations alongside its predictions [P7]. This yields a better understanding of the link between sensor input and model decisions, which ultimately helps in designing appropriate user interfaces powered by emotion prediction methods.

Finally, we present a machine learning application of contextual emotion prediction that enables an emotion-aware experience in a dynamic driving environment [P5]. We present the first navigation application that routes based on emotions: leveraging the concept of context-emotion linking, it predicts emotional states on future road segments and optimizes for both travel time and predicted emotional states. In a blinded user study, we show that emotional navigation has a positive effect on valence after riding the predicted happy route.

Although our work concentrates on human emotional states, we argue that it enables prediction systems for other human states (e.g., fatigue, cognitive load) to be designed to be more robust, scalable, and privacy-aware, and to let users understand and investigate the decision process through offered explanations. Furthermore, our work provides a foundation for emotion-aware systems in unconstrained, dynamic environments.
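
As an illustration of the indirect context-emotion formulation described above, the following minimal sketch trains a classifier that maps contextual features to an emotion label instead of relying on facial expressions. All feature names, data, and labels here are invented for illustration and are not taken from the thesis.

    # Hedged sketch: classify emotion from contextual signals rather than
    # facial expressions. Features and labels are hypothetical examples.
    from sklearn.ensemble import RandomForestClassifier

    # Rows: [hour_of_day, is_commute, ambient_noise_db, traffic_density]
    X = [[8, 1, 70, 0.9], [20, 0, 45, 0.2], [8, 1, 75, 0.8], [21, 0, 40, 0.1]]
    y = ["stressed", "relaxed", "stressed", "relaxed"]

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([[9, 1, 72, 0.85]]))  # -> likely "stressed"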
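The participant-invariant representations of [P2] suggest a domain-adversarial training setup. The sketch below shows one common way to realize such a setup, using a gradient-reversal layer so that a shared encoder stays predictive of emotion while a second head fails to identify the participant. Module names, dimensions, and the choice of gradient reversal are assumptions for illustration, not the thesis architecture.

    # Minimal domain-adversarial sketch (assumed technique, not from the thesis).
    import torch
    import torch.nn as nn

    class GradientReversal(torch.autograd.Function):
        """Identity on the forward pass; negates gradients on the backward pass."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class EmotionNet(nn.Module):
        def __init__(self, n_features, n_emotions, n_participants, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            # Shared encoder over flattened multivariate time-series windows.
            self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
            # Head 1: predict the emotion label from the shared representation.
            self.emotion_head = nn.Linear(64, n_emotions)
            # Head 2: predict the participant; the reversed gradients push the
            # encoder toward participant-invariant features.
            self.participant_head = nn.Linear(64, n_participants)

        def forward(self, x):
            z = self.encoder(x)
            emotion_logits = self.emotion_head(z)
            participant_logits = self.participant_head(
                GradientReversal.apply(z, self.lambd))
            return emotion_logits, participant_logits

    # Training would minimize cross-entropy on the emotion head while the
    # reversed gradients maximize confusion on the participant head.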
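For the emotion-aware navigation idea of [P5], one plausible reading is a shortest-path search whose edge cost blends travel time with a predicted emotion cost per road segment. In the toy example below, the graph, the weights, and the trade-off parameter alpha are all hypothetical; it shows how a calmer but slower route can win once predicted affect is weighted in.

    # Illustrative sketch of emotion-aware routing (hypothetical data).
    import heapq

    def emotion_aware_route(graph, start, goal, alpha=0.5):
        """Dijkstra over cost = (1 - alpha) * travel_time + alpha * emotion_cost."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nxt, travel_time, emotion_cost in graph.get(node, []):
                step = (1 - alpha) * travel_time + alpha * emotion_cost
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
        return float("inf"), []

    # Toy network: (neighbor, travel_time_minutes, predicted_negative_affect).
    graph = {
        "A": [("B", 5, 0.9), ("C", 6, 0.2)],
        "B": [("D", 4, 0.8)],
        "C": [("D", 5, 0.1)],
    }
    print(emotion_aware_route(graph, "A", "D", alpha=0.7))
    # With alpha=0.7 the calmer A->C->D route wins despite the longer travel time.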