Esmaily, Jamal (2025): Computational mechanisms of social decision making and learning. Dissertation, LMU München: Graduate School of Systemic Neurosciences (GSN)
PDF: Esmaily_Jamal.pdf (30 MB)
Abstract
Understanding the mechanisms of decision-making and learning is fundamental to advancing cognitive neuroscience and improving collective decision-making strategies. While extensive research has elucidated aspects of individual decision-making, significant gaps remain in our understanding of the computational and neurobiological processes underlying social decision-making and of how decision-making strategies are learned over time. To address these gaps, we conducted two complementary studies that combine computational modelling, neurobiological data, and behavioural analysis.

In the first study, we focused on the neural mechanisms underlying social decision-making, specifically how collaborators align their confidence judgments. Despite the importance of confidence communication in social contexts, the computational basis of this process has remained elusive. We developed a neurobiological model, supported by EEG, eye-tracking, and behavioural data, to investigate confidence matching during perceptual decision-making. By combining psychophysical tasks, neural data, and computational modelling, we demonstrated how humans use information about a collaborator's confidence to adjust their own decisions and confidence levels, providing a robust framework for predicting and validating confidence alignment in collaborative tasks.

Studying social decision-making with methods designed for individual decision-making poses unique challenges: social experiments typically require a large number of subjects (N ≈ 30), and computational models of individual decision-making require extensive training of each subject to yield reliable results, making interdisciplinary studies like ours particularly demanding. In the process, we noticed that the computational mechanisms of training itself are poorly understood, with many studies discarding training data as noisy or irrelevant. This gap motivated our second study, which explored how decision-making strategies are learned. To address this, we developed a reinforcement learning (RL) framework that models perceptual decision-making as a dynamic process in which the decision boundary is optimized over time. The model learns to balance the cost of waiting against external rewards, offering a computational tool for studying the evolution of decision thresholds and learning dynamics.

Together, these studies provide a comprehensive exploration of both social decision-making mechanisms and the learning processes that shape decision strategies. The results offer insights into how humans align confidence in collaborative settings and how they refine decision boundaries through experience. These findings may have broad implications for real-world applications, such as improving teamwork in high-stakes environments (e.g., medical diagnostics or financial trading) and developing training programs that enhance decision-making efficiency. By advancing our understanding of these complex processes, this research lays the groundwork for more effective individual and collaborative decision-making strategies.
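To make the second study's idea concrete, the sketch below is a minimal toy, not the dissertation's actual model: an agent tunes a single decision boundary on a simulated evidence-accumulation task so that reward for correct choices is balanced against a per-step waiting cost. The task, parameter values, and the simple hill-climbing update are all illustrative assumptions standing in for the RL framework described in the abstract.

```python
# Toy illustration (assumed setup, not the dissertation's model): learn a decision
# boundary that trades off reward for correct choices against the cost of waiting.
import numpy as np

rng = np.random.default_rng(0)

def run_trial(boundary, drift=0.1, noise=1.0, dt=1.0, max_steps=200):
    """Accumulate noisy evidence until it crosses +/- boundary; return (correct, steps)."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if abs(evidence) >= boundary:
            return evidence > 0, step      # true drift is positive, so sign = accuracy
    return evidence > 0, max_steps         # forced choice at timeout

def expected_return(boundary, reward=1.0, wait_cost=0.01, n_trials=500):
    """Monte-Carlo estimate of reward minus accumulated waiting cost for this boundary."""
    total = 0.0
    for _ in range(n_trials):
        correct, steps = run_trial(boundary)
        total += (reward if correct else 0.0) - wait_cost * steps
    return total / n_trials

# Crude learning loop: hill-climb the boundary on the estimated return, standing in
# for the reinforcement-learning update of the decision threshold over training.
boundary = 0.5
for _ in range(30):
    candidates = [max(0.1, boundary - 0.2), boundary, boundary + 0.2]
    boundary = max(candidates, key=expected_return)
print(f"learned boundary ≈ {boundary:.2f}")
```

With a higher waiting cost the learned boundary shrinks (faster, less accurate decisions); with a higher reward it grows, which is the trade-off the abstract attributes to the RL framework.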
| Document type | Dissertations (Dissertation, LMU München) |
|---|---|
| Subject areas | 500 Natural sciences and mathematics; 500 Natural sciences and mathematics > 570 Life sciences, biology |
| Faculties | Graduate School of Systemic Neurosciences (GSN) |
| Language of the thesis | English |
| Date of oral examination | 13 May 2025 |
| First referee | Bahrami, Bahador |
| MD5 checksum of the PDF file | ee5dafd6a6ba187401631846c06937d6 |
| Shelf mark of the printed edition | 0001/UMC 31260 |
| ID code | 35396 |
| Deposited on | 20 Jun 2025, 11:29 |
| Last modified | 20 Jun 2025, 11:29 |