Learning, conditionals, causation
Günther, Mario Konrad
2019
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Günther, Mario Konrad (2019): Learning, conditionals, causation. Dissertation, LMU München: Graduate School of Systemic Neurosciences (GSN)
PDF: Guenther_Mario_K.pdf (1MB)

Abstract

This dissertation is about conditionals and causation. In particular, we (i) propose a method by which an agent learns conditional information, and (ii) analyse causation in terms of a new type of conditional. Our starting point is Ramsey's (1929/1990) test: accept a conditional when you can infer its consequent upon supposing its antecedent. Inspired by this test, Stalnaker (1968) developed a semantics of conditionals. In Ch. 2, we define and apply our new method of learning conditional information. It says, roughly, that you learn conditional information by updating on the corresponding Stalnaker conditional. By generalising Lewis's (1976) updating rule to Jeffrey imaging, our learning method becomes applicable to both certain and uncertain conditional information. The method generates the correct predictions for all of Douven's (2012) benchmark examples and van Fraassen's (1981) Judy Benjamin Problem. In Ch. 3, we prefix Ramsey's test with a suspension of judgment on antecedent and consequent. Unlike the Ramsey Test semantics of Stalnaker (1968) and Gärdenfors (1978), our strengthened semantics requires the antecedent to be inferentially relevant to the consequent. We exploit this asymmetric relation of relevance in a semantic analysis of the natural-language conjunction 'because'. In Ch. 4, we devise an analysis of actual causation in terms of production, where production is understood along the lines of our strengthened Ramsey Test. Our analysis solves the problems of overdetermination, conjunctive scenarios, early and late preemption, switches, double prevention, and spurious causation, a set of problems that still challenges counterfactual accounts of actual causation in the tradition of Lewis (1973c). In Ch. 5, we translate our analysis of actual causation into Halpern and Pearl's (2005) framework of causal models. As a result, our analysis is considerably simplified, at the cost of losing its reductiveness. The upshot is twofold: (i) Jeffrey imaging on Stalnaker conditionals emerges as an alternative to Bayesian accounts of learning conditional information; (ii) the analyses of causation in terms of our strengthened Ramsey Test conditional prove to be worthy rivals to contemporary counterfactual accounts of causation.
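For readers who want the updating step of Ch. 2 in symbols, the following is a minimal sketch, assuming the standard formulation of Lewis's (1976) imaging rule and one natural Jeffrey-style generalisation. The selection notation w_A (the closest A-world to w) and the mixing weight alpha are illustrative assumptions here and need not match the dissertation's exact formulation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Lewis (1976) imaging: each world w transfers its probability to
% w_A, the closest A-world given by a Stalnaker selection function.
% (Notation assumed for illustration, not taken from the record.)
\[
  P^{A}(w') \;=\; \sum_{w \,:\, w_{A} = w'} P(w)
\]

% A Jeffrey-style generalisation (illustrative): learning A to
% degree \alpha mixes imaging on A with imaging on \neg A.
\[
  P^{A}_{\alpha}(w') \;=\; \alpha\, P^{A}(w') \;+\; (1-\alpha)\, P^{\neg A}(w')
\]

% On the learning method sketched in the abstract, learning the
% conditional ``if A then C'' is modelled as (Jeffrey) imaging on
% the Stalnaker conditional A > C rather than as conditionalising.
\[
  P_{\mathrm{new}} \;=\; P^{\,A > C}_{\alpha}
\]

\end{document}

With alpha = 1 the mixed rule reduces to Lewis's original imaging, which is one way to see how the method covers both certain and uncertain conditional information.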