Logic and probability are key themes of cognitive science that have long had an uneasy coexistence. This course will introduce the stochastic lambda calculus, an extension of the standard lambda calculus into a system with probabilistic semantics. This provides a mathematical foundation for uniting logic and probability: compositional languages that support reasoning by probabilistic inference. This general framework is realized in the probabilistic programming language Church, which we will use for hands-on examples throughout the class.
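To give a flavor of the idea, here is a minimal sketch of a Church program (the names `flip` and `repeat` are standard Church primitives; the particular coin is an illustrative choice, not an example from the course):

```scheme
;; A stochastic thunk: each call returns a fresh random sample.
(define (biased-coin) (if (flip 0.7) 'heads 'tails))

;; Repeated calls return different values; the meaning of the
;; program is the *distribution* over its return values.
(repeat 10 biased-coin)
```

Because random choices compose like any other expressions of the lambda calculus, arbitrarily structured generative models can be built up from small pieces in this way.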
We will use Church to explore the basic principles of inference over structured probabilistic models, as they are relevant to modeling human cognition. These include explaining-away, the Bayesian Occam's razor, and learning-to-learn in hierarchical Bayesian models. Examples will be drawn from various domains in cognitive science, including causal learning and language. We will then cover highlights from current frontiers in Bayesian modeling, including models of social cognition and rational process-level models. We will end with applications to natural language semantics and pragmatics.
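As one example of the kind of inference pattern at stake, explaining-away can be sketched in a few lines of Church (syntax follows the webchurch dialect's `rejection-query`; the probabilities and variable names are illustrative assumptions, not course material):

```scheme
;; Sneezing can be caused by a cold or by allergies.
(rejection-query
 (define cold    (flip 0.2))
 (define allergy (flip 0.2))
 (define sneeze  (or cold allergy))
 cold       ; query expression: was it a cold?
 sneeze)    ; condition: we observed sneezing
;; Observing sneezing raises the posterior probability of cold;
;; additionally conditioning on allergy would lower it again --
;; the allergy "explains away" the sneeze.
```

Queries of this form, conditioning a generative program on observed values, are the basic workhorse for all the models discussed in the course.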
None (but basic familiarity with probability and programming will help).
Email: ngoodman (AT) stanford (DOT) edu
Noah D. Goodman is Assistant Professor of Psychology, Linguistics (by courtesy), and Computer Science (by courtesy) at Stanford University. He studies the computational basis of human thought, merging behavioral experiments with formal methods from statistics and logic. Specific projects vary from concept learning and language understanding to inference algorithms for probabilistic programming languages. He received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as a postdoc and research scientist at MIT. In 2010 he moved to Stanford, where he runs the Computation and Cognition Lab.