The course will compare three main approaches to formal and computational semantics: model theory, proof theory, and distributional treatments of meaning (Vector Space Models). It will also consider ways of developing a probabilistic semantics for natural language. I will look at these approaches with respect to four main questions that an adequate theory of semantic representation must answer convincingly. First, how is lexical meaning integrated into the combinatorial procedures through which the interpretations of syntactically complex expressions are computed? Second, how is semantic entailment expressed in the theory? Third, how is the pervasive gradience of semantic properties captured? Finally, how can language learners acquire the class of representations that the theory makes available? These questions will be considered in the context of the guiding concern of computational semantics: developing robust, wide-coverage systems for representing the semantic properties of natural languages, such that these systems can be effectively learned and their representations of meaning can be efficiently computed.
An understanding of at least classical first-order logic and elementary set theory
Email: shalom (DOT) lappin (AT) kcl (DOT) ac (DOT) uk
I am Professor of Computational Linguistics at King's College London, and a Fellow of the British Academy. My main areas of research and teaching are computational linguistics, formal and computational semantics, and the application of information-theoretic methods to the acquisition and representation of linguistic knowledge. Prior to arriving at King's in 1999 I taught at the School of Oriental and African Studies, Tel Aviv University, the University of Haifa, the University of Ottawa, and Ben Gurion University of the Negev. I was also a Research Staff Member in the Natural Language Group of the Computer Science Department at the IBM T.J. Watson Research Center. I am currently working on probabilistic models of syntactic and semantic representation.