Andras Kornai: Unifying formulaic, geometric, and algebraic theories of semantics

Sunday, June 19, Taper Hall 116, 9:00 AM -- 1:50 PM

In the past 5-10 years, a geometric form of semantic representation, word vectors, has taken computational linguistics by storm. Mainstream linguistic semantics, Montague Grammar and its lineal descendants, has remained largely unreceptive to representing word and sentence meaning in finite-dimensional Euclidean space: the five-volume Wiley Blackwell Companion to Semantics (2021) does not even mention the idea.

At the same time, major database collection efforts such as the Google and Microsoft knowledge graphs have amassed hundreds of billions of facts about the world. These efforts, which rely on simple algebraic meaning representation methods using labeled graphs or relational triples, have also remained largely under the radar of logic-based formal semantics, even though semantic search (information retrieval), information extraction, and the increasingly effective Semantic Web are all powered by a combination of the geometric and algebraic methods.

This one-day short course will investigate the similarities and differences between the formula-based mainstream, the geometric, and the algebraic approaches. The focus will be on explaining the vector-based and graph-based approaches to people already familiar with logical semantics. We will describe some of the novel insights these approaches bring to such traditional concerns of linguistic semantics as meaning postulates, generics, temporal and spatial models, indexicals, lexical categorization, the meaning of bound morphemes, deep cases, negation, and implicature.
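
To make the contrast concrete for readers coming from logical semantics, here is a minimal sketch in Python of the two non-formulaic representations just described. The vectors and triples below are invented toy examples, not actual embeddings or knowledge-graph content: in the geometric view, a word's meaning is a point in finite-dimensional Euclidean space and similarity of meaning is cosine similarity; in the algebraic view, a fact is a labeled (subject, relation, object) triple.

```python
import math

# Geometric view: word meanings as points in R^3 (toy, invented vectors,
# not real embeddings). Similarity of meaning is measured by the cosine
# of the angle between the corresponding vectors.
word_vectors = {
    "king":  [0.8, 0.3, 0.1],
    "queen": [0.7, 0.4, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(word_vectors["king"], word_vectors["queen"]))  # ~0.99, similar
print(cosine(word_vectors["king"], word_vectors["apple"]))  # ~0.29, dissimilar

# Algebraic view: facts as labeled relational triples, the storage scheme
# behind knowledge graphs (again a toy example).
triples = {
    ("Socrates", "instance_of", "human"),
    ("human", "subclass_of", "mortal"),
}

def holds(subj, rel, obj):
    """Check whether a fact is asserted in the triple store."""
    return (subj, rel, obj) in triples

print(holds("Socrates", "instance_of", "human"))   # True
print(holds("Socrates", "instance_of", "mortal"))  # False: not asserted;
# deriving it would require an inference rule over subclass_of
```

Note that the triple store by itself only answers membership queries; anything beyond the asserted facts requires inference machinery on top, which is one point where the algebraic approach and formula-based semantics invite comparison.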

Readings

- Background
- Vector Semantics

Slides

- First lecture
- Second lecture
- Third lecture

Video of all three lectures in one