Language Emerges from Computability

Abstract

Human language is an extremely complex yet tightly constrained system. Linguists study which classes of constraints are necessary and sufficient, how systems effectively learn them from sparse data, and what representations they use when doing so. This talk will show how each of these properties emerges from basic principles of computability, drawing on insights from automata theory and algorithmic learning theory. I will also show how these principles can be applied to explore the generalization abilities of distributed "neural" machine learning models.

Bio

Jon Rawski is an assistant professor in the Department of Linguistics & Language Development, where he teaches courses on general and computational linguistics. His work concerns the mathematics of language and learning, at the intersection of cognitive science, linguistics, and theoretical computer science. He received his PhD from Stony Brook University in 2021.

Time and Location

Tuesday, March 15, 2022 at 1:30 PM in MH 225 or via Zoom