Language models and information theoretical complexity metrics

IMPRS Lecture

  • Date: May 2, 2019
  • Time: 1:00–3:00 PM (local time, Germany)
  • Speaker: Dr. John T. Hale
  • Department of Linguistics, University of Georgia, Athens, USA
  • Location: Max Planck Institute for Human Cognitive and Brain Sciences
  • Room: Lecture Hall
  • Host: IMPRS Coordination
  • Contact: imprs-neurocom@cbs.mpg.de

Abstract

It is said that the mind/brain is "predictive." For language comprehension, a prime case of human cognition, one way of making this idea concrete leverages the concept of a 'language model', that is, a probability distribution over word sequences, as developed in the field of natural language processing.

Via information-theoretic complexity metrics such as surprisal (how unexpected a word is given its preceding context) and entropy reduction (how much a word narrows the comprehender's uncertainty about what follows), language models can link theoretical proposals about grammar and processing to observable neural signals.
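
To make the two metrics concrete for readers new to the area, the sketch below computes word-by-word surprisal and entropy reduction from a toy maximum-likelihood bigram model. The corpus, variable names, and the simplified next-word formulation of entropy reduction are illustrative assumptions only; Hale's entropy reduction is originally defined over grammar derivations, and practical models use smoothing to avoid zero probabilities.

```python
import math
from collections import defaultdict

# Toy corpus; a real language model would be estimated from far more data
# and smoothed so that unseen word pairs do not get zero probability.
corpus = [
    ["the", "dog", "barked"],
    ["the", "dog", "slept"],
    ["the", "cat", "slept"],
]

# Maximum-likelihood bigram model: count each word pair and each context.
bigram = defaultdict(lambda: defaultdict(int))
context = defaultdict(int)
for sentence in corpus:
    for prev, word in zip(["<s>"] + sentence, sentence):
        bigram[prev][word] += 1
        context[prev] += 1

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev)."""
    return -math.log2(bigram[prev][word] / context[prev])

def entropy(prev):
    """Shannon entropy in bits of the next-word distribution after `prev`."""
    probs = [count / context[prev] for count in bigram[prev].values()]
    return -sum(p * math.log2(p) for p in probs)

# Word-by-word complexity profile for one test sentence.
test = ["the", "dog", "slept"]
for prev, word in zip(["<s>"] + test, test):
    h_before = entropy(prev)
    h_after = entropy(word) if context[word] else 0.0  # sentence-final word
    print(f"{word:>6}: surprisal = {surprisal(prev, word):5.2f} bits, "
          f"entropy reduction = {max(0.0, h_before - h_after):5.2f} bits")
```

Running this prints one surprisal and one entropy-reduction value per word; per-word profiles of this kind are the quantities that can then be related to observable neural signals.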

This tutorial session introduces the basics of language models to those with no background in computational linguistics, emphasizing their utility in formalizing hypotheses about language processing in the brain. Its relevance extends beyond the language domain: the course aims to inspire and foster quantitative, naturalistic measures of cognitive processing demands throughout cognitive neuroscience.

Registration: https://survey3.gwdg.de/index.php?r=survey/index&sid=882744&lang=en

References/suggested readings

(1) People who have said that the mind is predictive

(2) People who have sought to apply this idea to language

(3) The idea of a language model as it figures in computational linguistics
