Timezone: CEST

MONDAY, 27 June 2022




Welcome and Introduction: Nik Weiskopf

Session I: Computational Models in Language and Communication

Chair: Gesa Hartwigsen

Mariya Toneva

Neuroscience Institute, Princeton University, USA & Max Planck Institute for Software Systems, Saarbruecken, Germany

Neuroscientists have made progress towards answering the what, where, and when of language comprehension. However, how information is aggregated by the brain across different locations and time points is still elusive. Meanwhile, the field of natural language processing (NLP) has created computational systems based on deep neural networks that learn to aggregate the meaning of words in specific ways to perform a specific language task.

In this talk, I argue that neurolinguistics can benefit from using NLP systems as model organisms for how information is aggregated during language comprehension, despite NLP systems’ differences from the human brain. Model organisms make it easier to study a specific brain function because they allow for direct interventions, which are neither ethical nor practical in humans. As an example, I discuss one case study of intervening in an NLP system to isolate a computational representation of supra-word meaning and investigate its neural basis. Using this new computational representation, we show that fMRI and MEG recordings have very different sensitivities to supra-word meaning, suggesting that the maintenance and integration of composed meaning may be supported by different processes in the brain.

We conclude by discussing the ways in which NLP systems are not yet perfect model organisms for human language comprehension. However, importantly, NLP systems are not static, and future improvements that can lead to a more human-like understanding of language will result in even better model organisms.


Jean-Remi King

École normale supérieure de Paris, France

Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms fall short of the language abilities of the human brain. To bridge this gap, we here explore the similarities and differences between these two systems using large-scale datasets of magneto/electro-encephalography (M/EEG), functional Magnetic Resonance Imaging (fMRI), and intracranial recordings. After investigating where and when deep language algorithms map onto the brain, we show that enhancing these algorithms with long-range forecasts makes them more similar to the brain. Our results further reveal that, unlike current deep language models, the human brain is tuned to generate a hierarchy of long-range predictions, whereby the fronto-parietal cortices forecast more abstract and more distant representations than the temporal cortices. Overall, our studies show how the interface between AI and neuroscience clarifies the computational bases of natural language processing.


Coffee Break


Andrea Martin

Max Planck Institute for Psycholinguistics, Nijmegen, NL

Human language is an example of a formally-describable system that is both statistical and algebraic. As such, its computational properties are markedly different than in other perception-action systems: hierarchical relationships between sounds, words, phrases, and sentences, structure-dependence, and the unbounded ability to combine smaller units into larger ones. These and other formal properties have long made language difficult to account for from a biological systems perspective, and within models of cognition. I focus on this foundational puzzle – essentially “what does a neural system need to represent information that is both algebraic and statistical?” - and discuss the computational requirements, including the role of neural oscillations across time, for what I believe is necessary for a system to represent and process language. I build on examples from cognitive neuroimaging data and computational simulations, and outline a developing theory that integrates basic insights from linguistics and psycholinguistics with the currency of neural computation, which in turn demarcates the boundary conditions for biological and artificial systems in contact with human language.


Lunch Break


Poster Session I


Coffee Break


Small Group Workshops


Poster Session II


Welcome Barbecue

TUESDAY, 28 June 2022

Session III: Computational Models in Cognitive and Affective Neuroscience

Chair: Martin Hebart

Mitsuo Kawato

Center for Information and Neural Networks (CiNet), Kyoto, Japan.

Internal models are neural processes within the brain that simulate external events. Within the domain of motor control, we have inverse models and forward models: the former compute motor commands from movement goals, while the latter predict sensory signals from issued motor commands. In vision, we have corresponding forward and inverse optics models; the former is a generative model and the latter an inference model. One of the most perplexing enigmas related to internal models is to understand the neural mechanisms that enable animals to learn high-dimensional problems with so few trials. Kawato and Cortese (2021) proposed a computational neuroscience model of metacognition. The model comprises a modular, hierarchical reinforcement-learning architecture of parallel, layered generative-inverse model pairs. In the prefrontal cortex, a distributed executive network called the “cognitive reality monitoring network” (CRMN) orchestrates conscious involvement of generative-inverse model pairs in perception and action. Based on mismatches between the computations of generative and inverse models, as well as reward prediction errors, the CRMN computes a “responsibility signal” that gates the selection and learning of pairs in perception, action, and reinforcement learning. A high responsibility signal is given to pairs that best capture the external world, that are competent in movement (small mismatch), and that are capable of reinforcement learning (small reward-prediction error). The CRMN selects pairs with higher responsibility signals as objects of metacognition, and consciousness is determined by the entropy of responsibility signals across all pairs.
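The responsibility computation described in the abstract can be illustrated with a toy softmax-style gating over module prediction errors, in the spirit of MOSAIC-like architectures. This is a hypothetical sketch, not Kawato and Cortese’s implementation; the function names, the Gaussian error-to-likelihood mapping, and the sigma parameter are all assumptions made for illustration.

```python
import numpy as np

def responsibilities(pred_errors, sigma=1.0):
    """Responsibility signal over module prediction errors.

    Modules whose forward models predict well (small error) receive
    high responsibility, gating their contribution to perception,
    action, and learning.
    """
    errors = np.asarray(pred_errors, dtype=float)
    likelihood = np.exp(-errors**2 / (2.0 * sigma**2))
    return likelihood / likelihood.sum()

def entropy(r):
    """Entropy of the responsibility distribution (in the abstract,
    proposed to determine conscious involvement across pairs)."""
    r = np.asarray(r, dtype=float)
    return float(-(r * np.log(r + 1e-12)).sum())
```

In this toy version, `responsibilities([0.1, 1.0, 2.0])` concentrates on the first module, and the resulting entropy is lower than for a uniform distribution, mirroring the idea that a confident, well-predicting pair dominates metacognitive selection.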


Janneke Jehee

Radboud University, Nijmegen, NL.

Whether we are deciding about Covid-related restrictions, estimating a ball’s trajectory when playing tennis, or interpreting radiological images – virtually every choice we make is based on uncertain evidence. How do we infer that information is more or less reliable when making these decisions? How does the brain represent knowledge of this uncertainty? In this talk, I will present recent neuroimaging data combined with novel analysis tools to address these questions. Our results indicate that sensory uncertainty can reliably be estimated from the human visual cortex on a trial-by-trial basis, and moreover that observers appear to rely on this uncertainty in their perceptual decision-making.


Coffee Break


Emma Holmes

Department of Speech, Hearing and Phonetic Sciences, University College London, UK

Our acoustic environments typically contain multiple sounds that overlap in time. For example, if we try to listen to what a friend is saying at a busy restaurant, there are typically other conversations going on around us at the same time. Selective attention enables us to focus on someone’s voice when other sounds are present. In these situations, attention does not appear to be all-or-none, but rather builds up over time. In this talk, I’ll describe some work we’ve done to understand the computational processes underlying this slow build-up of attentional set. The computational modelling is based on active inference (Friston et al., 2017) and treats selective attention as a Bayesian inference problem. Using this framework, we modelled a ‘cocktail party’ listening paradigm and tested competing hypotheses about how behavioural and EEG data are generated during this paradigm. The model generates quantitative (testable) predictions about behavioural, psychophysical and electrophysiological responses, as well as underlying changes in synaptic efficacy. By comparing model predictions to empirical data, we were able to tease apart the computational processes underpinning attention-related EEG activity from those contributing to differences in reaction times on the listening task.
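The gradual build-up of attentional set lends itself to a toy sequential Bayesian update over which talker is the target. This is an illustrative sketch only, not the active-inference model described in the talk; the Gaussian likelihood, the talker means, and the sigma parameter are assumptions chosen purely for illustration.

```python
import numpy as np

def update_beliefs(prior, obs, means, sigma=1.0):
    """One Bayesian update of beliefs about which talker is the target,
    given a noisy acoustic sample. Repeated updates let confidence in
    the target build up gradually rather than all-or-none."""
    means = np.asarray(means, dtype=float)
    likelihood = np.exp(-(obs - means)**2 / (2.0 * sigma**2))
    posterior = np.asarray(prior, dtype=float) * likelihood
    return posterior / posterior.sum()
```

Starting from a uniform prior over two talkers with (assumed) feature means 0 and 3, feeding in samples near 0 drives the belief in the first talker toward 1 over successive updates, which is the qualitative build-up the abstract describes.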

WEDNESDAY, 29 June 2022


Poster Session III

Session IV: Computational Models in Basic and Clinical Neuroscience

Chair: Christian Doeller

Xiaosi Gu

Icahn School of Medicine at Mount Sinai, USA.

Given the complex and dynamic nature of our social relationships, the human brain needs to quickly learn and adapt to new social situations. Breakdown of any of these computations could lead to social deficits, as observed in many psychiatric disorders. In this talk, I will present our recent neurocomputational work that attempts to model both 1) how humans dynamically adapt beliefs about other people and 2) how they exert influence over social others through forward planning. Lastly, I will present our findings of how impaired social computations might manifest in different disorders such as addiction, delusion, and autism. Taken together, these findings reveal the dynamic and proactive nature of human interactions as well as the clinical significance of these high-order mental processes.


Coffee Break


Karl Friston

Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, UK

How can we understand ourselves as sentient creatures? And what are the principles that underwrite sentient behaviour? This presentation uses the free energy principle to furnish an account in terms of active inference. First, we will try to understand sentience from the point of view of physics; in particular, the properties that self-organising systems—that distinguish themselves from their lived world—must possess. We then rehearse the same story from the point of view of a neurobiologist, trying to understand functional brain architectures. The narrative starts with a heuristic proof (and simulations of a primordial soup) suggesting that life—or biological self-organization—is an inevitable and emergent property of any dynamical system that possesses a Markov blanket. This conclusion is based on the following arguments: if a system can be differentiated from its external milieu, then its internal and external states must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states. Crucially, this equips internal states with an information geometry, pertaining to probabilistic beliefs about something; namely, external states. The dynamics of internal states can then be cast as a gradient flow on a quantity known as variational free energy. This free energy is the same quantity that is optimized in Bayesian inference and machine learning (where it is known as an evidence lower bound). In short, internal states will appear to infer—and act on—their world to preserve their integrity. This leads to a Bayesian mechanics, which can be neatly summarised as self-evidencing. In the second half of the talk, we will unpack these ideas using simulations of Bayesian belief updating in the brain and relate them to predictive processing and sentient behaviour. Key words: active inference ∙ autopoiesis ∙ cognitive ∙ dynamics ∙ free energy ∙ epistemic value ∙ self-organization.
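The identity linking free energy, Bayesian inference and the evidence lower bound invoked in the abstract is the standard variational one; as a sketch, for an approximate posterior q(s) over hidden states s and observations o:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
  = D_{\mathrm{KL}}\!\left[\,q(s) \,\|\, p(s \mid o)\,\right] - \ln p(o)
```

Because the KL divergence is non-negative, −F lower-bounds the log evidence ln p(o); minimising F therefore simultaneously drives q(s) toward the true posterior (inference) and maximises model evidence (self-evidencing), which is why −F is known in machine learning as the evidence lower bound (ELBO).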


Caswell Barry

Division of Biosciences, University College London, UK


Lunch Break

Panel Discussion: Ethical Implications of Computational Neuroimaging and Artificial Intelligence

Chairs: Sofie Valk & Nico Scherf

Panelists: Thomas Grote, Caswell Barry & Philipp Kellmeyer


Introduction and brief Talks by Panelists


Panel Discussion


Coffee Break


Poster Prizes / Concluding Remarks and Take Home Messages
