Programme

Timezone: CEST

MONDAY, 27 June 2022

08.15-08.45

Registration

09.00-09.15

Welcome and Introduction: Nik Weiskopf


Session I: Computational Models in Language and Communication

09.15-10.00

Mariya Toneva

Princeton Neuroscience Institute, Princeton University, USA

10.00-10.45

Jean-Rémi King

École normale supérieure, Paris, France

Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms fall short of the language abilities of the human brain. To bridge this gap, here we explore the similarities and differences between these two systems using large-scale datasets of magneto-/electro-encephalography (M/EEG), functional magnetic resonance imaging (fMRI), and intracranial recordings. After investigating where and when deep language algorithms map onto the brain, we show that enhancing these algorithms with long-range forecasts makes them more similar to the brain. Our results further reveal that, unlike current deep language models, the human brain is tuned to generate a hierarchy of long-range predictions, whereby the fronto-parietal cortices forecast more abstract and more distant representations than the temporal cortices. Overall, our studies show how the interface between AI and neuroscience clarifies the computational bases of natural language processing.


10.45-11.15

Coffee Break

11.15-12.00

Andrea Martin

Max Planck Institute for Psycholinguistics, Nijmegen, NL

12.15-13.30

Lunch Break

13.45-15.00

Poster session I

15.00-15.30

Coffee Break

15.30-17.45

Small Group Workshops

18.00-19.15

Poster Session II

19.15-20.30

Welcome Barbecue

20.30-21.00

Launch of the IMPRS @MPI CBS Alumni Program

TUESDAY, 28 June 2022

Session II: Computational Models in Neuroimaging Physics and Signal Processing

09.00-09.45

Christian Beckmann

Statistical Imaging Neuroscience, Radboud University, Nijmegen, NL

09.45-10.30

n.n.

10.30-11.00

Coffee Break

11.00-11.45

n.n.

11.45-13.00

Alumni Talks

13.15-14.30

Lunch Break


Session III: Computational Models in Cognitive and Affective Neuroscience

14.30-15.15

Mitsuo Kawato

Center for Information and Neural Networks (CiNet), Kyoto, Japan

Internal models are neural processes within the brain that simulate external events. Within the domain of motor control, we have inverse models and forward models: the former computes motor commands from movement goals, while the latter predicts sensory signals from issued motor commands. In vision, we have corresponding forward and inverse optics models; the former is a generative model and the latter an inference model. One of the most perplexing enigmas related to internal models is understanding the neural mechanisms that enable animals to learn high-dimensional problems from so few trials. Kawato and Cortese (2021) proposed a computational neuroscience model of metacognition. The model comprises a modular, hierarchical reinforcement-learning architecture of parallel, layered generative-inverse model pairs. In the prefrontal cortex, a distributed executive network called the “cognitive reality monitoring network” (CRMN) orchestrates the conscious involvement of generative-inverse model pairs in perception and action. Based on mismatches between the computations of generative and inverse models, as well as on reward prediction errors, the CRMN computes a “responsibility signal” that gates the selection and learning of pairs in perception, action, and reinforcement learning. A high responsibility signal is given to pairs that best capture the external world, that are competent in movements (small mismatch), and that are capable of reinforcement learning (small reward-prediction error). The CRMN selects pairs with higher responsibility signals as objects of metacognition, and consciousness is determined by the entropy of responsibility signals across all pairs.


15.15-16.00

Janneke Jehee

Radboud University, Nijmegen, NL

Whether we are deciding about Covid-related restrictions, estimating a ball’s trajectory when playing tennis, or interpreting radiological images – virtually every choice we make is based on uncertain evidence. How do we infer that information is more or less reliable when making these decisions? How does the brain represent knowledge of this uncertainty? In this talk, I will present recent neuroimaging data combined with novel analysis tools to address these questions. Our results indicate that sensory uncertainty can reliably be estimated from the human visual cortex on a trial-by-trial basis, and moreover that observers appear to rely on this uncertainty in their perceptual decision-making.


16.00-16.30

Coffee Break

16.30-17.15

Emma Holmes

Department of Speech, Hearing and Phonetic Sciences, University College London, UK

Our acoustic environments typically contain multiple sounds that overlap in time. For example, if we try to listen to what a friend is saying in a busy restaurant, other conversations are typically going on around us at the same time. Selective attention enables us to focus on someone’s voice when other sounds are present. In these situations, attention does not appear to be all-or-none, but rather builds up over time. In this talk, I’ll describe some work we’ve done to understand the computational processes underlying this slow induction of attentional set. The computational modelling is based on active inference (Friston et al., 2017) and treats selective attention as a Bayesian inference problem. Using this framework, we modelled a ‘cocktail party’ listening paradigm and tested competing hypotheses about how behavioural and EEG data are generated during this paradigm. The model generates quantitative (testable) predictions about behavioural, psychophysical, and electrophysiological responses, and about underlying changes in synaptic efficacy. By comparing model predictions to empirical data, we were able to tease apart the computational processes underpinning attention-related EEG activity from those contributing to differences in reaction times on the listening task.


WEDNESDAY, 29 June 2022

Session IV: Computational Models in Basic and Clinical Neuroscience

09.00-10.15

Poster session III

10.30-11.15

Xiaosi Gu

Icahn School of Medicine at Mount Sinai, USA

11.15-11.45

Coffee Break

11.45-12.30

Karl Friston

Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, UK

How can we understand ourselves as sentient creatures? And what are the principles that underwrite sentient behaviour? This presentation uses the free energy principle to furnish an account in terms of active inference. First, we will try to understand sentience from the point of view of physics; in particular, the properties that self-organising systems—systems that distinguish themselves from their lived world—must possess. We then rehearse the same story from the point of view of a neurobiologist trying to understand functional brain architectures. The narrative starts with a heuristic proof (and simulations of a primordial soup) suggesting that life—or biological self-organization—is an inevitable and emergent property of any dynamical system that possesses a Markov blanket. This conclusion is based on the following argument: if a system can be differentiated from its external milieu, then its internal and external states must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states. Crucially, this equips internal states with an information geometry, pertaining to probabilistic beliefs about something; namely, external states. The dynamics of internal states can then be cast as a gradient flow on a quantity known as variational free energy. This free energy is the same quantity that is optimized in Bayesian inference and machine learning (where it is known as an evidence lower bound). In short, internal states will appear to infer—and act on—their world to preserve their integrity. This leads to a Bayesian mechanics, which can be neatly summarised as self-evidencing. In the second half of the talk, we will unpack these ideas using simulations of Bayesian belief updating in the brain and relate them to predictive processing and sentient behaviour. Keywords: active inference ∙ autopoiesis ∙ cognitive ∙ dynamics ∙ free energy ∙ epistemic value ∙ self-organization.


12.30-13.15

Caswell Barry

Division of Biosciences, University College London, UK

13.30-14.15

Lunch Break

Panel Discussion: Ethical Implications of Computational Neuroimaging and Artificial Intelligence

14.30-15.00

Introduction and brief Talks by Panelists

15.00-15.45

Panel Discussion

15.45-16.15

Coffee Break

16.15-16.45

Poster Prizes / Poster Talks

16.45-17.00

Concluding Remarks and Take Home Messages
