Introduction to (G)LMMs
- Start: Aug 30, 2022
- End: Aug 31, 2022
- Speaker: Dr Daniel Schad
- Location: Max Planck Institute for Human Cognitive and Brain Sciences
- Room: Lecture Hall (C101)
- Host: IMPRS Coordination
- Contact: email@example.com
The course will provide an introduction to linear mixed-effects models (LMMs) in R, starting from the linear model. An important topic in LMMs is contrast coding, which provides the way to encode hypotheses about factors in linear (mixed-effects) models. The course will therefore discuss contrast coding in detail and introduce a powerful way to encode any linear hypothesis about factors into contrasts via the generalised matrix inverse, which can be implemented easily using the R package hypr. The course will also cover the coding of covariates (i.e., continuous predictor variables).

Building on this knowledge of contrasts, the second day will provide an introduction to the LMM: it will discuss fixed effects and variance components, and how they can be estimated in R using the lmer function. Moreover, we will treat the important question of how variance components and correlation parameters can be selected to achieve parsimonious LMMs. If there is interest and enough time, we can additionally discuss power analyses for LMMs using the designr R package.
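As a small taste of the workflow covered in the course, the following R sketch encodes a single hypothesis as a contrast with hypr and fits an LMM with lmer. All data and variable names (dat, rt, cond, subj) are made up for illustration, and the toy data and minimal random-effects structure are assumptions, not course material:

```r
library(hypr)   # contrast coding from hypotheses
library(lme4)   # linear mixed-effects models

# Simulated toy data (illustrative only): 20 subjects, two conditions
set.seed(1)
dat <- data.frame(
  subj = factor(rep(1:20, each = 10)),
  cond = factor(rep(c("a", "b"), times = 100))
)
dat$rt <- 300 + 20 * (dat$cond == "b") + rnorm(nrow(dat), sd = 30)

# Encode the hypothesis "condition b differs from condition a";
# hypr derives the contrast matrix via the generalised matrix inverse
h <- hypr(b_vs_a = b ~ a, levels = c("a", "b"))
contrasts(dat$cond) <- contr.hypothesis(h)

# LMM with a by-subject random intercept (kept minimal for the toy data)
m <- lmer(rt ~ cond + (1 | subj), data = dat)
summary(m)
```

The fixed-effect estimate for the b_vs_a contrast then directly tests the stated hypothesis, which is the key advantage of hypothesis-driven contrast coding over default treatment coding.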
PhD students should already be somewhat familiar with R/RStudio, linear models, and frequentist statistics.
Day 1 (Aug 30)
10.00 - 11.30 Linear model (LM)
(short break)
12.00 - 13.30 Contrasts (1): Coding factors in linear models
(lunch break)
14.30 - 16.00 Contrasts (2): Generalised inverse & hypr package
(short break)
16.30 - 18.00 Contrasts (3): Polynomials, ANOVA, Nested Contrasts, and Covariates

Day 2 (Aug 31)
10.00 - 11.30 Linear mixed model (LMM)
(short break)
12.00 - 13.30 LMM fixed effects & variance components
(lunch break)
14.30 - 16.00 LMM selection of variance components / correlation parameters
(short break)
16.30 - 18.00 Power analysis / Final Discussion
(This is a rough plan; the timing of topics and breaks might vary.)
Any interested PhD student of IMPRS NeuroCom or employee at MPI CBS is invited to register.
Literature on linear mixed-effects models
⁃ Bates, D. M. (2010). lme4: Mixed-effects modeling with R. https://www.researchgate.net/publication/235709638_Lme4_Mixed-Effects_Modeling_With_R
⁃ Pinheiro, J., & Bates, D. (2006). Mixed-effects models in S and S-PLUS. Springer Science & Business Media.
⁃ Baayen, R. H. (2008). Analyzing Linguistic Data: A Practical Introduction to Statistics Using R. Cambridge University Press.
⁃ Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.
⁃ Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390-412.
⁃ Kliegl, R., Masson, M. E., & Richter, E. M. (2010). A linear mixed model analysis of masked repetition priming. Visual Cognition, 18(5), 655-681.
⁃ Kliegl, R., Wei, P., Dambacher, M., Yan, M., & Zhou, X. (2011). Experimental effects and individual differences in linear mixed models: Estimating the relationship between spatial, object, and attraction effects in visual attention. Frontiers in Psychology, 1, 238.
⁃ Schad, D. J., Vasishth, S., Hohenstein, S., & Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language, 110, 104038.
⁃ Vasishth, S., Schad, D. J., Bürki, A., & Kliegl, R. (in prep.). Linear mixed models for linguistics and psychology: A comprehensive introduction. CRC Press. https://vasishth.github.io/Freq_CogSci/
⁃ Bates, D., Kliegl, R., Vasishth, S., & Baayen, H. (2015). Parsimonious mixed models. arXiv preprint arXiv:1506.04967.
⁃ Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255-278.
⁃ Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305-315.