
Random Effects Multinomial Processing Tree Models: A Maximum Likelihood Approach

Published online by Cambridge University Press:  01 January 2025

Steffen Nestler*
Affiliation:
Universität Münster
Edgar Erdfelder*
Affiliation:
Universität Mannheim
*
Correspondence should be made to Steffen Nestler, Institut für Psychologie, Universität Münster, Fliednerstr. 21, 48149 Münster, Germany. Email: steffen.nestler@uni-muenster.de
Correspondence should be made to Edgar Erdfelder, Universität Mannheim, Fakultät für Sozialwissenschaften A5, 68159 Mannheim, Germany. Email: erdfelder@uni-mannheim.de

Abstract

The present article proposes and evaluates marginal maximum likelihood (ML) estimation methods for hierarchical multinomial processing tree (MPT) models with random and fixed effects. We assume that an identifiable MPT model with S parameters holds for each participant. Of these S parameters, R parameters are assumed to vary randomly between participants, and the remaining S - R parameters are assumed to be fixed. We also propose an extended version of the model that includes effects of covariates on MPT model parameters. Because the likelihood functions of both versions of the model are too complex to be tractable, we propose three numerical methods to approximate the integrals that occur in the likelihood function, namely, the Laplace approximation (LA), adaptive Gauss–Hermite quadrature (AGHQ), and Quasi Monte Carlo (QMC) integration. We compare these three methods in a simulation study and show that AGHQ performs well in terms of both bias and coverage rate. QMC also performs well, but the number of responses per participant must be sufficiently large. In contrast, LA fails quite often due to undefined standard errors. We also suggest ML-based methods to test the goodness of fit and to compare models taking model complexity into account. The article closes with an illustrative empirical application and an outlook on possible extensions and future applications of the proposed ML approach.
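As a rough illustration of the AGHQ idea evaluated in the abstract (a sketch, not the authors' implementation), the following Python snippet approximates a one-dimensional marginal likelihood — a binomial response probability integrated over a normal random effect — by centering and rescaling Gauss–Hermite nodes at the mode of the integrand. The toy model and all function names (`integrand`, `aghq_marginal`) are illustrative assumptions; the article's models involve multinomial MPT likelihoods and possibly multivariate random effects.

```python
import numpy as np
from math import comb
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar

def integrand(theta, k, n, tau):
    # Binomial likelihood of k successes in n trials with success probability
    # logistic(theta), weighted by a N(0, tau^2) random-effect density.
    p = 1.0 / (1.0 + np.exp(-theta))
    lik = comb(n, k) * p**k * (1.0 - p)**(n - k)
    dens = np.exp(-theta**2 / (2.0 * tau**2)) / (np.sqrt(2.0 * np.pi) * tau)
    return lik * dens

def aghq_marginal(k, n, tau, n_nodes=15):
    # Step 1 (Laplace step): locate the mode of the integrand and estimate
    # its curvature by a finite-difference second derivative.
    neg_log = lambda t: -np.log(integrand(t, k, n, tau) + 1e-300)
    mu = minimize_scalar(neg_log, bounds=(-10.0, 10.0), method="bounded").x
    h = 1e-4
    d2 = (neg_log(mu + h) - 2.0 * neg_log(mu) + neg_log(mu - h)) / h**2
    sigma = 1.0 / np.sqrt(d2)
    # Step 2 (the "adaptive" step): shift and scale the Gauss-Hermite nodes
    # to the mode, then undo the e^{-x^2} weight built into the rule.
    x, w = hermgauss(n_nodes)
    t = mu + np.sqrt(2.0) * sigma * x
    return np.sqrt(2.0) * sigma * np.sum(w * np.exp(x**2) * integrand(t, k, n, tau))
```

Because the nodes track the region where the integrand is concentrated, a modest number of quadrature points typically suffices, which is why AGHQ can remain accurate where a non-adaptive rule or a plain Laplace approximation struggles.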

Information

Type
Theory & Methods
Creative Commons
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Copyright
Copyright © 2023 The Author(s)

Figure 1. The pair-clustering MPT model (adapted from Riefer & Batchelder, 1988, p. 330, Figure 2). Rectangles indicate stimulus classes (left) and observable responses (right). Rectangles with rounded corners represent latent cognitive states. Parameters attached to the branches indicate transition probabilities from left to right, specifically, storing a word pair as a cluster (c), retrieving a stored cluster in free recall (r), storing and retrieving a word from a non-clustered pair in free recall (u), and storing and retrieving a singleton in free recall (a).
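Multiplying the transition probabilities along each branch of this tree yields the category probabilities for word pairs in free recall. The sketch below spells out the standard pair-clustering equations (cf. Riefer & Batchelder, 1988); the function name `pair_clustering_probs` is a hypothetical label chosen here for illustration.

```python
def pair_clustering_probs(c, r, u):
    """Category probabilities for word pairs in the pair-clustering MPT model.

    c: probability of storing a pair as a cluster
    r: probability of retrieving a stored cluster
    u: probability of storing and retrieving a word from a non-clustered pair
    """
    e1 = c * r                                   # both words recalled adjacently
    e2 = (1 - c) * u**2                          # both recalled, not adjacently
    e3 = (1 - c) * 2 * u * (1 - u)               # exactly one word recalled
    e4 = c * (1 - r) + (1 - c) * (1 - u)**2      # neither word recalled
    return e1, e2, e3, e4
```

Note that the four branch products necessarily sum to one, which is the defining property of an MPT category system.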


Table 1 Relative frequencies of converged replications (CR) and average computation time (in s), depending on the estimator, the type of ML approximation method, the number of individuals T, and the number of responses N per individual.


Table 2 Relative bias of parameter estimates in percent, depending on the approximation method, the number of individuals T, and the number of responses N per individual.


Table 3 Coverage rate of parameter estimates, depending on the approximation method, the number of individuals T, and the number of responses N per individual.


Table 4 Parameter estimates of the mean structure for the illustrative data example.


Table 5 Parameter estimates of the covariance structure for the illustrative empirical example.