This chapter presents a systematic theory of generalized (or universal) Fechnerian scaling, based on the intuition underlying Fechner’s original theory: that subjective distances among stimuli are computed by cumulating small discriminability values between “neighboring” stimuli. A stimulus space is assumed to be endowed with a dissimilarity function, computed from a discrimination probability function for any pair of stimuli chosen in two distinct observation areas. On the most abstract level, one considers all possible chains of stimuli leading from stimulus a to stimulus b and back to a, and takes the infimum of the sums of the dissimilarities along these chains as the subjective distance between a and b. In arc-connected spaces, the cumulation of dissimilarity values along all possible chains reduces to their cumulation along continuous paths, leading to a fully fledged metric geometry. In topologically Euclidean spaces, the cumulation along paths further reduces to integration along smooth paths, and the geometry in question acquires the form of a generalized Finsler geometry. The chapter also discusses Fechner’s original derivation of his logarithmic law, the observation sorites paradox, a generalized Floyd–Warshall algorithm for computing metric distances from dissimilarities, and an ultra-metric version and data-analytic application of Fechnerian scaling.
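On a finite set of stimuli, the chain-infimum computation is an all-pairs shortest-path problem, which is presumably why the chapter invokes a generalized Floyd–Warshall algorithm. The sketch below (function names are illustrative, not the chapter's notation) shows the idea: relax a dissimilarity matrix until no chain can shorten any distance, then symmetrize the to-and-back sums to obtain overall distances.

```python
import numpy as np

def chain_infimum(D):
    """Floyd-Warshall-style relaxation of a dissimilarity matrix D.

    D[a, b] is the dissimilarity from stimulus a to stimulus b (with
    D[a, a] == 0); the result G[a, b] is the infimum of cumulated
    dissimilarities over all finite chains leading from a to b.
    """
    G = np.array(D, dtype=float)
    n = G.shape[0]
    for k in range(n):  # allow chains to pass through stimulus k
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    return G

def overall_distance(D):
    """Overall Fechnerian distance: cumulate from a to b and back to a."""
    G = chain_infimum(D)
    return G + G.T
```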
Encoding models of neuroimaging data combine assumptions about underlying neural processes with knowledge of the task and the type of neuroimaging technique being used to produce equations that predict values of the dependent variable that is measured at each recording site (e.g., the fMRI BOLD response). Voxel-based encoding models include an encoding model that predicts how every hypothesized neural population responds to each stimulus, and a measurement model that first transforms neural population responses into aggregate neural activity and then into values of the dependent variable being measured. Encoding models can be inverted to produce decoding schemes that use the observed data to make predictions about what stimulus was presented on each trial, thereby allowing unique tests of a mathematical model. Representational similarity analysis is a multivariate method that provides unique tests of a model by comparing its predicted similarity structures to similarity structures extracted from neuroimaging data. Model-based fMRI is a set of methods that were developed to test the validity of purely behavioral computational models against fMRI data. Collectively, encoding methods provide useful and powerful new tests of models – even purely cognitive models – that would have been considered fantasy just a few decades ago.
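As a deliberately minimal illustration of the voxel-based scheme just described, the following sketch assumes a linear measurement model: hypothesized channel (neural population) responses to each stimulus are mixed into voxel responses by an unknown weight matrix, the weights are estimated by least squares, and the model is then inverted to decode channel responses from new data. All names and array shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 trials, 8 hypothesized channels, 50 voxels.
C = rng.random((200, 8))           # encoding model: channel response per stimulus
W_true = rng.random((8, 50))       # measurement model: channel-to-voxel weights
Y = C @ W_true + 0.1 * rng.standard_normal((200, 50))  # simulated BOLD data

# Fit the measurement model with ordinary least squares.
W_hat, *_ = np.linalg.lstsq(C, Y, rcond=None)

# Invert the model: recover channel responses from observed voxel data,
# which can then be compared with each candidate stimulus's prediction.
C_hat = Y @ np.linalg.pinv(W_hat)
```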
Approximate Bayesian analysis is presented as a solution for complex computational models for which no explicit maximum likelihood estimation is possible. The activation-suppression race model (ASR), which does have a likelihood amenable to Markov chain Monte Carlo methods, is used to demonstrate the accuracy with which parameters can be estimated with the approximate Bayesian methods.
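The core loop of the approximate Bayesian (ABC) approach is rejection sampling: draw parameters from the prior, simulate data, and keep only draws whose simulated summary statistics fall within a tolerance of the observed summaries. The toy sketch below uses a stand-in simulator; the chapter applies the method to process models such as the ASR, and all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=500):
    """Stand-in simulator: any model that can generate data but lacks a
    tractable likelihood would go here."""
    return rng.exponential(theta, n)

observed = simulate(0.4)           # pretend these are the real data
s_obs = np.mean(observed)          # summary statistic of the data

# ABC rejection sampling.
accepted = []
for _ in range(20_000):
    theta = rng.uniform(0.1, 1.0)  # draw from the prior
    s_sim = np.mean(simulate(theta))
    if abs(s_sim - s_obs) < 0.01:  # tolerance epsilon
        accepted.append(theta)

posterior_mean = np.mean(accepted)  # approximate posterior summary
```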
Cognitive diagnosis models originated in the field of educational measurement as a psychometric tool to provide finer-grained information more suitable for formative assessment. Typically, but not necessarily, these models classify examinees as masters or nonmasters on a set of binary attributes. This chapter aims to provide a general overview of the original models and the extensions, and methodological developments, that have been made in the last decade. The main topics covered in this chapter include model estimation, Q-matrix specification, model fit evaluation, and procedures for gathering validity and reliability evidence. The chapter ends with a discussion of future trends in the field.
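To make the Q-matrix machinery concrete, here is the response rule of the DINA model, one of the original and most widely used cognitive diagnosis models: an examinee answers an item correctly with probability 1 − slip if they master every attribute the item requires (according to the Q-matrix), and with the guessing probability otherwise. This is a sketch of one canonical model among the many the chapter surveys; the names are illustrative.

```python
import numpy as np

def dina_correct_prob(alpha, Q, slip, guess):
    """P(correct response) under the DINA model.

    alpha:       (n_examinees, n_attributes) binary mastery profiles
    Q:           (n_items, n_attributes) binary Q-matrix (item-attribute map)
    slip, guess: per-item slip and guessing parameters, shape (n_items,)
    """
    # eta[i, j] = 1 iff examinee i masters every attribute item j requires
    eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2)
    return np.where(eta, 1 - slip, guess)  # broadcasts over examinees x items
```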
Response inhibition refers to an organism’s ability to suppress unwanted impulses, or actions and responses that are no longer required or have become inappropriate. In the stop-signal task, participants perform a response time task (go task), and occasionally the go stimulus is followed by a stop signal after a variable delay, instructing participants to withhold their response (stop task). The main interest of modeling lies in estimating the unobservable latency of the stopping process as a characterization of the response inhibition mechanism. Here we analyze and compare the underlying assumptions of different models, including parametric and nonparametric versions of the race model. New model classes based on the concept of copulas are introduced, and a number of unsolved problems facing all existing models are pointed out.
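Under the independent race model, the stopping latency (SSRT) is commonly estimated with the integration method: find the go-RT quantile corresponding to the probability of responding on stop trials and subtract the stop-signal delay. A minimal sketch under those assumptions, with illustrative names:

```python
import numpy as np

def ssrt_integration(go_rts, ssd, p_respond):
    """Estimate stop-signal reaction time with the integration method.

    go_rts:    observed go-trial response times (ms)
    ssd:       mean stop-signal delay (ms)
    p_respond: probability of (erroneously) responding on stop trials
    """
    go_sorted = np.sort(np.asarray(go_rts))
    n = int(round(p_respond * len(go_sorted)))
    nth_rt = go_sorted[max(n - 1, 0)]   # go RT at the p_respond quantile
    return nth_rt - ssd

# Example: if half the stop trials end in a response, SSRT is roughly the
# median go RT minus the stop-signal delay:
# ssrt = ssrt_integration(go_rts, ssd=300.0, p_respond=0.5)
```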
Statistical decision theory provides a general account of perceptual decision-making in a wide variety of tasks that range from simple target detection to complete identification. The fundamental assumptions are that all sensory representations are inherently noisy and that every behavior, no matter how trivial, requires a decision. Statistical decision theory is referred to as signal detection theory (SDT) when the stimuli vary on only one sensory dimension, and general recognition theory (GRT) when the stimuli vary on two or more sensory dimensions. SDT and GRT are both reviewed. The SDT review focuses on applications to the two-stimulus identification task and multiple-look experiments, and on response-time extensions of the model (e.g., the drift-diffusion model). The GRT review focuses on applications to identification and categorization experiments, and in the former case, especially on experiments in which the stimuli are constructed by factorially combining several levels of two stimulus dimensions. The basic GRT properties of perceptual separability, decisional separability, perceptual independence, and holism are described. In the case of identification experiments, the summary statistics method for testing perceptual interactions is described, and so is the model-fitting approach. Response time and neuroscience extensions of GRT are reviewed.
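In the equal-variance Gaussian SDT model for the one-dimensional case, sensitivity and the response criterion follow directly from the hit and false-alarm rates. A minimal sketch (rates must lie strictly between 0 and 1):

```python
from statistics import NormalDist

def sdt_params(hit_rate, fa_rate):
    """Sensitivity (d') and criterion (c) for equal-variance Gaussian SDT."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)            # distance between the means
    c = -0.5 * (z(hit_rate) + z(fa_rate))         # placement of the criterion
    return d_prime, c

# Example: 80% hits, 20% false alarms -> d' ~= 1.68, c = 0 (unbiased).
print(sdt_params(0.80, 0.20))
```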
Vision science combines ideas from physics, biology, and psychology. The language and ideas of mathematics help scientists communicate and provide an initial framing for understanding the visual system. Mathematics combined with computational modeling adds important realism to the formulations. Together, mathematics and computational tools provide a realistic estimate of the initial signals the brain analyzes to render visual judgments (e.g., motion, depth, and color). This chapter first traces calculations from the representation of the light signal, to how that signal is transformed by the lens to the retinal image, and then how the image is converted into cone photoreceptor excitations. The central steps in the initial encoding rely heavily on linear systems theory and the mathematics of signal-dependent noise. We then describe computational methods that add more realism to the description of how light is encoded by cone excitations. Finally, we describe the mathematical formulation of the ideal observer using all the encoded information to perform a visual discrimination task, and Bayesian methods that combine prior information and sensory data to estimate the light input. These tools help us reason about the information present in the neural representation, what information is lost, and the types of neural circuits that could extract the information.
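The linear-systems step can be summarized in a few lines: cone excitations are inner products of the retinal-image spectrum with the cone spectral sensitivities, and the signal-dependent (approximately Poisson) noise of photon absorption makes response variance scale with the mean. The sketch below uses random stand-ins where real work would use measured L, M, and S sensitivity functions and calibrated spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.arange(400, 701, 10)   # nm; assumed sampling grid
n = len(wavelengths)

# Hypothetical stand-ins for measured quantities.
S = rng.random((3, n))        # rows: L, M, S cone spectral sensitivities
spectrum = rng.random(n)      # retinal-image radiance at one location

# Linear encoding: three cone excitation values.
excitations = S @ spectrum

# Signal-dependent noise: Poisson photon absorption, so the variance of
# the response grows with its mean.
noisy = rng.poisson(excitations * 1e4) / 1e4
```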
The idea that memory behavior relies on a gradually changing internal state has a long history in mathematical psychology. This chapter traces this line of thought from statistical learning theory in the 1950s, through distributed memory models in the late twentieth and early twenty-first centuries, to modern models based on a scale-invariant temporal history. We discuss the neural phenomena consistent with this form of representation, sketch the kinds of cognitive models that can be constructed, and note connections with formal models of various memory tasks.
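A simplified sketch of the gradually changing state these models share: context is a unit-length vector that blends in each new item's representation, so items studied close together in time acquire similar context states. The update rule below is an illustrative simplification in the spirit of temporal context models, not any specific published model.

```python
import numpy as np

def drift_context(item_vectors, beta=0.3):
    """Evolve a slowly drifting context vector across a study list.

    item_vectors: (n_items, dim) array, one representation per studied item
    beta:         drift rate; larger values make context change faster
    """
    dim = item_vectors.shape[1]
    t = np.ones(dim) / np.sqrt(dim)          # arbitrary unit-length start state
    states = []
    for t_in in item_vectors:
        t = (1 - beta) * t + beta * t_in     # blend in the new item
        t = t / np.linalg.norm(t)            # keep context at unit length
        states.append(t.copy())
    return np.array(states)

# Similarity between context states falls off smoothly with lag, which is
# the core mechanism behind recency and contiguity effects in these models.
```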
Although learning was a key focus during the early years of mathematical psychology, the cognitive revolution of the 1960s caused the field to languish for several decades. Two breakthroughs in neuroscience resurrected the field. The first was the discovery of long-term potentiation and long-term depression, which served as promising models of learning at the cellular level. The second was the discovery that humans have multiple learning and memory systems that each require a qualitatively different kind of model. Currently, the field is well represented at all of Marr’s three levels of analysis. Descriptive and process models of human learning are dominated by two different, but converging, approaches – one rooted in Bayesian statistics and one based on popular machine-learning algorithms. Implementational models are in the form of neural networks that mimic known neuroanatomy and account for learning via biologically plausible models of synaptic plasticity. Models of all these types are reviewed, and advantages and disadvantages of the different approaches are considered.
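As one concrete instance of the error-driven process models mentioned above, the delta rule (equivalent to the Rescorla–Wagner model in this linear setup) updates associative weights in proportion to the prediction error. A minimal sketch with illustrative names:

```python
import numpy as np

def delta_rule(stimuli, outcomes, lr=0.1):
    """Error-driven associative learning.

    stimuli:  (n_trials, n_cues) array of cue vectors
    outcomes: (n_trials,) array of observed outcomes
    lr:       learning rate
    """
    w = np.zeros(stimuli.shape[1])
    for x, r in zip(stimuli, outcomes):
        prediction = w @ x
        w += lr * (r - prediction) * x   # update in proportion to the error
    return w
```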
The investigation of processes involved in merging information from different sensory modalities has become the subject of research in many areas, including anatomy, physiology, and the behavioral sciences. This field of research, termed "multisensory integration," is flourishing, crossing borders between psychology and neuroscience. The focus of this chapter is on measures of multisensory integration based on numerical data collected from single neurons and in behavioral paradigms: spike numbers, reaction time, and the frequency of correct or incorrect responses in detection, recognition, and discrimination tasks. Although the term itself is somewhat fuzzy, defining it requires at least some kind of numerical measurement assessing the strength of crossmodal effects. On the empirical side, these measures typically serve to quantify the effects of various covariates on multisensory integration, such as age, certain disorders, developmental conditions, training and rehabilitation, attention, and learning. On the theoretical side, these measures often help to probe hypotheses about underlying integration mechanisms, such as optimality in combining information or inverse effectiveness, without necessarily subscribing to a specific model.
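Two of the standard measures alluded to here can be stated compactly: at the single-neuron level, multisensory response enhancement expresses the crossmodal response as a percentage gain over the best unisensory response; at the behavioral level, Miller's race-model inequality bounds the crossmodal RT distribution by the sum of the unisensory ones. A sketch with illustrative names:

```python
import numpy as np

def response_enhancement(crossmodal, best_unisensory):
    """Percentage increase of the crossmodal response (e.g., spike count)
    over the best unisensory response. Inverse effectiveness appears as
    larger enhancement when unisensory responses are weak."""
    return 100.0 * (crossmodal - best_unisensory) / best_unisensory

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Miller's inequality: under a parallel race, F_AV(t) <= F_A(t) + F_V(t).
    A positive return value at time t indicates a violation, i.e., evidence
    for integration beyond statistical facilitation."""
    F = lambda rts: np.mean(np.asarray(rts) <= t)   # empirical CDF at time t
    return F(rt_av) - min(1.0, F(rt_a) + F(rt_v))
```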