Prediction of behavior and related outcomes has long been a principal goal of psychology. The development of intelligence tests in the early 20th century led to new strategies for predicting academic and occupational performance (Sternberg, 1996). Even earlier, in the mid-19th century, Fechner’s use of mathematical models for the prediction of behavior (Wertheimer, 1987) initiated a long tradition that has continued to the present day, as seen in the mathematical models of learning developed by Clark Hull (Hull, 1943), dynamic models of the development of spatial memory abilities (Spencer, Smith, & Thelen, 2001), and others. These ideas also led to information processing theories of memory (Atkinson & Shiffrin, 1968; Raaijmakers & Shiffrin, 1980), which have been used to predict specific stages of memory processing that are impaired in patient groups (Brown et al., 1995; Brown, Woodard, & Rich, 1994). Recently, use of prediction strategies in neuropsychology has turned toward the development of new approaches for identifying and characterizing the development and disease course of neurocognitive and neuropsychiatric disorders.
Prediction involves making a statement about an uncertain event, typically based on some type of known information. Although the term “prediction” brings to mind the forecasting of some future outcome or event, it can also apply to concurrent diagnostic purposes. Since approximately 2000, the number of research studies focused on prediction of diagnosis and clinical trajectories has increased dramatically. Figure 1 shows the number of publications per year returned from a simple search of the terms “preclinical prediction” in the National Library of Medicine (PubMed) database from 1950 to October 1, 2016. Studies of preclinical prediction in neuropsychology have typically focused on identification of persons at the highest risk for specific cognitive conditions. Early identification of risk, or of subtle signs of incipient disease, opens the possibility for treatments to prevent disease development, or to delay onset or slow progression of clinically significant symptoms. Prediction studies after brain injury have also been important for prognosticating the trajectory of recovery and for planning resource allocation for treatment.
State-of-the-art prediction strategies have been facilitated by at least two significant developments over the past 25 years. First, the availability of high-speed computers and software capable of performing complex statistical analyses has supported the development and validation of complex theoretical predictive models. For instance, machine learning approaches to prediction would not be possible without powerful computing resources (Hey, 2010). Machine learning has been successfully used in the context of diagnosis (Bigler, 2013; Mundt, Freed, & Greist, 2000; Teipel, Meindl, Grinberg, Heinsen, & Hampel, 2008) and prognosis (Gutman et al., 2015; Koutsouleris et al., 2009; Moradi, Pepe, Gaser, Huttunen, & Tohka, 2015; Schmidt-Richberg et al., 2016). Second, the identification of biomarkers of a variety of neurological and psychiatric conditions has provided a set of predictors that are highly sensitive to risk factors and pathological changes leading to these conditions (Chong, Lim, & Sahadevan, 2006; Craig-Schapiro, Fagan, & Holtzman, 2009; Mayeux, 2004; Sharma & Laskowitz, 2012; Shaw, Korecka, Clark, Lee, & Trojanowski, 2007). Such biomarkers can be used to test predictions regarding possible etiologies associated with neuropsychological abnormalities (Ivanoiu et al., 2015; Miller et al., 2008; Wirth et al., 2013). Improved genetic testing has also contributed to more accurate predictions of functional changes when combined with neuropsychological and imaging data (O’Hara et al., 1998; Reiman et al., 2004; Small et al., 1996). Some researchers have argued, based on the literature, that “neuromarkers often provide better predictions (neuroprognosis), alone or in combination with other measures, than traditional behavioral measures” (Gabrieli, Ghosh, & Whitfield-Gabrieli, 2015, p. 11). As data increasingly emerge, our field will be enriched by identifying the best combinations of measures to predict disease.
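To make the machine-learning idea concrete, the sketch below fits a simple logistic-regression classifier that predicts diagnostic status from two predictors. This is purely illustrative and not drawn from any study cited above: the data are synthetic, and the two features (a “biomarker level” and a “cognitive score”) are hypothetical stand-ins for the kinds of measures such models combine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 simulated individuals with two hypothetical
# predictors. Cases (y = 1) have an elevated biomarker level and a
# lowered cognitive score relative to controls (y = 0).
n = 200
y = rng.integers(0, 2, n)
biomarker = rng.normal(loc=y * 1.5, scale=1.0)
cognition = rng.normal(loc=-y * 1.0, scale=1.0)
X = np.column_stack([np.ones(n), biomarker, cognition])  # intercept + features

# Fit logistic regression by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probability of "case"
    w -= 0.1 * (X.T @ (p - y)) / n     # gradient step

# Classify at the conventional 0.5 probability threshold.
pred = (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice such models are fit with dedicated libraries, evaluated on held-out data, and combine many more predictors, but the logic (weighting multiple measures to output a probability of diagnosis) is the same.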
Many neurocognitive disorders are known to evolve from a presymptomatic state, through a mildly symptomatic state, to a fully clinical disorder. In essence, they pass through different biological and clinical “stages” (McGorry et al., 2007). Two disorders receiving a great deal of research over the past two decades are Alzheimer’s disease (AD) and schizophrenia. For example, AD is preceded by mild cognitive impairment (MCI), and in many cases schizophrenia is preceded by a clinical high-risk (CHR) state identified by attenuated positive psychotic symptoms (i.e., mild delusions and hallucinations with some degree of intact reality testing; Tsuang et al., 2013; Yung & McGorry, 1996). At least in principle, identifying predictors and mechanisms of transition to AD or to psychosis among individuals showing signs of incipient neurocognitive disorders is a critical step in the search for preventive or early intervention strategies (Woodberry, Shapiro, Bryant, & Seidman, 2016). Interest in early detection and prevention of schizophrenia and other psychotic disorders has led to more than a decade of work studying young people who may be at risk of developing a psychotic illness, and advances have been made in prediction of transition to psychosis from a CHR stage (Cannon et al., 2008, 2016; Carrion et al., 2016), including the use of neuropsychological measures (Giuliano et al., 2012; Seidman, Giuliano, & Walker, 2010; Seidman et al., 2016).
Studies investigating early detection of neurological and psychiatric conditions have significantly improved understanding of etiology and diagnosis, and they have opened new avenues for management. Presymptomatic detection is also essential to the development of effective intervention strategies, as it provides a window for preventing or delaying onset or reducing severity. This special issue of the Journal of the International Neuropsychological Society includes nine papers describing cutting-edge empirical findings that exemplify key methodological advances for preclinical detection of a variety of neurological, neurodevelopmental, and neuropsychiatric conditions. Methodological approaches taken include familial and genetic risk analyses, phenotypic characterization using cognitive and/or imaging methods, and evaluation of biomarker effectiveness. These papers provide substantive integrative and synthetic summaries of the current status of preclinical detection methodologies and future directions for the field.
As novel biomarkers of early, disease-related changes are identified, strategies for making optimal use of this information are becoming increasingly important. In this special issue, several papers focus on novel methodologies for combining biomarker data with other clinical information for diagnosis or prognosis. In a thorough literature review, Cooper and colleagues describe state-of-the-art objective biomarkers in prodromal Parkinson’s disease (PD), and they discuss several strategies for combining these biomarkers with clinical and genetic data to improve sensitivity and specificity for identification of persons with prodromal PD. Soldan and colleagues demonstrate that beta-amyloid and phosphorylated tau measured in cerebrospinal fluid can predict cognitive functioning as long as 10 years later. Using data from the Alzheimer’s Disease Neuroimaging Initiative, Edmonds and colleagues demonstrate how a novel method for staging preclinical AD using amyloid positron emission tomography (PET) imaging can be combined with detailed cognitive assessment to better characterize preclinical AD. Notably, this study found that considerable amyloid accumulation had already occurred before clinical diagnosis. Finally, in a cross-sectional study, Quenon and colleagues demonstrate the relationships between the extent of early AD neuropathology, as indexed by in vivo neuroimaging biomarkers (amyloid PET, hippocampal volume, and measures of cortical thickness), and memory performance on the Free and Cued Selective Reminding Test. Although biomarkers of early AD neuropathology predicted overall memory performance, cueing efficiency, which is frequently impaired in AD, demonstrated particularly strong relationships with cortical thickness of regions that are commonly atrophic in early AD.
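The gain in sensitivity and specificity from combining markers can be illustrated with a small sketch. The data below are synthetic and the two markers are hypothetical; the example is not taken from any of the papers above. It compares a single-marker decision rule against a simple combined rule (the sum of two markers) on simulated prodromal cases and controls.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: 1000 simulated individuals, 200 with the
# (hypothetical) prodromal condition. Each marker is modestly elevated
# in cases (mean 1.0 vs. 0.0, SD 1.0).
n_cases, n_controls = 200, 800
status = np.r_[np.ones(n_cases), np.zeros(n_controls)].astype(bool)
marker_a = rng.normal(loc=np.where(status, 1.0, 0.0), scale=1.0)
marker_b = rng.normal(loc=np.where(status, 1.0, 0.0), scale=1.0)

def sens_spec(test_positive, status):
    """Sensitivity = true-positive rate; specificity = true-negative rate."""
    sens = test_positive[status].mean()
    spec = (~test_positive[~status]).mean()
    return sens, spec

# Single-marker rule vs. a combined rule summing both markers.
single = marker_a > 1.0
combined = (marker_a + marker_b) > 1.5

sens_single, spec_single = sens_spec(single, status)
sens_combined, spec_combined = sens_spec(combined, status)
print(f"single:   sens={sens_single:.2f}, spec={spec_single:.2f}")
print(f"combined: sens={sens_combined:.2f}, spec={spec_combined:.2f}")
```

Because the two markers carry partly independent information, summing them separates cases from controls more cleanly than either marker alone, which is the intuition behind the multimodal combination strategies discussed in these papers.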
Assessment of the influence of familial and genetic risk factors has also become an important tool for forecasting diagnostic status. The family high-risk approach allows a defined selection process for ascertaining non-ill subjects in a family in which there is an identified proband with the illness. An advantage of such an approach is that it depends not on symptom expression but on genetic risk, so an unaffected individual can be studied at any age, enabling developmentally guided probes of risk (Agnew-Blais & Seidman, 2013). The “unaffected relatives” are typically offspring or siblings who are considered to be at higher risk for the illness, or for phenotypes associated with the illness, because they share approximately 50% of their genes with the affected proband. This approach has been used for over half a century, and has been one of the most fruitful ways of identifying components of the vulnerability to various illnesses, particularly schizophrenia. The most typical outcome in many of these studies was originally “developing the illness” (e.g., schizophrenia, AD). However, outcomes can also be expressed in a range of phenotypes reflecting the underlying disorder, and outcomes such as functional disability are also very important. A wide range of phenotypes (e.g., working memory or attention problems, smaller hippocampi) can be studied at different ages to evaluate developmental effects, and in different sub-populations (e.g., those with higher vs. lower genetic loading) to study the specific subgroup expression of the phenotypes.
In this issue, Lancaster and colleagues demonstrate that baseline diffusion tensor imaging of the white matter microstructure in the medial temporal lobe can predict longitudinal changes in episodic memory functioning over 3 years in a sample of cognitively healthy older adults with an enriched familial and genetic risk for AD. Koscik and colleagues compare sensitivity for predicting subsequent cognitive impairment using either variability in performance across cognitive tasks or combinations of outcomes from particular tasks (e.g., memory and executive tasks) taken at baseline several years earlier. In an investigation of neuropsychological endophenotypes of familial risk for schizophrenia and affective psychosis, Seidman and colleagues found that working memory impairment was more robust than vigilance for characterizing the cognitive impairment associated with familial risk for schizophrenia. Although persons with familial risk for affective psychosis showed more impaired vigilance relative to other groups, this effect was eliminated after adjustment for several psychopathological symptoms. This work was part of an agenda to identify the most sensitive and specific neuropsychological predictors of risk for different forms of psychosis (see also Seidman et al., 2016). Each of these studies demonstrates novel methodologies for studying the influence of familial and genetic risk for possible diagnosis and prognosis.
Finally, two articles in this issue focus on the use of prediction strategies for prognosticating outcome after brain injury has already occurred in pediatric samples. Ransom and colleagues use evidence-based assessment (EBA) to identify teenage students who are at risk for post-concussive academic difficulty. Self-reported post-injury symptoms and executive functioning difficulty, rather than parent-reported sequelae, showed the strongest relationships with perceived post-injury academic difficulties. This study demonstrates the utility of the EBA framework within the context of neuropsychological assessment. Till and colleagues studied cognitive, academic, and psychosocial difficulties experienced by children diagnosed with an acquired demyelinating syndrome (ADS), one-third of whom were later diagnosed with multiple sclerosis (MS), over a 6-month follow-up period. Children with ADS demonstrated a favorable neurocognitive outcome in the short term, including children diagnosed with MS.
In summary, the papers in this Special Issue present several novel approaches toward developing methodologies for prediction in neuropsychology. The identification of new biomarkers will undoubtedly continue to stimulate research on optimizing the information they provide. Introduction of new frameworks for assessment, such as EBA, and other strategies for evaluating longitudinal cognitive, clinical, and neuroimaging changes in outcome, as presented by several studies in this special issue, will also help move the field forward. New developments in genetic analyses and in the assessment of familial risk factors will likewise be important tools for improving predictive accuracy.
Nevertheless, we also face challenges with respect to the definition of appropriate statistical models for assessing change, growth, or decline (Cronbach & Furby, 1970; Francis, Fletcher, Stuebing, Davidson, & Thompson, 1991; Gottman & Rushe, 1993; Harrell, 2015; Singer & Willett, 2003; Steyerberg, 2009; Steyerberg & Harrell, 2016; Steyerberg et al., 2010; Temkin, Heaton, Grant, & Dikmen, 1999). While these issues are not new, continued focus on improving definitions of the change that we are predicting, and on models for assessing the effectiveness of variables predicting that change, is certainly warranted. Despite these challenges, research on preclinical prediction continues to grow, and future studies promise to contribute to improved preventative treatments before cognitive decline occurs, as well as to more effective treatments and allocation of resources following brain injury.