The Child Opportunity Index is a composite index of 29 indicators of social determinants of health linked to the United States Census. Disparities in the treatment of Wolff–Parkinson–White have not been reported. We hypothesise that lower Child Opportunity Index levels are associated with greater disease burden (antiarrhythmic use, ablation success, and Wolff–Parkinson–White recurrence) and ablation utilisation.
Methods:
A retrospective, single-centre study was performed with Wolff–Parkinson–White patients who received care from January 2021 to July 2023. After excluding patients younger than 5 years and those with haemodynamically significant CHD, 267 patients were included (45% high, 30% moderate, and 25% low Child Opportunity Index). Multi-level logistic and log-linear regression models were used to assess the relationship between Child Opportunity Index levels and outcomes.
Results:
Patients in the low Child Opportunity Index group were more likely to be Black (p < 0.0001) and to have public insurance (p = 0.0006); however, there were no significant differences in ablation utilisation (p = 0.44) or time from diagnosis to ablation (p = 0.37) between groups. There was an inverse relationship between Child Opportunity Index level and emergency department use (p = 0.007): the low group had 2.8 times greater odds of having one or more emergency department visits compared with the high group (p = 0.004).
Conclusion:
The Child Opportunity Index was not associated with ablation utilisation, but was inversely related to emergency department use. These findings suggest that while social determinants of health, as measured by the Child Opportunity Index, may influence emergency department utilisation, they do not appear to impact the overall management and procedural timing of Wolff–Parkinson–White treatment.
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
Objectives/Goals:
Lung transplant is a life-saving surgery for patients with advanced lung diseases, yet long-term survival remains poor. The clinical features and lung injury patterns of lung transplant recipients who die early versus those who survive longer term remain undefined. Here, we use cell-free DNA and rejection parameters to help elucidate this further.
Methods/Study Population:
Lung transplant candidacy prioritizes patients who have a high mortality risk within 2 years and will likely survive beyond 5 years. We stratified patients who died within 2 years of transplant as early death (n = 50) and those who survived past 5 years as long-term survivors (n = 53). Lung transplant recipients had serial blood collected as part of two prospective cohort studies. Cell-free DNA (cfDNA) was quantified using relative (% donor-derived cfDNA [%ddcfDNA]) and absolute (nuclear-derived [n-cfDNA], mitochondrial-derived [mt-cfDNA]) measurements. As part of routine posttransplant clinical care, all patients underwent pulmonary function testing (PFT), surveillance bronchoscopy with bronchoalveolar lavage (BAL), transbronchial biopsy (TBBx), and donor-specific antibody testing (DSA).
Results/Anticipated Results:
Over the first 2 years after transplant, the number of episodes of antibody-mediated rejection (p)
Discussion/Significance of Impact:
Clinically, early-death patients perform worse on routine surveillance PFTs and experience a worse degree of CLAD. These patients also have higher levels of cfDNA as quantified by n-cfDNA and mt-cfDNA. These results provide preliminary evidence that early-death patients have worse allograft rejection, dysfunction, and molecular injury.
We study how people solve the optimal stopping problem of buying an airline ticket. Over a set of problems, people were given 12 opportunities to buy a ticket ranging from 12 months before travel to 1 day before. The distributions from which prices were sampled changed over time, following patterns observed in industry analysis of flight ticket pricing. We characterize the optimal decision process in terms of a set of thresholds that set the maximum purchase price for each time point. In a behavioral analysis, we find that the average price people pay is above the optimal, that there is little evidence people learn over the sequence of problems, but that there are likely significant individual differences in the way people make decisions. In a model-based analysis, we propose a set of nine possible decision strategies, based on how purchasing probabilities change according to time and the price of the ticket. Using Bayesian latent-mixture methods, we infer the strategies used by the participants, finding that some use purely time-based strategies, while others also attend to the price of the tickets. We conclude by noting the limitations in the strategies as accounts of people's decision making, highlighting the need to consider sequential effects and other context effects on purchasing behavior.
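To make the threshold characterization concrete, here is a minimal backward-induction sketch in Python under assumed price distributions (normal, with hypothetical means and standard deviations per time point); the study's actual distributions followed industry pricing patterns and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (mean, sd) of prices at each of the 12 purchase opportunities;
# these values are assumptions for illustration only.
price_params = [(300 - 5 * t, 40) for t in range(12)]

def optimal_thresholds(params, n_samples=100_000):
    """Backward induction: the threshold at time t is the expected cost of
    continuing to search, E[min(future price, later threshold)]; buy whenever
    the offered price is at or below the threshold."""
    V = np.inf                            # after the final opportunity, no purchase is possible
    thresholds = [None] * len(params)
    for t in reversed(range(len(params))):
        thresholds[t] = V                 # accept any price <= expected cost of waiting
        mu, sd = params[t]
        prices = np.maximum(rng.normal(mu, sd, n_samples), 0.0)
        V = np.mean(np.minimum(prices, V))   # expected cost if search reaches time t
    return thresholds

print([round(x, 1) for x in optimal_thresholds(price_params)])
```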
Clinical outcomes of repetitive transcranial magnetic stimulation (rTMS) for treatment-resistant depression (TRD) vary widely, and there is no standard mood rating scale for assessing rTMS outcome. It remains unclear whether rTMS is as efficacious in older adults with late-life depression (LLD) as in younger adults with major depressive disorder (MDD). This study examined the effect of age on outcomes of rTMS treatment of adults with TRD. Self-report and observer mood ratings were measured weekly in 687 subjects aged 16–100 years undergoing rTMS treatment using the Inventory of Depressive Symptomatology 30-item Self-Report (IDS-SR), Patient Health Questionnaire 9-item (PHQ), Profile of Mood States 30-item, and Hamilton Depression Rating Scale 17-item (HDRS). All rating scales detected significant improvement with treatment; response and remission rates varied by scale but not by age (response/remission ≥ 60 years: 38%–57%/25%–33%; <60 years: 32%–49%/18%–25%). Proportional hazards models showed early improvement predicted later improvement across ages, though early improvements in PHQ and HDRS were more predictive of remission in those <60 years (relative to those ≥60), and greater baseline IDS symptom burden was more predictive of non-remission in those ≥60 years (relative to those <60). These results indicate there is no significant effect of age on treatment outcomes in rTMS for TRD, though rating instruments may differ in assessment of symptom burden between younger and older adults during treatment.
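The age-by-early-improvement question is addressed with proportional hazards models; the following is a minimal sketch of that kind of model using the lifelines library on simulated stand-in data. The variable names (weeks_to_remission, remitted, early_improvement, age_ge_60) and the interaction term are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated stand-in data (not the study's dataset): weeks until remission
# (censored if remission never occurred), early symptom change by week 2,
# baseline severity, and an age-group indicator.
rng = np.random.default_rng(5)
n = 687
df = pd.DataFrame({
    "weeks_to_remission": rng.integers(2, 9, n),
    "remitted":           rng.integers(0, 2, n),
    "early_improvement":  rng.normal(0.2, 0.15, n),
    "baseline_severity":  rng.normal(40, 10, n),
    "age_ge_60":          rng.integers(0, 2, n),
})
df["early_x_age"] = df["early_improvement"] * df["age_ge_60"]   # age moderation term

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_remission", event_col="remitted")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios for reaching remission
```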
Nursing home residents may be particularly vulnerable to coronavirus disease 2019 (COVID-19). A key question, therefore, is when and how often nursing homes should test staff for COVID-19 and how this may change as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) evolves.
Design:
We developed an agent-based model representing a typical nursing home, COVID-19 spread, and its health and economic outcomes to determine the clinical and economic value of various screening and isolation strategies and how it may change under various circumstances.
Results:
Under winter 2023–2024 SARS-CoV-2 omicron variant conditions, symptom-based antigen testing averted 4.5 COVID-19 cases compared to no testing, saving $191 in direct medical costs. Testing implementation costs far outweighed these savings, resulting in net costs of $990 from the Centers for Medicare & Medicaid Services perspective, $1,545 from the third-party payer perspective, and $57,155 from the societal perspective. Testing did not return sufficient positive health effects to make it cost-effective [$50,000 per quality-adjusted life-year (QALY) threshold], but it exceeded this threshold in ≥59% of simulation trials. Testing remained cost-ineffective when routinely testing staff and varying face mask compliance, vaccine efficacy, and booster coverage. However, all antigen testing strategies became cost-effective (≤$31,906 per QALY) or cost saving (saving ≤$18,372) when the severe outcome risk was ≥3 times higher than that of current omicron variants.
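For reference, the cost-effectiveness statements above rest on the incremental cost-effectiveness ratio (ICER) compared against the $50,000-per-QALY threshold; the sketch below pairs the reported CMS-perspective net cost with a purely hypothetical QALY gain, not the model's outputs.

```python
def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: additional dollars spent per
    additional quality-adjusted life-year gained."""
    return delta_cost / delta_qalys

# Hypothetical illustration: a facility-wide net cost of $990 and an assumed
# gain of 0.005 QALYs relative to no testing.
ratio = icer(delta_cost=990.0, delta_qalys=0.005)
print(f"${ratio:,.0f} per QALY; cost-effective at $50,000/QALY: {ratio <= 50_000}")
```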
Conclusions:
SARS-CoV-2 testing costs outweighed benefits under winter 2023–2024 conditions; however, testing became cost-effective with increasingly severe clinical outcomes. Cost-effectiveness can change as the epidemic evolves because it depends on clinical severity and other intervention use. Thus, nursing home administrators and policy makers should monitor and evaluate viral virulence and other interventions over time.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest, yet little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses, using predicted probabilities from the binary logistic regressions, examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
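A minimal sketch of the covariate-adjusted logistic regression and ROC analysis described above, written with scikit-learn on simulated placeholder data rather than the ADRC registry; the column names and coding (impaired, gfap_z, apoe_e4, and so on) are assumptions for illustration, and the study's own software and variable handling may differ.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated placeholder data standing in for the registry variables.
rng = np.random.default_rng(1)
n = 567
df = pd.DataFrame({
    "impaired": rng.integers(0, 2, n),    # 1 = MCI or AD dementia, 0 = normal cognition
    "gfap_z":   rng.normal(size=n),       # z-scored plasma GFAP
    "age":      rng.normal(74, 7.5, n),
    "sex":      rng.integers(0, 2, n),
    "race":     rng.integers(0, 2, n),
    "educ":     rng.normal(16, 2, n),
    "apoe_e4":  rng.integers(0, 2, n),
})

X = df[["gfap_z", "age", "sex", "race", "educ", "apoe_e4"]]
y = df["impaired"]

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratio_gfap = np.exp(model.coef_[0][0])           # OR per 1 SD increase in GFAP
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # discrimination of the adjusted model
print(odds_ratio_gfap, auc)
```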
Results:
The mean (SD) age of the sample was 74.34 (7.54) years, 319 (56.3%) participants were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses comprising GFAP and the above covariates showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting those with cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
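The ordinal logistic regressions for Braak stage (and, analogously, CERAD score) can be sketched with statsmodels' OrderedModel; the data frame below is simulated stand-in data, and the variable names and coding are illustrative assumptions rather than the study's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated stand-in data: log-transformed plasma GFAP, covariates, and
# Braak stage coded 0-6 for 45 hypothetical brain donors.
rng = np.random.default_rng(2)
n = 45
df = pd.DataFrame({
    "braak":        rng.integers(0, 7, n),
    "log_gfap":     rng.normal(size=n),
    "sex":          rng.integers(0, 2, n),
    "age_death":    rng.normal(81, 8, n),
    "yrs_to_death": rng.normal(2.8, 1.2, n),
    "apoe_e4":      rng.integers(0, 2, n),
})

model = OrderedModel(
    df["braak"],
    df[["log_gfap", "sex", "age_death", "yrs_to_death", "apoe_e4"]],
    distr="logit",                        # proportional-odds (ordinal logistic) model
)
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params["log_gfap"]))     # odds ratio per unit increase in log GFAP
```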
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) participants were White and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed that plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
To determine the effectiveness of active, upper-room, germicidal ultraviolet (GUV) devices in reducing bacterial contamination in patient rooms in air and on surfaces as a supplement to the central heating, ventilation, and air conditioning (HVAC) air handling unit (AHU) with MERV 14 filters and UV-C disinfection.
Methods:
This study was conducted in an academic medical center, burn intensive care unit (BICU), for 4 months in 2022. Room occupancy was monitored and recorded. In total, 402 preinstallation and postinstallation bacterial air and non–high-touch surface samples were obtained from 10 BICU patient rooms. Airborne particle counts were measured in the rooms, and bacterial air samples were obtained from the patient-room supply air vents and outdoor air, before and after the intervention. After preintervention samples were obtained, an active, upper-room, GUV air disinfection system was deployed in each of the patient rooms in the BICU.
Results:
Average levels of airborne bacteria were 395 CFU/m3 before GUV device installation and 37 CFU/m3 after installation, an 89% overall decrease (P < .0001). Levels of surface-borne bacteria decreased by 69% (P < .0001) after GUV device installation. Outdoor levels of airborne bacteria averaged 341 CFU/m3 in March before installation and 676 CFU/m3 in June after installation, but this increase was not significant (P = .517).
Conclusions:
Significant reductions in air and surface contamination occurred in all rooms and areas and were not associated with variations in outdoor air concentrations of bacteria. The significant decrease of surface bacteria is an unexpected benefit associated with in-room GUV air disinfection, which can potentially reduce overall bioburden.
Birnbaum and Quispe-Torreblanca (2018) evaluated a set of six models developed under true-and-error theory against data in which people made choices in repeated gambles. They concluded that the three models based on expected utility theory were inadequate accounts of the behavioral data, and argued in favor of the simplest of the remaining three more general models. To reach these conclusions, they used non-Bayesian statistical methods: frequentist point estimation of parameters, bootstrapped confidence intervals of parameters, and null hypothesis significance testing of models. We address the same research goals, based on the same models and the same data, using Bayesian methods. We implement the models as graphical models in JAGS to allow for computational Bayesian analysis. Our results are based on posterior distributions of parameters, posterior predictive checks of descriptive adequacy, and Bayes factors for model comparison. We compare the Bayesian results with those of Birnbaum and Quispe-Torreblanca (2018). We conclude that, while the very general conclusions of the two approaches agree, the Bayesian approach offers better and more detailed answers, especially for the key question of the evidence the data provide for and against the competing models. Finally, we discuss the conceptual and practical advantages of using Bayesian methods in judgment and decision-making research highlighted by this case study.
The less-is-more effect predicts that people can be more accurate making paired-comparison decisions when they have less knowledge, in the sense that they do not recognize all of the items in the decision domain. The traditional theoretical explanation is that decisions based on recognizing one alternative but not the other can be more accurate than decisions based on partial knowledge of both alternatives. I present new data that directly test for the less-is-more effect, coming from a task in which participants judge which of two cities is larger and indicate whether they recognize each city. A group-level analysis of these data provides evidence in favor of the less-is-more effect: there is strong evidence people make decisions consistent with recognition, and that these decisions are more accurate than those based on knowledge. An individual-level analysis of the same data, however, provides evidence inconsistent with a simple interpretation of the less-is-more effect: there is no evidence for an inverse-U-shaped relationship between accuracy and recognition, and especially no evidence that individuals who recognize a moderate number of cities outperform individuals who recognize many cities. I suggest a reconciliation of these contrasting findings, based on the systematic change of the accuracy of recognition-based decisions with the underlying recognition rate. In particular, the data show that people who recognize almost none or almost all cities make more accurate decisions by applying the recognition heuristic, when compared to the accuracy achieved by people with intermediate recognition rates. The implications of these findings for precisely defining and understanding the less-is-more effect are discussed, as are the constraints our data potentially place on models of the learning and decision-making processes involved.
Keywords: recognition heuristic, less-is-more effect.
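The traditional theoretical explanation has a standard formalisation (Goldstein & Gigerenzer, 2002) in which expected accuracy depends on the number of items recognized (n), the validity of recognition-based decisions (alpha), and the validity of knowledge-based decisions (beta). The sketch below computes that curve for hypothetical alpha and beta values, producing the inverse-U shape that the individual-level analysis reported here does not find.

```python
def expected_accuracy(n, N, alpha, beta):
    """Expected proportion correct for a decision maker who recognizes n of N
    items: guessing when neither item is recognized, the recognition heuristic
    when exactly one is, and knowledge (validity beta) when both are."""
    pairs = N * (N - 1)
    p_neither = (N - n) * (N - n - 1) / pairs   # guess: accuracy 0.5
    p_one     = 2 * n * (N - n) / pairs         # recognition heuristic applies
    p_both    = n * (n - 1) / pairs             # knowledge applies
    return 0.5 * p_neither + alpha * p_one + beta * p_both

# Hypothetical validities with alpha > beta, the condition for less-is-more.
N = 100
for n in (0, 25, 50, 75, 100):
    print(n, round(expected_accuracy(n, N, alpha=0.8, beta=0.6), 3))
```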
We consider the recently-developed “surprisingly popular” method for aggregating decisions across a group of people (Prelec, Seung and McCoy, 2017). The method has shown impressive performance in a range of decision-making situations, but typically for situations in which the correct answer is already established. We consider the ability of the surprisingly popular method to make predictions in a situation where the correct answer does not exist at the time people are asked to make decisions. Specifically, we tested its ability to predict the winners of the 256 US National Football League (NFL) games in the 2017–2018 season. Each of these predictions used participants who self-rated as “extremely knowledgeable” about the NFL, drawn from a set of 100 participants recruited through Amazon Mechanical Turk (AMT). We compare the accuracy and calibration of the surprisingly popular method to a variety of alternatives: the mode and confidence-weighted predictions of the expert AMT participants, the individual and aggregated predictions of media experts, and a statistical Elo method based on the performance histories of the NFL teams. Our results are exploratory, and need replication, but we find that the surprisingly popular method outperforms all of these alternatives, and has reasonable calibration properties relating the confidence of its predictions to the accuracy of those predictions.
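For binary predictions such as these game winners, the surprisingly popular rule reduces to comparing each answer's actual popularity with its average predicted popularity; below is a minimal sketch with hypothetical votes and predictions.

```python
import numpy as np

def surprisingly_popular(answers, predicted_yes_share):
    """Binary-choice version of the surprisingly popular rule.
    answers: array of 0/1 votes (1 = 'yes').
    predicted_yes_share: each respondent's prediction of the fraction of the
    group answering 'yes' (values in [0, 1])."""
    actual_yes = np.mean(answers)
    predicted_yes = np.mean(predicted_yes_share)
    # 'yes' is surprisingly popular if more people say yes than was predicted.
    return 1 if actual_yes > predicted_yes else 0

# Hypothetical example: 60% pick the home team, but respondents expected 75%
# to do so, so the away team is the surprisingly popular answer.
votes = np.array([1] * 6 + [0] * 4)
predictions = np.array([0.75] * 10)
print(surprisingly_popular(votes, predictions))   # -> 0
```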
We demonstrate the usefulness of cognitive models for combining human estimates of probabilities in two experiments. The first experiment involves people’s estimates of probabilities for general knowledge questions such as “What percentage of the world’s population speaks English as a first language?” The second experiment involves people’s estimates of probabilities in football (soccer) games, such as “What is the probability a team leading 1–0 at half time will win the game?”, with ground truths based on analysis of a large corpus of games played in the past decade. In both experiments, we collect people’s probability estimates, and develop a cognitive model of the estimation process, including assumptions about the calibration of probabilities and individual differences. We show that the cognitive model approach outperforms standard statistical aggregation methods like the mean and the median for both experiments and, unlike most previous related work, is able to make good predictions in a fully unsupervised setting. We also show that the parameters inferred as part of the cognitive modeling, involving calibration and expertise, provide useful measures of the cognitive characteristics of individuals. We argue that the cognitive approach has the advantage of aggregating over latent human knowledge rather than observed estimates, and emphasize that it can be applied in predictive settings where answers are not yet available.
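As a rough illustration of the modeling idea, and not the paper's full hierarchical model with individual differences, estimates can be pooled on the probit scale with a simple calibration parameter; the delta value used below is an assumption for illustration.

```python
import numpy as np
from scipy.stats import norm

def aggregate_probabilities(estimates, delta=1.0):
    """Pool individual probability estimates on the probit scale and apply a
    single calibration parameter: delta < 1 shrinks the pooled estimate toward
    0.5 (treating the crowd as overconfident), delta > 1 makes it more extreme
    (treating the crowd as underconfident)."""
    z = norm.ppf(np.clip(estimates, 0.01, 0.99))   # probit transform
    return float(norm.cdf(delta * np.mean(z)))

# Hypothetical estimates from five people for one question.
print(aggregate_probabilities([0.6, 0.7, 0.55, 0.8, 0.65], delta=1.2))
```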
Heuristic decision-making models, like Take-the-best, rely on environmental regularities. They conduct a limited search, and ignore available information, by assuming there is structure in the decision-making environment. Take-the-best relies on at least two regularities: diminishing returns, which says that information found earlier in search is more important than information found later; and correlated information, which says that information found early in search is predictive of information found later. We develop new approaches to determining search orders, and to measuring cue discriminability, that make the reliance of Take-the-best on these regularities clear, and open to manipulation. We then demonstrate, in the well-studied German cities environment, and three new city environments, when and how these regularities support Take-the-best. To do this, we focus not on the accuracy of Take-the-best, as most previous studies have, but on a measure of its coherence as a decision-making process. In particular, we consider whether Take-the-best decisions, based on a single piece of information, can be justified because an exhaustive search for information is unlikely to yield a different decision. Using this measure, we show that when the two environmental regularities are present, the decisions made by limited search are unlikely to have changed after exhaustive search, but that both regularities are often necessary.
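A minimal sketch of the Take-the-best decision rule itself, in one common formulation (treatments of unknown cue values vary across the literature); the cues and objects in the example are hypothetical.

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Take-the-best for a paired comparison. cues_a and cues_b map cue names
    to 1 (cue present), 0 (absent), or None (unknown); cue_order lists cues
    from highest to lowest validity. Returns 'a', 'b', or 'guess'."""
    for cue in cue_order:                 # limited search in validity order
        va, vb = cues_a.get(cue), cues_b.get(cue)
        if va is None or vb is None:
            continue                      # an unknown value does not discriminate here
        if va != vb:                      # first discriminating cue decides
            return "a" if va > vb else "b"
    return "guess"                        # exhaustive search found no difference

# Hypothetical German-cities-style example: the first cue discriminates.
a = {"capital": 1, "airport": 1, "team": 0}
b = {"capital": 0, "airport": 1, "team": 1}
print(take_the_best(a, b, ["capital", "airport", "team"]))   # -> 'a'
```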
We study whether experts and novices differ in the way they make predictions about National Football League games. In particular, we measure to what extent their predictions are consistent with five environmental regularities that could support decision making based on heuristics. These regularities involve the home team winning more often, the team with the better win-loss record winning more often, the team favored by the majority of media experts winning more often, and two others related to surprise wins and losses in the teams’ previous game. Using signal detection theory and hierarchical Bayesian analysis, we show that expert predictions for the 2017 National Football League (NFL) season generally follow these regularities in a near optimal way, but novice predictions do not. These results support the idea that using heuristics adapted to the decision environment can support accurate predictions and be an indicator of expertise.
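The signal detection component can be illustrated with the standard equal-variance sensitivity and bias calculations; the sketch below is non-hierarchical, unlike the Bayesian analysis in the paper, and the tallies are hypothetical.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian signal detection: sensitivity (d') and bias (c).
    A small correction keeps the rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d = norm.ppf(hr) - norm.ppf(far)
    c = -0.5 * (norm.ppf(hr) + norm.ppf(far))
    return d, c

# Hypothetical tallies for one predictor over a season: 'signal' trials are
# games where a regularity held (e.g. the home team won).
print(d_prime(hits=140, misses=40, false_alarms=30, correct_rejections=46))
```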
We consider the wisdom of the crowd situation in which individuals make binary decisions, and the majority answer is used as the group decision. Using data sets from nine different domains, we examine the relationship between the size of the majority and the accuracy of the crowd decisions. We find empirically that these calibration curves take many different forms for different domains, and the distribution of majority sizes over decisions in a domain also varies widely. We develop a growth model for inferring and interpreting the calibration curve in a domain, and apply it to the same nine data sets using Bayesian methods. The modeling approach is able to infer important qualitative properties of a domain, such as whether it involves decisions that have ground truths or are inherently uncertain. It is also able to make inferences about important quantitative properties of a domain, such as how quickly the crowd accuracy increases as the size of the majority increases. We discuss potential applications of the measurement model, and the need to develop a psychological account of the variety of calibration curves that evidently exist.
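A minimal sketch of the empirical side of this analysis: bin majority decisions by the size of the majority and compute accuracy within each bin. The data below are simulated under an assumed relationship between majority size and accuracy, not drawn from the nine domains.

```python
import numpy as np
import pandas as pd

def calibration_curve(majority_share, correct, bins=np.linspace(0.5, 1.0, 6)):
    """Empirical calibration: bin decisions by the fraction of the crowd that
    agreed with the majority answer, and compute the proportion of majority
    decisions that were correct within each bin."""
    df = pd.DataFrame({"share": majority_share, "correct": correct})
    df["bin"] = pd.cut(df["share"], bins, include_lowest=True)
    return df.groupby("bin", observed=True)["correct"].agg(["mean", "count"])

# Simulated decisions: majority sizes and whether the majority was right,
# with accuracy assumed to grow as the majority gets larger.
rng = np.random.default_rng(3)
share = rng.uniform(0.5, 1.0, 500)
truth = rng.random(500) < (0.4 + 0.6 * share)
print(calibration_curve(share, truth.astype(int)))
```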
Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping, and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced-choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants). Two worked examples show how hierarchical models can be developed to account for and explain the diversity of both search and stopping rules seen across the simulated individuals. We discuss how the results provide insight into current debates in the literature on heuristic decision making and argue that they demonstrate the power and flexibility of hierarchical Bayesian methods in modeling human decision-making.
Drafting is a competitive task in which a set of decision makers choose from a set of resources sequentially, with each resource becoming unavailable once selected. How people make these choices raises basic questions about human decision making, including people’s sensitivity to the statistical regularities of the resource environment, their ability to reason about the behavior of their competitors, and their ability to execute and adapt sophisticated strategies in dynamic situations involving uncertainty. Sports provides one real-world example of drafting behavior, in which a set of teams draft players from an available pool in a well-regulated way. Fantasy sport competitions provide potentially large data sets of drafting behavior. We study fantasy football drafting behavior from the 2017 National Football League (NFL) season based on 1350 leagues hosted by the http://sleeper.app platform. We find people are sensitive to some important environmental regularities in the order in which they draft players, but also present evidence that they use a more narrow range of strategies than is likely optimal in terms of team composition. We find little to no evidence for the use of the complicated but well-documented strategy known as handcuffing, and no evidence of irrational influence from individual-level biases for different NFL teams. We do, however, identify a set of circumstances for which there is clear evidence that people’s choices are strongly influenced by the immediately preceding choice made by a competitor.
There are many ways to measure how people manage risk when they make decisions. A standard approach is to measure risk propensity using self-report questionnaires. An alternative approach is to use decision-making tasks that involve risk and uncertainty, and apply cognitive models of task behavior to infer parameters that measure people’s risk propensity. We report the results of a within-participants experiment that used three questionnaires and four decision-making tasks. The questionnaires are the Risk Propensity Scale, the Risk Taking Index, and the Domain Specific Risk Taking Scale. The decision-making tasks are the Balloon Analogue Risk Task, the preferential choice gambling task, the optimal stopping problem, and the bandit problem. We analyze the relationships between the risk measures and cognitive parameters using Bayesian inferences about the patterns of correlation, and using a novel cognitive latent variable modeling approach. The results show that people’s risk propensity is generally consistent within different conditions for each of the decision-making tasks. There is, however, little evidence that the way people manage risk generalizes across the tasks, or that it corresponds to the questionnaire measures.
One common and informative way that people express their beliefs, preferences, and opinions is by providing rankings. We use Thurstonian cognitive models to explore individual differences in naturally occurring ranking data for a variety of political, lifestyle, and sporting topics. After demonstrating that the standard Thurstonian model does not capture individual differences, we develop two extended models. The first allows for subgroups of people with different beliefs and opinions about all of the stimuli. The second allows for just a subset of polarized stimuli for which some people have different beliefs or opinions. We apply these two models, using Bayesian methods of inference, and demonstrate how they provide intuitive and useful accounts of the individual differences. We discuss the benefits of incorporating theory about individual differences into the processing assumptions of cognitive models, rather than through the statistical extensions that are currently often used in cognitive modeling.
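As a sketch of the generative assumptions behind a Thurstonian ranking model: each person draws a latent value for every item and reports the items in order of those draws. The group-level means below are hypothetical, and the subgroup and polarized-stimuli extensions described above would replace the single mean vector with person-specific or item-specific alternatives.

```python
import numpy as np

rng = np.random.default_rng(4)

def thurstonian_rankings(mu, n_people, sigma=1.0):
    """Generative Thurstonian ranking model: each person draws a latent value
    for every item from Normal(mu_i, sigma) and reports the items ordered from
    highest to lowest draw."""
    items = len(mu)
    draws = rng.normal(mu, sigma, size=(n_people, items))
    return np.argsort(-draws, axis=1)          # each row is one person's ranking

# Hypothetical group-level means for four stimuli; a polarized stimulus could
# be modeled by giving a subgroup of people a different mean for that item.
mu = np.array([1.5, 0.8, 0.0, -0.5])
print(thurstonian_rankings(mu, n_people=5))
```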