The 21st Century Cures Act mandated that new medication research include patient-focused drug development initiatives. The act also recognized certified peer support specialists (CPSSs) as integral members of the healthcare team. Inclusion of CPSSs within care teams is associated with reduced hospitalization, increased treatment engagement, and a renewed focus on patient-desired outcomes. CPSSs are people with lived experience in navigating complex mental health systems, whose unique perspective helps guide peers on their journey to wellness. In the same manner that CPSS knowledge has improved clinical outcomes, partnering with CPSSs during CNS drug development may yield wellness outcomes in clinical trials that are more meaningful for people with lived experience. To this end, a CPSS Ambassador program was initiated.
Methods
Of 85 peer support specialists identified through internet searches, LinkedIn, and peer support specialist registries, 7 CPSSs met our criteria (i.e., having lived experience of psychosis and being a member of a treatment team) and agreed to be part of our ambassador program. Interactions included 6 monthly virtual meetings and a live roundtable meeting. The objectives of the program were to: 1) understand unmet needs of people with lived experience and identify impediments to effective treatment, 2) learn best practices for discussing medication use to support wellness, 3) identify resources that can help educate people and families with lived experience, and 4) highlight the importance of CPSSs within healthcare teams to optimize treatment outcomes.
Results
This CPSS ambassador program emphasized the need for shared decision-making and partnership to forge a positive treatment team alliance. As such, treatment goals should be tailored to patients’ needs (“nothing about me without me”). A major obstacle to effective treatment is the presence of bias or stigma among healthcare practitioners. Specifically, certain language used by clinicians has the potential to ostracize patients and negatively impact treatment. Medications should be discussed as one pillar of a larger treatment plan and not as a “fix” for symptoms. Educational resources written in layman’s terms are needed to explain treatment algorithms and medication side effects. Finally, CPSSs make a significant contribution to person-focused positive outcomes and are an essential part of the treatment team. As conduits of lived experience, CPSSs advocate for the individual.
Conclusions
This collaboration illuminated the following key outcomes: CPSSs are liaisons who facilitate the intersection between the treatment team and people using mental health systems. CPSSs are critical to successful navigation of the mental health care system and to reaching desired outcomes. Best practices for treatment teams center on effective, person-based, stigma-free partnerships that produce positive, patient-focused outcomes.
Funding
Sumitomo Pharma America (formerly Sunovion Pharmaceuticals Inc)
Early gut microbiome development may impact brain and behavioral development. Using a nonhuman primate model (Macaca mulatta), we investigated the association between social environments and the gut microbiome on infant neurodevelopment and cognitive function. Infant rhesus monkeys (n = 33) were either mother-peer-reared (MPR) or nursery-reared (NR). Neurodevelopmental outcomes, namely emotional responsivity, visual orientation, and motor maturity, were assessed with the Primate Neonatal Neurobehavioral Assessment (PNNA) at 14–30 days. Cognitive development was assessed through tasks evaluating infant reward association, cognitive flexibility, and impulsivity at 6–8 months. The fecal microbiome was quantified from rectal swabs via 16S rRNA sequencing. Factor analysis was used to identify “co-abundance factors” describing patterns of microbial composition. We used multiple linear regressions with AIC Model Selection and differential abundance analysis (MaAsLin2) to evaluate relationships between co-abundance factors, microbiome diversity, and neuro-/cognitive development outcomes. At 30 days of age, a gut microbiome co-abundance factor, or pattern, with high Prevotella and Lactobacillus (β = −0.88, p = 0.04, AIC Weight = 68%) and gut microbiome alpha diversity as measured by Shannon diversity (β = −1.33, p = 0.02, AIC Weight = 80%) were both negatively associated with infant emotional responsivity. At 30 days of age, being NR was also associated with lower emotional responsivity (Factor 1 model: β = −3.13, p < 0.01; Shannon diversity model: β = −3.77, p < 0.01). The infant gut microbiome, along with early-rearing environments, may shape domains of neuro-/cognitive development related to temperament.
Conventional survey tools such as weighting do not address nonignorable nonresponse, which occurs when nonresponse depends on the variable being measured. This paper describes nonignorable nonresponse weighting and imputation models using randomized response instruments, which are variables that affect response but not the outcome of interest. The paper uses a doubly robust estimator that is valid if one, but not necessarily both, of the weighting and imputation models is correct. When applied to a national 2019 survey, these tools produce estimates suggesting there was nontrivial nonignorable nonresponse related to turnout and, for subgroups, Trump approval and policy questions. For example, conventional weighted estimates of Trump support in the Midwest, which assume nonresponse is ignorable (missing at random, MAR), were 10 percentage points lower than estimates that allow for nonignorable nonresponse (missing not at random, MNAR).
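The doubly robust logic can be sketched with a small simulation: an AIPW-style estimate of a population mean is consistent if either the response-propensity (weighting) model or the imputation model is correct. For simplicity, the toy example below makes nonresponse ignorable given an observed covariate; the estimators described here instead use randomized response instruments to handle nonignorable nonresponse. All variable names and parameters are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=(n, 1))                      # observed covariate
y = 2.0 + 1.5 * x[:, 0] + rng.normal(size=n)     # outcome; true population mean is 2.0
# Response depends on x (higher x -> more likely to respond)
p_respond = 1 / (1 + np.exp(-(0.5 + x[:, 0])))
r = rng.binomial(1, p_respond).astype(bool)

# Weighting model: estimated response propensities
pi_hat = LogisticRegression().fit(x, r.astype(int)).predict_proba(x)[:, 1]
# Imputation model: outcome regression fit on respondents only
m_hat = LinearRegression().fit(x[r], y[r]).predict(x)

# Doubly robust (AIPW) estimate of the population mean:
# mean of m_hat, plus a propensity-weighted correction using respondents' residuals.
y_obs = np.where(r, y, 0.0)                      # nonrespondents' outcomes never used
mu_dr = np.mean(r * (y_obs - m_hat) / pi_hat + m_hat)

naive = y[r].mean()                              # respondent mean, biased upward here
print(round(mu_dr, 2), round(naive, 2))
```

Because high-`x` (and hence high-`y`) people respond more often, the naive respondent mean overshoots 2.0, while the doubly robust estimate recovers it.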
This chapter uses the history of polling to explain how pollsters have dealt with the challenges of nonresponse. It tells the tale of three polling paradigms: large-scale polling, quota sampling, and random sampling. The first two paradigms came crashing down after pollsters made poor predictions for presidential elections. The third paradigm remains vibrant intellectually, but is increasingly difficult to implement. We do not yet know if the bad polling predictions in 2016 and 2020 will push the field to a new paradigm, but they certainly raised doubts about the current state of the field.
This chapter focuses on next-generation selection models that allow us to expand on the Heckman model using copula and control function models that allow one to estimate selection models for a large range of other statistical distributions. This chapter also shows how to generate weights that account for nonignorable nonresponse; not only do these weights increase the weight on demographic groups that respond with lower probabilities, they also increase weights on people with opinions that may make them less inclined to respond. This chapter also shows how to modify a Heckman model to allow for estimation of a nonignorable nonresponse selection model when we have a response-related variable that is available only for people in the survey sample.
This is a brief conclusion arguing that the direction forward is clear, even if the path is not. The time for assuming away problems is past. We should begin with a paradigm that reflects all the ways that polling can go wrong and then identify, model, and measure all the sources of bias, not just the ones that are easy to fix. Much work remains to be done, though, as these new models and data sources will require extensive theoretical, empirical, and practical evaluation and development. The payoff will be that survey researchers can remain true to their aspiration of using information about a small number of people to understand the realities of many people, even as it gets harder to hear from anyone, let alone the random samples that earlier theory relied on.
This chapter explores ways to diagnose the potential for nonignorable nonresponse to cause problems. Section 7.1 describes how to define the range of population values that are consistent with the observed data. These calculations require virtually no assumptions and are robust to nonignorable nonresponse; they are simple yet tend to be uninformative. Section 7.2 shows how to postulate possible levels of nonignorability and assess how results would change.
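The assumption-free calculation amounts to worst-case bounds: for a binary outcome, fill in the nonrespondents first as all "no," then as all "yes." A minimal sketch (the 52% respondent support and 60% response rate are invented for illustration):

```python
# Worst-case bounds for a binary survey outcome under arbitrary nonresponse.
def nonresponse_bounds(p_respondents: float, response_rate: float) -> tuple:
    """Range of population proportions consistent with the observed data,
    assuming nothing about nonrespondents (their true rate may be 0 or 1)."""
    lower = p_respondents * response_rate       # every nonrespondent answers "no"
    upper = lower + (1.0 - response_rate)       # every nonrespondent answers "yes"
    return lower, upper

# E.g., 52% support among respondents with a 60% response rate:
lo, hi = nonresponse_bounds(0.52, 0.60)
print(lo, hi)   # roughly 0.312 to 0.712
```

The 40-point width of this interval shows concretely why such bounds, though robust, tend to be uninformative at realistic response rates.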
This chapter brings together the argument so far, showing how nonignorable nonresponse may manifest itself and how the various models perform across these contexts, including how they may fail. It also highlights the ideal response to potential nonignorable nonresponse, which involves (1) creating randomized instruments, (2) using the randomized instrument to diagnose nonignorable nonresponse, (3) moving to conventional weights if there is no evidence of nonignorable nonresponse, but (4) using selection models explained here when there is evidence of nonignorable nonresponse. Section 11.1 simulates and analyzes data across a range of scenarios using multiple methods. Section 11.2 discusses how to diagnose whether nonresponse is nonignorable. Section 11.3 integrates the approaches with a decision tree based on properties of the data. Section 11.4 discusses how selection models can fail.
This chapter describes contemporary practices of probabilistic and nonprobabilistic pollsters. First, even pollsters who aspire to random sampling are doing something quite foreign to the random sampling paradigm. Continuing to use the language of random sampling is therefore becoming increasingly untenable. Second, the energy and growth in polling is concentrated in nonprobabilistic polls that do not even pretend to adhere to the tenets of the random sampling paradigm. When we use, teach, and critique such polls, we need a new language for assessing them. Finally, one of the biggest vulnerabilities for both probabilistic and nonprobabilistic polling is nonignorable nonresponse, something largely ignored in the current state of the art. It is striking that despite the incredible diversity of techniques currently deployed, academic and commercial pollsters mostly continue to use models that assume away nonignorable nonresponse.
This chapter illustrates how to use randomized response treatments to assess possible nonresponse bias. It focuses on a 2019 survey and shows how nonignorable nonresponse may have deflated Trump support in the Midwest and among Democrats even as nonignorable nonresponse inflated Trump support among Republicans. We also show that Democrats who responded to the poll were much more liberal on race than Democrats who did not respond, a pattern that was particularly strong among White Democrats and absent among non-White Democrats. Section 12.1 describes a survey design with a randomized response instrument. Section 12.2 discusses nonignorable nonresponse bias for turnout questions. Section 12.3 looks at presidential support, revealing regional and partisan differences in nonignorable nonresponse. Section 12.4 looks at race, focusing on partisan and racial differences in nonignorable nonresponse. Section 12.5 assesses nonignorable nonresponse on climate, taxes, and tariffs.
This chapter presents the intuition behind why nonignorable nonresponse can be a problem and how it can arise in many contexts. With a foundation that explicitly centers this possibility, we can better reason through when the problem may be larger, how to diagnose it, and how to fix or at least ameliorate it. Section 5.1 describes qualitatively when nonignorable nonresponse may be likely. Section 5.2 works through the intuition about how and why nonignorable nonresponse undermines polling accuracy. Section 5.3 presents a framework for modeling nonignorable nonresponse and culminates by describing Meng’s (2018) model of sampling error. Section 5.4 raises the possibility that nonignorability varies across groups, over time, and even across questions.
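Meng's (2018) identity decomposes the error of a sample mean exactly into data quality × data quantity × problem difficulty: error = ρ(R, Y) × √((N − n)/n) × σ(Y), where ρ(R, Y) is the population correlation between responding and the outcome. A small simulation can confirm the identity numerically; the population values and response model below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
y = rng.normal(50, 10, size=N)                  # full population of outcome values
# Response probability rises with y -> nonignorable nonresponse
p = 1 / (1 + np.exp(-(y - 50) / 10))
r = rng.binomial(1, p).astype(float)            # 1 = responded
n = r.sum()

error = y[r == 1].mean() - y.mean()             # sample mean minus population mean

# Meng's identity: error = corr(R, Y) * sqrt((N - n)/n) * sd(Y)
rho = np.corrcoef(r, y)[0, 1]
predicted = rho * np.sqrt((N - n) / n) * y.std()
print(error, predicted)                         # identical up to floating point
```

Because the identity is algebraic, not approximate, the two quantities match to machine precision; even a modest ρ(R, Y) produces a large error when N is big, which is the chapter's core intuition about why nonignorable nonresponse is so damaging.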