This chapter introduces you to a relational approach to citizenship education, one that is grounded in how educators can and do engage children in coming together to interact with and learn from others in their communities. In this chapter, we are interested in a broad understanding of citizenship education, one which might be better understood as education for citizenship. While commonly understood as either a ‘status’ or a ‘practice’, this chapter argues that citizenship, above all else, concerns relationships. Recognising this fact requires us as educators to consider how civic relationships have been and are (often problematically) framed and how relational pedagogies can recognise and draw upon learners’ existing civic dispositions, might develop those civic dispositions further, and can build civically responsive educational settings. The chapter will provide a critical overview of relevant existing research on citizenship education before developing the key features, including pedagogical features, of a relational approach to citizenship education.
Cardiovascular disease (CVD) is a well-known risk factor for cognitive impairment and dementia, particularly among minoritized groups that have experienced a history of low childhood socioeconomic status (SES). Although previous literature has linked all levels of SES to varying degrees of stress exposure, children raised in higher SES households have more access to resources and services that encourage optimal growth and development than children who grow up in lower SES households. Given the disproportionate burden of dementia and cognitive deficits within minoritized groups, the present study examined whether childhood SES is associated with later-life cognition among Black and White older adults and whether this association persists after accounting for hypertension, a possible mediator of the relationship between childhood SES and later-life cognition.
Participants and Methods:
Participants were 1,184 older adults from the first wave of the STAR (n = 397 Black [mean age = 75.0 ± 6.8 years]) and KHANDLE (n = 386 Black [mean age = 76.2 ± 7.2 years] and n = 401 White [mean age = 78.4 ± 7.5 years]) cohorts. We used general linear models to examine the relationships of childhood SES with later-life executive function, semantic memory, and verbal episodic memory scores and with midlife hypertension. Childhood SES was measured by self-reported perceived financial status (participants chose among ‘pretty well off financially’, ‘about average’, ‘poor’, or ‘it varied’). These models were assessed in the full sample and also stratified by race.
Results:
In the full sample, childhood financial status was not associated with semantic memory, verbal episodic memory, or executive function. Financial status was associated with semantic memory in Black adults (β = -.124, t(771) = -2.52, p = .01) and this association persisted after accounting for hypertension (β = -.124, t(770) = -2.53, p = .01). There was no association between childhood financial status and later life semantic memory among White adults. There was no association between childhood financial status and later life verbal episodic memory or executive function in either Black or White adults in models with or without adjustment for hypertension.
Conclusions:
Our findings showed no relationship between childhood SES and cognition, except for semantic memory in Black participants; this relationship persisted after accounting for midlife CVD. Future analyses will assess both direct and indirect effects of more predictive measures of childhood SES on late-life cognition with midlife CVD as a mediator.
The COVID-19 pandemic has been devastating for people living with dementia (PLWD) and their caregivers. While prior research has documented these effects, it has not delved into their specific causes or how they are modified by contextual variation in caregiving circumstances.
Infants and children born with CHD are at significant risk for neurodevelopmental delays and abnormalities. Individualised developmental care is widely recognised as best practice to support early neurodevelopment for medically fragile infants born premature or requiring surgical intervention after birth. However, wide variability in clinical practice is consistently demonstrated in units caring for infants with CHD. The Cardiac Newborn Neuroprotective Network, a Special Interest Group of the Cardiac Neurodevelopmental Outcome Collaborative, formed a working group of experts to create an evidence-based developmental care pathway to guide clinical practice in hospital settings caring for infants with CHD. The clinical pathway, “Developmental Care Pathway for Hospitalized Infants with Congenital Heart Disease,” includes recommendations for standardised developmental assessment, parent mental health screening, and the implementation of a daily developmental care bundle, which incorporates individualised assessments and interventions tailored to meet the needs of this unique infant population and their families. Hospitals caring for infants with CHD are encouraged to adopt this developmental care pathway and track metrics and outcomes using a quality improvement framework.
The primary goal of this study was to determine whether ultrasound (US) use after brief point-of-care ultrasound (POCUS) training on cardiac and lung exams would result in more paramedics correctly identifying a tension pneumothorax (TPTX) during a simulation scenario.
Methods:
A randomized controlled, simulation-based trial of POCUS lung exam education investigating the ability of paramedics to correctly diagnose TPTX was performed. The US intervention group received a 30-minute cardiac and lung POCUS lecture followed by hands-on US training. The control group did not receive any POCUS training. Both groups participated in two scenarios: right unilateral TPTX and undifferentiated shock (no TPTX). In both scenarios, the patient continued to be hypoxemic after verified intubation with pulse oximetry of 86%-88% and hypotensive with a blood pressure of 70/50. Sirens were played at 65 decibels to mimic prehospital transport conditions. A simulation educator stated aloud the time diagnoses were made and procedures performed, which were recorded by the study investigator. Paramedics completed a pre-survey and post-survey.
Results:
Thirty paramedics were randomized to the control group; 30 paramedics were randomized to the US intervention group. Most paramedics had not received prior US training, had not previously performed a POCUS exam, and were uncomfortable with POCUS. Point-of-care US use was significantly higher in the US intervention group for both simulation cases (P <.001). A higher percentage of paramedics in the US intervention group arrived at the correct diagnosis for the TPTX case (77%) as compared to the control group (57%), although this difference was not statistically significant (P = .1). There was no difference in the correct diagnosis between the control and US intervention groups for the undifferentiated shock case. On the post-survey, more paramedics in the US intervention group were comfortable with POCUS for evaluation of the lung and comfortable decompressing TPTX using POCUS (P <.001). Paramedics reported POCUS was within their scope of practice.
Conclusions:
Despite being novice POCUS users, the paramedics were more likely to correctly diagnose TPTX during simulation after a brief POCUS educational intervention. However, this difference was not statistically significant. Paramedics were comfortable using POCUS and felt its use improved their TPTX diagnostic skills.
OBJECTIVES/GOALS: 1) Assess the patient-reported, perceived change in olfactory function after bimodal visual-olfactory training (OT); 2) Assess change in olfactory function after bimodal visual-olfactory training with a smell identification test; 3) Assess which scents are most important to people with olfactory dysfunction (OD). METHODS/STUDY POPULATION: The participants were adults with subjective or clinically diagnosed OD of post-surgical or traumatic etiology within the last 5 years. At the first of two study visits, participants completed the University of Pennsylvania Smell Identification Test (UPSIT) along with general health (SF-36) and olfactory-related quality-of-life questionnaires. From a list of 34 scents, participants chose the 4 scents most important to them and smelled those scents twice daily for 3 months. Olfactory testing and the quality-of-life questionnaires were repeated at the final visit. RESULTS/ANTICIPATED RESULTS: 10 participants have enrolled in the study; there was one screen failure and one withdrawal. Six participants are currently undergoing OT, and two have completed the study. Seven participants have a post-surgical etiology and three a post-traumatic etiology of their OD. Of the two participants who have completed the study, one had a UPSIT score improvement from 25 to 33 of 40 questions correct; the minimal clinically important difference on the UPSIT is 4, and she reports subjective improvement. The second participant had a UPSIT score change from 25 to 24 and reports that their ability to smell is neither better nor worse. DISCUSSION/SIGNIFICANCE OF IMPACT: Traumatic and post-surgical injuries, particularly after transsphenoidal hypophysectomy, are common etiologies of OD, and no effective treatments exist. The results of our pilot study will help inform how best to administer OT, how effective it is, and the planning of future studies.
In this paper, the author argues that Joseph Fins’ mosaic decisionmaking model for brain-injured patients is untenable. He supports this claim by identifying three problems with mosaic decisionmaking. First, that it is unclear whether a mosaic is a conceptually adequate metaphor for a decisionmaking process that is intended to promote patient autonomy. Second, that the proposed legal framework for mosaic decisionmaking is inappropriate. Third, that it is unclear how we ought to select patients for participation in mosaic decisionmaking.
Measuring the polarization of legislators and parties is a key step in understanding how politics develops over time. But in parliamentary systems—where ideological positions estimated from roll calls may not be informative—producing valid estimates is extremely challenging. We suggest a new measurement strategy that makes innovative use of the “accuracy” of machine classifiers, i.e., the number of correct predictions made as a proportion of all predictions. In our case, the “labels” are the party identifications of the members of parliament, predicted from their speeches along with some information on debate subjects. Intuitively, when the learner is able to discriminate members in the two main Westminster parties well, we claim we are in a period of “high” polarization. By contrast, when the classifier has low accuracy—and makes a relatively large number of mistakes in terms of allocating members to parties based on the data—we argue parliament is in an era of “low” polarization. This approach is fast and substantively valid, and we demonstrate its merits with simulations, and by comparing the estimates from 78 years of House of Commons speeches with qualitative and quantitative historical accounts of the same. As a headline finding, we note that contemporary British politics is approximately as polarized as it was in the mid-1960s—that is, in the middle of the “postwar consensus”. More broadly, we show that the technical performance of supervised learning algorithms can be directly informative about substantive matters in social science.
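The accuracy-as-polarization measure described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' pipeline: the toy speeches, party labels, and the choice of TF-IDF features with logistic regression are all assumptions made for demonstration.

```python
# Sketch: predict a speaker's party from speech text; held-out accuracy
# is read as a polarization score. Toy data and model choices are
# illustrative assumptions, not the study's actual corpus or classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical toy corpus: speeches labelled by party.
speeches = [
    "we must cut taxes and shrink the state",
    "lower taxes reward enterprise and hard work",
    "free markets deliver prosperity for all",
    "privatise inefficient state industries now",
    "we must fund the health service properly",
    "workers deserve stronger union protections",
    "nationalise the railways for the public good",
    "raise wages and invest in public housing",
]
parties = ["Con", "Con", "Con", "Con", "Lab", "Lab", "Lab", "Lab"]

X = TfidfVectorizer().fit_transform(speeches)
clf = LogisticRegression(max_iter=1000)

# Cross-validated accuracy: proportion of speakers assigned to the
# correct party. Values near 1.0 suggest the parties speak very
# differently ("high" polarization); values near 0.5 suggest they are
# hard to tell apart ("low" polarization).
polarization_score = cross_val_score(clf, X, parties, cv=4).mean()
```

In the paper's application, the classifier would be retrained per period (e.g., per parliamentary session), yielding a polarization time series.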
Glyphosate-tolerant spring wheat currently is being developed and most likely will be the first major genetically engineered crop to be marketed and grown in several areas of the northern Great Plains of the United States. The public has expressed concerns about environmental risks from glyphosate-tolerant wheat. Replacement of traditional herbicide active ingredients with glyphosate in a glyphosate-tolerant spring wheat system may alter ecological risks associated with weed management. The objective of this study was to use a Tier 1 quantitative risk assessment methodology to compare ecological risks for 16 herbicide active ingredients used in spring wheat. The herbicide active ingredients included 2,4-D, bromoxynil, clodinafop, clopyralid, dicamba, fenoxaprop, flucarbazone, glyphosate, MCPA, metsulfuron, thifensulfuron, tralkoxydim, triallate, triasulfuron, tribenuron, and trifluralin. We compared the relative risks of these herbicides to glyphosate to provide an indication of the effect of glyphosate when it is used in a glyphosate-tolerant spring wheat system. Ecological receptors and effects evaluated were avian (acute dietary risk), wild mammal (acute dietary risk), aquatic vertebrates (acute risk), aquatic invertebrates (acute risk), aquatic plants (acute risk), nontarget terrestrial plants (seedling emergence and vegetative vigor), and groundwater exposure. Ecological risks were assessed by integrating toxicity and exposure, primarily using the risk quotient method. Ecological risks for the 15 herbicides relative to glyphosate were highly variable. For risks to duckweed, green algae, groundwater, and nontarget plant seedling emergence, glyphosate had less relative risk than most other active ingredients. The differences in relative risks were most pronounced when glyphosate was compared with herbicides currently widely used on spring wheat.
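The risk quotient method used above divides an exposure estimate by a toxicity endpoint and compares the result with a level of concern. The sketch below illustrates only the arithmetic; the exposure and toxicity values, the level of concern, and the herbicide labels are invented for demonstration, not taken from the study.

```python
# Minimal sketch of the risk-quotient (RQ) calculation:
#   RQ = estimated exposure / toxicity endpoint
# An RQ above the level of concern (LOC) flags potential ecological risk.
# All numbers below are hypothetical.

LOC = 0.5  # assumed acute-risk level of concern

def risk_quotient(estimated_exposure, toxicity_endpoint):
    """Return the RQ for one receptor-herbicide pair."""
    return estimated_exposure / toxicity_endpoint

# Hypothetical dietary exposure (ppm) and acute toxicity endpoint (e.g., LC50).
rq_herbicide_a = risk_quotient(0.8, 10.0)   # RQ = 0.08, below the LOC
rq_herbicide_b = risk_quotient(12.0, 10.0)  # RQ = 1.2, exceeds the LOC

exceeds_loc = {"A": rq_herbicide_a > LOC, "B": rq_herbicide_b > LOC}
```

Comparing each herbicide's RQ with glyphosate's RQ for the same receptor gives the relative risks the study reports.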
Field studies were conducted in 2000 and 2001 to evaluate corn yield-loss predictions generated by WeedSOFT, a computerized weed management decision aid. Conventional tillage practices were used to produce corn in 76-cm rows in Illinois, Indiana, Kansas, Michigan, Missouri, Nebraska, and Wisconsin. A total of 21 site-years from these seven states were evaluated in this study. At 4 wk after planting, weed densities and size, crop-growth stage, estimated weed-free yield, and environmental conditions at the time of application were entered into WeedSOFT to generate POST treatments ranked by percent maximum yield (PMY). POST treatments were chosen with yield losses ranging from 0 to 20%. Data were subjected to linear regression analysis by state and pooled over all states to determine the relationship between actual and predicted yield loss. A slope value equal to one implies perfect agreement between actual and predicted yield loss. Slope estimates for Illinois and Missouri were equal to one. Actual yield losses were higher than the software predicted in Kansas and lower than predicted in Michigan, Nebraska, and Wisconsin. The slope estimate from a data set containing all site-years was equal to one. This research demonstrated that variability in yield-loss predictions occurred at sites that contained a high density of a single weed species (>100/m2) regardless of its competitive index (CI); at sites with a predominant broadleaf weed with a CI greater than five, such as Palmer amaranth, giant ragweed, common sunflower, and common cocklebur; and at sites that experienced moderate to severe drought stress.
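The validation step described above, regressing observed yield loss on predicted yield loss and testing whether the slope equals one, can be sketched as follows. The data points are invented for illustration; they are not the study's measurements.

```python
# Sketch of the actual-vs-predicted regression: a slope of 1 indicates
# perfect agreement; slopes above 1 mean the software under-predicted
# losses, slopes below 1 mean it over-predicted. Hypothetical data.
import numpy as np

predicted = np.array([0.0, 5.0, 10.0, 15.0, 20.0])  # predicted % yield loss
actual = np.array([0.5, 5.5, 9.0, 16.0, 19.5])      # observed % yield loss

# Fit actual = slope * predicted + intercept by ordinary least squares.
slope, intercept = np.polyfit(predicted, actual, 1)
```

In practice one would also compute a confidence interval for the slope and check whether it contains one, rather than eyeballing the point estimate.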
The ethical principle of autonomy requires physicians to respect patient autonomy when present, and to protect the patient who lacks autonomy. Fulfilling this ethical obligation when a patient has a communication impairment presents considerable challenges. Standard methods for evaluating decision-making capacity require a semistructured interview. Some patients with communication impairments are unable to engage in a semistructured interview and are at risk of the wrongful loss of autonomy. In this article, we present a general strategy for assessing decision-making capacity in patients with communication impairments. We derive this strategy by reflecting on a particular case. The strategy involves three steps: (1) determining the reliability of communication, (2) widening the bandwidth of communication, and (3) using compensatory measures of decision-making capacity. We argue that this strategy may be useful for assessing decision-making capacity and preserving autonomy in some patients with communication impairments.
Trypanosoma cruzi, causative agent of Chagas disease, co-infects its triatomine vector with its sister species Trypanosoma rangeli, which shares 60% of its antigens with T. cruzi. Additionally, T. rangeli has been observed to be pathogenic in some of its vector species. Although T. cruzi–T. rangeli co-infections are common, their effect on the vector has rarely been investigated. Therefore, we measured the fitness (survival and reproduction) of triatomine species Rhodnius prolixus infected with just T. cruzi, just T. rangeli, or both T. cruzi and T. rangeli. We found that survival (as estimated by survival probability and hazard ratios) was significantly different between treatments, with the T. cruzi treatment group having lower survival than the co-infected treatment. Reproduction and total fitness estimates in the T. cruzi and T. rangeli treatments were significantly lower than in the co-infected and control groups. The T. cruzi and T. rangeli treatment group fitness estimates were not significantly different from each other. Additionally, co-infected insects appeared to tolerate higher doses of parasites than insects with single-species infections. Our results suggest that T. cruzi–T. rangeli co-infection could ameliorate negative effects of single infections of either parasite on R. prolixus and potentially help it to tolerate higher parasite doses.
This research shows that a portfolio of wheat varieties could enhance profitability and reduce risk over the selection of a single variety for Kansas wheat producers. Many Kansas wheat farmers select varieties solely based on published average yields. This study uses portfolio theory from business investment analysis to find the optimal, yield-maximizing and risk-minimizing combination of wheat varieties in Kansas.
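The diversification logic behind the wheat-variety portfolio above can be illustrated with a minimal mean-variance example. The yield series, the two varieties, and the 50/50 split are hypothetical; the point is only that mixing negatively correlated varieties lowers yield variance relative to planting either one alone.

```python
# Mean-variance sketch of the variety-portfolio idea. All numbers are
# invented for illustration, not Kansas yield data.
import numpy as np

# Hypothetical annual yields (bu/acre) for two varieties over 5 years;
# variety B tends to do well in years when variety A does poorly.
variety_a = np.array([40.0, 55.0, 35.0, 60.0, 45.0])
variety_b = np.array([50.0, 38.0, 58.0, 36.0, 52.0])

w = 0.5  # fraction of acreage planted to variety A (assumed)
portfolio = w * variety_a + (1 - w) * variety_b

# Because the two yield series are negatively correlated, the blended
# "portfolio" has far lower variance than either variety on its own,
# at a similar mean yield.
var_a, var_b, var_p = variety_a.var(), variety_b.var(), portfolio.var()
```

A full application would search over weights (and more varieties) for the frontier of yield-maximizing, risk-minimizing combinations, as portfolio theory prescribes.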
The objective of this research is to identify and quantify the motivations for organic grain farming in the United States. Survey data from US organic grain producers were used in regression models to find the statistical determinants of three motivations for organic grain production: profit maximization, environmental stewardship, and an organic lifestyle. Results provide evidence that many organic grain producers had more than a single motivation and that younger farmers were more likely to be motivated by environmental and lifestyle goals than older farmers. Organic grain producers exhibited a diversity of motivations, including profit and stewardship.