Objectives/Goals:
To identify clinical trial teams at risk of not meeting their recruitment goals as early in the recruitment period as possible, this project aims to provide timely accrual information and forecasts of end-of-period accrual across all trials at USC.
Methods/Study Population:
This project periodically aggregates recruitment accrual data from OnCore to create per-study accrual pages that contain an up-to-date accrual chart, metrics such as expected and actual accrual per month, and projected recruitment based on an X-month moving average (3 months by default). These projections are used to classify risk so that at-risk trials can be identified as early as possible. In this initial phase, we have classified trials as medium risk (projected to reach 80%–99% of target accrual) or high risk (less than 80%). The dashboard is currently available for all clinical trials at USC, and users are automatically restricted to the studies that they administer or work on, depending on their role.
Results/Anticipated Results:
The dashboard will provide institution-wide visibility into current accrual for all clinical trials in a standard, user-friendly format, using a single set of metrics and definitions of risk for trials not meeting their accrual targets. This will allow the institution to identify trials that need intervention to get back on track using the same criteria across all research teams. Users in different roles, whether department heads, principal investigators, or study coordinators, can view current accrual for all the trials that they administer or work on in one central location. The dashboard will also help to identify data quality issues in OnCore by performing nightly quality checks.
Discussion/Significance of Impact:
By providing a central, role-based point of access to timely clinical trial accrual data for the institution, the dashboard helps to identify trials at risk of missing their recruitment targets as early as possible so that corrective measures can be recommended.
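As a rough illustration of the projection logic described above, the Python sketch below computes an end-of-period forecast from a 3-month moving average and applies the 80% and 99% risk thresholds. It is a minimal sketch; the function and variable names are illustrative assumptions, not the dashboard's actual implementation.

def project_final_accrual(monthly_accruals, months_remaining, window=3):
    """Forecast end-of-period accrual from an X-month moving average (default 3)."""
    recent = monthly_accruals[-window:] if len(monthly_accruals) >= window else monthly_accruals
    monthly_rate = sum(recent) / len(recent)
    return sum(monthly_accruals) + monthly_rate * months_remaining

def classify_risk(projected, target):
    """Classify a trial by its projected accrual as a share of the target."""
    ratio = projected / target if target else 0.0
    if ratio >= 1.0:
        return "on track"
    if ratio >= 0.8:          # 80%-99% of target
        return "medium risk"
    return "high risk"        # below 80% of target

# Example: 10 participants enrolled over four months, four months remaining, target of 30.
projected = project_final_accrual([2, 3, 2, 3], months_remaining=4)
print(projected, classify_risk(projected, target=30))   # ~20.7 -> "high risk"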
In the first decades of the twenty-first century, the gap in age-adjusted mortality rates between people living in Republican and Democratic counties expanded; people in Democratic counties started living longer. This paper argues that political partisanship poses a direct problem for ameliorating these trends: trust in and adherence to one’s personal doctor (including on non-COVID-19-related care) – once a non-partisan issue – now divides Democrats (more trustful) and Republicans (less trustful). We argue that this divide is largely a consequence of partisan conflict surrounding COVID-19 that spilled over and created a partisan cleavage in people’s trust in their own personal doctor. We then present experimental evidence that sharing a political background with one’s medical provider increases willingness to seek care. The doctor-patient relationship is essential for combating some of society’s most pressing problems; understanding how partisanship shapes this relationship is vital.
Economic variables such as socioeconomic status and debt are linked with an increased risk of a range of mental health problems and appear to increase the risk of developing post-traumatic stress disorder (PTSD). Previous research has shown that people living in more deprived areas have more severe symptoms of depression and anxiety after treatment in England’s NHS Talking Therapies services. However, no research has examined whether there is a relationship between neighbourhood deprivation and outcomes for PTSD specifically. This study was an audit of existing data from a single NHS Talking Therapies service. The postcodes of 138 service users who had received psychological therapy for PTSD were used to link data from the English Indices of Deprivation. These data were analysed alongside the PCL-5 measure of PTSD symptoms pre- and post-treatment. There was no significant association between neighbourhood deprivation measures and risk of drop-out from therapy for PTSD, number of sessions received, or PTSD symptom severity at the start of treatment. However, post-treatment PCL-5 scores were significantly more severe for those living in neighbourhoods with higher overall deprivation, lower estimated income, and greater health and disability deprivation. There was a non-significant trend for the same pattern based on employment and crime rates, and no effect of the housing-and-services or living-environment domains. Those living in more deprived neighbourhoods experienced less of a reduction in PTSD symptoms after treatment from NHS Talking Therapies services. Given the small sample size drawn from a single city, this finding needs to be replicated with a larger sample.
Key learning aims
(1) Previous literature has shown that socioeconomic deprivation increases the risk of a range of mental health problems.
(2) Existing research suggests that economic variables such as income and employment are associated with greater incidence of PTSD.
(3) In the current study, those living in more deprived areas experienced less of a reduction in PTSD symptoms following psychological therapy through NHS Talking Therapies.
(4) The relatively poorer treatment outcomes in the current study are not explained by differences in baseline PTSD severity or drop-out rates, which did not differ significantly between patients from different socioeconomic strata.
In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center, within two months, to support transfusion for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
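To make the coordination logic concrete, the sketch below pairs a simple 1:1 randomization with selection of a blood-group-compatible plasma unit from a tracked inventory. It is only an illustrative Python sketch under assumed data structures; the actual system was a cloud-based, FDA-compliant database, and none of the names here come from it.

import random

# Plasma compatibility: a recipient may receive plasma from these donor ABO groups.
PLASMA_COMPATIBLE = {
    "O":  ["O", "A", "B", "AB"],
    "A":  ["A", "AB"],
    "B":  ["B", "AB"],
    "AB": ["AB"],
}

def assign_and_allocate(recipient_group, inventory):
    """Randomize the arm 1:1, then pull a compatible unit from that arm's stock."""
    arm = random.choice(["control", "convalescent"])
    for unit in inventory[arm]:
        if unit["blood_group"] in PLASMA_COMPATIBLE[recipient_group]:
            inventory[arm].remove(unit)      # decrement the shared digital inventory
            return arm, unit
    return arm, None                         # no compatible unit: trigger a resupply

# Hypothetical inventory at one transfusion site
inventory = {"control": [{"id": "C1", "blood_group": "A"}],
             "convalescent": [{"id": "P7", "blood_group": "AB"}]}
print(assign_and_allocate("A", inventory))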
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood-group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products, maintaining inventory thresholds and overcoming local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
How do international crises unfold? We conceptualize international relations as a strategic chess game between adversaries and develop a systematic way to measure pieces, moves, and gambits accurately and consistently over a hundred years of history. We introduce a new ontology and dataset of international events, called ICBe, based on a very high-quality corpus of narratives from the International Crisis Behavior (ICB) Project. We demonstrate that ICBe has higher coverage, recall, and precision than existing state-of-the-art datasets and conduct two detailed case studies of the Cuban Missile Crisis (1962) and the Crimea-Donbas Crisis (2014). We further introduce two new event visualizations (event iconography and crisis maps), an automated benchmark for measuring event recall using natural language processing (synthetic narratives), and an ontology reconstruction task for objectively measuring event precision. We make the data, supplementary appendix, replication material, and visualizations of every historical episode available at a companion website, crisisevents.org.
In 2022, highly pathogenic avian influenza (HPAI) A(H5N1) virus clade 2.3.4.4b became enzootic and caused mass mortality in Sandwich Tern Thalasseus sandvicensis and other seabird species across north-western Europe. We present data on the characteristics of the spread of the virus between and within breeding colonies and the number of dead adult Sandwich Terns recorded at breeding sites throughout north-western Europe. Within two months of the first reported mortalities, 20,531 adult Sandwich Terns were found dead, which is >17% of the total north-western European breeding population. This is probably an under-representation of total mortality, as many carcasses are likely to have gone unnoticed and unreported. Within affected colonies, almost all chicks died. After the peak of the outbreak, in a colony established by late breeders, 25.7% of tested adults showed immunity to HPAI subtype H5. Removal of carcasses was associated with lower levels of mortality at affected colonies. More research on the sources and modes of transmission, incubation times, effective containment, and immunity is urgently needed to combat this major threat to colonial seabirds.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with that of plasma p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
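For readers wanting to see the analytic pattern, here is a minimal Python sketch of the approach described above: a covariate-adjusted binary logistic regression followed by an AUC computed from the model's predicted probabilities. The file and column names (plasma_biomarkers.csv, gfap_z, impaired, and so on) are assumptions for illustration, not the registry's actual variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("plasma_biomarkers.csv")   # hypothetical analysis file

# Binary logistic regression: impairment status on z-scored GFAP plus covariates
model = smf.logit("impaired ~ gfap_z + age + sex + race + education + apoe_e4",
                  data=df).fit()
print(np.exp(model.params))                 # odds ratios for each predictor
print(np.exp(model.conf_int()))             # 95% confidence intervals as ORs

# Discrimination of impaired vs. unimpaired from predicted probabilities
auc = roc_auc_score(df["impaired"], model.predict(df))
print(f"AUC = {auc:.2f}")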
Results:
The mean (SD) age of the sample was 74.34 (7.54) years, 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, though slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be validated against postmortem neuropathological outcomes. Research has shown utility for plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. Promising data support the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
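The ordinal piece of this analysis can be sketched as follows: log-transformed GFAP predicting Braak stage with the listed covariates. This is an illustrative Python sketch only; it assumes statsmodels 0.13 or later (which provides OrderedModel), numerically coded covariates, and made-up column names such as gfap and braak.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("gfap_autopsy.csv")        # hypothetical analysis file
df["gfap_log"] = np.log(df["gfap"])         # GFAP measures were log-transformed

# Covariates assumed numerically coded (e.g. sex and APOE e4 as 0/1)
covariates = ["gfap_log", "sex", "age_at_death", "years_blood_to_death", "apoe_e4"]
model = OrderedModel(df["braak"], df[covariates], distr="logit").fit(method="bfgs")

print(model.summary())
print(np.exp(model.params.iloc[:len(covariates)]))   # odds ratios for the predictors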
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) participants were White and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed that plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Auditory verbal hallucinations (AVH), or voice-hearing, can be a prominent symptom during fluctuating mood states in bipolar disorder (BD).
Aims:
The current study aimed to: (i) compare AVH-related distress in BD relative to schizophrenia (SCZ), (ii) examine correlations between phenomenology and voice beliefs across each group, and (iii) explore how voice beliefs may uniquely contribute to distress in BD and SCZ.
Method:
Participants were recruited from two international sites in Australia (BD=31; SCZ=50) and the UK (BD=17). Basic demographic-clinical information was collected, and mood symptoms were assessed. To document AVH characteristics, a 4-factor model of the Psychotic Symptoms Rating Scale and the Beliefs about Voices Questionnaire-Revised were used. Statistical analyses consisted of group-wise comparisons, Pearson’s correlations and multiple hierarchical regressions.
Results:
AVH-related distress was not significantly higher in BD than in SCZ, but those with BD made significantly more internal attributions for their voices. In the BD group, AVH-related distress was significantly positively correlated with malevolence, omnipotence and resistance. However, only resistance, alongside mania and depressive symptoms, significantly contributed to AVH-related distress in BD.
Discussion:
Our findings have several clinical implications, including identification of voice resistance as a potential therapeutic target to prioritise in BD. Factoring in the influence of mood symptoms on AVH-related distress as well as adopting more acceptance-oriented therapies may also be of benefit.
This study investigated sex differences in Fe status, and associations between Fe status and endurance and musculoskeletal outcomes, in military training. In total, 2277 British Army trainees (581 women) participated. Fe markers and endurance performance (2·4 km run) were measured at the start (week 1) and end (week 13) of training. Whole-body areal bone mineral density (aBMD) and markers of bone metabolism were measured at week 1. Injuries during training were recorded. Training decreased Hb in men and women (mean change –0·1 (95 % CI –0·2, –0·0) and –0·7 (95 % CI –0·9, –0·6) g/dl, both P < 0·001) but more so in women (P < 0·001). Ferritin decreased in men and women (–27 (95 % CI –28, –23) and –5 (95 % CI –8, –1) µg/l, both P ≤ 0·001) but more so in men (P < 0·001). Soluble transferrin receptor increased in men and women (2·9 (95 % CI 2·3, 3·6) and 3·8 (95 % CI 2·7, 4·9) nmol/l, both P < 0·001), with no difference between sexes (P = 0·872). Erythrocyte distribution width increased in men (0·3 (95 % CI 0·2, 0·4)%, P < 0·001) but not in women (0·1 (95 % CI –0·1, 0·2)%, P = 0·956). Mean corpuscular volume decreased in men (–1·5 (95 % CI –1·8, –1·1) fL, P < 0·001) but not in women (0·4 (95 % CI –0·4, 1·3) fL, P = 0·087). Lower ferritin was associated with slower 2·4 km run time (P = 0·018), sustaining a lower limb overuse injury (P = 0·048), lower aBMD (P = 0·021), and higher beta C-telopeptide cross-links of type 1 collagen and procollagen type 1 N-terminal propeptide (both P < 0·001), controlling for sex. Improving Fe stores before training may protect Hb in women and improve endurance and protect against injury.
We present macrobotanical, starch, and phytolith data from artifacts and sediments from Middle Formative La Blanca (1000–600 cal BC) and Late Formative El Ujuxte (600 cal BC–cal AD 115) in the Soconusco region in Guatemala. Potential economic plants identified included palm (cf. Arecaceae), two varieties of maize (Zea mays), guava (Psidium guajava), bean (Phaseolus), chili peppers (Capsicum), squash (Cucurbitaceae), custard apple (Annonaceae), coco plum (Chrysobalanaceae), lerén (Calathea), arrowroot (Maranta), and bird-of-paradise (Heliconia). The results suggest that control of food production and consumption was critical for the transition from complex chiefdoms during the Middle Formative to the archaic state in the Late Formative. The arrival of a more productive South American variety of maize at El Ujuxte (about 2549 BP) allowed elites to exploit an already existing broad-based economic system and to use the maize-based religious system to increase control over maize agricultural practices and maintain power through ideology and disciplinary power. These data suggest that the arrival of fully domesticated South American maize likely influenced the overall development of Mesoamerican state-level societies.
Reappraisal of Wirnt von Gravenberg's Wigalois, showing how it confronts and takes issue with - rather than simply imitating - earlier German Arthurian romance.
Capacity development is essential for the effective management of protected areas and for achieving successful biodiversity conservation. European Natura 2000 sites form an extensive network of protected areas and developing the capacity of staff at all levels is a priority that will positively influence the appropriate implementation of conservation actions. In this study we identify the main challenges and potential solutions to developing the skills, knowledge and tools required for effective Natura 2000 site management. Our findings are based on a case study of the European project LIFE e-Natura2000.edu, which focuses on capacity development in practical biodiversity conservation and management through integrated and blended learning experiences (i.e. a combination of face-to-face and virtual teaching). We illustrate the main elements for successfully building capacity within a variety of knowledge and experience backgrounds and operating levels related to the management of Natura 2000 sites. Multifaceted, blended learning approaches are key to tackling the various needs of Natura 2000 managers in terms of skills, knowledge and tools.
Surgeons may offer different treatments for similar conditions on the basis of their compensation mechanism. This study examined differences in surgical practices between salaried and fee-for-service (FFS) surgeons for two common degenerative spine conditions.
Methods:
The study assessed the practices of 63 spine surgeons across eight Canadian provinces (39 FFS surgeons and 24 salaried) who performed surgery for two lumbar conditions: stable spinal stenosis and degenerative spondylolisthesis. The study included a multicenter, ambispective review of consecutive spine surgery patients enrolled in the Canadian Spine Outcomes and Research Network registry between October 2012 and July 2018. The primary outcome was the difference in type of procedures performed between the two groups. Secondary study variables included surgical characteristics, baseline patient factors, and patient-reported outcomes.
Results:
For stable spinal stenosis (n = 2234), salaried surgeons performed significantly fewer uninstrumented fusions (p < 0.05) than FFS surgeons. For degenerative spondylolisthesis (n = 1292), salaried surgeons performed significantly more instrumentation plus interbody fusions (p < 0.05). There were no statistically significant differences in patient-reported outcomes between the two groups.
Conclusions:
Surgeon compensation was associated with different approaches to stable lumbar spinal stenosis and degenerative lumbar spondylolisthesis. Salaried surgeons chose a more conservative approach to spinal stenosis and a more aggressive approach to degenerative spondylolisthesis, which highlights that remuneration is likely a minor determinant in the differences in practice of spinal surgery in Canada. Further research is needed to elucidate which variables, other than patient demographics and financial incentives, influence surgical decision-making.
Racial disparities in colorectal cancer (CRC) can be addressed through increased adherence to screening guidelines. In real-life encounters, patients may be more willing to follow screening recommendations delivered by a race concordant clinician. The growth of telehealth to deliver care provides an opportunity to explore whether these effects translate to a virtual setting. The primary purpose of this pilot study is to explore the relationships between virtual clinician (VC) characteristics and CRC screening intentions after engagement with a telehealth intervention leveraging technology to deliver tailored CRC prevention messaging.
Methods:
Using a posttest-only design with three factors (VC race-matching, VC gender, intervention type), participants (N = 2267) were randomised to one of eight intervention treatments. Participants self-reported perceptions and behavioral intentions.
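As a rough sketch of the eight-cell design implied above (three two-level factors gives 2 x 2 x 2 = 8 treatments), the Python snippet below enumerates the cells and randomly assigns a participant to one. The factor level labels, in particular the two intervention types, are assumptions for illustration.

import itertools
import random

factors = {
    "vc_race_matched": [True, False],
    "vc_gender": ["female", "male"],
    "intervention_type": ["tailored", "standard"],   # assumed two levels
}

treatments = list(itertools.product(*factors.values()))   # 8 combinations
assignment = dict(zip(factors.keys(), random.choice(treatments)))
print(len(treatments), assignment)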
Results:
The benefits of matching participants with a racially similar VC trended positive but did not reach statistical significance. Specifically, race-matching positively influenced screening intentions for Black participants but not for Whites (b = 0.29, p = 0.10). Importantly, perceptions of credibility, attractiveness, and message relevance significantly influenced screening intentions and the relationship with race-matching.
Conclusions:
To reduce racial CRC screening disparities, investments are needed to identify patient-focused interventions to address structural barriers to screening. This study suggests that telehealth interventions that match Black patients with a Black VC can enhance perceptions of credibility and message relevance, which may then improve screening intentions. Future research is needed to examine how to increase VC credibility and attractiveness, as well as message relevance without race-matching.
Mounting evidence suggests that the first few months of life are critical for the development of obesity. The relationships between the timing of solid food introduction and the risk of childhood obesity have been examined previously; however, evidence on the association with the timing of infant formula introduction remains scarce. This study aimed to examine whether the timing of infant formula introduction is associated with growth z-scores and overweight at ages 1 and 3 years. This study included 5733 full-term (≥ 37 gestational weeks) and normal birth weight (≥ 2500 and < 4000 g) children in the Born in Guangzhou Cohort Study, a prospective cohort study with data collected at 6 weeks, 6, 12 and 36 months. Compared with infant formula introduction at 0–3 months, introduction at 4–6 months was associated with lower BMI, weight-for-age and weight-for-length z-scores at 1 and 3 years of age. Introduction at 4–6 months was also associated with lower odds of being at risk of overweight at age 1 year (adjusted OR 0·72, 95 % CI 0·55, 0·94) and age 3 years (adjusted OR 0·50, 95 % CI 0·30, 0·85). Introduction at 4–6 months also decreased the odds of overweight at age 1 year (adjusted OR 0·42, 95 % CI 0·21, 0·84) but not at age 3 years. Based on our findings, compared with introduction within the first 3 months, introduction at 4–6 months was associated with a lower risk of later high BMI and of being at risk of overweight. However, these results need to be replicated in other well-designed studies before firmer recommendations can be made.
Surface meltwater is becoming increasingly widespread on Antarctic ice shelves. It is stored within surface ponds and streams, or within firn pore spaces, which may saturate to form slush. Slush can reduce firn air content, increasing an ice-shelf's vulnerability to break-up. To date, no study has mapped the changing extent of slush across ice shelves. Here, we use Google Earth Engine and Landsat 8 images from six ice shelves to generate training classes using a k-means clustering algorithm, which are used to train a random forest classifier to identify both slush and ponded water. Validation using expert elicitation gives accuracies of 84% and 82% for the ponded water and slush classes, respectively. Errors result from subjectivity in identifying the ponded water/slush boundary, and from inclusion of cloud and shadows. We apply our classifier to the Roi Baudouin Ice Shelf for the entire 2013–20 Landsat 8 record. On average, 64% of all surface meltwater is classified as slush and 36% as ponded water. Total meltwater areal extent is greatest between late January and mid-February. This highlights the importance of mapping slush when studying surface meltwater on ice shelves. Future research will apply the classifier across all Antarctic ice shelves.
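As an illustration of the two-stage workflow described above, the sketch below clusters pixel spectra with k-means to propose training classes and then trains a random forest to separate slush from ponded water. It is a scikit-learn-based sketch with assumed array shapes and file names; the actual classifier was built and run in Google Earth Engine.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

pixels = np.load("landsat8_bands.npy")        # hypothetical (n_pixels, n_bands) array

# Stage 1: unsupervised clustering proposes candidate training classes
clusters = KMeans(n_clusters=6, random_state=0).fit_predict(pixels)

# Stage 2: an analyst maps clusters to labels (0 = other, 1 = slush, 2 = ponded
# water); the lookup below is a stand-in for that manual step.
cluster_to_label = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 0}
labels = np.vectorize(cluster_to_label.get)(clusters)

# Train on the labelled pixels, then classify a new scene with the same bands
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(pixels, labels)
new_scene = np.load("roi_baudouin_scene.npy")  # hypothetical array, same band order
predicted = clf.predict(new_scene)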
In the current discourse surrounding classical music institutions, issues of inclusion and diversity are regularly to the fore. There is pressure to prove the relevance of orchestras and ensembles to wider society, with outreach work in educational settings and in communities already an established part of their output. Using data gathered from a research project with the International Music and Performing Arts Charitable Trust Scotland (IMPACT Scotland), which is responsible for planning a new concert hall in Edinburgh to be called the Dunard Centre, this article extends these debates by relocating them to a new arena: the buildings classical institutions inhabit. First, the public nature of the concert hall is explored by examining three ‘strategies for publicness’ identified in concert-hall projects: the urbanistic strategy, the living building strategy and the ‘art for all’ strategy. These will be discussed in relation to the extensive literature on public space. The second part of the article examines recent developments in musicology and arts policy which encourage more ‘democratic’ arts practice. These will be used as the basis for asking how the concert hall (and its primary tenant, the orchestra) might better achieve the publicness that is so often promised on their behalf.