Animal foods, especially dairy products, eggs and fish, are the main sources of iodine in the UK. However, the use of plant-based alternative products (PBAPs) is increasing owing to concerns about environmental sustainability. We previously measured the iodine content of milk alternatives(1), but data are lacking on the iodine content of other plant-based products, and a greater number of iodine-fortified products is now available. We aimed to compare: (i) the iodine concentration of fortified and unfortified PBAPs and (ii) the iodine concentration of PBAPs with that of their animal-product equivalents, including products not previously measured, such as egg and fish alternatives.
The iodine concentration of 50 PBAPs was analysed in March 2022 at LGC using ICP-MS. The products were selected from a market survey of six UK supermarkets in December 2021. Samples of matrix-matched (e.g. soya/oat) fortified and unfortified alternatives to milk (n = 13 and n = 11), yoghurt (n = 2 and n = 7) and cream (n = 1 and n = 5) were selected for analysis, as well as egg (n = 1) and fish alternatives (n = 10). We compared the iodine concentration of PBAPs with data on their animal-product equivalents(2).
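As an illustration of the kind of group comparison reported below, the hedged sketch here applies a nonparametric Mann-Whitney U test to made-up fortified and unfortified milk-alternative values; the abstract does not name the test actually used.

```python
# Hedged sketch: comparing iodine concentrations (µg/kg) between fortified and
# unfortified milk alternatives with a Mann-Whitney U test. Values are illustrative,
# not the study data, and the abstract does not name the statistical test used.
from scipy.stats import mannwhitneyu

fortified = [321, 280, 350, 298, 310, 305, 330, 290, 315, 340, 300, 325, 335]   # n = 13 (hypothetical)
unfortified = [0.84, 1.2, 0.5, 2.1, 0.9, 1.5, 0.7, 1.1, 0.6, 1.8, 0.95]         # n = 11 (hypothetical)

stat, p = mannwhitneyu(fortified, unfortified, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```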
The iodine concentration of fortified PBAPs was significantly higher than that of unfortified products: the median iodine concentration of fortified vs. unfortified milk alternatives was 321 vs. 0.84 µg/kg (p<0.001), and of fortified vs. unfortified yoghurt alternatives 212 vs. 3.03 µg/kg (p = 0.04). The fortified cream alternative had a higher iodine concentration than the unfortified alternatives (259 vs. 26.5 µg/kg). The measured iodine concentration of fortified products differed from the labelled value (being both lower and higher); overall, the measured iodine concentration was significantly higher than that stated on the label (mean difference 49.1 µg/kg; p = 0.018).
Compared with their animal-product equivalents, unfortified PBAPs had a significantly lower iodine concentration for milk (p<0.001) and yoghurt (p<0.001), whereas there was no difference for the fortified versions of milk (p = 0.28) and yoghurt (p = 0.09). The egg alternative had an iodine concentration just 0.6% of that of chicken eggs (3.38 vs. 560 µg/kg). Three (30%) of the fish alternatives contained kelp/seaweed as ingredients, and the median iodine concentration of these products was (non-significantly) higher than that of those without (126 vs. 75 μg/kg; p = 0.83). However, the median iodine concentration of the fish-alternative products was approximately one-tenth that of fish (99 vs. 995 µg/kg; p<0.001).
Most PBAPs are not fortified with iodine, but those that are fortified have a significantly higher iodine concentration than unfortified products and are closer to the values of their animal-product equivalents. From an iodine perspective, unfortified plant-based alternatives are not suitable replacements, and consumers should ensure an adequate iodine intake from other dietary sources. Manufacturers should consider iodine fortification of a greater number of plant-based alternatives.
The trace element selenium protects against oxidative damage, which is known to contribute to cognitive impairment with ageing (1,2). The aim of this study was to explore the association between selenium status (serum selenium and selenoprotein P (SELENOP)) and global cognitive performance at baseline and after 5 years in 85-year-olds living in the Northeast of England.
Serum selenium and SELENOP concentrations were measured at baseline by total reflection X-ray fluorescence (TXRF) and enzyme-linked immunosorbent assay (ELISA), respectively, in 757 participants from the Newcastle 85+ study. Global cognitive performance was assessed using the Standardized Mini-Mental State Examination (SMMSE) where scores ≤25 out of 30 indicated cognitive impairment. Logistic regressions explored the associations between selenium status and global cognition at baseline. Linear mixed models explored associations between selenium status and global cognition prospectively after 5 years. Covariates included sex, body mass index, physical activity, high sensitivity C-reactive protein, alcohol intake, self-rated health, medications and smoking status.
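A minimal sketch of the baseline logistic model described above, using statsmodels; the file name, column names and covariate coding are hypothetical and may not match the actual analysis.

```python
# Hedged sketch of the baseline logistic regression: odds of cognitive impairment
# (SMMSE <= 25) as a function of serum selenium, adjusted for the listed covariates.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("newcastle85_baseline.csv")  # hypothetical dataset

model = smf.logit(
    "impaired ~ selenium + sex + bmi + physical_activity + crp"
    " + alcohol + self_rated_health + n_medications + smoking",
    data=df,
).fit()
print(model.summary())
print(np.exp(model.params))  # odds ratios
```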
At baseline, in fully adjusted models, there was no association between the odds of cognitive impairment and serum selenium (OR 1.004, 95% CI 0.993-1.015, p = 0.512) or SELENOP (OR 1.006, 95% CI 0.881-1.149, p = 0.930). Likewise, over 5 years, in fully adjusted models there was no association between serum selenium and cognitive impairment (β 7.20E-4 ± 5.57E-4, p = 0.197) or between SELENOP and cognitive impairment (β 3.50E-3 ± 6.85E-3, p = 0.610).
In this UK cohort of very old adults, neither serum selenium nor SELENOP was associated with cognitive impairment at baseline or after 5 years. This finding was unexpected given SELENOP’s key role in the brain and the associations observed in other studies. Further research is needed to explore the effect of selenium on global cognition in very old adults.
Stroke outcomes research requires risk-adjustment for stroke severity, but this measure is often unavailable. The Passive Surveillance Stroke SeVerity (PaSSV) score is an administrative data-based stroke severity measure that was developed in Ontario, Canada. We assessed the geographical and temporal external validity of PaSSV in British Columbia (BC), Nova Scotia (NS) and Ontario, Canada.
Methods:
We used linked administrative data in each province to identify adult patients with ischemic stroke or intracerebral hemorrhage between 2014 and 2019 and calculated their PaSSV score. We used Cox proportional hazards models to evaluate the association between the PaSSV score and the hazard of death over 30 days and the cause-specific hazard of admission to long-term care over 365 days. We assessed the models’ discriminative values using Uno’s c-statistic, comparing models with versus without PaSSV.
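As a sketch of this modelling step, the code below fits Cox models for 30-day death with and without the PaSSV score and compares their concordance; column names are hypothetical, and lifelines' Harrell's c-index stands in for the Uno's c-statistic used in the study.

```python
# Hedged sketch: Cox proportional hazards models for 30-day death with and without
# the PaSSV score, comparing discrimination. Column names are hypothetical, and
# Harrell's c-index is shown as a stand-in for the Uno's c-statistic used in the study.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("stroke_cohort.csv")  # hypothetical: time_30d, death_30d, passv, age, sex

base_cols = ["time_30d", "death_30d", "age", "sex"]
cph_without = CoxPHFitter().fit(df[base_cols], duration_col="time_30d", event_col="death_30d")
cph_with = CoxPHFitter().fit(df[base_cols + ["passv"]], duration_col="time_30d", event_col="death_30d")

print("c-index without PaSSV:", cph_without.concordance_index_)
print("c-index with PaSSV:   ", cph_with.concordance_index_)
print(cph_with.summary.loc["passv", ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```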
Results:
We included 86,142 patients (n = 18,387 in BC, n = 65,082 in Ontario, n = 2,673 in NS). The mean and median PaSSV were similar across provinces. A higher PaSSV score, representing lower stroke severity, was associated with a lower hazard of death (hazard ratio and 95% confidence intervals 0.70 [0.68, 0.71] in BC, 0.69 [0.68, 0.69] in Ontario, 0.72 [0.68, 0.75] in NS) and admission to long-term care (0.77 [0.76, 0.79] in BC, 0.84 [0.83, 0.85] in Ontario, 0.86 [0.79, 0.93] in NS). Including PaSSV in the multivariable models increased the c-statistics compared to models without this variable.
Conclusion:
PaSSV has geographical and temporal validity, making it useful for risk-adjustment in stroke outcomes research, including in multi-jurisdiction analyses.
Mental health problems are elevated in autistic individuals, but there is limited evidence on the developmental course of these problems across childhood. We compare the level and growth of anxious-depressed, behavioral and attention problems in an autistic cohort and a typically developing (TD) cohort.
Methods
Latent growth curve models were applied to repeated parent-report Child Behavior Checklist data from age 2–10 years in an inception cohort of autistic children (Pathways, N = 397; 84% boys) and a general population TD cohort (Wirral Child Health and Development Study; WCHADS; N = 884, 49% boys). Percentile plots were generated to quantify the differences between autistic and TD children.
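Latent growth curve models are usually fitted with structural equation modelling software; as a rough, hedged approximation of the level-and-growth idea only, the sketch below fits a random-intercept, random-slope mixed model to repeated CBCL scores. Column names are hypothetical and this is not the authors' specification.

```python
# Hedged sketch: a random-intercept / random-slope mixed model as a simple
# approximation of a latent growth curve for repeated CBCL scores.
# This is NOT the SEM-based latent growth model used in the study; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("cbcl_long.csv")  # hypothetical: child_id, age_c (centred age), anxdep, group, iq, sex

m = smf.mixedlm(
    "anxdep ~ age_c * group + iq + sex",   # group: autistic vs TD; growth allowed to differ by group
    data=long_df,
    groups=long_df["child_id"],
    re_formula="~age_c",                   # random intercept and random slope for age
).fit()
print(m.summary())
```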
Results
Autistic children showed elevated levels of mental health problems, but this difference was substantially reduced after accounting for IQ and sex differences between the autistic and TD samples. There were small differences in growth patterns: anxious-depressed problems were particularly elevated in the preschool years and attention problems in late childhood. Higher family income predicted a lower baseline level on all three dimensions but a steeper increase in anxious-depressed problems. Higher IQ predicted a lower level of attention problems and a faster decline over childhood. Female sex predicted a higher level of anxious-depressed problems and a faster decline in behavioral problems. Social-affect autism symptom severity predicted an elevated level of attention problems. Autistic girls' problems were particularly elevated relative to their same-sex non-autistic peers.
Conclusions
Autistic children, and especially girls, show elevated mental health problems compared to TD children and there are some differences in predictors. Assessment of mental health should be integrated into clinical practice for autistic children.
Background: Sex differences in treatment response to intravenous thrombolysis (IVT) are poorly characterized. We compared sex-disaggregated outcomes in patients receiving IVT for acute ischemic stroke in the Alteplase Compared to Tenecteplase (AcT) trial, a Canadian multicentre, randomised trial. Methods: In this post-hoc analysis, the primary outcome was excellent functional outcome (modified Rankin Score [mRS] 0-1) at 90 days. Secondary and safety outcomes included return to baseline function, successful reperfusion (eTICI ≥ 2b), death and symptomatic intracerebral hemorrhage. Results: Of 1577 patients, 755 were women and 822 were men (median age 77 [68-86] and 70 [59-79] years, respectively). There were no differences in rates of mRS 0-1 (aRR 0.95 [0.86-1.06]), return to baseline function (aRR 0.94 [0.84-1.06]), reperfusion (aRR 0.98 [0.80-1.19]) or death (aRR 0.91 [0.79-1.18]). There was no effect modification by treatment type on the association between sex and outcomes. The probability of excellent functional outcome decreased with increasing onset-to-needle time, and this relation did not vary by sex (p for interaction = 0.42). Conclusions: The AcT trial demonstrated comparable functional, safety and angiographic outcomes between the sexes, and this did not differ between alteplase and tenecteplase. The pragmatic enrolment and broad national participation in AcT provide reassurance that there do not appear to be sex differences in outcomes among Canadians receiving IVT.
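Adjusted risk ratios such as those above are often obtained from a modified Poisson regression with robust standard errors; the hedged sketch below illustrates that general approach with hypothetical variable names and should not be read as the exact AcT analysis.

```python
# Hedged sketch: modified Poisson regression (robust SEs) to estimate an adjusted
# risk ratio for excellent functional outcome (mRS 0-1) by sex.
# Variable names are hypothetical; this may not match the AcT analysis exactly.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("act_trial.csv")  # hypothetical: mrs01 (0/1), female (0/1), age, onset_to_needle, treatment

m = smf.glm(
    "mrs01 ~ female + age + onset_to_needle + treatment",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")                      # robust (sandwich) standard errors
print(np.exp(m.params["female"]))          # adjusted risk ratio, women vs men
print(np.exp(m.conf_int().loc["female"]))  # 95% confidence interval
```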
Family-centered rounding has emerged as the gold standard for inpatient paediatrics rounds due to its association with improved family and staff satisfaction and reduction of harmful errors. Little is known about family-centered rounding in subspecialty paediatric settings, including paediatric acute care cardiology.
In this qualitative, single centre study, we conducted semi-structured interviews with providers and caregivers eliciting their attitudes toward family-centered rounding. An a priori recruitment approach was used to optimise diversity in reflected opinions. A brief demographic survey was completed by participants. We completed thematic analysis of transcribed interviews using grounded theory.
In total, 38 interviews representing the views of 48 individuals (11 providers, 37 caregivers) were completed. Three themes emerged: rounds as a moment of mutual accountability, caregivers’ empathy for providers, and providers’ objections to family-centered rounding. Providers’ objections were further categorised into themes of assumptions about caregivers, caregiver choices during rounds, and risk for exacerbation of bias and inequity.
Caregivers and providers in the paediatric acute care cardiology setting echoed some previously described attitudes toward family-centered rounding. Many of the challenges surrounding family-centered rounding might be addressed through access to training for caregivers and providers alike. Hospitals should invest in systems to facilitate family-centered rounding if they choose to implement this model of care, as the current state risks erosion of the provider–caregiver relationship.
Pain following surgery for cardiac disease is ubiquitous, and optimal management is important. Despite this, there is large practice variation. To address this, the Paediatric Acute Care Cardiology Collaborative undertook the effort to create this clinical practice guideline.
Methods:
A panel of experts consisting of paediatric cardiologists, advanced practice practitioners, pharmacists, a paediatric cardiothoracic surgeon, and a paediatric cardiac anaesthesiologist was convened. The literature was searched for relevant articles, and Collaborative sites submitted centre-specific protocols for postoperative pain management. Using the modified Delphi technique, recommendations were generated and put through iterative Delphi rounds to achieve consensus.
Results:
Sixty recommendations achieved consensus and are included in this guideline. They address guideline use, pain assessment, general considerations, preoperative considerations, intraoperative considerations, regional anaesthesia, opioids, opioid-sparing, non-opioid medications, non-pharmaceutical pain management, and discharge considerations.
Conclusions:
Postoperative pain among children following cardiac surgery is currently an area of significant practice variability despite a large body of literature and the presence of centre-specific protocols. Central to the recommendations included in this guideline is the concept that ideal pain management begins with preoperative counselling and continues through to patient discharge. Overall, the quality of evidence supporting recommendations is low. There is ongoing need for research in this area, particularly in paediatric populations.
The building of online atomic and molecular databases for astrophysics and for other research fields started with the beginning of the internet. These databases have taken different forms: databases of individual research groups exposing their own data, databases providing collected data from the refereed literature, databases providing evaluated compilations, databases serving as repositories where individuals can deposit their data, and so on. They were, and are, the replacement for literature compilations, with the goal of providing more complete and, in particular, easily accessible data services to the user communities. Such initiatives involve not only scientific work on the data, but also the characterization of data, which comes with the “standardization” of metadata and of the relations between metadata, as recently developed in different communities. This contribution aims to provide a representative overview of the atomic and molecular database ecosystem available to the astrophysical community and addresses different issues linked to the use and management of data and databases. The information provided in this paper is related to the keynote lecture “Atomic and Molecular Databases: Open Science for better science and a sustainable world”, whose slides can be found at DOI: doi.org/10.5281/zenodo.6979352 on the Zenodo repository connected to the “cb5-labastro” Zenodo Community (https://zenodo.org/communities/cb5-labastro).
Humpback whales (Megaptera novaeangliae) exhibit maternally driven fidelity to feeding grounds, yet occasionally occupy new areas. Humpback whale sightings and mortalities in the New York Bight apex (NYBA) have been increasing over the last decade, providing an opportunity to study this phenomenon in an urban habitat. Whales in this area overlap with human activities, including busy shipping traffic leading into the Port of New York and New Jersey. The site fidelity, population composition and demographics of individual whales were analysed to better inform management in this high-risk area. Whale watching and other opportunistic data collections were used to identify 101 individual humpback whales in the NYBA from spring through autumn, 2012–2018. Although mean occurrence was low (2.5 days), mean occupancy was 37.6 days, and 31.3% of whales returned from one year to the next. Individuals compared with other regional and ocean-basin-wide photo-identification catalogues (N = 52) were primarily resighted at other sites along the US East Coast, including the Gulf of Maine feeding ground. Sightings of mother-calf pairs were rare in the NYBA, suggesting that maternally directed fidelity may not be responsible for the presence of young whales in this area. Other factors, including shifts in prey species distribution or broader changes in population structure, should be investigated.
Background: Respiratory infection can be an immediate precursor to stroke and myocardial infarction (MI). Influenza vaccination is associated with reduced risk of MI. This relationship has also been suggested for stroke, although it is unclear whether the effect is consistent across age and risk groups. Methods: Using administrative data from Alberta covering September 2009 to December 2018, we modelled the hazard of any stroke for individuals recently exposed (<182 days) to the influenza vaccine compared with those without recent exposure, adjusted for age, sex, anticoagulant use, atrial fibrillation, COPD, diabetes, hypertension, income quintile, and rural/urban home location. Results: 4,141,209 adults (29,687,899 person-years) were included; 1,769,565 (43%) received at least one vaccination over the 10-year time span, and 38,126 stroke events were recorded. Adjusted for demographics and comorbidities, recent influenza vaccination significantly reduced the hazard of stroke (HR: 0.77; 95% CI: 0.76–0.79). This effect persisted across all stroke subtypes and across all ages and risk profiles. Conclusions: There was a 23% reduction in the hazard of stroke among those recently vaccinated against influenza compared with those who were not. Protection extended to the entire adult population and was not limited to high-risk groups.
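Because "recent exposure" (<182 days since vaccination) changes over follow-up, one plausible implementation of the model described above is a Cox regression with a time-varying covariate on start-stop data; the sketch below uses lifelines with hypothetical column names and is not necessarily the exact specification used in the study.

```python
# Hedged sketch: Cox model with a time-varying "recently vaccinated" covariate
# (1 if < 182 days since the last influenza vaccination, else 0), on start-stop data.
# Column names and data layout are hypothetical.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

episodes = pd.read_csv("vaccine_stroke_episodes.csv")
# expected columns (hypothetical): id, start, stop, stroke (event), recent_vax (0/1),
# age, sex, anticoag, afib, copd, diabetes, htn, income_q, rural

ctv = CoxTimeVaryingFitter()
ctv.fit(
    episodes,
    id_col="id",
    event_col="stroke",
    start_col="start",
    stop_col="stop",
)
ctv.print_summary()   # hazard ratio for recent_vax, adjusted for the other columns
```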
This volume of the Haskins Society Journal brings together a rich and interdisciplinary collection of articles. Topics range from the politics and military organization of the northern worlds of the Anglo-Normans and Angevins in the twelfth and thirteenth centuries, to the economic activity of women in Catalonia and political unrest in thirteenth-century Tripoli. Martin Millett's chapter on the significance of rural life in Roman Britain for the early Middle Ages continues the Journal's commitment to archaeological approaches to medieval history, while contributions on Ælfric's complex use of sources in his homilies, Byrhtferth of Ramsey's reinterpretation of the Alfredian past, and the little-known History of Alfred of Beverly engage with crucial questions of sources and historiographical production within Anglo-Saxon and Anglo-Norman England. Pieces on the political meaning of the Empress Helena and Constantine I for Angevin political ambitions and the role of relics such as the Holy Lance in strategies of political legitimation in Anglo-Saxon England and Ottonian Germany in the tenth century complete the volume.
Contributors: David Bachrach, Mark Blincoe, Katherine Cross, Sarah Ifft Decker, Joyce Hill, Katherine Hodges-Kluck, Jesse Izzo, Martin Millett, John Patrick Slevin, Oliver Stoutner, Laura Wangerin.
Increased risk donors (IRD) in paediatric heart transplantation (PHT) have characteristics that may increase the risk of infectious disease transmission despite negative serologic testing. However, the risk of disease transmission is low, and refusing an IRD offer may increase waitlist mortality. We sought to determine the risks of declining an initial IRD organ offer.
Methods and results:
We performed a retrospective analysis of candidates waitlisted for isolated PHT using 2007–2017 United Network for Organ Sharing datasets. Match runs identified candidates receiving IRD offers. Competing risks analysis was used to determine the mortality risk for those who declined an initial IRD offer, with stratified Cox regression to estimate the survival benefit associated with accepting initial IRD offers. Overall, 238/1067 (22.3%) initial IRD offers were accepted. Candidates accepting an IRD offer were younger (7.2 versus 9.8 years, p < 0.001), more often female (50 versus 41%, p = 0.021), more often listed status 1A (75.6 versus 61.9%, p < 0.001), and less likely to require mechanical bridge to PHT (16% versus 23%, p = 0.036). At 1- and 5-year follow-up, cumulative mortality was significantly lower for candidates who accepted than for those who declined (6% versus 13% 1-year mortality and 15% versus 25% 5-year mortality, p = 0.0033). Decline of an IRD offer was associated with an adjusted hazard ratio for mortality of 1.87 (95% CI 1.24, 2.81, p < 0.003).
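As a hedged illustration of the competing-risks step, the sketch below estimates the cumulative incidence of waitlist death, treating transplant as a competing event, with the Aalen-Johansen estimator; column names and event coding are hypothetical, and the stratified Cox regression is not reproduced here.

```python
# Hedged sketch: cumulative incidence of waitlist death with transplant as a
# competing event, using the Aalen-Johansen estimator. Column names and event
# coding are hypothetical.
import pandas as pd
from lifelines import AalenJohansenFitter

wl = pd.read_csv("unos_waitlist.csv")
# hypothetical event coding: 0 = censored, 1 = death/deterioration, 2 = transplant

ajf = AalenJohansenFitter()
ajf.fit(durations=wl["days_on_list"], event_observed=wl["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())   # cumulative incidence of death over time
```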
Conclusions:
IRD organ acceptance is associated with a substantial survival benefit. Increasing acceptance of IRD organs may provide a targetable opportunity to decrease waitlist mortality in PHT.
Diet is a modifiable risk factor for chronic disease and a potential modulator of telomere length (TL). The study aim was to investigate associations between diet quality and TL in Australian adults after a 12-week dietary intervention with an almond-enriched diet (AED). Participants (overweight/obese, 50–80 years) were randomised to an AED (n 62) or isoenergetic nut-free diet (NFD, n 62) for 12 weeks. Diet quality was assessed using a Dietary Guideline Index (DGI), applied to weighed food records, which consists of ten components reflecting adequacy, variety and quality of core food components and discretionary choices within the diet. TL was measured by quantitative PCR in samples of lymphocytes, neutrophils and whole blood. There were no significant associations between DGI scores and TL at baseline. Diet quality improved with AED and decreased with NFD after 12 weeks (change from baseline AED + 9·8 %, NFD − 14·3 %; P < 0·001). TL increased in neutrophils (+9·6 bp, P = 0·009), decreased in whole blood to a trivial extent (–12·1 bp, P = 0·001), and was unchanged in lymphocytes. Changes did not differ between intervention groups. There were no significant relationships between changes in diet quality scores and changes in lymphocyte, neutrophil or whole blood TL. The inclusion of almonds in the diet improved diet quality scores but had no impact on TL in mid-age to older Australian adults. Future studies should investigate the impact of more substantial dietary changes over longer periods of time.
Clinical trials are a fundamental tool in evaluating the safety and efficacy of new drugs, medical devices, and health system interventions. Clinical trial visits generally involve eligibility assessment, enrollment, intervention administration, data collection, and follow-up, with many of these steps performed during face-to-face visits between participants and the investigative team. Social distancing, which emerged as one of the mainstay strategies for reducing the spread of SARS-CoV-2, has presented a challenge to the traditional model of clinical trial conduct, causing many research teams to halt all in-person contacts except for life-saving research. Nonetheless, clinical research has continued during the pandemic because study teams adapted quickly, turning to virtual visits and other similar methods to complete critical research activities. The purpose of this special communication is to document this rapid transition to virtual methodologies at Clinical and Translational Science Awards hubs and highlight important considerations for future development. Looking beyond the pandemic, we envision that a hybrid approach, which implements remote activities when feasible but also maintains in-person activities as necessary, will be adopted more widely for clinical trials. There will always be a need for in-person aspects of clinical research, but future study designs will need to incorporate remote capabilities.
Antisaccade tasks can be used to index cognitive control processes (e.g. attention, behavioral inhibition, working memory, and goal maintenance) in people with brain disorders. Though diagnoses of schizophrenia (SZ), schizoaffective disorder (SAD), and bipolar I disorder with psychosis (BDP) are typically considered to be distinct entities, previous work shows patterns of cognitive deficits that differ in degree, rather than in kind, across these syndromes.
Methods
Large samples of individuals with psychotic disorders were recruited through the Bipolar-Schizophrenia Network on Intermediate Phenotypes 2 (B-SNIP2) study. Anti- and pro-saccade task performances were evaluated in 189 people with SZ, 185 people with SAD, 96 people with BDP, and 279 healthy comparison participants. Logistic functions were fitted to each group's antisaccade speed-performance tradeoff patterns.
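One plausible way to fit such logistic speed-performance tradeoff functions is nonlinear least squares; the sketch below fits a three-parameter logistic of accuracy against latency to synthetic values and is only an illustration of the general approach, not the study's exact parameterisation.

```python
# Hedged sketch: fitting a logistic speed-performance tradeoff curve (antisaccade
# accuracy as a function of response latency) by nonlinear least squares.
# Data are synthetic; the study's exact parameterisation may differ.
import numpy as np
from scipy.optimize import curve_fit

def logistic(latency_ms, upper, midpoint, slope):
    """Accuracy rises toward `upper` as latency increases past `midpoint`."""
    return upper / (1.0 + np.exp(-(latency_ms - midpoint) / slope))

latency = np.array([180, 220, 260, 300, 340, 380, 420, 460])            # ms, synthetic
accuracy = np.array([0.35, 0.45, 0.58, 0.70, 0.78, 0.83, 0.86, 0.88])   # proportion correct, synthetic

params, _ = curve_fit(logistic, latency, accuracy, p0=[0.9, 300.0, 50.0])
upper, midpoint, slope = params
print(f"asymptote={upper:.2f}, midpoint={midpoint:.0f} ms, slope={slope:.0f} ms")
```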
Results
Psychosis groups had higher antisaccade error rates than the healthy group, with SZ and SAD participants committing twice as many errors and BDP participants committing 1.5 times as many. Latencies on correctly performed antisaccade trials were longer in SZ and SAD than in healthy participants, although error-trial latencies were preserved. Parameters of the speed-performance tradeoff functions indicated that, compared with the healthy group, the SZ and SAD groups' optimal performance was characterized by more errors as well as less benefit from prolonged response latencies. Prosaccade metrics did not differ between groups.
Conclusions
With basic prosaccade mechanisms intact, the higher speed-performance tradeoff cost for antisaccade performance in psychosis cases indicates a deficit that is specific to the higher-order cognitive aspects of saccade generation.
The Eating Assessment in Toddlers FFQ (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised controlled trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation, and calibration factors were calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes, with estimated validity similar to that of other FFQs used in children under 2 years.
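For agreement and calibration statistics of the kind reported above, a weighted kappa on quartile categories and a simple calibration regression can be computed as in the hedged sketch below; the file and column names are hypothetical and the study's exact weighting scheme is not stated.

```python
# Hedged sketch: quartile agreement (weighted kappa) between FFQ and 24-h recall
# iron intakes, plus a simple calibration regression of 24HR on FFQ estimates.
# File and column names are hypothetical; the study's exact weighting scheme is not stated here.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("gumli_iron_9mo.csv")  # hypothetical: ffq_fe, recall_fe, sex, treatment

ffq_q = pd.qcut(df["ffq_fe"], 4, labels=False)     # quartile categories, 0-3
rec_q = pd.qcut(df["recall_fe"], 4, labels=False)
print("weighted kappa:", cohen_kappa_score(ffq_q, rec_q, weights="linear"))

calib = smf.ols("recall_fe ~ ffq_fe + sex + treatment", data=df).fit()
print("calibration factor (slope):", calib.params["ffq_fe"], " R^2:", calib.rsquared)
```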
There has been scant exploration of the social and emotional wellbeing (SEWB) of young Indigenous people who identify as LGBTQA+ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning, Asexual +). Given the vulnerability of this cohort living in Western settler colonial societies, wider investigation is called for to respond to their needs, experiences and aspirations. This paper summarizes existing research on the topic, highlighting the lack of scholarship at the intersection of youth, Indigeneity, LGBTQA+ identity and SEWB. The paper takes a holistic approach to provide a global perspective, drawing on an emerging body of literature and research driven by Indigenous scholars in settler colonial societies. The paper points to the importance of understanding converging colonial influences and ongoing contemporary factors, such as racism and marginalization, that impact the wellbeing of young Indigenous LGBTQA+ people.
Registry-based trials have emerged as a potentially cost-saving study methodology. Early estimates of cost savings, however, conflated the benefits associated with registry utilisation and those associated with other aspects of pragmatic trial designs, which might not all be as broadly applicable. In this study, we sought to build a practical tool that investigators could use across disciplines to estimate the ranges of potential cost differences associated with implementing registry-based trials versus standard clinical trials.
Methods:
We built simulation Markov models to compare unique costs associated with data acquisition, cleaning, and linkage under a registry-based trial design versus a standard clinical trial. We conducted one-way, two-way, and probabilistic sensitivity analyses, varying study characteristics over broad ranges, to determine thresholds at which investigators might optimally select each trial design.
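A hedged, much-simplified sketch of this kind of probabilistic sensitivity analysis: Monte Carlo draws of study characteristics feed per-arm data-handling cost formulas, and the share of simulations in which the registry-based design is cheaper is tallied. All distributions, rates and cost formulas below are illustrative assumptions, not the authors' model.

```python
# Hedged, simplified sketch of a probabilistic sensitivity analysis comparing
# data-related costs of a registry-based trial vs a standard clinical trial.
# All distributions and cost formulas below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
N_SIM = 10_000
COORD_RATE = 30.0 / 3600.0            # assumed coordinator cost, $ per second of work

def simulate_once():
    n_patients = rng.integers(200, 5000)           # study size
    elements_pp = rng.integers(20, 400)            # data elements per patient
    abstraction_s = rng.uniform(3.0, 60.0)         # seconds to hand-abstract one field
    registry_coverage = rng.uniform(0.3, 0.95)     # share of elements already in the registry
    linkage_cost = rng.uniform(5_000, 50_000)      # registry linkage / cleaning overhead

    standard = n_patients * elements_pp * abstraction_s * COORD_RATE
    registry = (n_patients * elements_pp * (1 - registry_coverage)
                * abstraction_s * COORD_RATE) + linkage_cost
    return standard - registry                      # positive => registry design is cheaper

savings = np.array([simulate_once() for _ in range(N_SIM)])
print("registry cheaper in", round((savings > 0).mean() * 100, 1), "% of simulations")
print("median data-cost difference: $", round(np.median(savings)))
```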
Results:
Registry-based trials were more cost-effective than standard clinical trials in 98.6% of simulations. Data-related cost savings ranged from $4300 to $600,000 depending on study characteristics. Cost differences were most sensitive to the number of patients in a study, the number of data elements per patient available in a registry, and the speed with which research coordinators could manually abstract data. Registry incorporation resulted in cost savings when as few as 3768 independent data elements were available and when manual data abstraction took as little as 3.4 seconds per data field.
Conclusions:
Registries offer important resources for investigators. When available, their broad incorporation may help the scientific community reduce the costs of clinical investigation. We offer here a practical tool for investigators to assess potential cost savings.