Cardiovascular diseases (CVDs) are the leading cause of death worldwide(1). As poor diet quality is a major contributor to CVD burden, dietary intervention is recommended as a first-line approach to CVD prevention and management(2). Personalised nutrition (PN) refers to individualised nutrition care based on genetic, phenotypic, medical, and/or behavioural and lifestyle characteristics(3). Medical nutrition therapy by dietitians shares many of these principles and can be categorised as PN(4). PN may be beneficial in improving CVD risk factors and diet; however, this has not previously been systematically reviewed. The aim of this systematic review was to evaluate the effectiveness of PN interventions on CVD risk factors and diet in adults at elevated CVD risk. A comprehensive search was conducted in March 2023 across Embase, Medline, CINAHL, PubMed, Scopus and Cochrane databases, focusing on randomised controlled trials (RCTs) published after 2000 in English. Included studies tested the effect of PN interventions on adults with elevated CVD risk factors (determined by anthropometric measures, clinical indicators, or high overall CVD risk). Risk of bias was assessed using the Academy of Nutrition and Dietetics Quality Criteria checklist. Random-effects meta-analyses were conducted to explore weighted mean differences (WMD) in change or final mean values for studies with comparable data (studies with dietary counselling interventions), for outcomes including blood pressure (BP), blood lipids, and anthropometric measurements. Sixteen articles reporting on 15 unique studies (n = 7676) met inclusion criteria and were extracted. Outcomes of participants (n = 40–564) with CVD risk factors including hyperlipidaemia (n = 5), high blood pressure (n = 3), BMI > 25 kg/m2 (n = 1) or multiple factors (n = 7) were reported. Results found potential benefits of PN on systolic blood pressure (SBP) (WMD −1.91 [95% CI −3.51, −0.31] mmHg), diastolic blood pressure (DBP) (WMD −1.49 [95% CI −2.39, −0.58] mmHg), triglycerides (TG) (WMD −0.18 [95% CI −0.34, −0.03] mmol/L), and dietary intake in individuals at high CVD risk. Results were inconsistent for plasma lipid and anthropometric outcomes. Dietary counselling PN interventions showed promising results on CVD risk factors in at-risk individuals. Further evidence for other personalisation methods, improved methodological quality and longer study durations are required in future PN interventions.
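The random-effects pooling described can be illustrated with a minimal DerSimonian-Laird sketch; the study values below are placeholders for illustration, not data from the review.

```python
import numpy as np

def random_effects_wmd(mean_diffs, ses):
    """DerSimonian-Laird random-effects pooling of weighted mean differences.

    mean_diffs: per-study mean differences (intervention minus control)
    ses: per-study standard errors of those differences
    """
    md = np.asarray(mean_diffs, dtype=float)
    se = np.asarray(ses, dtype=float)
    w = 1.0 / se**2                            # fixed-effect inverse-variance weights
    md_fixed = np.sum(w * md) / np.sum(w)
    q = np.sum(w * (md - md_fixed) ** 2)       # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(md) - 1)) / c)   # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                # random-effects weights
    wmd = np.sum(w_re * md) / np.sum(w_re)
    se_wmd = np.sqrt(1.0 / np.sum(w_re))
    return wmd, (wmd - 1.96 * se_wmd, wmd + 1.96 * se_wmd)

# Hypothetical SBP mean differences (mmHg) from three trials
wmd, ci = random_effects_wmd([-2.5, -1.0, -2.0], [1.1, 0.9, 1.4])
print(f"WMD = {wmd:.2f} mmHg, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```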
Current clinical guidelines for people at risk of heart disease in Australia recommend nutrition intervention in conjunction with pharmacotherapy(1). However, Australians living in rural and remote regions have less access to medical nutrition therapy (MNT) provided by Accredited Practising Dietitians (APDs) than their urban counterparts(2). The aim of the HealthyRHearts study was to trial the delivery of MNT by APDs using telehealth to eligible patients of General Practitioners (GPs) located in small to large rural towns in the Hunter New England region(3) of New South Wales, Australia. The study design was a 12-month pragmatic randomised controlled trial. The key outcome was reduced total cholesterol. The study was place-based, meaning many of the research team and APDs were based rurally, to ensure the context of the GPs and patients was already known. Eligible participants were those assessed as moderate-to-high risk of CVD by their GP. People in the intervention group received five MNT consults (totalling two hours) delivered via telehealth by APDs, and also answered a personalised nutrition questionnaire to guide their priorities and to support personalised dietary behaviour change during the counselling. Both intervention and control groups received usual care from their GP and were provided access to the Australian Eating Survey (Heart version), a 242-item online food frequency questionnaire with technology-supported personalised nutrition reports that evaluated intake relative to heart healthy eating principles. Of the 192 people who consented to participate, 132 were eligible due to their moderate-to-high risk. Pre-post participant medication use with a registered indication(4) for hypercholesterolemia, hypertension and glycemic control was documented according to class and strength (defined daily dose: DDD)(5). Nine GP practices (with 91 participants recruited) were randomised to the intervention group and seven practices (41 participants) were randomised to the control group. Intervention participants attended 4.3 ± 1.4 of the 5 dietetic consultations offered. Of the 132 people with baseline clinical chemistry, 103 also provided a 12-month sample. Mean total cholesterol at baseline was 4.97 ± 1.13 mmol/L for both groups, with a 12-month reduction of 0.26 ± 0.77 for intervention and 0.28 ± 0.79 for control (p = 0.90, unadjusted value). Median (IQR) number of medications for the intervention group was 2 (1–3) at both baseline and 12 months (p = 0.78), with 2 (1–3) and 3 (2–3) for the control group respectively. Combined DDD of all medications was 2.1 (0.5–3.8) and 2.5 (0.75–4.4) at baseline and 12 months (p = 0.77) for the intervention group and 2.7 (1.5–4.0) and 3.0 (2.0–4.5) for the control group (p = 0.30). Results suggest that medications were a significant contributor to the management of total cholesterol. Further analysis is required to evaluate changes in total cholesterol attributable to medication prescription relative to the MNT counselling received by the intervention group.
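The combined-DDD summary relies on expressing each prescribed dose in units of the WHO defined daily dose and summing within participant. A minimal sketch of that aggregation, with hypothetical column names and toy records:

```python
import pandas as pd

# Hypothetical long-format medication records: one row per drug per participant
meds = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 2],
    "timepoint":      ["baseline", "baseline", "baseline", "12m", "12m"],
    "daily_dose_mg":  [40, 5, 20, 20, 10],
    "who_ddd_mg":     [20, 5, 20, 20, 10],   # WHO defined daily dose per drug
})

# Dose in DDD units, summed within participant and timepoint
meds["ddd_units"] = meds["daily_dose_mg"] / meds["who_ddd_mg"]
combined = (meds.groupby(["participant_id", "timepoint"])["ddd_units"]
                .sum()
                .rename("combined_DDD"))
print(combined.groupby("timepoint").median())  # median combined DDD per timepoint
```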
Interest in the consumption of food containing live microbes (LM) as a component of dietary patterns has accelerated, due to potential positive contributions to health and chronic disease risk, including cardiovascular disease (CVD)(1,2). There are different patterns of LM consumption, including through the intake of probiotics or fermented foods or via a broader spectrum of foods that may harbour microbes, such as raw, unpeeled fruits and vegetables(3). To date, no study has quantitatively assessed potential intake of LM in a sample of Australians. The aim was to quantify the presence of LM in common foods and beverages consumed in Australia, using the Australian Eating Survey® (AES) and AES-Heart®(4,5) food frequency questionnaires (FFQs) as the dietary assessment tool. Quantification of potential live microbial content (per gram) was conducted in accordance with the methodology outlined by Marco et al.(3). Briefly, foods were assigned to categories with LM ranges defined as low (Low; < 10⁴ CFU/g), medium (Medium; 10⁴–10⁷ CFU/g), or high (High; > 10⁷ CFU/g)(3). These categories were based on the expected prevalence of viable microorganisms within different food matrices. Specifically, pasteurised food products are characterised as having microbial concentrations below 10⁴ CFU/g (Low). In contrast, fresh fruits and vegetables consumed unpeeled exhibit a microbial range considered medium (10⁴–10⁷ CFU/g), while unpasteurised fermented foods and probiotic-supplemented foods exhibit significantly higher microbial content (> 10⁷ CFU/g; High). Based on this methodology, the estimated quantities of live microbes in 400 foods and beverages (including individual products and mixed dishes) within the AES and AES-Heart® FFQs were determined and summarised across 22 food groups using the 2-digit codes from the 2011–2013 AUSNUT database(6). Preliminary results indicate the Low group was the most represented: 369 of the 400 foods belonged to this category. The food groups representing the highest percentages in the Low group were vegetable products and dishes (13.8%), followed by meat, poultry, and game products and dishes (13.6%). The Medium group comprised 25 items, with the most representative food group being fruit products and dishes (48%). In the High group, the representative food groups were dairy and meat substitutes (e.g., soy yoghurt; 66.7%) and milk products and dishes (33.3%). The creation of this database will facilitate new research opportunities to investigate relationships between intake of live microbes and health outcomes, including CVD. Future research into how dietary patterns rich in live microbes relate to chronic disease risk factors, such as BMI, blood pressure, plasma lipids and glucose, in the Australian population could offer new insights into risk factor management through LM dietary interventions.
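The category assignment is a simple thresholding rule; a minimal sketch of the Marco et al.(3) cut-offs, with hypothetical CFU/g values chosen only to exercise each branch:

```python
def live_microbe_category(cfu_per_g: float) -> str:
    """Assign the Marco et al. live-microbe category from estimated CFU/g."""
    if cfu_per_g < 1e4:
        return "Low"      # e.g., pasteurised products
    if cfu_per_g <= 1e7:
        return "Medium"   # e.g., unpeeled fresh fruits and vegetables
    return "High"         # e.g., unpasteurised fermented or probiotic foods

# Illustrative, assumed values (not from the database described above)
examples = [("pasteurised milk", 1e2),
            ("raw apple, unpeeled", 1e5),
            ("yoghurt with live cultures", 1e8)]
for food, cfu in examples:
    print(f"{food}: {live_microbe_category(cfu)}")
```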
Identifying reliable blood pressure biomarkers is essential for understanding how dietary interventions might support a reduction in hypertension. Metabolomics, which involves the analysis of small molecules in biological samples(1), offers a valuable tool for uncovering metabolic biomarkers linked to both dietary patterns and blood pressure, providing insights for more effective dietary strategies to manage or prevent hypertension. The aim was to evaluate associations between plasma and urinary metabolite concentrations and blood pressure measures (systolic blood pressure [SBP] and diastolic blood pressure [DBP]) in healthy Australian adults. This cross-sectional secondary analysis used baseline data from a randomised, cross-over feeding trial(2). Plasma and urinary metabolomic data were generated using Ultra-high Performance Liquid Chromatography-Tandem Mass Spectrometry (UHPLC-MS/MS) through Metabolon Inc.'s (Morrisville, USA) Global Discovery Panel. Blood pressure was assessed in clinic using the Uscom BP+ supra-systolic oscillometric central blood pressure device, with the cuff positioned on the upper arm at the strongest pulse signal location. Participants sat relaxed and comfortably for 5 minutes before their measurements were taken. They remained seated with legs uncrossed, feet flat on the floor, and were instructed to maintain even breathing throughout the tests. Blood pressure was measured with three consecutive readings taken from the supported left arm, with a 1-minute rest between each reading. The first reading was discarded, and the average of the remaining two was used as the final measurement. Metabolite concentrations were log-transformed. Associations between blood pressure measures and urinary or plasma metabolites were evaluated using linear regression models, adjusting for age and sex. Baseline data from 34 healthy Australian adults (mean age 38.4 ± 18.1 years, 53% female) were included. After adjusting for multiple comparisons using the Benjamini-Hochberg procedure with a significance threshold of q < 0.2, a negative association between two urinary metabolites (gamma-glutamyl histidine and gamma-glutamyl phenylalanine) and DBP was identified. In addition, 32 plasma metabolites were associated with SBP, with 18 showing a negative association, including 1,2-dilinoleoyl-GPC (18:2/18:2) and 1-linoleoyl-GPC (18:2), and 14 showing a positive association (e.g., beta-hydroxyisovalerate and 3-hydroxyisobutyrate). Potential mechanisms based on existing research that might explain these associations include the role of gamma-glutamyl peptides in lowering DBP by reducing oxidative stress and improving endothelial function(3). In contrast, 3-hydroxyisobutyrate may elevate blood pressure due to metabolic disturbances linked to impaired branched-chain amino acid catabolism(4). Furthermore, 1,2-dilinoleoyl-GPC and 1-linoleoyl-GPC both contain linoleic acid, which could contribute to lowering SBP by mitigating vascular inflammation(5). Although some of these metabolites have been implicated in blood pressure regulation in prior research, others revealed new associations. These findings suggest potential candidate nutritional biomarkers for blood pressure, but further research is needed to confirm their reproducibility and causal role in blood pressure regulation.
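The multiple-comparison step described (Benjamini-Hochberg with q < 0.2) is directly reproducible with statsmodels; the p-values below are placeholders, not values from this analysis.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Placeholder p-values from per-metabolite linear regressions of BP on
# log-transformed metabolite concentration, adjusted for age and sex
pvals = np.array([0.001, 0.004, 0.03, 0.20, 0.45, 0.76])

# Benjamini-Hochberg FDR control at q < 0.2, as in the analysis described
reject, qvals, _, _ = multipletests(pvals, alpha=0.2, method="fdr_bh")
for p, q, sig in zip(pvals, qvals, reject):
    print(f"p = {p:.3f} -> q = {q:.3f}{' (significant)' if sig else ''}")
```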
Approximately 15% of Australia's workforce are shift workers, who are at greater risk for obesity and related conditions, such as type 2 diabetes and cardiovascular disease(1,2,3). While current guidelines for obesity management prioritise diet-induced weight loss as a treatment option, there are limited weight-loss studies involving night shift workers and no current exploration of the factors associated with engagement in weight-loss interventions. The Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study was a randomised controlled trial that compared three 24-week weight-loss interventions: continuous energy restriction (CER), and 500-calorie intermittent fasting (IF) for two days per week, either during the day (IF:2D) or during the night shift (IF:2N). The current study used a convergent mixed-methods experimental design to: 1) explore the relationship between participant characteristics, dietary intervention group and time to dropout for the SWIFt study (quantitative); and 2) understand why some participants were more likely to drop out of the intervention (qualitative). Participant characteristics included age, gender, ethnicity, occupation, shift schedule, number of night shifts per four weeks, number of years in shift work, weight at baseline, weight change at four weeks, and quality of life at baseline. A Cox regression model was used to specify time to dropout from the intervention as the dependent variable, and purposive selection was used to determine predictors for the model. Semi-structured interviews at baseline and 24 weeks were conducted and audio diaries every two weeks were collected from participants using a maximum variation sampling approach, and analysed using the five steps of framework analysis(4). A total of 250 participants were randomised to the study between October 2019 and February 2022. Two participants were excluded from analysis due to retrospective ineligibility. Twenty-nine percent (n = 71) of participants dropped out of the study over the 24-week intervention. Greater weight at baseline, fewer years working shift work, lower weight change at four weeks, and being a woman (compared to a man) were associated with a significantly increased rate of dropout from the study (p < 0.05). Forty-seven interviews from 33 participants were conducted and 18 participants completed audio diaries. Lack of time, fatigue and emotional eating were barriers more frequently reported by women. Participants with a higher weight at baseline more frequently reported fatigue and emotional eating as barriers, and limited guidance on non-fasting days as a barrier for the IF interventions. This study provides important considerations for refining shift-worker weight-loss interventions for future implementation in order to increase engagement and mitigate the adverse health risks experienced by this essential workforce.
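A hedged sketch of the time-to-dropout model described, using the lifelines package; the data frame, column names and toy values are hypothetical, not SWIFt data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per participant, follow-up in weeks
df = pd.DataFrame({
    "weeks_followed":   [24, 8, 24, 4, 16, 24, 12, 24],  # censored at 24 weeks
    "dropped_out":      [0, 1, 0, 1, 1, 0, 1, 0],        # event indicator
    "baseline_weight":  [92.0, 110.5, 85.3, 120.1, 98.7, 88.0, 105.2, 95.4],
    "years_shift_work": [10, 2, 15, 1, 5, 12, 3, 8],
})

# Cox proportional-hazards model of time to dropout
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_followed", event_col="dropped_out")
cph.print_summary()  # hazard ratios for each predictor
```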
Emerging research has highlighted a relationship between diet and genetics, suggesting that individuals may benefit more from personalised dietary recommendations based on their genetic risk for cardiovascular disease (CVD)(1,2). This study aimed to: (1) measure knowledge of genetics among healthcare professionals (HCPs) working in CVD, (2) identify HCPs' attitudes to using genetic risk to tailor dietary interventions, and (3) identify perceived barriers and enablers to implementing genetics to tailor dietary interventions. In a mixed-methods study, Australian HCPs (dietitians and AHPRA-registered healthcare professionals) working with people with CVD were invited to complete an anonymous online survey (REDCap) and an optional interview. Recruitment occurred through social media and relevant professional organisations. Survey questions were underpinned by the theoretical domains framework(3) and data were synthesised descriptively. Semi-structured interviews were undertaken via Zoom. Interview responses were analysed using Braun and Clarke's thematic analysis methodology(4). Survey respondents (n = 63, 89% female, mean age 42 ± 14 years) were primarily dietitians (83%), with ≥ 10 years of experience (56%), and most spent at least 20% of their time working with people with CVD (n = 55, 87%). Approximately half of respondents were aware that genetic testing for CVD exists (n = 36) and always assess family history of CVD (n = 31). Few respondents reported using genetic testing (n = 5, 8%) or felt confident interpreting and using genetic testing (n = 7, 11%) in practice. Respondents were interested in incorporating genetics into their practice to tailor dietary advice (n = 44, 70%). Primary barriers to using genetic testing included financial costs to patients and negative implications for some patients. Almost all respondents agreed genetic testing will allow for more targeted and personalised approaches for prevention and management of CVD (94%). From the interviews (n = 15, 87% female, 43 ± 17 years, 87% dietitians), three themes were identified: (1) 'On the periphery of care'—HCPs are aware of the role of genetics in health and are interested in knowing more, but it is not yet part of usual practice; (2) 'A piece of the puzzle'—genetic testing could be a tool to help personalise, prioritise and motivate participants; and (3) 'Whose role is it?'—there is uncertainty regarding HCP roles and exactly whose role it is to educate patients. Healthcare professionals are interested in using genetics to tailor dietary advice for CVD, but potential implications for patients need to be considered. Upskilling is required to increase HCPs' knowledge and confidence in this area. Further clarity regarding HCP roles in patient education is needed before this can be implemented in practice.
We provide an assessment of the Infinity Two fusion pilot plant (FPP) baseline plasma physics design. Infinity Two is a four-field period, aspect ratio $A = 10$, quasi-isodynamic stellarator with improved confinement appealing to a max-$J$ approach, elevated plasma density and high magnetic fields ($\langle B\rangle = 9$ T). Here $J$ denotes the second adiabatic invariant. At the envisioned operating point ($800$ MW deuterium-tritium (DT) fusion), the configuration has robust magnetic surfaces based on magnetohydrodynamic (MHD) equilibrium calculations and is stable to both local and global MHD instabilities. The configuration has excellent confinement properties with small neoclassical transport and low bootstrap current ($|I_{\text{bootstrap}}| \sim 2$ kA). Calculations of collisional alpha-particle confinement in a DT FPP scenario show small energy losses to the first wall ($<1.5\,\%$) and stable energetic particle/Alfvén eigenmodes at high ion density. Low turbulent transport is produced using a combination of density profile control consistent with pellet fueling and reduced stiffness to turbulent transport via three-dimensional shaping. Transport simulations with the T3D-GX-SFINCS code suite with self-consistent turbulent and neoclassical transport predict that the DT fusion power $P_{\text{fus}}=800$ MW operating point is attainable with high fusion gain ($Q=40$) at volume-averaged electron densities $n_e\approx 2 \times 10^{20}$ m$^{-3}$, below the Sudo density limit. Additional transport calculations show that an ignited ($Q=\infty$) solution is available at slightly higher density ($2.2 \times 10^{20}$ m$^{-3}$) with $P_{\text{fus}}=1.5$ GW. The magnetic configuration is defined by a magnetic coil set with sufficient room for an island divertor, shielding and blanket solutions with tritium breeding ratios (TBR) above unity. An optimistic estimate for the gas-cooled solid breeder design, the helium-cooled pebble bed, is TBR $\sim 1.3$. Infinity Two satisfies the physics requirements of a stellarator fusion pilot plant.
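For orientation, the quoted figures are mutually consistent under the standard definition of fusion gain; this is only a consistency check, not a statement of the paper's power accounting:
$$Q \equiv \frac{P_{\text{fus}}}{P_{\text{aux}}} \quad\Rightarrow\quad P_{\text{aux}} = \frac{800\ \text{MW}}{40} = 20\ \text{MW},$$
with the ignited solution corresponding to $P_{\text{aux}} \to 0$, i.e. $Q \to \infty$.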
The selection, design and optimization of a suitable blanket configuration for an advanced high-field stellarator concept is seen as a key feasibility issue and has been incorporated as a vital and necessary part of the Infinity Two fusion pilot plant physics basis. The focus of this work was to identify a baseline blanket which can be rapidly deployed for Infinity Two while also maintaining flexibility and opportunities for higher performing concepts later in development. Results from this analysis indicate that gas-cooled solid breeder designs such as the helium-cooled pebble bed (HCPB) are the most promising concepts, primarily motivated by the neutronics performance at applicable blanket build depths and the relatively mature technology basis. The lithium lead (PbLi) family of concepts, particularly the dual-cooled lithium lead, offers a compelling alternative to solid blanket concepts, as these have synergistic developmental pathways while simultaneously mitigating much of the technical risk of those designs. Homogenized three-dimensional neutronics analysis of the Infinity Two configuration indicates that the HCPB achieves an adequate tritium breeding ratio (TBR) (1.30, which provides sufficient margin at low engineering fidelity) and near-appropriate shielding of the magnets (average fast fluence of 1.3 ${\times}$ 10$^{18}$ n cm$^{-2}$ per full-power year). The thermal analysis indicates that reasonably high thermal efficiencies (greater than 30%) are readily achievable with the HCPB paired with a simple Rankine cycle using reheat. Finally, the tritium fuel cycle analysis for Infinity Two shows viability, with anticipated operational inventories of less than one kilogram (approximately 675 g) and a required TBR (TBR$_{\textrm{req}}$) of less than 1.05 to maintain fuel self-sufficiency (approximately 1.023 for a driver blanket with no inventory doubling). Although further optimization and engineering design are still required, at the physics basis stage all initial targets have been met for the Infinity Two configuration.
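As context for these fuel-cycle numbers, the tritium burn rate implied by an 800 MW DT operating point follows from the 17.6 MeV yield per DT reaction. A back-of-envelope sketch, not the paper's fuel-cycle model:

```python
# Back-of-envelope tritium burn rate for an 800 MW DT plant
E_DT_J = 17.6e6 * 1.602e-19        # energy released per DT reaction (J)
P_fus = 800e6                      # fusion power (W)
m_T = 3.016 * 1.661e-27            # tritium mass consumed per reaction (kg)

reactions_per_s = P_fus / E_DT_J   # ~2.8e20 reactions per second
burn_g_per_day = reactions_per_s * m_T * 86400 * 1e3
print(f"Tritium burned: {burn_g_per_day:.0f} g/day")  # ~120 g/day

# With an achieved TBR of ~1.30 against a required ~1.05, the nominal
# breeding surplus over consumption is roughly:
surplus_g_per_day = burn_g_per_day * (1.30 - 1.0)
print(f"Nominal breeding surplus: {surplus_g_per_day:.0f} g/day")
```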
Objective:
To quantify the impact of patient- and unit-level risk adjustment on infant hospital-onset bacteremia (HOB) standardized infection ratio (SIR) ranking.
Design:
A retrospective, multicenter cohort study.
Setting and participants:
Infants admitted to 284 neonatal intensive care units (NICUs) in the United States between 2016 and 2021.
Methods:
Expected HOB rates and SIRs were calculated using four adjustment strategies: birthweight (model 1), birthweight and postnatal age (model 2), birthweight and NICU complexity (model 3), and birthweight, postnatal age, and NICU complexity (model 4). Sites were ranked according to the unadjusted HOB rate, and these rankings were compared to rankings based on the four adjusted SIR models.
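The SIR construction described, observed events divided by model-expected events, can be sketched minimally; the NICU names and counts below are hypothetical:

```python
import pandas as pd

# Hypothetical NICU-level data: observed HOB events and expected counts
# from a risk-adjustment model (e.g., birthweight + postnatal age)
nicus = pd.DataFrame({
    "nicu": ["A", "B", "C", "D"],
    "observed_hob": [12, 4, 9, 2],
    "expected_hob": [8.0, 5.1, 9.3, 4.2],
})

# Standardized infection ratio: observed / expected; SIR > 1 means more
# events than the adjustment model predicts
nicus["sir"] = nicus["observed_hob"] / nicus["expected_hob"]
nicus["rank"] = nicus["sir"].rank(method="min")
print(nicus.sort_values("sir"))
```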
Results:
Compared to unadjusted HOB rate ranking (smallest to largest), the number and proportion of NICUs that left the fourth quartile (worst-performing) following adjustments were as follows: adjusted for birthweight (16, 22.5%); birthweight and postnatal age (19, 26.8%); birthweight and NICU complexity (22, 31.0%); birthweight, postnatal age and NICU complexity (23, 32.4%). Comparing NICUs that moved into the better-performing quartiles after birthweight adjustment to those that remained in the better-performing quartiles regardless of adjustment, the median percentage of low birthweight infants was 17.1% (interquartile range [IQR]: 15.8, 19.2) vs 8.7% (IQR: 4.8, 12.6), and the median percentage of infants who died was 2.2% (IQR: 1.8, 3.1) vs 0.5% (IQR: 0.01, 12.0), respectively.
Conclusion:
Adjusting for patient- and unit-level complexity moved one-third of NICUs in the worst-performing quartile into a better-performing quartile. Risk adjustment may allow for a more accurate comparison across units with varying levels of patient acuity and complexity.
Next-generation high-power laser facilities are expected to generate hundreds-of-MeV proton beams and operate at multi-Hz repetition rates, presenting opportunities for medical, industrial and scientific applications requiring bright pulses of energetic ions. Characterizing the spectro-spatial profile of these ions at high repetition rates in the harsh radiation environments created by laser–plasma interactions remains challenging but is paramount for further source development. To address this, we present a compact scintillating fiber imaging spectrometer based on the tomographic reconstruction of proton energy deposition in a layered fiber array. Modeling indicates that spatial resolution of approximately 1 mm and energy resolution of less than 10% at proton energies of more than 20 MeV are readily achievable with existing 100 μm diameter fibers. Measurements with a prototype beam-profile monitor using 500 μm fibers demonstrate active readout with immunity to electromagnetic pulses and sensitivity below 100 Gy. The performance of the full instrument concept is explored with Monte Carlo simulations, accurately reconstructing a proton beam with a multiple-component spectro-spatial profile.
The expensive-tissue hypothesis (ETH) posited a brain–gut trade-off to explain how humans evolved large, costly brains. Versions of the ETH interrogating gut or other body tissues have been tested in non-human animals, but not humans. We collected brain and body composition data in 70 South Asian women and used structural equation modelling with instrumental variables, an approach that handles threats to causal inference including measurement error, unmeasured confounding and reverse causality. We tested a negative, causal effect of the latent construct 'nutritional investment in brain tissues' (MRI-derived brain volumes) on the construct 'nutritional investment in lean body tissues' (organ volume and skeletal muscle). We also predicted a negative causal effect of the brain latent on fat mass. We found negative causal estimates for both brain and lean tissue (−0.41; 95% CI: −1.13, 0.23) and brain and fat (−0.56; 95% CI: −2.46, 2.28). These results, although inconclusive, are consistent with theory and prior evidence of the brain trading off with lean and fat tissues, and they are an important step in assessing empirical evidence for the ETH in humans. Analyses using larger datasets, genetic data and causal modelling are required to build on these findings and expand the evidence base.
Soybean [Glycine max (L.) Merr.] plants that lack resistance to auxin herbicides (i.e., not genetically modified for resistance) have well-documented responses to those herbicides, with yield loss being probable. When a soybean field is injured by auxin herbicides, regulatory authorities often collect a plant sample from that field. This research attempted to simulate soybean exposures due to accidental mixing of incorrect herbicides, tank contamination, or particle drift, and examined whether analytical testing of residues of aminocyclopyrachlor (ACP), aminopyralid, 2,4-D, or dicamba on soybean would be related to the visual observations and yield responses from these herbicides. ACP and aminopyralid were applied to R1 soybean at 0.1, 1, and 10 g ae ha⁻¹; 2,4-D and dicamba were applied at 1, 10, and 100 g ae ha⁻¹. Visual evaluations and plant sample collections were undertaken at 1, 3, 7, 14, and 21 d after treatment (DAT), and yield was measured. The conservative limits of detection for the four herbicides in this project were 5, 10, 5, and 5 ng g⁻¹ fresh weight of soybean for ACP, aminopyralid, 2,4-D, and dicamba, respectively. Many of the plant samples were non-detects, especially at the lower application doses. All herbicide concentrations declined rapidly after application, and many reached nondetectable levels by 14 DAT. All herbicide treatments caused soybean injury, although the response to 2,4-D was markedly lower than the responses to the other three herbicides. There was no apparent correlation between herbicide concentrations (which were declining over time) and the observed soybean injury (which was increasing over time or staying the same). This research indicates that plant samples should be collected as soon as possible after soybean exposure to auxin herbicides.
Adsorption of Cu²⁺ and Co²⁺ by synthetic imogolite, synthetic allophanes with a range of SiO₂/Al₂O₃ ratios, and allophanic clay fractions from volcanic ash soils was measured in an ionic medium of 0.05 M Ca(NO₃)₂. The effect of pH (and metal concentration) on adsorption was qualitatively similar for the synthetic and natural allophanes, with relatively minor changes in behavior caused by variable SiO₂/Al₂O₃ ratios. Cu and Co were chemisorbed by allophane at pH 5.0–5.5 and 6.9–7.2 (pH values for the 50% adsorption level), respectively, with concomitant release of 1.6–1.9 protons/metal ion adsorbed. Quantitatively, adsorption by imogolite was less than that by the allophanes, presumably because of fewer sites available for chemisorption on the tubular structure of imogolite. Electron spin resonance studies of the imogolite and allophanes revealed that Cu²⁺ was adsorbed as a monomer on two types of surface sites. The preferred sites were likely adjacent AlOH groups binding Cu²⁺ by a binuclear mechanism; weaker bonding occurred at isolated AlOH or SiOH groups. These chemisorbed forms of Cu²⁺ were readily extracted by EDTA, CH₃COOH, and metals capable of specific adsorption, but were not exchangeable. In addition, the H₂O and/or OH⁻ ligands of chemisorbed Cu²⁺ were readily displaced by NH₃, with the formation of ternary Cu-ammonia-surface complexes.
The negative surface charge of synthetic allophanes with a range of Si/Al ratios decreased and positive charge increased with increasing alumina content at a given pH. The phosphate adsorption capacity also increased with increasing Al content. That this relationship between composition and chemical reactivity was not found for the soil allophanes is attributed to the presence of specifically adsorbed organic or inorganic anions on the natural material. Both synthetic and natural imogolites had a much lower capacity to adsorb phosphate than the allophanes and adsorbed anomalously high amounts of Cl⁻ and ClO₄⁻ at high pH. It is proposed that intercalation of salt occurs in imogolite, although electron spin resonance studies using spin probes failed to reveal the trapping of small organic molecules in imogolite tubes. These spin probes in the carboxylated form did, however, suggest an electrostatic retention of carboxylate by imogolite and a more specific adsorption by allophane involving ligand exchange of surface hydroxyl. The results illustrate the inherent differences in charge and surface properties of allophane and imogolite despite the common structural unit which the two minerals incorporate.
High-quality evidence is lacking for the impact on healthcare utilisation of short-stay alternatives to psychiatric inpatient services for people experiencing acute and/or complex mental health crises (known in England as psychiatric decision units [PDUs]). We assessed the extent to which changes in psychiatric hospital and emergency department (ED) activity were explained by implementation of PDUs in England using a quasi-experimental approach.
Methods
We conducted an interrupted time series (ITS) analysis of weekly aggregated data pre- and post-PDU implementation in one rural and two urban sites using segmented regression, adjusting for temporal and seasonal trends. Primary outcomes were changes in the number of voluntary inpatient admissions to (acute) adult psychiatric wards and number of ED adult mental health-related attendances in the 24 months post-PDU implementation compared to that in the 24 months pre-PDU implementation.
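A hedged sketch of the segmented-regression ITS model described, with an immediate level change and a trend change at PDU implementation; the weekly counts are simulated placeholders and the seasonal adjustment terms are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(208)                 # ~24 months pre + ~24 months post
post = (weeks >= 104).astype(int)      # indicator for the post-PDU period

df = pd.DataFrame({
    "admissions": rng.poisson(30, size=208),           # placeholder weekly counts
    "time": weeks,                                     # underlying secular trend
    "post": post,                                      # level change at implementation
    "time_since": np.where(post == 1, weeks - 104, 0), # slope change post-PDU
})

# Segmented regression: baseline trend, immediate level shift, trend change
model = smf.ols("admissions ~ time + post + time_since", data=df).fit()
print(model.summary().tables[1])
```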
Results
The two PDUs (one urban and one rural) with longer (average) stays and high staff-to-patient ratios observed post-PDU decreases in the pattern of weekly voluntary psychiatric admissions relative to pre-PDU trend (Rural: −0.45%/week, 95% confidence interval [CI] = −0.78%, −0.12%; Urban: −0.49%/week, 95% CI = −0.73%, −0.25%); PDU implementation in each was associated with an estimated 35–38% reduction in total voluntary admissions in the post-PDU period. The (urban) PDU with the highest throughput, lowest staff-to-patient ratio and shortest average stay observed a 20% reduction in the level of mental health-related ED attendances post-PDU (−20.4%, 95% CI = −29.7%, −10.0%), although there was little impact on the long-term trend. Pooled analyses across sites indicated a significant reduction in the number of voluntary admissions following PDU implementation (−16.6%, 95% CI = −23.9%, −8.5%) but no significant (long-term) trend change (−0.20%/week, 95% CI = −0.74%, 0.34%) and no short- (−2.8%, 95% CI = −19.3%, 17.0%) or long-term (0.08%/week, 95% CI = −0.13%, 0.28%) effects on mental health-related ED attendances. Findings were largely unchanged in secondary (ITS) analyses that considered the introduction of other service initiatives in the study period.
Conclusions
The introduction of PDUs was associated with an immediate reduction of voluntary psychiatric inpatient admissions. The extent to which PDUs change long-term trends of voluntary psychiatric admissions or impact on psychiatric presentations at ED may be linked to their configuration. PDUs with a large capacity, short length of stay and low staff-to-patient ratio can positively impact ED mental health presentations, while PDUs with longer length of stay and higher staff-to-patient ratios have potential to reduce voluntary psychiatric admissions over an extended period. Taken as a whole, our analyses suggest that when establishing a PDU, consideration of the primary crisis-care need that underlies the creation of the unit is key.
Therapeutics targeting frontotemporal dementia (FTD) are entering clinical trials. There are challenges to conducting these studies, including the relative rarity of the disease. Remote assessment tools could increase access to clinical research and pave the way for decentralized clinical trials. We developed the ALLFTD Mobile App, a smartphone application that includes assessments of cognition, speech/language, and motor functioning. The objectives were to determine the feasibility and acceptability of collecting remote smartphone data in a multicenter FTD research study and evaluate the reliability and validity of the smartphone cognitive and motor measures.
Participants and Methods:
A diagnostically mixed sample of 207 participants with FTD or from familial FTD kindreds (CDR®+NACC-FTLD=0 [n=91]; CDR®+NACC-FTLD=0.5 [n=39]; CDR®+NACC-FTLD>1 [n=39]; unknown [n=38]) were asked to remotely complete a battery of tests on their smartphones three times over two weeks. Measures included five executive functioning (EF) tests, an adaptive memory test, and participant experience surveys. A subset completed smartphone tests of balance at home (n=31) and a finger tapping test (FTT) in the clinic (n=11). We analyzed adherence (percentage of available measures that were completed) and user experience. We evaluated Spearman-Brown split-half reliability (100 iterations) using the first available assessment for each participant. We assessed test-retest reliability across all available assessments by estimating intraclass correlation coefficients (ICC). To investigate construct validity, we fit regression models testing the association of the smartphone measures with gold-standard neuropsychological outcomes (UDS3-EF composite [Staffaroni et al., 2021], CVLT3-Brief Form [CVLT3-BF] Immediate Recall, mechanical FTT), measures of disease severity (CDR®+NACC-FTLD Box Score & Progressive Supranuclear Palsy Rating Scale [PSPRS]), and regional gray matter volumes (cognitive tests only).
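A hedged sketch of the split-half procedure described, random item splits with Spearman-Brown correction averaged over iterations; the score matrix below is simulated, not study data:

```python
import numpy as np

def spearman_brown_split_half(items: np.ndarray, n_iter: int = 100, seed: int = 0) -> float:
    """Mean Spearman-Brown-corrected split-half reliability over random splits.

    items: participants x items score matrix
    """
    rng = np.random.default_rng(seed)
    n_items = items.shape[1]
    estimates = []
    for _ in range(n_iter):
        order = rng.permutation(n_items)               # random split of items
        half_a = items[:, order[: n_items // 2]].sum(axis=1)
        half_b = items[:, order[n_items // 2 :]].sum(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))              # Spearman-Brown correction
    return float(np.mean(estimates))

# Simulated scores: 200 participants x 20 items sharing one ability factor
rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
scores = ability + rng.normal(scale=1.0, size=(200, 20))
print(f"Split-half reliability ~ {spearman_brown_split_half(scores):.2f}")
```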
Results:
Participants completed 70% of tasks. Most reported that the instructions were understandable (93%), considered the time commitment acceptable (97%), and were willing to complete additional assessments (98%). Split-half reliability was excellent for the executive functioning tests (r's = 0.93–0.99) and good for the memory test (r = 0.78). Test-retest reliabilities ranged from acceptable to excellent for the cognitive tasks (ICC: 0.70–0.96) and were excellent for the balance test (ICC = 0.97) and good for FTT (ICC = 0.89). Smartphone EF measures were strongly associated with the UDS3-EF composite (β's = 0.6–0.8, all p < .001), and the memory test was strongly correlated with total immediate recall on the CVLT3-BF (β = 0.7, p < .001). Smartphone FTT was associated with mechanical FTT (β = 0.9, p = .02), and greater acceleration on the balance test was associated with more motor features (β = 0.6, p = .02). Worse performance on all cognitive tests was associated with greater disease severity (β's = 0.5–0.7, all p < .001). Poorer performance on the smartphone EF tasks was associated with smaller frontoparietal/subcortical volume (β's = 0.4–0.6, all p < .015) and worse memory scores with smaller hippocampal volume (β = 0.5, p < .001).
Conclusions:
These results suggest remote digital data collection of cognitive and motor functioning in FTD research is feasible and acceptable. These findings also support the reliability and validity of unsupervised ALLFTD Mobile App cognitive tests and provide preliminary support for the motor measures, although further study in larger samples is required.