As the Arctic warms and growing seasons start to lengthen, governments and producers are speculating about northern “climate-driven agricultural frontiers” as a potential solution to food insecurity. One of the central ecological factors in northern spaces, however, is permafrost (perennially frozen ground), which can drive cascading environmental changes upon thaw. Considering the land requirements for expanded agriculture and the unique challenges of northern farming, national and subnational governments are grappling with and facilitating this speculative boom in different ways. Analysing agricultural land use policy instruments from the US State of Alaska and the Republic of Sakha (Yakutia) in Russia, this paper investigates whether and how permafrost factors into their legal frameworks and what impacts this has on agricultural development, conservation, and food security. Alaska and the Republic of Sakha were chosen for several reasons: both have at least 100 years of agricultural history on permafrost soils, both contain extensive permafrost within their landmasses, and both contain permafrost that is ice-rich. Comparing legal texts as indicative of state capacities and strategies to govern, the paper finds that the two regions diverge in how they understand and regulate permafrost, and suggests that these approaches could benefit from one another. Bringing together geoclimatic and sociocultural concerns to problematise static policy divisions, this paper gestures to a path forward wherein subnational policy can balance needs for food, environmental, and cultural security in the North.
Centanafadine (CTN) is a potential first-in-class norepinephrine/dopamine/serotonin triple reuptake inhibitor (NDSRI) that has demonstrated efficacy, safety, and tolerability vs placebo (PBO) in adults with ADHD in 2 pivotal phase 3 trials (Adler LA, et al. J Clin Psychopharmacol. 2022;42:429-39).
Methods
Pooled data from 2 double-blind, multicenter, PBO-controlled trials enrolling adults (18–55 years) meeting DSM-5 ADHD criteria were analyzed. Patients were randomized 1:1:1 to CTN sustained release (SR) 200 mg or 400 mg total daily dose (TDD) or matching PBO if Adult ADHD Investigator Symptom Rating Scale (AISRS) score was ≥28 at screening (if not receiving pharmacologic ADHD treatment) or ≥22 at screening and ≥28 at baseline (if receiving treatment). Lack of prior benefit from ≥2 ADHD therapies of different classes, use of prohibited medications, and positive alcohol/drug screens were exclusionary. Studies had 4 periods: (1) screening and washout (≤28 days), (2) single-blind PBO run-in (1 week), (3) double-blind treatment (6 weeks), and (4) follow-up (10 days after last dose). Patients with ≥30% Adult ADHD Self-report Scale (ASRS) improvement from start to end of screening were screen failures; those with ≥30% ASRS improvement from start to end of PBO run-in were terminated early. A mixed model for repeated measures analysis evaluated CTN SR vs PBO based on ADHD treatment history; least squares mean (LSM) change from baseline (BL) in AISRS at day 42 was the outcome of interest.
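To make the longitudinal analysis concrete, the sketch below shows a mixed-effects model of AISRS change from baseline of the kind described above. The trial's mixed model for repeated measures (MMRM) would typically use an unstructured within-patient covariance (as commonly fit in SAS or R); here a random-intercept model in Python's statsmodels serves only as a simplified stand-in, and the data file and column names are hypothetical.

```python
# Simplified stand-in for the trial's mixed model for repeated measures (MMRM).
# A production MMRM would use an unstructured within-patient covariance; this
# random-intercept approximation and all column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per post-baseline visit.
df = pd.read_csv("aisrs_long.csv")  # columns: subject, treatment, visit_day,
                                    # baseline_aisrs, aisrs_change, prior_treatment

model = smf.mixedlm(
    "aisrs_change ~ C(treatment) * C(visit_day) * C(prior_treatment) + baseline_aisrs",
    data=df,
    groups=df["subject"],          # repeated measures clustered within patient
)
fit = model.fit(reml=True)
print(fit.summary())               # LSM-type contrasts at day 42 would be derived from this fit
```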
Results
In total, 859 patients were analyzed (CTN SR 200 mg TDD, n=287; 400 mg TDD, n=287; PBO, n=285). LSM change from BL in AISRS score was significant at day 42 for each CTN SR TDD group (both, P<0.001) in the overall population vs PBO. Among patients with prior stimulant/nonstimulant treatment (n=542), LSM change from BL was significant at day 42 vs PBO in the CTN SR 200 mg (P=0.016) and 400 mg (P=0.008) TDD groups. Although cohort size was limited (n=47), LSM change from baseline with CTN SR 400 mg TDD was significant (P<0.05) from days 14 to 42 in those who took 2 prior stimulant/nonstimulant treatments, with P=0.030 at day 42. In those with no prior stimulant/nonstimulant treatment (n=317), LSM change from BL was significant at day 42 for the CTN SR 200 mg (P=0.007) and 400 mg (P=0.008) TDD groups vs PBO. When analyzed by history of any past stimulant use, LSM change from BL was significant at day 42 for CTN 200 mg (n=179; P=0.013) and 400 mg (n=166; P=0.006), with significance (P<0.05) noted at day 7 (200 mg TDD) and at day 21 (400 mg TDD), remaining significant to day 42.
Conclusions
This pooled analysis suggests that CTN SR treatment is efficacious in adults with ADHD, regardless of prior treatments, an encouraging finding given reported adult ADHD treatment patterns.
Centanafadine (CTN) is a potential first-in-class norepinephrine/dopamine/serotonin triple reuptake inhibitor (NDSRI). The efficacy, safety, and tolerability of CTN sustained release (SR) for adults with ADHD were demonstrated in 2 pivotal phase 3 trials (Adler LA, et al. J Clin Psychopharmacol. 2022;42:429-39).
Methods
Adults (18–55 years) meeting DSM-5 criteria for ADHD were enrolled in these double-blind, multicenter, placebo-controlled trials and randomized to treatment if Adult ADHD Investigator Symptom Rating Scale (AISRS) score was ≥28 at screening (if not receiving pharmacologic treatment for ADHD) or ≥22 at screening and ≥28 at baseline (BL) (if receiving treatment). Lack of prior benefit from ≥2 ADHD therapies of 2 different classes, use of prohibited medications, and a positive alcohol/drug screen were exclusionary. Trials had 4 periods: (1) screening and washout (≤28 days), (2) single-blind placebo run-in (1 week), (3) double-blind treatment (6 weeks), and (4) follow-up (10 days after last dose). Patients with ≥30% improvement in the Adult ADHD Self-report Scale (ASRS) from start to end of screening were screen failures; those with ≥30% ASRS improvement from start to end of placebo run-in were terminated early. Patients were randomized 1:1:1 to twice-daily CTN SR (200 or 400 mg total daily dose [TDD]) or matching placebo. The 200 mg/d group received CTN SR 200 mg TDD on days 1–42; the 400 mg/d group received 200 mg TDD on days 1–7 and increased to 400 mg TDD on day 8. This analysis assessed CTN SR effects based on median BL AISRS severity score (<38 or ≥38) using a mixed model for repeated measures analysis. Least squares mean (LSM) differences (95% CI) from BL at day 42 were compared between individual CTN SR dose groups and placebo, tested at a 2-sided significance level of 0.05.
Results
In total, 859 patients were randomized (200 mg TDD, n=287; 400 mg TDD, n=287; placebo, n=285). Significant LSM differences on the AISRS were observed vs placebo in the overall population (200 mg TDD and 400 mg TDD, P<0.0001 for each), in the low BL severity (200 mg TDD [P=0.016]; 400 mg TDD [P=0.019]), and in the high BL severity (200 mg TDD [P=0.005]; 400 mg TDD [P=0.003]) populations at day 42. Significant LSM differences vs placebo (P<0.01) began at day 7 (200 mg) and day 14 (400 mg) overall, remaining significant to day 42. Significant LSM differences were observed vs placebo (P<0.05) from day 14 (400 mg TDD) and day 21 (200 mg) in the low severity populations, and from day 21 (400 mg TDD) and day 7 (200 mg TDD) in the high severity population, remaining significant (P<0.05) to day 42.
Conclusions
CTN SR, a potential first-in-class NDSRI, is efficacious in adults with ADHD of low or high BL symptom severity, with significant improvements observed vs placebo within the first 3 weeks.
Crested floating heart [Nymphoides cristata (Roxb.) Kuntze] is an invasive aquatic plant in the southeastern United States. For clonal plants like N. cristata, clonal diversity may influence response to control tactics and/or evolutionary potential. However, little is known about the diversity of introduced N. cristata. In this study, we used genotyping by sequencing to quantify N. cristata diversity in the southeastern United States and determine how that diversity is distributed across the invaded range. Our results show that at least three distinct genetic lineages of N. cristata are present in the southeastern United States. Geographic distribution of the lineages varied, with one widespread lineage identified across several states and others only found in a single water body. There is also evidence of extensive asexual reproduction, with invaded water bodies often host to a single genetic lineage. The genetic diversity reported in this study likely results from multiple introductions of N. cristata to the southeastern United States and should be considered by managers when assessing control tactics, such as screening for biocontrol agents or herbicide testing. The extent and distribution of genetic diversity should also be considered by researchers studying the potential for invasive spread of N. cristata within the United States or hybridization with native Nymphoides species.
We evaluated SARS-CoV-2 anti-nucleocapsid (anti-N) seroconversion and seroreversion rates, risk factors associated with SARS-CoV-2 seroconversion, and COVID-19 risk perceptions among academic healthcare center employees in a rural state.
Methods:
Among employees aged ≥18 years who completed a screening survey (n = 1,377), we invited all respondents reporting previous COVID-19 (n = 85; 82 accepted) and a random selection of respondents not reporting previous COVID-19 (n = 370; 220 accepted) to participate. Participants completed surveys and provided blood samples at 3-month intervals (T0, T3, T6, T9). We used logistic regression to identify risk factors for seropositivity at T0.
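As an illustration of the T0 risk-factor analysis, a minimal logistic regression sketch is shown below. The predictors mirror the factors reported in the Results, but the data file and column names are hypothetical placeholders rather than the study's actual variables.

```python
# Illustrative logistic regression for baseline (T0) seropositivity.
# Column names and the input file are hypothetical; odds ratios are obtained
# by exponentiating the fitted coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("t0_serosurvey.csv")  # seropositive (0/1), nursing_staff (0/1),
                                       # close_contact_outside_work (0/1),
                                       # vaccinated (0/1), mask_use (0/1)

fit = smf.logit(
    "seropositive ~ nursing_staff + close_contact_outside_work + vaccinated + mask_use",
    data=df,
).fit()

odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())      # 95% CIs on the odds-ratio scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```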
Results:
The cohort was primarily direct patient caregivers (205/302; 67.9%), white (278/302; 92.1%), and female (212/302; 70.2%). At T0, 86/302 (28.4%) participants were seropositive. Of the seronegative participants, 6/198 (3.0%), 6/183 (3.3%), and 14/180 (7.8%) had seroconverted at T3, T6, and T9, respectively. The overall seroreversion rate was 6.98% at T9. At T0, nursing staff (odds ratio [OR], 2.37; 95% confidence interval [CI], 1.08, 5.19) and participants who reported being within six feet of a non-household member outside of work (OR, 2.91; 95% CI, 1.02, 8.33) had significantly higher odds of seropositivity. Vaccination (OR, 0.05; 95% CI, 0.02, 0.12) and face mask use (OR, 0.36; 95% CI, 0.17, 0.78) were protective.
Conclusions:
The seroconversion and seroreversion rates were low among participants. Public health and infection prevention measures implemented early in the COVID-19 pandemic – vaccination, face mask use, and social distancing – were associated with significantly lower odds of SARS-CoV-2 seropositivity among participants.
Creating a sustainable residency research program is necessary to develop a sustainable research pipeline, as highlighted by the recent Society for Academic Emergency Medicine 2024 Consensus Conference. We sought to describe the implementation of a novel, immersive research program for first-year emergency medicine residents. We describe the curriculum development, rationale, implementation process, and lessons learned from a year-long research curriculum for first-year residents. We further evaluated resident perception of confidence in research methodology, interest in research, and the importance of their research experience through a 32-item survey. In two cohorts, 25 first-year residents completed the program. All residents met their scholarly project requirements by the end of their first year. Two conference abstracts and one peer-reviewed manuscript were accepted for publication, and one additional manuscript is currently under review. Survey responses indicated an increase in residents’ perceived confidence in research methodology, although interpretation was limited by the small sample size. In summary, this novel resident research curriculum demonstrated a standardized, reproducible, and sustainable approach to providing residents with an immersive research program.
Coastal wetlands are hotspots of carbon sequestration, and their conservation and restoration can help to mitigate climate change. However, there remains uncertainty on when and where coastal wetland restoration can most effectively act as natural climate solutions (NCS). Here, we synthesize current understanding to illustrate the requirements for coastal wetland restoration to benefit climate, and discuss potential paths forward that address key uncertainties impeding implementation. To be effective as NCS, coastal wetland restoration projects must accrue climate cooling benefits that would not occur without management action (additionality), be implementable (feasibility) and persist over management-relevant timeframes (permanence). Several issues add uncertainty to understanding whether these minimum requirements are met. First, coastal wetlands serve as both a landscape source and sink of carbon for other habitats, increasing uncertainty in additionality. Second, coastal wetlands can potentially migrate outside of project footprints as they respond to sea-level rise, increasing uncertainty in permanence. To address these first two issues, a system-wide approach may be necessary, rather than basing cooling benefits only on changes that occur within project boundaries. Third, the need for NCS to function over management-relevant decadal timescales means that methane responses may need to be included in coastal wetland restoration planning and monitoring. Finally, there is uncertainty on how much data are required to justify restoration action. We summarize the minimum data required to make a binary decision on whether there is a net cooling benefit from a management action, noting that these data are more readily available than the data required to quantify the magnitude of cooling benefits for carbon crediting purposes. By reducing uncertainty, coastal wetland restoration can be implemented at the scale required to significantly contribute to addressing the current climate crisis.
Geotechnical drilling for a tunnel between Port Moody and Burnaby, BC, Canada, uncovered a buried fjord. Its sedimentary fill has a thickness of at least 130 m and extends more than 37 m below present mean sea level. Recovered sediments record cyclical growth and decay of successive Cordilleran ice sheets. The oldest sediments comprise 58 m of almost stoneless silt conformably overlying ice-proximal sediments and till, which in turn overlie bedrock. These sediments may predate Marine Isotope Stage (MIS) 4. Glacial sediments assigned to MIS 4 overlie this basal succession and, in turn, are overlain by MIS 3 interstadial sediments and sediments from two MIS 2 glacial advances. Indicators of relative sea-level elevations that bracket glacial deposits of MIS 4 and MIS 2 indicate the cyclic existence of moat-like isostatic depressions in front of expanding ice sheets. Relative to present sea level, these depressions amounted to at least 160 m during the onsets of MIS 4 and MIS 2. Assuming a maximum eustatic drawdown of 120 m during MIS 2, isostatic depression may have exceeded 200 m during retreat of glacial ice from the Evergreen tunnel area. This is consistent with region-specific low mantle viscosity and rapid Cordilleran Ice Sheet buildup and wasting.
Mössbauer spectra of 9 glauconite samples from Upper Cretaceous and Lower Tertiary strata in the South Island of New Zealand contain a broad shoulder due to low intensity absorption continuous between 1.0 and 2.5 mm/sec when the absorber is at room temperature; the shoulder is absent, and sharp peaks are apparent in spectra taken with the absorber at 80 K. The data suggest that electron transfer occurs between adjacent Fe3+ and Fe2+ ions at room temperature. The low temperature spectra indicate that all Fe in the glauconites is in octahedral coordination. Fe3+ and Fe2+ ions occur in both cis and trans sites; Fe3+ shows a strong preference for cis sites whereas Fe2+ shows an even stronger preference for trans sites.
The partially variable oxidation state of Fe in glauconite is interpreted in terms of a geochemical model for glauconitization of a degraded or incomplete progenitor phyllosilicate. The model involves exchange of Fe2+ for other cations which temporarily stabilize the progenitor, followed by Fe2+-Fe3+ charge transfer reactions. Each reaction results from the system's tendency towards equilibrium. The model is supported by the observation that artificially leached glauconite increases both its Fe3+ and its Fe2+ content when placed in a solution containing Fe2+ as the only Fe ion present.
Human evaluation of animal emotional expressivity can inform animal welfare. Qualitative Behavioural Assessment (QBA) has been applied to domesticated and some non-domesticated animals, but its use in primates is limited despite their emotional expressivity. We aimed to develop and apply a QBA for bonobos (Pan paniscus) through two consecutive studies. We applied Free Choice Profiling (FCP) and the Fixed List methodology in Studies 1 and 2, respectively, and invited students and bonobo experts to rate video clips of zoo-living bonobos of different sexes and age classes, and before and after moving to a new enclosure. In Study 1, students described dimension 1 as ranging from ‘quiet/calm’ to ‘angry/active’ and dimension 2 from ‘sad/anxious’ to ‘happy/loving’. Experts described dimension 1 as ranging from ‘quiet/relaxed’ to ‘nervous/alert’ and dimension 2 from ‘nervous/bored’ to ‘playful/happy’. Using a fixed list of descriptors informed by findings from Study 1, students in Study 2 described dimension 1 as ranging from ‘quiet/calm’ to ‘agitated/frustrated’, and dimension 2 from ‘sad/stressed’ to ‘happy/positively engaged’. Experts described dimension 1 as ranging from ‘quiet/calm’ to ‘active/excited’, and dimension 2 from ‘sad/bored’ to ‘happy/positively engaged’. Students scored adults as more ‘calm/quiet’ and experts scored subadults as more ‘happy/positively engaged’. Additionally, experts in Study 2 rated bonobos as more ‘active/excited’ in their new enclosure. Reliability was moderate to good for the dimensions. Additionally, animal-directed empathy of observers influenced QBA scores. This is the first time FCP has been successfully used to study primate emotional expressivity. Our findings show the promise of employing QBA in primate studies and in industry, with validation of additional metrics to enable its use for welfare-monitoring purposes.
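The abstract does not name the ordination method behind “dimension 1” and “dimension 2”; fixed-list QBA scores are commonly summarised with principal component analysis, whereas FCP data are typically analysed with Generalized Procrustes Analysis. The sketch below shows only the PCA route, with an entirely hypothetical file of descriptor ratings, as one way such dimensions can be extracted.

```python
# Hypothetical illustration of extracting two QBA dimensions from fixed-list
# descriptor ratings with PCA; Free Choice Profiling data would instead be
# analysed with Generalized Procrustes Analysis, which is not shown here.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = video clips, columns = mean observer rating per descriptor (hypothetical file).
ratings = pd.read_csv("qba_fixed_list_scores.csv", index_col="clip_id")

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(ratings))

print(pca.explained_variance_ratio_)              # variance captured by dimensions 1 and 2
dims = pd.DataFrame(scores, index=ratings.index, columns=["dim1", "dim2"])
loadings = pd.DataFrame(pca.components_.T,
                        index=ratings.columns,
                        columns=["dim1", "dim2"])
print(loadings.sort_values("dim1"))               # descriptors anchoring each end of dimension 1
print(dims.head())                                # per-clip scores on the two dimensions
```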
A growing theoretical literature identifies how the process of constitutional review shapes judicial decision-making, legislative behavior, and even the constitutionality of legislation and executive actions. However, the empirical interrogation of these theoretical arguments is limited by the absence of a common protocol for coding constitutional review decisions across courts and time. We introduce such a coding protocol and database (CompLaw) of rulings by 42 constitutional courts. To illustrate the value of CompLaw, we examine a heretofore untested empirical implication about how review timing relates to rulings of unconstitutionality (Ward and Gabel 2019). First, we conduct a nuanced analysis of rulings by the French Constitutional Council over a 13-year period. We then examine the relationship between review timing and strike rates across a set of national constitutional courts in a single year. Our data analysis highlights the benefits and flexibility of the CompLaw coding protocol for scholars of judicial review.
Marine litter poses a complex challenge in Indonesia, necessitating a well-informed and coordinated strategy for effective mitigation. This study investigates the seasonality of plastic concentrations around Sulawesi Island in central Indonesia during monsoon-driven wet and dry seasons. By using open data and methodologies including the HYCOM and Parcels models, we simulated the dispersal of plastic waste over 3 months during both the southwest and northeast monsoons. Our research extended beyond data analysis, as we actively engaged with local communities, researchers and policymakers through a range of outreach initiatives, including the development of a web application to visualize model results. Our findings underscore the substantial influence of monsoon-driven currents on surface plastic concentrations, highlighting the seasonal variation in the risk to different regional seas. This study adds to the evidence provided by coarser resolution regional ocean modelling studies, emphasizing that seasonality is a key driver of plastic pollution within the Indonesian archipelago. Inclusive international collaboration and a community-oriented approach were integral to our project, and we recommend that future initiatives similarly engage researchers, local communities and decision-makers in marine litter modelling results. This study aims to support the application of model results in solutions to the marine litter problem.
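To make the modelling workflow concrete, the sketch below shows a minimal Lagrangian surface-advection setup of the kind described above, using the open-source Parcels library driven by HYCOM currents. The file names, release locations and run settings are illustrative assumptions and are not taken from the study's configuration.

```python
# Minimal sketch of a monsoon-season surface-plastic advection run with Parcels,
# driven by HYCOM currents. File names, release points and run settings are
# illustrative assumptions, not the study's actual configuration.
from datetime import timedelta
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

fieldset = FieldSet.from_netcdf(
    filenames={"U": "hycom_sulawesi.nc", "V": "hycom_sulawesi.nc"},
    variables={"U": "water_u", "V": "water_v"},      # typical HYCOM variable names
    dimensions={"lat": "lat", "lon": "lon", "time": "time"},
)

# Hypothetical release points along the Sulawesi coastline (lon, lat in degrees).
release_lon = [119.5, 120.8, 122.6, 124.8]
release_lat = [-5.1, -1.4, 0.5, 1.5]
pset = ParticleSet.from_list(fieldset=fieldset, pclass=JITParticle,
                             lon=release_lon, lat=release_lat)

output = pset.ParticleFile(name="plastic_tracks.zarr", outputdt=timedelta(hours=24))
pset.execute(AdvectionRK4,
             runtime=timedelta(days=90),             # one three-month monsoon season
             dt=timedelta(minutes=30),
             output_file=output)
```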
Animals under human care are exposed to a potentially large range of both familiar and unfamiliar humans. Human-animal interactions vary across settings and individuals, with the nature of the interaction affected by a suite of intrinsic and extrinsic factors. These interactions can be described as positive, negative or neutral. Across some industries, there has been a move towards the development of technologies to support or replace human interactions with animals. Whilst this has many benefits, there can also be challenges associated with increased technology use. A day-long Animal Welfare Research Network workshop was hosted at Harper Adams University, UK, with the aim of bringing together stakeholders and researchers (n = 38) from the companion, farm and zoo animal fields to discuss the benefits, challenges and limitations of human-animal interactions and machine-animal interactions for animals under human care, and to create a list of future research priorities. The workshop consisted of four talks from experts within these areas, followed by break-out room discussions. This work is the outcome of that workshop. The key recommendations are that approaches to advancing the scientific discipline of machine-animal interactions in animals under human care should focus on: (1) interdisciplinary collaboration; (2) development of validated methods; (3) incorporation of an animal-centred perspective; (4) a focus on promotion of positive animal welfare states (not just avoidance of negative states); and (5) an exploration of ways that machines can reduce the exposure of animals to negative human-animal interactions and so reduce negative, and increase positive, experiences for animals.
The prevalence of childhood obesity is increasing globally(1). While BMI is commonly used to define obesity, it is unable to differentiate between fat and muscle mass, leading to calls to measure body composition specifically(2). While several tools are available to assess body composition in infancy, it is unclear if they are directly comparable. Among a subset of healthy infants born to mothers participating in a randomised controlled trial of a preconception and antenatal nutritional supplement(3), measurements were made at ages 6 weeks (n = 58) and 6 months (n = 70) using air displacement plethysmography (ADP), whole-body dual-energy X-ray absorptiometry (DXA), and bioelectrical impedance spectroscopy (BIS). Estimates of percentage fat mass (%FM) were compared using Cohen’s kappa statistic (κ) and Bland-Altman analysis(4,5). There was none-to-weak agreement when comparing tertiles of %FM (κ = 0.15–0.59). When comparing absolute values, the bias (i.e., mean difference) was smallest when comparing BIS to ADP at 6 weeks (+1.7%). A similar bias was observed at 6 months when comparing DXA to ADP (+1.8%). However, when comparing BIS to DXA at both ages, biases were much larger (+7.6% and +4.7% at 6 weeks and 6 months, respectively). Furthermore, there was wide interindividual variance (limits of agreement [LOA], i.e., ±1.96 SD) for each comparison. At 6 weeks, LOA ranged from ±4.8 to ±6.5% for BIS vs. DXA and BIS vs. ADP, respectively. At 6 months, LOA were even wider, ranging from ±7.3 to ±8.1% (DXA vs. ADP and BIS vs. DXA, respectively). Proportional biases were apparent when comparing BIS to the other tools at both ages, with BIS generally overestimating %FM more among infants with low adiposity. In addition to differences according to tool type, within-tool factors impacted body composition estimation. For ADP measurements, the choice of FFM density reference (Fomon vs. Butte) had minimal impact; however, choice of DXA software version (GE Lunar enCORE basic vs. enhanced) and BIS analysis approach (empirical equation vs. mixture theory prediction) led to very different estimates of body composition. In conclusion, when comparing body composition assessment tools in infancy, there was limited agreement between three commonly used tools. Therefore, researchers and clinicians must be cautious when conducting longitudinal analyses or when comparing findings across studies, as estimates are not comparable across tools.
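For clarity, the sketch below shows how the agreement statistics described above can be computed for one pair of tools. The %FM arrays are hypothetical placeholders, not study data, and the tertile-based kappa mirrors the categorical comparison reported in the abstract.

```python
# Illustrative Bland-Altman bias/limits-of-agreement and Cohen's kappa on %FM
# tertiles for one pair of tools (e.g., BIS vs ADP); the input arrays are
# hypothetical placeholders, not study data.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

fm_bis = np.array([18.2, 22.5, 25.1, 20.3, 27.8, 24.0])   # %FM from BIS (hypothetical)
fm_adp = np.array([17.0, 21.0, 24.5, 19.8, 25.9, 22.7])   # %FM from ADP (hypothetical)

# Bland-Altman: mean difference (bias) and ±1.96 SD limits of agreement.
diff = fm_bis - fm_adp
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:+.1f}%, limits of agreement = ±{loa:.1f}%")

# Cohen's kappa on tertiles of %FM (unweighted, as for categorical agreement).
tertiles_bis = pd.qcut(fm_bis, 3, labels=False)
tertiles_adp = pd.qcut(fm_adp, 3, labels=False)
print("kappa =", cohen_kappa_score(tertiles_bis, tertiles_adp))
```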
Impaired motor and cognitive function can make travel cumbersome for People with Parkinson’s disease (PwPD). Over 50% of PwPD cared for at the University of Arkansas for Medical Sciences (UAMS) Movement Disorders Clinic reside over 30 miles from Little Rock. Improving access to clinical care for PwPD is needed.
Objective:
To explore the feasibility of remote clinic-to-clinic telehealth research visits for evaluation of multi-modal function in PwPD.
Methods:
PwPD residing within 30 miles of a UAMS Regional health center were enrolled and clinic-to-clinic telehealth visits were performed. Motor and non-motor disease assessments were administered and quantified. Results were compared to those of participants who performed at-home telehealth visits using the same protocols during the height of the COVID-19 pandemic.
Results:
Compared to the at-home telehealth visit group (n = 50), the participants from regional centers (n = 13) had similar age and disease duration, but greater disease severity with higher total Unified Parkinson’s disease rating scale scores (Z = −2.218, p = 0.027) and lower Montreal Cognitive Assessment scores (Z = −3.350, p < 0.001). Regional center participants had lower incomes (Pearson’s chi = 21.3, p < 0.001), higher costs to attend visits (Pearson’s chi = 16.1, p = 0.003), and lived in more socioeconomically disadvantaged neighborhoods (Z = −3.120, p = 0.002). Prior research participation was lower in the regional center group (Pearson’s chi = 4.5, p = 0.034) but both groups indicated interest in future research participation.
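The reported Z statistics are consistent with rank-based comparisons such as the Mann-Whitney U test, and the “Pearson’s chi” values with chi-square tests of independence; the abstract does not state the exact procedures, so the sketch below is only an illustration with a hypothetical data file and column names.

```python
# Hypothetical illustration of between-group comparisons of the kind reported
# above: a rank-based test for scale scores and a chi-square test for income
# categories. The data file and column names are placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("telehealth_cohort.csv")   # columns: group ("regional"/"home"),
                                            # updrs_total, income_bracket

regional = df[df["group"] == "regional"]
home = df[df["group"] == "home"]

# Rank-based comparison of total UPDRS scores (the study's exact test is not stated here).
u_stat, p_val = stats.mannwhitneyu(regional["updrs_total"], home["updrs_total"],
                                   alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.3f}")

# Chi-square test of independence for income bracket by group.
table = pd.crosstab(df["group"], df["income_bracket"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.3f}")
```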
Conclusions:
Regional center research visits in PwPD in medically underserved areas are feasible and could help improve access to care and research participation in these traditionally underrepresented populations.