Background: Deep brain stimulation (DBS) in Parkinson’s disease (PD) requires extensive trial-and-error programming, often taking over a year to optimize. An objective, rapid biomarker of stimulation success is needed. Our team developed a functional magnetic resonance imaging (fMRI)-based algorithm to identify optimal DBS settings. This study prospectively compared fMRI-guided programming with standard-of-care (SoC) clinical programming in a double-blind, crossover, non-inferiority trial. Methods: Twenty-two PD-DBS patients were prospectively enrolled for fMRI using a 30-sec DBS-ON/OFF cycling paradigm. Optimal settings were identified using our published classification algorithm. Subjects then underwent >1 year of SoC programming. Clinical improvement was assessed under SoC and fMRI-determined stimulation conditions. Results: fMRI optimization significantly reduced the time required to determine optimal settings (1.6 vs. 5.6 months, p<0.001). Unified Parkinson’s Disease Rating Scale (UPDRSIII) improved comparably with both approaches (23.8 vs. 23.6, p=0.9). Non-inferiority was demonstrated within a predefined margin of 5 points (p=0.0018). SoC led to greater tremor improvement (p=0.019), while fMRI showed greater bradykinesia improvement (p=0.040). Conclusions: This is the first prospective evaluation of an algorithm able to suggest stimulation parameters solely from the fMRI response to stimulation. It suggests that fMRI-based programming may achieve equivalent outcomes in less time than SoC, reducing patient burden while potentially enhancing bradykinesia response.
Soluble Intercellular Adhesion Molecule-1 (sICAM-1) has emerged as an inflammatory biomarker involved in many essential functions. We investigated how the level of sICAM-1 is influenced by Clonorchis sinensis (C. sinensis) co-infection in chronic hepatitis B (CHB) patients to explore the degree of liver tissue inflammation and liver function damage after co-infection. The study included data from patients with C. sinensis mono-infection (n=27), hepatitis B virus (HBV) mono-infection (n=32), C. sinensis and HBV co-infection (n=24), post-hepatitis B liver cirrhosis (n=18), post-hepatitis B liver cirrhosis co-infected with C. sinensis (n=16), and healthy controls (n=39). The level of sICAM-1 was measured with an enzyme-linked immunosorbent assay (ELISA). Compared to the healthy control group, all the experimental groups had significantly higher serum sICAM-1 levels. The levels of sICAM-1 in the co-infected groups were significantly higher than in the mono-infection groups and were positively correlated with the levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST). Our findings confirmed that co-infection could exacerbate liver tissue inflammation and liver function damage in patients, could raise the sICAM-1 level, and may lead to chronicity of HBV infection. These results provide clues for studying pathological mechanisms and formulating treatment plans.
Background: Subthalamic nucleus (STN) deep brain stimulation (DBS) improves the cardinal symptoms of Parkinson’s disease (PD). However, the therapeutic mechanisms are incompletely understood. By leveraging patient-specific brain responses to DBS using functional magnetic resonance imaging (fMRI) acquired during stimulation, we identify and validate symptom-specific networks associated with clinical improvement. Methods: Forty PD patients with STN-DBS were enrolled for fMRI using a 30-sec DBS-ON/OFF cycling paradigm. The four cardinal motor outcomes of PD were chosen a priori and measured using the Movement Disorder Society-Sponsored Revision of the Unified Parkinson’s Disease Rating Scale, part III (MDS-UPDRSIII): axial instability, tremor, rigidity, bradykinesia. Stimulation-dependent changes in blood oxygen level-dependent (BOLD) signal were correlated with each symptom. Results: The relationship between BOLD response and outcomes revealed significant networks of clinical response (p<0.001). Using BOLD responses from the network hubs, each symptom-specific model was significantly predictive of actual improvement: axial instability (R2=0.38, p=0.000026), bradykinesia (R2=0.29, p=0.00033), rigidity (R2=0.40, p=0.000013), tremor (R2=0.26, p=0.00073). Conclusions: Using patient-specific imaging, we provide evidence of an association between DBS-evoked fMRI response and individual symptom improvement. Brain networks associated with clinical improvement were different depending on the PD symptom examined, suggesting the presence of symptom-specific networks of efficacy which may allow personalization of DBS therapy.
Background: Success of deep brain stimulation (DBS) in Parkinson’s disease (PD) relies on time-consuming trial-and-error testing of stimulation settings. Here, we prospectively compared an fMRI-based stimulation optimization algorithm with >1 year of standard-of-care (SoC) programming in a double-blind, crossover, non-inferiority trial. Methods: Twenty-seven PD-DBS patients were prospectively enrolled for fMRI using a 30-sec DBS-ON/OFF cycling paradigm. Optimal settings were identified using our published classification algorithm. Subjects then underwent >1 year of SoC programming. Clinical improvement was assessed, after an overnight medication washout period, under SoC and fMRI-determined stimulation conditions. A predefined non-inferiority margin was -5 points on the Unified Parkinson’s Disease Rating Scale (UPDRSIII). Results: UPDRSIII improved from 45.3 (SD=14.6) at baseline to 24.9 (SD=10.9) and 24.1 (SD=10.9) during SoC and fMRI-determined stimulation, respectively. The mean difference in scores was 0.8 (SD=8.5; 95% CI -4.5 to 6.2). The non-inferiority margin was not contained within the 95% confidence interval, establishing non-inferiority (p=0.013). Conclusions: This is the first prospective evaluation of an algorithm able to suggest stimulation parameters solely from the fMRI response to stimulation. It suggests equivalent outcomes may be achieved in 3 hours of fMRI scanning immediately after surgery compared to SoC requiring 6 or more in-person clinic visits throughout >1 year.
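As a minimal illustration of the confidence-interval reasoning reported above (not the trial's actual analysis code), the Python sketch below checks whether the reported 95% CI for the mean UPDRS-III difference excludes the predefined -5-point margin; the direction convention (SoC minus fMRI-determined scores) and the helper name are assumptions made for illustration.

```python
def non_inferior(ci_lower: float, ci_upper: float, margin: float = -5.0) -> bool:
    """Declare non-inferiority when the whole 95% CI for the mean
    difference (assumed here to be SoC minus fMRI-determined UPDRS-III)
    lies above the predefined margin, i.e. the margin is excluded."""
    return ci_lower > margin

# Values reported in the abstract: mean difference 0.8, 95% CI -4.5 to 6.2
print(non_inferior(-4.5, 6.2))   # True: -5 lies outside the interval
```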
OBJECTIVES/GOALS: ICD-10 coding inconsistencies hinder timely recognition and treatment of metabolic syndrome (MetS), posing a significant risk for cardiometabolic disease progression. This study employed a digital phenotype for MetS and compared the odds of medication and lifestyle intervention against those of patients coded for MetS. METHODS/STUDY POPULATION: MetS is a cluster of cardiometabolic risk factors that increase the risk of numerous adverse clinical outcomes. Patients with MetS were identified through electronic medical records on TriNetX LLC using the standard ICD-10 code or through a digital phenotype built by grouping codes for the individual components. The percentage of patients with MetS not captured by the standard code was identified. In addition, disparities in blood pressure-, glucose-, and lipid-lowering medication and in lifestyle intervention between the coding schemas were assessed, shedding light on healthcare inequities and informing targeted interventions. Odds ratios (OR) were presented for all outcomes. RESULTS/ANTICIPATED RESULTS: Patient demographics and lab values were similar between the standard code and digital phenotype cohorts. Of the 4.3 million individuals aged 50 to 80 identified as having MetS using the digital phenotype in the TriNetX research network, only 1.78% also carried the standard code. Individuals with the digital phenotype for MetS were at lower odds of receiving glucose-lowering medication (OR: 2.11, 95% CI: 1.98–2.13, p<0.001) and exercise- or nutrition-based intervention advice (OR: 1.76, 95% CI: 1.55–1.96, p<0.001) after controlling for demographics and lab values for each MetS component. DISCUSSION/SIGNIFICANCE: This project utilized TriNetX to create a digital phenotype for MetS, and suggests most patients are not coded for it under the standard ICD-10 system. This is troublesome given that those without the standard code are less likely to receive certain interventions.
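For context on the odds-ratio comparisons above, here is a minimal Python sketch of how an unadjusted odds ratio and its Wald 95% confidence interval are computed from a 2x2 table. The counts are hypothetical, and the study's reported ORs were additionally adjusted for demographics and lab values via regression, so this is only a simplified illustration.

```python
import math

def odds_ratio_wald(a: int, b: int, c: int, d: int):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
        group 1: a received the intervention, b did not
        group 2: c received the intervention, d did not
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts, not study data
print(odds_ratio_wald(120, 80, 90, 110))
```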
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect, and provide insight into, the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
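The discrimination analysis described above (predicted probabilities from a binary logistic regression fed into an ROC curve) can be sketched generically as follows. This is an illustrative example on simulated data using scikit-learn, not the registry analysis itself, and the variable layout is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated stand-ins: a biomarker column plus covariates, and a 0/1 impairment label
n = 500
X = rng.normal(size=(n, 6))   # e.g. GFAP z-score, age, sex, race, education, APOE e4
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]   # predicted probability of impairment

# AUC computed from the predicted probabilities, as in the ROC analyses above
print(f"AUC = {roc_auc_score(y, probs):.2f}")
```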
Results:
The mean (SD) age of the sample was 74.34 (7.54) years, 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, including GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting those with cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown the utility of plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There is promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) White participants, and 20 (44.4%) APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
The Classic Maya polities of Baking Pot and Lower Dover developed along two dramatically different trajectories. At Baking Pot, the capital and associated apical elite regime grew concomitantly with surrounding populations over a thousand-year period. The smaller polity of Lower Dover, in contrast, formed when a Late Classic political center was established by an emergent apical elite regime amidst several long-established intermediate elite-headed districts. The different trajectories through which these polities formed should have clear implications for residential size variability. We employ the Gini coefficient to measure variability in household volume to compare patterns of residential size differentiation between the two polities. The Gini coefficients, while similar, suggest greater differentiation in residential size at Baking Pot than at Lower Dover, likely related to the centralized control of labor by the ruling elite at Baking Pot. While the Gini coefficient is typically associated with measuring wealth inequality, we suggest that in the Classic period Belize River Valley, residential size was more reflective of labor control.
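As a concrete illustration of the measure used above, the following Python sketch computes the Gini coefficient from a set of household volumes via the mean absolute difference; the volumes shown are hypothetical rather than the Baking Pot or Lower Dover data.

```python
import numpy as np

def gini(values):
    """Gini coefficient via the relative mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    x = np.asarray(values, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2.0 * x.mean())

# Hypothetical household volumes (m^3), not site data
print(round(gini([120, 150, 180, 300, 900, 2500]), 3))
```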
Background: Women are reported to have worse outcomes than men following ischemic stroke despite similar treatment effects for thrombolysis and endovascular treatment. Methods: We performed a post-hoc analysis of patients with acute ischemic stroke and intracranial occlusion enrolled in INTERRSeCT, an international prospective cohort study. We compared workflow times, reperfusion therapy choices, and 90-day modified Rankin Scale (mRS) scores. Results: We included 575 patients; the mean age was 70.2 years (SD: 13.1) and 48.5% were female. There were no significant sex differences in onset-to-CT time (males: 115 minutes [IQR: 72-171], females: 114 minutes [IQR: 75-196]) or CT-to-thrombolysis time (males: 24 minutes [IQR: 17-32], females: 23 minutes [IQR: 18-36]). However, female participants had a 12-minute faster CT-to-groin-puncture time (p=0.001). Reperfusion therapies did not significantly differ by sex and included thrombolysis alone (males: 46%, females: 49%), EVT alone (males: 34%, females: 34%), thrombolysis plus EVT (males: 8%, females: 9%), and conservative management (males: 12%, females: 8%). Median 90-day mRS was 2 (IQR: 1-4) in both males and females (p=0.1). Conclusions: In the INTERRSeCT cohort, rates of reperfusion therapy, workflow times, and 90-day outcomes were similar between sexes, suggesting that women did not fare worse on key quality indicators of reperfusion treatment for acute stroke.
Background: In Parkinson’s disease, deep brain stimulation (DBS) of the subthalamic nucleus (STN) or globus pallidus internus (GPi) produces comparable motor benefits. Although both increase the risk of cognitive and verbal fluency (VF) decline, the risk is greater following STN-DBS. The consequences of stimulating these different sites for brain network activity are unknown. We used functional magnetic resonance imaging (fMRI) during in vivo stimulation to investigate differences between STN-DBS and GPi-DBS and to correlate them with change in VF. Methods: Left-sided, stimulation-cycling, block-design fMRI was acquired at 3 Tesla in 51 STN-DBS and 15 GPi-DBS patients following routine clinical programming. The blood oxygen level-dependent (BOLD) response to stimulation was compared between groups. Phonemic VF was assessed pre- and postoperatively. Results: A voxel-wise t-test between STN-DBS and GPi-DBS BOLD response maps revealed areas of significant difference (p<0.001) in the left frontal operculum and the left caudate head. The stimulation BOLD response showed a slight inverse correlation with postoperative VF decline. The trend was reversed at the left frontal operculum in STN-DBS compared to GPi-DBS. Conclusions: Decline in VF in PD-DBS appears to be associated with the stimulation BOLD response at the left frontal operculum and the left caudate head. The effect differs depending on stimulation site, suggesting differing effects on brain network activity.
A small spheroid settling in a quiescent fluid experiences an inertial torque that aligns it so that it settles with its broad side first. Here we show that an active particle experiences such a torque too, as it settles in a fluid at rest. For a spherical squirmer, the torque is $\boldsymbol {T}^\prime = -{\frac {9}{8}} m_f (\boldsymbol {v}_s^{(0)} \wedge \boldsymbol {v}_g^{(0)})$ where $\boldsymbol {v}_s^{(0)}$ is the swimming velocity, $\boldsymbol {v}_g^{(0)}$ is the settling velocity in the Stokes approximation and $m_f$ is the equivalent fluid mass. This torque aligns the swimming direction against gravity: swimming up is stable, swimming down is unstable.
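To make the alignment argument concrete, the sketch below (illustrative only, with arbitrary units) evaluates the stated torque, treating the wedge product as the ordinary cross product: the torque vanishes when the swimming direction is anti-parallel to the settling (gravity) direction and is nonzero otherwise, which per the abstract restores upward swimming.

```python
import numpy as np

def squirmer_torque(v_s, v_g, m_f):
    """Inertial torque T' = -(9/8) * m_f * (v_s x v_g) from the abstract."""
    return -9.0 / 8.0 * m_f * np.cross(v_s, v_g)

m_f = 1.0                           # equivalent fluid mass (arbitrary units)
v_g = np.array([0.0, 0.0, -1.0])    # settling velocity, directed along gravity

# Swimming straight up (anti-parallel to v_g): cross product vanishes -> equilibrium
print(squirmer_torque(np.array([0.0, 0.0, 1.0]), v_g, m_f))        # [0. 0. 0.]

# Tilted swimming direction: nonzero torque; per the abstract, the resulting
# reorientation drives the swimmer back toward swimming upward
print(squirmer_torque(np.array([np.sin(0.3), 0.0, np.cos(0.3)]), v_g, m_f))
```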
Water fountains (WFs) are thought to represent an early stage in the morphological evolution of circumstellar envelopes surrounding low- and intermediate-mass evolved stars. These objects are considered to be transitioning from spherical to asymmetric shapes. Despite their potential importance in this transformation process of evolved stars, only a few examples are known. To identify new WF candidates, we used databases of circumstellar OH (1612 MHz) and H2O (22.235 GHz) maser sources and compared the velocity ranges of the two maser lines. In total, 41 sources were found to have a velocity range for the H2O maser line that exceeded that of the OH maser line. After excluding known planetary nebulae and reviewing the maser spectra in the original literature, we found the exceedance to be significant for 11 sources, qualifying them as new WF candidates.
Variable camber flap technology can adjust the spanwise circulation distribution, thereby reducing the induced drag. Therefore, the variable camber flap concept is introduced into the design of a propeller aircraft wing, and a drag-reduction design for the propeller aircraft is carried out. The numerical simulation of the propeller aircraft uses an actuator disc method with non-uniform radial and circumferential load distributions. Through the unsteady simulation of a single propeller, the aerodynamic load over one propeller period is extracted and applied as a boundary condition for the steady simulation of the full aircraft. The load applied through the actuator disc is compared with the unsteady simulation result, which verifies the reliability of the method. Drag-reduction designs at the cruise and climb design conditions are carried out with the variable camber flap technology, and the variable camber cruise configuration is evaluated at both the beginning-of-cruise and end-of-cruise conditions. The results show that, after the flaps are deflected by a small angle according to the circulation distribution, the camber distribution of the wing is adjusted so that the circulation distribution is closer to the elliptical one. At the design cruise condition, the drag coefficient is reduced by 1.4 counts and the lift-to-drag ratio increases by 0.1. At both the beginning-of-cruise and end-of-cruise conditions, the drag coefficient decreases by 1 count and the lift-to-drag ratio increases by 0.07. At the design climb condition, the drag coefficient decreases by 1 count and the lift-to-drag ratio increases by 0.09.
About 800 foodborne disease outbreaks are reported in the United States annually. Few are associated with food recalls. We compared 226 outbreaks associated with food recalls with those not associated with recalls during 2006–2016. Recall-associated outbreaks had, on average, more illnesses per outbreak and higher proportions of hospitalisations and deaths than non-recall-associated outbreaks. The top confirmed aetiology for recall-associated outbreaks was Salmonella. Pasteurised and unpasteurised dairy products, beef and molluscs were the most frequently implicated foods. The most common pathogen–food pairs for outbreaks with recalls were Escherichia coli–beef and norovirus–molluscs; the top pairs for non-recall-associated outbreaks were scombrotoxin–fish and ciguatoxin–fish. For outbreaks with recalls, 48% of the recalls occurred after the outbreak, 27% during the outbreak, 3% before the outbreak, and 22% were inconclusive or had unknown recall timing. Fifty per cent of recall-associated outbreaks were multistate, compared with 2% of non-recall-associated outbreaks. The differences between recall-associated outbreaks and non-recall-associated outbreaks help define the types of outbreaks and food vehicles that are likely to have a recall. Improved outbreak vehicle identification and traceability of rarely recalled foods could lead to more recalls of these products, resulting in fewer illnesses and deaths.
An increasing number of unexpectedly diverse benthic communities are being reported from microbially precipitated carbonate facies in shallow-marine platform settings after the end-Permian mass extinction. Ostracoda, which was one of the most diverse and abundant metazoan groups during this interval, recorded its greatest diversity and abundance associated with these facies. Previous studies, however, focused mainly on taxonomic diversity and, therefore, left room for discussion of paleoecological significance. Here, we apply a morphometric method (semilandmarks) to investigate morphological variance through time to better understand the ecological consequences of the end-Permian mass extinction and to examine the hypothesis that microbial mats played a key role in ostracod survival. Our results show that taxonomic diversity and morphological disparity were decoupled during the end-Permian extinction and that morphological disparity declined rapidly at the onset of the end-Permian extinction, even though the high diversity of ostracods initially survived in some places. The decoupled changes in taxonomic diversity and morphological disparity suggest that the latter is a more robust proxy for understanding the ecological impact of the extinction event, and the low morphological disparity of ostracod faunas is a consequence of sustained environmental stress or a delayed post-Permian radiation. Furthermore, the similar morphological disparity of ostracods between microbialite and non-microbialite facies indicates that microbial mats most likely represent a taphonomic window rather than a biological refuge during the end-Permian extinction interval.
The Chinese culture of filial piety has historically emphasised children's responsibility for their ageing parents. Little is understood regarding the inverse: parents’ responsibility and care for their adult children. This paper uses interviews with 50 families living in rural China's Anhui Province to understand intergenerational support in rural China. Findings indicate that parents in rural China take on large financial burdens in order to sustain patrilineal traditions by providing housing and child care for their adult sons. These expectations lead some rural elders to become migrant workers in order to support their adult sons while others provide live-in grandchild-care, moving into their children's urban homes or bringing grandchildren into their own homes. As the oldest rural generations begin to require ageing care of their own, migrant children are unable to provide the sustained care and support expected within the cultural tradition of xiao. This paper adds to the small body of literature that examines the downward transfer of support from parents to their adult children in rural China. The authors argue that there is an emerging cultural rupture in the practice of filial piety – while the older generation is fulfilling their obligations of upbringing and paying for adult children's housing and child care, these adult children are not necessarily available or committed to returning care to their ageing parents. The authors reveal cultural and structural lags that leave millions of rural ageing adults vulnerable in the process of urbanisation in rural China.
Brain-derived neurotrophic factor (BDNF) has an important role in learning, motivation and regulation of mood. A body of research indicates that dysregulation of BDNF is found in post-traumatic stress disorder (PTSD). The aim of this study was to investigate the association of baseline plasma BDNF and follow-up PTSD symptoms in Chinese motor vehicle accident survivors.
Method
Motor vehicle accident (MVA) survivors were recruited from an emergency room in Shanghai. Plasma BDNF levels were measured within 24 hours after the motor vehicle accident. The Clinician-Administered PTSD Scale (CAPS) was used to evaluate PTSD symptoms one month after the accident. In total, 60 MVA survivors participated in this study, and 49 of them completed the follow-up evaluation.
Results
At the one-month follow-up interview, 14 of the MVA survivors met the diagnostic criteria for PTSD. The MVA survivors with PTSD showed lower baseline plasma BDNF levels compared with non-PTSD participants (p < 0.05).
Conclusions
People who show lower plasma BDNF levels after a traumatic event may be more susceptible to PTSD, and plasma BDNF could be a predictor of PTSD.
Currently available antidepressants exhibit low remission rates and a long lag time to response. Growing evidence has demonstrated that an acute sub-anesthetic dose of ketamine exerts rapid, robust, and lasting antidepressant effects. However, long-term use of ketamine tends to elicit adverse reactions. The present study aimed to investigate the antidepressant-like effects of intermittent and consecutive administrations of ketamine in chronic unpredictable mild stress (CUMS) rats, and to determine whether ketamine can compensate for the lag time in treatment response of classic antidepressants. Behavioral responses were assessed with the sucrose preference test, forced swimming test, and open field test. In the first stage of experiments, all four treatment regimens of ketamine (10 mg/kg ip, once daily for 3 or 7 consecutive days, or once every 7 or 3 days, over a total of 21 days) showed robust antidepressant-like effects, with no significant influence on locomotor activity or stereotyped behavior in the CUMS rats. The intermittent administration regimens produced longer-lasting antidepressant-like effects than the consecutive regimens, and administration every 7 days produced antidepressant-like effects similar to administration every 3 days with fewer administrations. In the second stage of experiments, the combination of ketamine (10 mg/kg ip, once every 7 days) and citalopram (20 mg/kg po, once daily) for 21 days produced more rapid and sustained antidepressant-like effects than citalopram administered alone. In summary, repeated sub-anesthetic doses of ketamine can compensate for the lag time in the antidepressant-like effects of citalopram, suggesting that the combination of ketamine and classic antidepressants is a promising regimen for depression, with a quick onset and stable, lasting effects.
Predicting the magnitude of the annual seasonal peak in influenza-like illness (ILI)-related emergency department (ED) visit volumes can inform the decision to open influenza care clinics (ICCs), which can mitigate pressure at the ED. Using ILI-related ED visit data from the Alberta Real Time Syndromic Surveillance Net for Edmonton, Alberta, Canada, we developed (training data, 1 August 2004–31 July 2008) and tested (testing data, 1 August 2008–19 February 2014) spatio-temporal statistical prediction models of daily ILI-related ED visits to estimate high visit volumes 3 days in advance. Our Main Model, based on a generalised linear mixed model with random intercept, incorporated prediction residuals over 14 days and captured increases in observed volume ahead of peaks. During seasonal influenza periods, our Main Model predicted volumes within ±30% of observed volumes for 67%–82% of high-volume days and within 0.3%–21% of observed seasonal peak volumes. Model predictions were not as successful during the 2009 H1N1 pandemic. Our model can provide early warning of increases in ILI-related ED visit volumes during seasonal influenza periods of differing intensities. These predictions may be used to support public health decisions, such as if and when to open ICCs, during seasonal influenza epidemics.
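The accuracy criterion reported above (the share of days on which predictions fall within ±30% of observed volumes) can be written as a short helper; the sketch below uses made-up counts rather than the Edmonton surveillance data.

```python
def within_tolerance(observed, predicted, tol=0.30):
    """Fraction of days where |predicted - observed| / observed <= tol."""
    hits = sum(abs(p - o) <= tol * o for o, p in zip(observed, predicted))
    return hits / len(observed)

# Hypothetical daily ILI-related ED visit counts (observed vs. 3-day-ahead predictions)
obs = [60, 72, 85, 90, 110, 130, 125]
pred = [55, 80, 50, 95, 100, 150, 118]
print(f"{within_tolerance(obs, pred):.0%} of days within ±30%")
```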