Objective:
Evaluate Department of Defense (DoD) antimicrobial stewardship programs (ASPs) by assessing the relationship between key clinical outcome metrics (antibiotic use, incidence of resistant pathogens, and incidence of Clostridioides difficile infections) and CDC Core Element (CE) adherence.
Design:
Retrospective, cross-sectional study of DoD hospitals in 2018 and 2021
Methods:
National Healthcare Safety Network Standardized Antimicrobial Administration Ratios (SAARs) were used to measure antibiotic use, and microbiology results were used to evaluate four types of pathogen incidence. A novel CE scoring approach was used to quantitatively assess relationships between CE adherence and outcome metrics using correlation and regression models. Assessments were repeated with 2021 data for Priority CE adherence and to conduct adjusted regressions for CEs and Priority CEs controlling for categorical bed size.
Results:
Compared to 2022 national data, DoD hospitals in 2021 had a similar proportion of facilities with a SAAR statistically significantly > 1.0. Leadership, Action, and Tracking CE scores were approximately normally distributed, while Reporting and Education scores were somewhat left-skewed. Unadjusted models often showed a positive relationship, with higher CE scores associated with worse outcomes for the SAAR and pathogen incidence. Adjusted models indicated that procedural CEs, particularly Priority Reporting, were associated with better ASP-related outcomes.
Conclusions:
CEs should be more quantitatively assessed. Results provide initial evidence to prioritize procedural CE implementation within the DoD; however, additional investigation for structural CEs is needed. Patient outcome data should be collected as an important indicator of ASP performance.
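The correlation-and-regression step described in the Methods can be illustrated with a minimal sketch. The facility scores, SAAR values, and variable names below are invented for illustration only and are not the study's data:

```python
import numpy as np

# Hypothetical Core Element (CE) adherence scores for six facilities
# and their Standardized Antimicrobial Administration Ratios (SAARs).
# All numbers are made up purely to illustrate the analysis pattern.
ce_scores = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
saar = np.array([1.20, 1.15, 1.10, 1.00, 0.95, 0.90])

# Pearson correlation between CE adherence and the outcome metric
r = np.corrcoef(ce_scores, saar)[0, 1]

# Unadjusted linear regression of the SAAR on the CE score
slope, intercept = np.polyfit(ce_scores, saar, 1)

print(f"r = {r:.3f}, slope = {slope:.3f} per CE point")
```

An adjusted model controlling for categorical bed size, as in the study, would add bed-size indicator columns to the regression design matrix.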
Objective:
Characterization and assessment of the Department of Defense's (DoD's) Antibiotic Stewardship Programs (ASPs) to determine adherence to Centers for Disease Control and Prevention (CDC) Core Elements and compare to national adherence
Design:
Retrospective, observational study with supplemental survey
Methods:
Facility characteristics and CDC Core Elements (CE) adherence data for 2017–2021 were retrieved from the National Healthcare Safety Network’s (NHSN) annual hospital survey with DoD data from the Defense Health Agency and national data from the Antibiotic Resistance and Patient Safety Portal. An online supplemental survey was administered to DoD hospitals. Descriptive statistics and bivariate analyses were completed for facility characteristics and supplemental survey questions to determine correlations between variables. A framework analysis compared DoD ASPs to CEs and Priority Elements.
Results:
Supplemental surveys were completed for 85.1% of DoD's hospitals. DoD's hospitals were smaller on average than national hospitals. ASP leaders were more often assigned than volunteers and typically served in the role for less than four years. Staffing mix differed, with more balanced proportions of civilian/contractor and military staff at larger hospitals in the U.S. Most DoD ASPs consisted of ≤ 25% pharmacists. ASP leaders were largely available on a daily basis; pharmacist leaders spent more time on ASP activities than physicians. CE adherence was high, but in 2021 the DoD lagged behind national adherence in the structural CEs of Leadership, Accountability, and Pharmacy Expertise.
Conclusions:
DoD hospitals lagged behind national adherence to the structural CEs, presenting opportunities for ASP improvement. Refinement of CE adherence measurements, coupled with assessment of impact on health outcomes, could aid in better identifying areas for improvement.
Female fertility is a complex trait with age-specific changes in spontaneous dizygotic (DZ) twinning and fertility. To elucidate factors regulating female fertility and infertility, we conducted a genome-wide association study (GWAS) on mothers of spontaneous DZ twins (MoDZT) versus controls (3273 cases, 24,009 controls). This is a follow-up to the Australia/New Zealand (ANZ) component of the study previously reported (Mbarek et al., 2016), with a sample size almost twice that of the entire discovery sample meta-analysed in the previous article (and five times the ANZ contribution to that), resulting from newly available additional genotyping and representing a significant increase in power. We compare analyses with and without male controls and show unequivocally that it is better to include male controls who have been screened for recent family history than to use only female controls. The SNP-based GWAS identified four genome-wide significant signals, including one novel region, ZFPM1 (Zinc Finger Protein, FOG Family Member 1), on chromosome 16. Previous signals near FSHB (Follicle Stimulating Hormone beta subunit) and SMAD3 (SMAD Family Member 3) were also replicated (Mbarek et al., 2016). We also ran the GWAS with a dominance model, which identified a further locus, ADRB2, on chromosome 5. These results have been contributed to the International Twinning Genetics Consortium for inclusion in the next GWAS meta-analysis (Mbarek et al., in press).
Functional somatic disorders (FSDs) are characterized by persistent and disabling physical symptoms that cannot be attributed to well-defined somatic disorders. In adolescents, the prevalence is around 4-10%. Evidence from adult populations suggests that cortisol plays a role in the development and perpetuation of FSDs, but little is known regarding adolescents. As cortisol accumulates in hair over time, hair cortisol concentration (HCC) is a promising new biomarker for long-term physiological stress. Moreover, adult studies have found associations between HCC levels and self-perceived stress.
Objectives
To compare HCC levels between adolescents with severe FSDs and adolescents from the general population. Furthermore, to investigate the association between HCC and self-perceived stress.
Methods
The data are retrieved from two projects: the AHEAD trial, including 91 adolescents aged 15-19 years diagnosed with a severe FSD, and the Copenhagen Child Cohort 2000 (CCC2000), including data on 1455 adolescents aged 16-17 years. Hair samples were collected for HCC analysis, and web-based questionnaires were used to assess self-perceived stress. Functional somatic symptoms were assessed with the Bodily Distress Syndrome (BDS) checklist.
Results
The data have been collected and will be analysed and presented at the congress.
Conclusions
This study can contribute knowledge about the potential role of cortisol in FSDs in adolescents, and about whether self-perceived stress can be used as a marker for physiological stress measured by HCC. Treatments for adolescents with FSDs still need to be improved. The current study may help to clarify whether future treatment strategies should include a greater focus on stress management.
Objective
To investigate associations between multimodal analgesia and post-operative pain among patients undergoing transoral robotic surgery for oropharyngeal squamous cell carcinoma.
Methods
Records of patients who underwent surgery from 5 September 2012 to 30 November 2016 were abstracted. Associations were assessed using multivariable analysis.
Results
A total of 216 patients (mean age of 59.1 years, 89.4 per cent male) underwent transoral robotic surgery (92.6 per cent were human papillomavirus positive, 87.5 per cent had stage T1–T2 tumours, and 82.9 per cent had stage N0–N1 nodes). Gabapentin (n = 86) was not associated with a reduction in severe pain. Ibuprofen (n = 72) was administered less often in patients with severe pain. Gabapentin was not associated with increased post-operative sedation (p = 0.624) and ibuprofen was not associated with increased bleeding (p = 0.221). Post-operative opioid usage was not associated with surgical duration, pharyngotomy, bilateral neck dissections, tumour stage, tumour size, subsite or gabapentin.
Conclusion
Scheduled low-dose gabapentin was not associated with improved pain control or increased respiratory depression. Ibuprofen was not associated with an increased risk of bleeding and may be under-utilised.
The radiocarbon (14C) calibration curve so far contains annually resolved data only for a short period of time. With accelerator mass spectrometry (AMS) matching the precision of decay counting, it is now possible to efficiently produce large datasets of annual resolution for calibration purposes using small amounts of wood. The radiocarbon intercomparison on single-year tree-ring samples presented here is the first to investigate specifically possible offsets between AMS laboratories at high precision. The results show that AMS laboratories are capable of measuring samples of Holocene age with an accuracy and precision comparable to, or even beyond, what is possible with decay counting, even though they require a thousand times less wood. The results also show that not all AMS laboratories always produce results that are consistent with their stated uncertainties. The long-term benefits of studies of this kind are more accurate radiocarbon measurements with, in the future, better quantified uncertainties.
The concentration of radiocarbon (14C) differs between ocean and atmosphere. Radiocarbon determinations from samples which obtained their 14C in the marine environment therefore need a marine-specific calibration curve and cannot be calibrated directly against the atmospheric-based IntCal20 curve. This paper presents Marine20, an update to the internationally agreed marine radiocarbon age calibration curve that provides a non-polar global-average marine record of radiocarbon from 0–55 cal kBP and serves as a baseline for regional oceanic variation. Marine20 is intended for calibration of marine radiocarbon samples from non-polar regions; it is not suitable for calibration in polar regions where variability in sea ice extent, ocean upwelling and air-sea gas exchange may have caused larger changes to concentrations of marine radiocarbon. The Marine20 curve is based upon 500 simulations with an ocean/atmosphere/biosphere box-model of the global carbon cycle that has been forced by posterior realizations of our Northern Hemispheric atmospheric IntCal20 14C curve and reconstructed changes in CO2 obtained from ice core data. These forcings enable us to incorporate carbon cycle dynamics and temporal changes in the atmospheric 14C level. The box-model simulations of the global-average marine radiocarbon reservoir age are similar to those of a more complex three-dimensional ocean general circulation model. However, simplicity and speed of the box model allow us to use a Monte Carlo approach to rigorously propagate the uncertainty in both the historic concentration of atmospheric 14C and other key parameters of the carbon cycle through to our final Marine20 calibration curve. This robust propagation of uncertainty is fundamental to providing reliable precision for the radiocarbon age calibration of marine-based samples.
We make a first step towards deconvolving the contributions of different processes to the total uncertainty; discuss the main differences of Marine20 from the previous age calibration curve Marine13; and identify the limitations of our approach together with key areas for further work. The updated values for ΔR, the regional marine radiocarbon reservoir age corrections required to calibrate against Marine20, can be found at the data base http://calib.org/marine/.
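The Monte Carlo propagation described above can be sketched with a toy one-box calculation. The ages, uncertainties, and the purely additive "box model" below are invented for illustration and bear no relation to the actual IntCal20/Marine20 values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw posterior realizations of an atmospheric 14C age and of an
# uncertain carbon-cycle parameter (here collapsed into a single
# global-mean reservoir age), push each draw through the toy box
# model, and summarize the resulting marine-curve ensemble.
n = 10_000
atm_age = rng.normal(5000.0, 20.0, n)    # atmospheric 14C age draws (14C yr BP)
reservoir = rng.normal(500.0, 50.0, n)   # uncertain reservoir age (14C yr)
marine_age = atm_age + reservoir         # toy box-model output, per draw

mean_marine = marine_age.mean()
sd_marine = marine_age.std()
print(f"marine 14C age ~ {mean_marine:.0f} +/- {sd_marine:.0f} 14C yr")
```

Because the draws here are independent, the ensemble spread approaches sqrt(20^2 + 50^2), about 54 yr; the real curve instead propagates correlated atmospheric realizations through a dynamical carbon-cycle model.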
Mild cognitive impairment (MCI) may gradually worsen to dementia, but often remains stable for extended periods of time. Little is known about the predictors of decline to help explain this variation. We aimed to explore whether this heterogeneous course of MCI may be predicted by the presence of Lewy body (LB) symptoms in a prospectively recruited longitudinal cohort of MCI with Lewy bodies (MCI-LB) and Alzheimer's disease (MCI-AD).
Methods
A prospective cohort (n = 76) aged ⩾60 years underwent detailed assessment after recent MCI diagnosis, and were followed up annually with repeated neuropsychological testing and clinical review of cognitive status and LB symptoms. Latent class mixture modelling identified data-driven sub-groups with distinct trajectories of global cognitive function.
Results
Three distinct trajectories were identified in the full cohort: slow/stable progression (46%), intermediate progressive decline (41%) and a small group with a much faster decline (13%). The presence of LB symptomology, and visual hallucinations in particular, predicted decline v. a stable cognitive trajectory. With time zeroed on study end (death, dementia or withdrawal) where available (n = 39), the same subgroups were identified. Adjustment for baseline functioning obscured the presence of any latent classes, suggesting that baseline function is an important parameter in prospective decline.
Conclusions
These results highlight some potential signals for impending decline in MCI; poorer baseline function and the presence of probable LB symptoms – particularly visual hallucinations. Identifying people with a rapid decline is important but our findings are preliminary given the modest cohort size.
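The data-driven subgrouping can be imitated with a much simpler stand-in for latent class mixture modelling: simulate three trajectory groups, summarize each subject by a fitted slope, and cluster the slopes. All numbers, group sizes, and the k-means shortcut are illustrative assumptions, not the study's method or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate annual global-cognition scores for three hypothetical latent
# groups (stable, intermediate decline, fast decline) in proportions
# roughly echoing the abstract (~46%/41%/13% of 76 subjects).
years = np.arange(4)                                      # annual visits
true_slopes = np.repeat([0.0, -2.0, -6.0], [35, 31, 10])  # points/year
scores = 100 + true_slopes[:, None] * years + rng.normal(0, 0.5, (76, 4))

# Summarize each subject by the slope of an ordinary least-squares line
# fit (np.polyfit accepts a 2-D y, one column per subject).
slopes = np.polyfit(years, scores.T, 1)[0]

# Tiny 1-D k-means with k=3 to recover the trajectory groups.
centers = np.array([slopes.min(), np.median(slopes), slopes.max()])
for _ in range(20):
    labels = np.argmin(np.abs(slopes[:, None] - centers[None, :]), axis=1)
    centers = np.array([slopes[labels == k].mean() for k in range(3)])

print(np.sort(centers))  # cluster-mean slopes, one per trajectory group
```

A true latent class mixture model fits class membership and trajectory parameters jointly rather than in these two separate steps; this sketch only conveys the subgrouping idea.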
Introduction: Although oral rehydration therapy is recommended for children with acute gastroenteritis (AGE) with none to some dehydration, intravenous (IV) rehydration is still commonly administered to these children in high-income countries. IV rehydration is associated with pain, anxiety, and emergency department (ED) revisits in children with AGE. A better understanding of the factors associated with IV rehydration is needed to inform knowledge translation strategies. Methods: This was a planned secondary analysis of the Pediatric Emergency Research Canada (PERC) and Pediatric Emergency Care Applied Research Network (PECARN) randomized, controlled trials of oral probiotics in children with AGE-associated diarrhea. Eligible children were aged 3-48 months and reported > 3 watery stools in a 24-hour period. The primary outcome was administration of IV rehydration at the index ED visit. We used a mixed-effects logistic regression model to explore univariable and multivariable relationships between IV rehydration and a priori risk factors. Results: From the parent study sample of 1848 participants, 1846 had data available for analysis: mean ± SD age 19.1 ± 11.4 months; 45.4% were female. 70.2% (1292/1840) vomited within 24 hours of the index ED visit and 34.1% (629/1846) received ondansetron in the ED. 13.0% (240/1846) were administered IV rehydration at the index ED visit, and 3.6% (67/1842) were hospitalized. Multivariable predictors of IV rehydration were Clinical Dehydration Scale (CDS) score [compared to none: mild to moderate (OR: 8.1, 95% CI: 5.5-11.8); severe (OR: 45.9, 95% CI: 20.1-104.7), P < 0.001], ondansetron in the ED (OR: 1.8, 95% CI: 1.2-2.6, P = 0.003), previous healthcare visit for the same illness [compared to no prior visit: prior visit with no IV (OR: 1.9, 95% CI: 1.3-2.9); prior visit with IV (OR: 10.5, 95% CI: 3.2-34.8), P < 0.001], and country [compared to Canada: US (OR: 4.1, 95% CI: 2.3-7.4), P < 0.001].
Significantly more participants returned to the ED with symptoms of AGE within 3 days if IV fluids were administered at the index visit [30/224 (13.4%) versus 88/1453 (6.1%), P < 0.001]. Conclusion: Higher CDS scores, antiemetic use, previous healthcare visits and country were independent predictors of IV rehydration which was also associated with increased ED revisits. Knowledge translation focused on optimizing the use of antiemetics (i.e. for those with dehydration) and reducing the geographic variation in IV rehydration use may improve the ED experience and reduce ED-revisits.
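As a worked check on the revisit comparison reported above, the unadjusted odds ratio and a Wald 95% confidence interval can be computed from the 2x2 counts; the study's mixed-effects model adjusts for clustering and covariates, so this simple calculation is only illustrative:

```python
import math

# 2x2 table from the reported revisit data: 30/224 revisits when IV
# fluids were given at the index visit vs 88/1453 when they were not.
a, b = 30, 224 - 30        # IV group: revisit / no revisit
c, d = 88, 1453 - 88       # no-IV group: revisit / no revisit

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```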
Limited data exist regarding combination therapy for Clostridium difficile infection (CDI). After adjusting for confounders in a cohort of patients ≥1 year old with CDI, combination therapy was not associated with significant differences in clinical outcomes, but it was associated with prolonged duration of therapy (1.22 days; 95% confidence interval, 1.03–1.44 days; P=.02).
Childbirth is a potent trigger for the onset of psychiatric illness in women, including postpartum depression (PPD) and postpartum psychosis (PP). Medical complications occurring during pregnancy and/or childbirth, as well as sociodemographic factors, have been linked to postpartum psychiatric illness. We evaluated whether pregnancy and obstetrical predictors have similar effects on different types of postpartum psychiatric disorders.
Method
A population-based cohort study using Danish registers was conducted in 392 458 primiparous women with a singleton delivery between 1995 and 2012 and no previous psychiatric history. The main outcome was first-onset postpartum psychiatric episodes. Incidence rate ratios (IRRs) were calculated for any psychiatric contact in each of the four quarters of the first year postpartum.
Results
PPD and postpartum acute stress reactions were associated with pregnancy and obstetrical complications. For PPD, hyperemesis gravidarum [IRR 2.69, 95% confidence interval (CI) 1.93–3.73], gestational hypertension (IRR 1.84, 95% CI 1.33–2.55), pre-eclampsia (IRR 1.45, 95% CI 1.14–1.84) and Cesarean section (C-section) (IRR 1.32, 95% CI 1.13–1.53) were associated with increased risk. For postpartum acute stress, hyperemesis gravidarum (IRR 1.93, 95% CI 1.38–2.71), preterm birth (IRR 1.51, 95% CI 1.30–1.75), gestational diabetes (IRR 1.42, 95% CI 1.03–1.97) and C-section (IRR 1.36, 95% CI 1.20–1.55) were associated with increased risk. In contrast, risk of PP was not associated with pregnancy or obstetrical complications.
Conclusions
Pregnancy and obstetrical complications can increase the risk for PPD and acute stress reactions but not PP. Identification of postpartum women requiring secondary care is needed to develop targeted approaches for screening and treatment. Future work should focus on understanding the contributions of psychological stressors and underlying biology on the development of postpartum psychiatric illness.
One case of hospital-acquired listeriosis was linked to milkshakes produced in a commercial-grade shake freezer machine. This machine was found to be contaminated with a strain of Listeria monocytogenes epidemiologically and molecularly linked to a contaminated pasteurized, dairy-based ice cream product at the same hospital a year earlier, despite repeated cleaning and sanitizing. Healthcare facilities should be aware of the potential for prolonged Listeria contamination of food service equipment. In addition, healthcare providers should consider counselling persons who have an increased risk for Listeria infections regarding foods that have caused Listeria infections. The prevalence of persistent Listeria contamination of commercial-grade milkshake machines in healthcare facilities and the risk associated with serving dairy-based ice cream products to hospitalized patients at increased risk for invasive L. monocytogenes infections should be further evaluated.
Dietary long-chain n-3 PUFA (n-3 LCPUFA) in infancy may have long-term effects on lifestyle disease risk. The present follow-up study investigated whether maternal fish oil (FO) supplementation during lactation affected growth and blood pressure in adolescents and whether the effects differed between boys and girls. Mother–infant pairs (n 103) completed a randomised controlled trial with FO (1·5 g/d n-3 LCPUFA) or olive oil (OO) supplements during the first 4 months of lactation; forty-seven mother–infant pairs with high fish intake were followed-up for 4 months as the reference group. We also followed-up 100 children with assessment of growth, blood pressure, diet by FFQ and physical activity by 7-d accelerometry at 13·5 (sd 0·4) years of age. Dried whole-blood fatty acid composition was analysed in a subgroup (n 49). At 13 years of age, whole-blood n-3 LCPUFA, diet, physical activity and body composition did not differ between the three groups. The children from the FO group were 3·4 (95 % CI 0·2, 6·6) cm shorter (P=0·035) than those from the OO group, and tended to have less advanced puberty (P=0·068), which explained the difference in height. There was a sex-specific effect on diastolic blood pressure (Psex×group=0·020), which was driven by a 3·9 (95 % CI 0·2, 7·5) mmHg higher diastolic blood pressure in the FO compared with the OO group among boys only (P=0·041). Our results indicate that early n-3 LCPUFA intake may reduce height in early adolescence due to a delay in pubertal maturation and increase blood pressure specifically in boys, thereby tending to counteract existing sex differences.
Universal screening for postpartum depression is recommended in many countries. Knowledge of whether the disclosure of depressive symptoms in the postpartum period differs across cultures could improve detection and provide new insights into the pathogenesis. Moreover, it is a necessary step to evaluate the universal use of screening instruments in research and clinical practice. In the current study we sought to assess whether the Edinburgh Postnatal Depression Scale (EPDS), the most widely used screening tool for postpartum depression, measures the same underlying construct across cultural groups in a large international dataset.
Method
Ordinal regression and measurement invariance analyses were used to explore the association between culture, operationalized as education, ethnicity/race and continent, and endorsement of depressive symptoms on the EPDS in 8209 new mothers from Europe and the USA.
Results
Education, but not ethnicity/race, influenced the reporting of postpartum depression [difference between robust comparative fit indexes (∆*CFI) < 0.01]. The structure of EPDS responses significantly differed between Europe and the USA (∆*CFI > 0.01), but not between European countries (∆*CFI < 0.01).
Conclusions
Investigators and clinicians should be aware of the potential differences in expression of the postpartum depression phenotype that women of different educational backgrounds may manifest. The increasing cultural heterogeneity of societies, together with the tendency towards globalization, requires a culturally sensitive approach to patients, research and policies, one that takes into account, beyond rhetoric, the context of a person's experiences and the context in which the research is conducted.
Improvements in colorectal cancer (CRC) detection and treatment have led to greater numbers of CRC survivors, for whom there is limited evidence on which to provide dietary guidelines to improve survival outcomes. Higher intake of red and processed meat and lower intake of fibre are associated with greater risk of developing CRC, but there is limited evidence regarding associations with survival after CRC diagnosis. Among 3789 CRC cases in the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort, pre-diagnostic consumption of red meat, processed meat, poultry and dietary fibre was examined in relation to CRC-specific mortality (n 1008) and all-cause mortality (n 1262) using multivariable Cox regression models, adjusted for CRC risk factors. Pre-diagnostic red meat, processed meat or fibre intakes (defined as quartiles and continuous grams per day) were not associated with CRC-specific or all-cause mortality among CRC survivors; however, a marginal trend across quartiles of processed meat in relation to CRC mortality was detected (P=0·053). Pre-diagnostic poultry intake was inversely associated with all-cause mortality among women (hazard ratio (HR) per 20 g/d=0·92; 95 % CI 0·84, 1·00), but not among men (HR 1·00; 95 % CI 0·91, 1·09) (P for heterogeneity=0·10). Pre-diagnostic intake of red meat or fibre is not associated with CRC survival in the EPIC cohort. There is suggestive evidence of an association between poultry intake and all-cause mortality among female CRC survivors and between processed meat intake and CRC-specific mortality; however, further research using post-diagnostic dietary data is required to confirm this relationship.
Overall IDSA/SIS intra-abdominal infection guideline compliance was not associated with improved outcomes; however, there was a longer time to active therapy (P=.024) and higher mortality (P=.077) if empiric therapy was too narrow per guidelines. These findings support the need for the implementation of customized institutional guidelines adapted from the IDSA/SIS guidelines.
Although cognitive deficits in patients with schizophrenia are rooted early in development, the impact of psychosis on the course of cognitive functioning remains unclear. In this study a nested case-control design was used to examine the relationship between emerging psychosis and the course of cognition in individuals ascertained as clinical high-risk (CHR) who developed psychosis during the study (CHR + T).
Method
Fifteen CHR + T subjects were administered a neurocognitive battery at baseline and post-psychosis onset (8.04 months, s.d. = 10.26). CHR + T subjects were matched on a case-by-case basis on age, gender, and time to retest with a group of healthy comparison subjects (CNTL, n = 15) and two groups of CHR subjects that did not transition: (1) subjects matched on medication treatment (i.e. antipsychotics and antidepressants) at both baseline and retesting (Meds-matched CHR + NT, n = 15); (2) subjects unmedicated at both assessments (Meds-free CHR + NT, n = 15).
Results
At baseline, CHR + T subjects showed large global neurocognitive and intellectual impairments, along with specific impairments in processing speed, verbal memory, sustained attention, and executive function. These impairments persisted after psychosis onset and did not further deteriorate. In contrast, CHR + NT subjects demonstrated stable mild to no impairments in neurocognitive and intellectual performance, independent of medication treatment.
Conclusions
Cognition appears to be impaired prior to the emergence of psychotic symptoms, with no further deterioration associated with the onset of psychosis. Cognitive deficits represent trait risk markers, as opposed to state markers of disease status and may therefore serve as possible predictors of schizophrenia prior to the onset of the full illness.
A total of 111 clinical and environmental O1, O139 and non-O1/O139 Vibrio cholerae strains isolated between 1978 and 2008 from different geographical areas were typed using a combination of methods: antibiotic susceptibility, biochemical test, serogroup, serotype, biotype, sequences containing variable numbers of tandem repeats (VNTRs) and virulence genes ctxA and tcpA amplification. As a result of the performed typing work, the strains were organized into four clusters: cluster A1 included clinical O1 Ogawa and O139 serogroup strains (ctxA+ and tcpA+); cluster A2 included clinical non-O1/O139 strains (ctxA− and tcpA−), as well as environmental O1 Inaba and non-O1/O139 strains (ctxA− and tcpA−/tcpA+); cluster B1 contained two clinical O1 strains and environmental non-O1/O139 strains (ctxA− and tcpA+/tcpA−); cluster B2 contained clinical O1 Inaba and Ogawa strains (ctxA+ and tcpA+). The results of this work illustrate the advantage of combining several typing methods to discriminate between clinical and environmental V. cholerae strains.
During the first half of the twentieth century, widespread regulatory efforts to control cattle brucellosis due to Brucella abortus in the Union of Soviet Socialist Republics were essentially non-existent, and control was limited to selective test and slaughter of serologic agglutination reactors. By the 1950s, 2–3 million cattle were being vaccinated annually with the strain 19 vaccine, but because this vaccine induced strong, long-term titers on agglutination tests that interfered with identification of cattle infected with field strains of B. abortus, its use in cattle was discontinued in 1970. Soviet scientists then began a comprehensive program of research to identify vaccines with high immunogenicity, weak responses on agglutination tests and low pathogenicity in humans, as a foundation for widespread control of cattle brucellosis. While several new vaccines that induced weak or no responses on serologic agglutination tests were identified by experiments in guinea pigs and cattle, a large body of experimental and field studies suggested that the smooth–rough strain SR82 vaccine combined the desired weak agglutination test responses with comparatively higher efficacy against brucellosis. In 1974, prior to widespread use of strain SR82 vaccine, over 5300 cattle farms across the Russian Federation were known to be infected with B. abortus. By January 2008, only 68 cattle farms in 18 regions were known to be infected with B. abortus, and strain SR82 continues to be the most widely and successfully used vaccine in many regions of the Russian Federation.