Iron(III) chloride solutions (0.5 molal) were hydrolysed at room temperature by base additions over the range of OH/Fe mole ratios 0–2.75. After an ageing period, the hydrolysed solutions were used to produce amorphous hydroxide gels from which crystalline products were grown at 65°C at either low or high pH. Examination of crystal composition and morphology, and comparison with similarly treated nitrate solutions, showed that the nucleation of hematite and goethite is inhibited in chloride-containing solutions, allowing the growth of small rod-shaped β-FeOOH to predominate or occur exclusively in gels at pH 1–2. The addition of seed crystals of hematite and goethite allows competitive growth of all three minerals. The transformations β-FeOOH → α-Fe2O3 and β-FeOOH → α-FeOOH at pH 1–2 proceed by dissolution and reprecipitation and are promoted by adding seed crystals.
In 2016, the National Center for Advancing Translational Sciences launched the Trial Innovation Network (TIN) to address barriers to efficient and informative multicenter trials. The TIN provides a national platform, working in partnership with more than 60 Clinical and Translational Science Award (CTSA) hubs across the country, to support the design and conduct of successful multicenter trials. A dedicated Hub Liaison Team (HLT) was established within each CTSA to facilitate connection between the hubs and the newly launched Trial and Recruitment Innovation Centers. Each HLT serves as an expert intermediary, connecting CTSA hub investigators with TIN support, and connecting TIN research teams with potential multicenter trial site investigators. The cross-consortium Liaison Team network was developed during the first TIN funding cycle, and it is now a mature national network at the cutting edge of team science in clinical and translational research. The CTSA-based HLT structures and the external network structure have been developed in collaborative and iterative ways, with methods for shared learning and continuous process improvement. In this paper, we review the structure, function, and development of the Liaison Team network, discuss lessons learned during the first TIN funding cycle, and outline a path toward further network maturity.
Objective:
To determine the relationship between severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, hospital-acquired infections (HAIs), and mortality.
Design:
Retrospective cohort.
Setting:
Three St. Louis, MO hospitals.
Patients:
Adults admitted ≥48 hours from January 1, 2017 to August 31, 2020.
Methods:
Hospital-acquired infections were defined as those occurring ≥48 hours after admission and were based on positive urine, respiratory, and blood cultures. Poisson interrupted time series compared mortality trajectory before (beginning January 1, 2017) and during the first 6 months of the pandemic. Multivariable logistic regression models were fitted to identify risk factors for mortality in patients with an HAI before and during the pandemic. A time-to-event analysis considered time to death and discharge by fitting Cox proportional hazards models.
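The interrupted-time-series step can be illustrated with a Poisson regression that includes a pandemic-era level-change term. The sketch below is a minimal, hedged example on synthetic monthly death counts: the dates, rates, and variable names are invented, and a small hand-rolled IRLS fitter stands in for a statistics package; it is not the study's actual model specification.

```python
import numpy as np

def fit_poisson(X, y, n_iter=50):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # current mean under the model
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                # Poisson: variance equals mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
months = np.arange(48, dtype=float)           # Jan 2017 onward (synthetic)
pandemic = (months >= 38).astype(float)       # illustrative pandemic-onset step
# simulate monthly deaths: baseline trend plus a pandemic-era level shift
deaths = rng.poisson(np.exp(2.0 + 0.005 * months + 0.3 * pandemic))

X = np.column_stack([np.ones_like(months), months, pandemic])
beta = fit_poisson(X, deaths)
# beta[2] estimates the log rate ratio of the pandemic-era level change
```

Exponentiating `beta[2]` gives the mortality rate ratio of the pandemic period relative to the pre-pandemic trajectory, which is the quantity an interrupted time series of this kind tests.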
Results:
Among 6,447 admissions with subsequent HAIs, patients were predominantly White (67.9%); the pre-pandemic period had more females (50.9% vs 46.1%, P = .02), a slightly lower body mass index (28 vs 29, P = .001), and more patients with private insurance (50.6% vs 45.7%, P = .01). In the pre-pandemic era there were 1,000 patient deaths (17.6%), whereas there were 160 deaths (21.3%, P = .01) during the pandemic. A total of 53 (42.1%) patients with coronavirus disease 2019 (COVID-19) and an HAI died. Age and comorbidities increased the risk of death in patients with COVID-19 and an HAI. During the pandemic, Black patients with an HAI and COVID-19 were more likely to die than White patients with an HAI and COVID-19.
Conclusions:
In three Midwestern hospitals, patients with concurrent HAIs and COVID-19 were more likely to die if they were Black, elderly, and had certain chronic comorbidities.
While regular monitoring of stun quality in abattoirs is now required by EU law, guidelines specific to species and stun method have not been adequately developed. Carbon dioxide (CO2) gas stunning of pigs in groups is widely used because of its efficiency and reduced pre-slaughter stress. However, some pigs may recover from the stun process if it is not correctly managed. In light of these concerns, this study aimed to develop and implement a standardised assessment of stun quality for use in commercial pig abattoirs. Eight abattoirs and 9,520 slaughter pigs were assessed for stun group size, stick time and stun quality. The stun system, CO2 concentrations and exposure times were also investigated. A stun-quality protocol (SQP) identified and risk-rated symptoms signifying recovery of consciousness. In abattoirs using paternoster stun-boxes, pigs consistently showed no stun-quality problems despite 65% having stick times between 70 and 100 s. Stun-quality problems were detected in 1.7 to 3.3% of pigs in abattoirs using dip-lift stun-boxes, where 75% of stick times were below 60 s. In 36 of 38 cases of inadequately stunned pigs, a combination of symptoms from the SQP was seen. Regular gasping preceded other symptoms in 31 cases and was a valid indicator of inadequate stunning. In response to the stun-quality assessments, two abattoirs serviced their stun machines (increasing CO2 concentrations and exposure times), and all pigs were adequately stunned in follow-up studies. Implementation of stun-quality assessments, such as that developed in this study, can assure monitoring of animal welfare at slaughter, benefiting not only the industry and relevant authorities but also the concerned consumer.
Moles are widely trapped as pests on farms and amenity land in Britain. Spring traps for killing mammals generally require welfare approval in the UK, but mole traps are exempt. Previous research demonstrated wide variation in the mechanical performance of mole traps. In this context, we aimed to produce new data on the welfare impact of kill-trapping moles in the field. We collected 50 moles trapped in southern England (November 2008 – August 2009). Captures peaked during the peak in male breeding activity, when captures were almost exclusively male. Post-mortem and X-ray examinations were conducted to determine injuries and likely cause of death. No moles sustained damage to the skull or upper cervical vertebrae (which could cause immediate unconsciousness). The primary identifiable cause of death for all but one mole was acute haemorrhage; this contrasts with the findings of the only previous such study, in which only one mole showed clear evidence of haemorrhaging. Some moles may have asphyxiated, although it was not possible to determine this. Moles most likely became unconscious before death, but times to unconsciousness and death can be determined only through killing trials, and further investigation is urgently needed. This should be done through the spring traps approval process, which could improve the welfare standards of trapping for many thousands of moles each year. Mole trapping for long-term population control might be better targeted after the peak in male breeding activity, when females are more likely to be caught, but this would threaten the welfare of dependent young underground.
Background: Poorly-defined cases (PDCs) of focal epilepsy are cases with no/subtle MRI abnormalities or have abnormalities extending beyond the lesion visible on MRI. Here, we evaluated the utility of Arterial Spin Labeling (ASL) MRI perfusion in PDCs of pediatric focal epilepsy. Methods: ASL MRI was obtained in 25 consecutive children presenting with poorly-defined focal epilepsy (20 MRI-positive, 5 MRI-negative). Qualitative visual inspection and quantitative analysis with asymmetry and Z-score maps were used to detect perfusion abnormalities. ASL results were compared to the hypothesized epileptogenic zone (EZ) derived from other clinical/imaging data and the resection zone in patients with Engel I/II outcome and >18-month follow-up. Results: Qualitative analysis revealed perfusion abnormalities in 17/25 total cases (68%), 17/20 MRI-positive cases (85%) and none of the MRI-negative cases. Quantitative analysis confirmed all cases with abnormalities on qualitative analysis, but found 1 additional true-positive and 4 false-positives. Concordance with the surgically-proven EZ was found in 10/11 cases qualitatively (sensitivity = 91%, specificity = 50%), and 11/11 cases quantitatively (sensitivity = 100%, specificity = 23%). Conclusions: ASL perfusion may support the hypothesized EZ, but has limited localization benefit in MRI-negative cases. Nevertheless, owing to its non-invasiveness and ease of acquisition, ASL could be a useful addition to the pre-surgical MRI evaluation of pediatric focal epilepsy.
Objective:
The incidence of surgical site infections (SSIs) may be underreported if the data are not routinely validated for accuracy. Our goal was to examine the SSI rates communicated by a large network of Swiss hospitals in relation to the results of on-site surveillance quality audits.
Design:
Retrospective cohort study.
Patients:
In total, 81,957 knee and hip prosthetic arthroplasties from 125 hospitals and 33,315 colorectal surgeries from 110 hospitals were included in the study.
Methods:
Hospitals had at least 2 external audits to assess the surveillance quality. The 50-point standardized score per audit summarizes quantitative and qualitative information from both structured interviews and a random selection of patient records. We calculated the mean National Healthcare Safety Network (NHSN) risk index adjusted infection rates in both surgery groups.
Results:
The median NHSN adjusted infection rate per hospital was 1.0% (interquartile range [IQR], 0.6%–1.5%) with median audit score of 37 (IQR, 33–42) for knee and hip arthroplasty, and 12.7% (IQR, 9.0%–16.6%), with median audit score 38 (IQR, 35–42) for colorectal surgeries. We observed a wide range of SSI rates and surveillance quality, with discernible clustering for public and private hospitals, and both lower infection rates and audit scores for private hospitals. Infection rates increased with audit scores for knee and hip arthroplasty (P value for the slope = .002), and this was also the case for planned (P = .002), and unplanned (P = .02) colorectal surgeries.
Conclusions:
Surveillance systems without routine evaluation of validity may underestimate the true incidence of SSIs. Audit quality should be taken into account when interpreting SSI rates, perhaps by adjusting infection rates for those hospitals with lower audit scores.
Introduction: The New Brunswick Trauma Registry is a database of injury admissions from eight hospitals throughout the province. The data track individuals in hospital; by linking this information with vital statistics, we can observe outcomes post-discharge and model health outcomes for participants. We wanted to know how outcomes for trauma patients compare with the general population after discharge. Methods: Using data from 2014–15, we followed over 2,100 trauma registry observations for one year and tracked the mortality rate per 1,000 people by age group. We also compared the outcomes of this group to all Discharge Abstract Database (DAD) entries in the province (approximately 7,500 in total). We tracked mortality in hospital, at six months, and at one year after discharge. Ages were grouped as 40–64, 65–84, and 85 or older. Results: In-hospital mortality among those in the trauma registry was approximately 20 per 1,000 people for those aged 40–64, 50 per 1,000 for those aged 65–84, and 150 per 1,000 for those aged 85 or older. For the oldest age group this is in line with the expected population mortality rate; for the younger two groups these estimates are approximately 2–4 times higher than expected mortality. Mortality at six-month follow-up for both of the younger groups remained higher than expected. At one-year follow-up, mortality for the 65–84 age group returned to the expected population baseline but remained higher for those aged 40–64. Injury was the cause of death for nearly 50% of those who died in hospital; after discharge, neoplasms and heart disease were the most common causes of death. Trends from the DAD were similar, with lower mortality overall. Of note, cardiac causes of death accounted for nearly as many deaths in the six months after the injury in the 40–64 age group as the injury itself. Conclusion: Mortality rates remain high for up to a year after discharge for some age groups, and post-discharge causes of death are largely not injury-related.
Some evidence suggests that the injury could have been related to the eventual cause of death (e.g., dementia), but questions remain about whether trauma-mitigating care increases the risk of mortality from comorbidities. For example, cardiac death, which is largely preventable, is a significant cause of death in the 40–64 age group after discharge. Including an assessment of Framingham risk factors as part of the patient's rehabilitation prescription may reduce mortality.
Introduction: Electronic medical records (EMRs) have placed increasing demands on emergency physicians and may contribute to physician burnout and stress. The use of scribes to reduce workload and increase productivity in emergency departments (EDs) has been reported. The objective of this study was to evaluate the educational and experiential value of scribing among medical and undergraduate students. We asked: “Will undergraduates be willing to scribe in exchange for clinical exposure and experience?”; and, “Should scribing be integrated into the medical school curriculum?” Methods: A mixed-methods model was employed. The study population included 5 undergraduate students and 5 medical students. Scribes received technical training on how to take physician notes. Undergraduate students were provided with optional resources to familiarize themselves with common medical terminology. Scribes were assigned to physicians based on availability. An exit survey and semi-structured interviews were conducted at the conclusion of the study. Interviews were transcribed and coded into thematic coding trees. A constructivist grounded theory approach was used to analyze the results. Themes were reviewed and verified by two members of the research team. Results: Undergraduate students preferred volunteering in the ED over other volunteer experiences (5/5); citing direct access to the medical field (5/5), demystification of the medical profession (4/5), resume building (5/5), and perceived value added to the health care team (5/5) as main motivators to continue scribing. Medical students felt scribing should be integrated into their curriculum (4/5) because it complemented their shadowing experience by providing unique value that shadowing did not. Based on survey results, five undergraduate students would be required to cover 40 volunteer hours per week.
Conclusion: A student volunteer model of scribing is worthwhile to students and may be feasible; however, scribe availability, potentially high scribe turnover, and limited time to develop a rapport with their physician may impact any efficiency benefit scribes might provide. Importantly, scribing may be an invaluable experience for directing career goals and ensuring that students intrinsically interested in medicine pursue the profession. Medical students suggested that scribing could be added to the year one curriculum to help them develop a framework for how to take histories and manage patients.
Introduction: Buprenorphine/naloxone (buprenorphine) has proven to be a life-saving intervention amidst the ongoing opioid epidemic in Canada. Research has shown benefits to initiating buprenorphine from the emergency department (ED), including improved treatment retention, systemic health care savings and fewer drug-related visits to the ED. Despite this, there has been little to no uptake of this evidence-based practice in our department. This qualitative study aimed to determine the local barriers and potential solutions to initiating buprenorphine in the ED and to gain an understanding of physician attitudes and behaviours regarding harm reduction care and opioid use disorder management. Methods: ED physicians at a midsize Atlantic hospital were recruited by convenience sampling to participate in semi-structured, privately conducted interviews. Audio recordings were transcribed verbatim and de-identified transcripts were uploaded to NVivo 12 Plus for concept-driven and inductive coding; a hierarchy of open, axial and selective coding was employed. Transcripts were independently reviewed by a local qualitative research expert and themes were compared for similarity to limit bias. Interview saturation was reached after 7 interviews. Results: Emergent themes included a narrow scope of harm reduction care that primarily focused on abstinence-based therapies, and a multitude of biases including feelings of deception, fear of diversion, a sense that buprenorphine induction was too time-consuming for the ED, and differentiating patients with opioid use disorder from ‘medically ill’ patients. Several barriers and proposed solutions to initiating buprenorphine from the ED were elicited, including lack of training and the need for formal education, poor familiarity with buprenorphine, the need for an algorithm and community bridge program, and formal supports such as an addictions consult team for the ED.
Conclusion: This study elicited several opportunities for improved care for patients with addictions presenting to our ED. Future education will focus on harm reduction care, specifically strategies for managing patients desiring to continue to use substances. Education will focus on addressing the multitude of biases elicited and dispelling common myths. A locally informed buprenorphine pathway will be developed. In future, this study may be used to advocate for improved formal supports for our department including an addictions consult team.
Introduction: Vaginal bleeding in early pregnancy is a common emergency department (ED) presentation, with many of these episodes resulting in poor obstetrical outcomes. These outcomes have been extensively studied, but few evaluations have examined which variables predict them. This study aimed to identify predictors of less than optimal obstetrical outcomes for women who present to the ED with early pregnancy bleeding. Methods: A regional centre health records review included pregnant females who presented to the ED with vaginal bleeding at <20 weeks gestation. This study investigated differences in presenting features between groups with subsequent optimal outcomes (OO; defined as a full-term live birth >37 weeks) and less than optimal outcomes (LOO; defined as a miscarriage, stillbirth or pre-term live birth). Predictor variables included: maternal age, gestational age at presentation, number of return ED visits, socioeconomic status (SES), gravida-para-abortus status, Rh status, Hgb level and presence of cramping. Rates and results of point of care ultrasound (PoCUS) and ultrasound (US) by radiology were also considered. Results: Records for 422 patients from Jan 2017 to Nov 2018 were screened and 180 patients were included. Overall, 58.3% of study participants had a LOO. The only strong predictor of outcome was seeing an Intra-Uterine Pregnancy (IUP) with Fetal Heart Beat (FHB) on US; OO rate 74.3% (95% CI 59.8–88.7; p < 0.01). Cramping (with bleeding) trended towards a higher rate of LOO (62.7%, 95% CI 54.2–71.1; p = 0.07). SES was not a reliable predictor of LOO, with similar clinical outcome rates above and below the poverty line (57.5% [95% CI 46.7–68.3] vs 59% [95% CI 49.3–68.6] LOO). For anemic patients, the non-live birth rate was 100%, but the number with this variable was small (n = 5).
Return visits (58.3%, 95% CI 42.2–74.4), previous abortion (58.8%, 95% CI 49.7–67.8), no living children (60.2%, 95% CI 50.7–69.6) and past pregnancy (55.9%, 95% CI 46.6–65.1) were not associated with higher rates of LOO. Conclusion: Identification of a live IUP, anemia, and cramping have potential as predictors of obstetrical outcome in early pregnancy bleeding. This information may provide better guidance for clinical practice and investigations in the emergency department, and the predictive value of these variables supports more appropriate counseling for this patient population.
Introduction: Distal radial fractures (DRFs) remain the most commonly encountered fracture in the Emergency Department (ED). The initial management of displaced DRFs by emergency physicians (EPs) demands considerable resources. We wished to determine the adequacy of reduction, both initially and at follow-up. These data update previously presented high-level findings. Methods: We performed a mixed-methods study including patients who underwent procedural sedation and manipulation by an EP for a DRF. Radiological images performed at initial assessment, post-reduction, and clinic follow-up were reviewed by a panel of orthopedic surgeons and radiologists blinded to outcomes, and assessed for evidence of displacement. Demographic data were pooled from patient records and included in statistical analysis. Results: Seventy patients were included and had follow-up completed. Initial reduction was deemed adequate in 37 patients (53%; 95% CI 41.32 to 64.10%). At clinic follow-up assessment, 26 reductions remained adequate, a slippage rate of 30% (95% CI 17.37 to 45.90%). Overall, 7 patients (10%; 95% CI 4.65 to 19.51%) required revision of the initial reduction in the operating room. Agreement on adequacy of reduction on post-reduction radiographs between radiologists and orthopedic surgeons was 38.6% (Kappa -0.229; 95% CI -38.3 to -7.4). The statistical strength of this agreement is worse than would be expected by chance alone. There was no association found between age, sex, or time of initial presentation and final outcomes. Conclusion: Although blinded review by specialists determined only half of initial EP DRF reductions to be radiographically adequate, only 10% actually required further intervention. Agreement between specialists on adequacy was poor. The majority of DRFs reduced by EPs do not require further surgical intervention.
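The reported 38.6% raw agreement and kappa of -0.229 over the 70 patients can be reproduced from a 2×2 rater-agreement table. The sketch below shows the standard Cohen's kappa computation; the cell counts are a hypothetical table chosen to be consistent with those summary figures, not the study's actual data.

```python
import numpy as np

def cohens_kappa(table):
    """Observed agreement and Cohen's kappa for a square rater-agreement table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                              # raw agreement
    p_exp = (table.sum(axis=0) @ table.sum(axis=1)) / n**2   # chance agreement
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# hypothetical 2x2 table over 70 patients: rows = orthopedic surgeon,
# columns = radiologist, each rating the reduction adequate/inadequate;
# counts are invented but reproduce the reported summary figures
table = [[10, 25],
         [18, 17]]
p_obs, kappa = cohens_kappa(table)   # p_obs ~ 0.386, kappa ~ -0.229
```

A kappa below zero, as here, means the two specialist groups agreed less often than independent random raters with the same marginal rates would, which is what the abstract's "worse than chance" remark expresses.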
Introduction: Determining fluid status prior to resuscitation provides a more accurate guide for appropriate fluid administration in the setting of undifferentiated hypotension. Emergency department (ED) point of care ultrasound (PoCUS) has been proposed as a non-invasive, rapid, repeatable investigation to ascertain inferior vena cava (IVC) characteristics. Our goal was to determine the feasibility of using PoCUS to measure IVC size and collapsibility. Methods: This was a planned secondary analysis of data from a prospective multicentre international study investigating PoCUS in ED patients with undifferentiated hypotension. We prospectively collected data on IVC size and collapsibility using a standard data collection form in 6 centres. The primary outcome was the proportion of patients with a clinically useful (determinate) scan, defined as a clearly visible intrahepatic IVC, measurable for size and collapse. Descriptive statistics are provided. Results: A total of 138 scans were attempted on 138 patients; 45.7% were women and the median age was 58 years. Overall, 129 scans (93.5%; 95% CI 87.9 to 96.7%) were determinate: 131 (94.9%; 89.7 to 97.7%) were determinate for IVC size, and 131 (94.9%; 89.7 to 97.7%) were determinate for collapsibility. Conclusion: In this analysis of 138 ED patients with undifferentiated hypotension, the vast majority of PoCUS scans to investigate IVC characteristics were determinate. Future work should include analysis of the value of IVC size and collapsibility in determining fluid status in this group.
Introduction: Crowding is associated with poor patient outcomes in emergency departments (EDs). Measures of crowding are often complex and resource-intensive to score and use in real time. We evaluated whether single, easily obtained variables could establish the presence of crowding as well as more complex crowding scores. Methods: Serial observations of patient flow were recorded in a tertiary Canadian ED. Single variables evaluated included the total number of patients in the ED (census), in beds, in the waiting room, and in the treatment area waiting to be assessed, as well as total inpatient admissions. These were compared with crowding scores (NEDOCS, EDWIN, ICMED, and three regional hospital modifications of NEDOCS) as predictors of crowding. Predictive validity was compared against the reference standard of physician perception of crowding, using receiver operating characteristic (ROC) curve analysis. Results: 144 of 169 potential events were recorded over 2 weeks. Crowding was present in 63.9% of the events. ED census (total number of patients in the ED) was strongly correlated with crowding (AUC = 0.82, 95% CI 0.76–0.89), and its performance was similar to that of NEDOCS (AUC = 0.80, 95% CI 0.76–0.90) and a more complex local modification of NEDOCS, the S-SAT (AUC = 0.83, 95% CI 0.74–0.89). Conclusion: The single indicator ED census was as predictive of the presence of crowding as more complex crowding scores. A two-stage approach to crowding intervention is proposed: first identify crowding with a real-time ED census statistic, then investigate precipitating and modifiable factors. Real-time signalling may permit more standardized and effective approaches to managing ED flow.
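The AUC figures above compare how well each candidate measure ranks crowded against non-crowded events. The AUC is equivalent to the Mann-Whitney probability that a randomly chosen crowded observation outranks a non-crowded one, which the sketch below computes directly; the census values and crowding labels are made up for illustration, not the study data.

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive (crowded) observation scores higher than a
    randomly chosen negative one, counting ties as half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# made-up observations: ED census at each event and whether the attending
# physician judged the department crowded (illustrative only)
census  = [12, 18, 25, 31, 22, 40, 15, 28]
crowded = [ 0,  0,  1,  0,  1,  1,  0,  1]
a = auc(crowded, census)   # 13 of 16 pos/neg pairs ranked correctly -> 0.8125
```

This pairwise formulation makes clear why a single monotone variable like census can match a composite score: only the ranking of events matters, not the scale of the measure.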
Introduction: Patients presenting to the emergency department (ED) with hypotension have a high mortality rate and require careful yet rapid resuscitation. The use of cardiac point of care ultrasound (PoCUS) in the ED has progressed beyond the basic indications of detecting pericardial fluid and cardiac activity in cardiac arrest. We examined whether finding left ventricular dysfunction (LVD) on emergency physician-performed PoCUS reliably predicts the presence of cardiogenic shock in hypotensive ED patients. Methods: We prospectively collected PoCUS findings in 135 ED patients with undifferentiated hypotension as part of an international study. Patients with a clearly identified etiology for hypotension, or other specific presumptive diagnoses, were excluded. LVD was defined as identification of a generally hypodynamic LV in the setting of shock. PoCUS findings were collected using a standardized protocol and data collection form. All scans were performed by PoCUS-trained emergency physicians. Final shock type was classified as cardiogenic or non-cardiogenic by independent, blinded specialist chart review. Results: All 135 patients had complete follow-up. Median age was 56 years; 53% of patients were male. Disease prevalence for cardiogenic shock was 12% and the mortality rate was 24%. The presence of LVD on PoCUS had a sensitivity of 62.50% (95% CI 35.43% to 84.80%), specificity of 94.12% (88.26% to 97.60%), positive LR of 10.62 (4.71 to 23.95), negative LR of 0.40 (0.21 to 0.75), and accuracy of 90.37% (84.10% to 94.77%) for detecting cardiogenic shock. Conclusion: Detecting left ventricular dysfunction on PoCUS in the ED may be useful in confirming the underlying shock type as cardiogenic in otherwise undifferentiated hypotensive patients.
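The reported sensitivity, specificity, likelihood ratios, and accuracy are mutually consistent with a 2×2 table of about 10 true positives, 6 false negatives, 7 false positives, and 112 true negatives (16 cardiogenic and 119 non-cardiogenic patients of 135). The sketch below shows how those metrics follow from such counts; the cell counts are back-calculated from the abstract's figures, not taken from the study data.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic-test metrics from the four cells of a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "lr_pos": sens / (1 - spec),      # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,      # negative likelihood ratio
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

# cell counts inferred from the abstract (not the study's raw data)
m = diagnostic_metrics(tp=10, fn=6, fp=7, tn=112)
# sensitivity 62.5%, specificity ~94.1%, LR+ ~10.6, LR- ~0.40, accuracy ~90.4%
```

The high positive likelihood ratio with a modest negative likelihood ratio matches the abstract's conclusion: a visibly hypodynamic LV strongly supports cardiogenic shock, while its absence only weakly rules it out.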
We consider the life cycle of an axisymmetric laminar thermal starting from the initial condition of a Gaussian buoyant blob. We find that, as time progresses, the thermal transitions through a number of distinct stages, undergoing several morphological changes before ending up as a vortex ring. Whilst each stage is interesting in its own right, one objective of this study is to set out a consistent mathematical framework under which the entire life cycle can be studied. This allows examination of the transition between the different stages, as well as shedding light on some unsolved questions from previous works. We find that the early stages of formation are key in determining the properties of the final buoyant vortex ring and that, since they occur on a time scale where viscosity has little effect, the final properties of the ring display an independence above a critical Reynolds number. We also find that rings consistently contain the same proportion of the initial heat and have a consistent vorticity flux. By considering the effect of Prandtl number, we show that thermal diffusion can have a significant impact on development, smoothing out the temperature field and inhibiting the generation of vorticity. Finally, by considering the wake left behind as well as the vortex ring that is generated, we observe that the wake can itself roll up to form a second mushroom cap and subsequently a secondary vortex ring that follows the first.
Introduction: Bleeding in the first trimester of pregnancy is a common presentation to the emergency department (ED), with half of such patients going on to miscarry. Currently there is no local consensus on key quality markers of care for such cases. Point of care ultrasound (PoCUS) is increasingly utilized in the ED to detect life-threatening pathology, such as an ectopic pregnancy, and to assess fetal viability. PoCUS leads to improved patient satisfaction and quicker diagnosis and treatment. The purpose of this study was to examine the rates of formal ultrasound and PoCUS compared with reported and recommended rates, and to understand the use of other diagnostic tests. Methods: A retrospective cohort study of pregnant females presenting to the ED with first trimester bleeding over one year (June 2016 – June 2017) was completed. A sample size of 108 patients was required to detect a moderate departure from baseline reported rates (67.8–77.6%). The primary outcome was the PoCUS rate in the ED; the main secondary outcome was the formal ultrasound rate. The literature recommends PoCUS for all early pregnancy bleeding in the ED, with a target of 100% of patients receiving PoCUS. Additional data recorded included the live birth rate, pelvic and speculum examination rates, and lab tests. There is no clearly defined ideal practice for these additional data, so their rates were recorded without comparison. Results: Records of 168 patients were screened for inclusion. Sixty-five cases were excluded because the patient was not pregnant, had a confirmed miscarriage, or for other reasons, leaving a total of 103 patients in the analysis. The PoCUS rate was 51.5% (95% CI 42%–61%), lower than previously reported PoCUS rates of 73% (67.8–77.6%). The formal ultrasound rate was 67% (57%–75%). Both approaches were significantly lower than the recommended rate of 100% (95.7–100%). Rates for other key markers of care will also be presented.
Conclusion: Fewer PoCUS exams were performed at our centre compared with reported and recommended rates for ultrasound. Further results will explore our current practice in the management of first trimester pregnancy complications. We plan to use this information to suggest improvements in the management of this patient population.
Introduction: There is currently no protocol for the initiation of extracorporeal cardiopulmonary resuscitation (ECPR) in out-of-hospital cardiac arrest (OHCA) in Atlantic Canada. Advanced care paramedics (ACPs) perform advanced cardiac life support in the prehospital setting, often completing the entire resuscitation on scene. Implementation of ECPR will present a novel intervention that is only available at the receiving hospital, altering how ACPs manage selected patients. Our objective was to determine whether an educational program can improve paramedic identification of ECPR candidates. Methods: An educational program was delivered to paramedics, including a short seminar and pocket card coupled with simulations of OHCA cases. A before-and-after study design using a case-based survey was employed. Paramedics were scored on their ability to correctly identify OHCA patients who met the inclusion criteria for our ECPR protocol. Scores before and after the education delivery were compared using a two-tailed t-test. A 6-month follow-up is planned to assess knowledge retention. Qualitative data were also collected from paramedics during simulation to help identify potential barriers to implementation of our protocol in the prehospital setting. Results: Nine advanced care paramedics participated in our educational program. The mean score was 9.7/16 (61.1%) before education, compared with 14/16 (87.5%) after education delivery; the mean difference was 4.22 (95% CI 2.65–5.80, p = 0.0003). There was a significant improvement in the paramedics' ability to correctly identify ECPR candidates after completing our educational program. Conclusion: Paramedic training through a didactic session coupled with a pocket card and simulation appears to be a feasible method of knowledge translation. Six-month retention data will help confirm that knowledge retention is achieved.
If successful, this pilot will be expanded to train all paramedics in our prehospital system as we seek to implement an ECPR protocol at our centre.
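The before/after comparison above is a paired t-test on per-participant scores. A minimal sketch of that computation follows, using hypothetical scores out of 16 for nine paramedics (not the study's raw data, which the abstract does not report):

```python
import math

def paired_t(pre, post):
    """Mean difference, t statistic, and degrees of freedom for a paired t-test."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)                         # t on n-1 df
    return mean, t, n - 1

# hypothetical per-paramedic scores out of 16 (illustrative only)
pre  = [8, 10, 9, 11, 10, 9, 10, 9, 11]
post = [13, 14, 14, 15, 14, 13, 15, 14, 14]
mean_diff, t, df = paired_t(pre, post)
```

The two-tailed p-value then comes from the t distribution with `df` degrees of freedom; pairing within paramedics is what makes the small n = 9 sample able to detect the improvement.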