A growing number of publications focus on estimating Gaussian graphical models (GGM, networks of partial correlation coefficients). At the same time, the generalizability and replicability of these highly parameterized models are debated, and the sample sizes typically found in datasets may not be sufficient for estimating the underlying network structure. In addition, while recent work has emerged that aims to compare networks based on different samples, these studies do not take potential cross-study heterogeneity into account. To this end, this paper introduces methods for estimating GGMs by aggregating over multiple datasets. We first introduce a general maximum likelihood estimation modeling framework in which all discussed models are embedded. This modeling framework is subsequently used to introduce meta-analytic Gaussian network aggregation (MAGNA). We discuss two variants: fixed-effects MAGNA, in which heterogeneity across studies is not taken into account, and random-effects MAGNA, which models sample correlations and takes heterogeneity into account. We assess the performance of MAGNA in large-scale simulation studies. Finally, we exemplify the method using four datasets of post-traumatic stress disorder (PTSD) symptoms, and summarize findings from a larger meta-analysis of PTSD symptoms.
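As a rough illustration of the object MAGNA aggregates, the sketch below (not the authors' implementation) computes a partial correlation network by standardizing the inverse of a correlation matrix, and pools two hypothetical study correlation matrices by sample size in the spirit of a fixed-effects analysis; all matrices and sample sizes are invented.

```python
import numpy as np

def partial_correlation_network(R):
    """Turn a correlation matrix R into a partial correlation network
    (the edge weights of a GGM) by standardizing the precision matrix
    K = R^{-1}: pcor_ij = -K_ij / sqrt(K_ii * K_jj) for i != j."""
    K = np.linalg.inv(R)
    d = np.sqrt(np.diag(K))
    pcor = -K / np.outer(d, d)
    np.fill_diagonal(pcor, 0.0)  # no self-edges
    return pcor

# Invented correlation matrices and sample sizes for two "studies";
# sample-size weighting is a crude stand-in for the paper's maximum
# likelihood framework.
R1 = np.array([[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]])
R2 = np.array([[1.0, 0.4, 0.2], [0.4, 1.0, 0.5], [0.2, 0.5, 1.0]])
n1, n2 = 300, 500
R_pooled = (n1 * R1 + n2 * R2) / (n1 + n2)
print(partial_correlation_network(R_pooled))
```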
Learning the meaning of a word is a difficult task due to the variety of possible referents present in the environment. Visual cues such as gestures frequently accompany speech and have the potential to reduce referential uncertainty and promote learning, but the dynamics of pointing cues and speech integration are not yet known. If word learning is influenced by when, as well as whether, a learner is directed correctly to a target, then this would suggest temporal integration of visual and speech information can affect the strength of association of word–referent mappings. Across two pre-registered studies, we tested the conditions under which pointing cues promote learning. In a cross-situational word learning paradigm, we showed that the benefit of a pointing cue was greatest when the cue preceded the speech label, rather than following the label (Study 1). In an eye-tracking study (Study 2), the early cue advantage was due to participants’ attention being directed to the referent during label utterance, and this advantage was apparent even at initial exposures of word–referent pairs. Pointing cues promote time-coupled integration of visual and auditory information that aids encoding of word–referent pairs, demonstrating the cognitive benefits of pointing cues occurring prior to speech.
Gaming disorder has become a global concern, and it can have a variety of health and social consequences. The trauma model has been applied to the understanding of different types of addiction, as behavioral addictions can sometimes be conceptualized as self-soothing strategies to avoid trauma-related stressors or triggers. However, much less is known about the relationship between trauma exposure and gaming disorder.
Objectives
To inform prevention and intervention strategies and to facilitate further research, we conducted the first scoping review to explore and summarize the literature on the relationship between trauma and gaming disorder.
Methods
A systematic search was conducted on the Web of Science, Scopus and ProQuest. We looked for original studies published in English that included a measure of trauma exposure and a measure of gaming disorder symptoms, as well as quantitative data regarding the relationship between trauma exposure and gaming disorder.
Results
The initial search generated 412 articles, of which 15 met the inclusion criteria. All of them were cross-sectional studies, recruiting participants from both clinical and non-clinical populations. Twelve of them (80%) reported significant correlations between trauma exposure and the severity of gaming disorder symptoms (r = 0.18 to 0.46, p < 0.010). Several potential mediators, including depressive symptoms and dissociative experiences, have been identified. One study found that parental monitoring moderated the relationship between trauma and gaming disorder symptoms. No studies reported the prevalence of trauma or trauma-related symptoms among people with gaming disorder.
Conclusions
There is some evidence supporting the association between trauma and gaming disorder, at small to medium effect sizes. Future studies should investigate the mediators and moderators underlying the relationship between trauma and gaming disorder. The longitudinal relationship between trauma exposure and the development of gaming disorder should be clarified. A trauma-informed approach may be a helpful strategy to alleviate gaming disorder symptoms.
There are various ways people cope with life events. A generalized expectation of positive (or negative) outcomes across various life domains is termed dispositional optimism (or pessimism). This disposition can be explained by attribution theory: how people explain past events, their causes, and their outcomes. Understanding attribution styles is important to help people reframe current circumstances and improve mental wellbeing. Our hypothesis is that people of different religious groups may exhibit various levels of optimism and pessimism based on their values, teachings, and practices. Previous research has found that people of Christian faith, or those with a religious faith in general, look to their religion as a way of coping during life adversities. Certain religious practices such as prayer and church gatherings have been found to improve mental health by increasing dispositional optimism. While the relationship between religiosity and mental health has been examined in different religious populations, few studies have compared this relationship across religions.
Objectives
The objective of this scoping review is to understand the link between religiosity and mental health, focusing primarily on how people of the Christian religion demonstrate dispositional optimism or pessimism when coping with adverse life events, compared to other religious groups or atheists.
Methods
This scoping review included original peer-reviewed articles that studied mental health, in terms of dispositional optimism or pessimism, in people of the Christian religion compared to other religious groups. The review searched the online databases Ovid MEDLINE and PsycInfo and used extraction tables to analyze the results of past research.
Results
The results of this scoping review revealed that people of the Christian religion, especially those high in religiosity, use their religion as a method of coping. This population also showed higher dispositional optimism compared to atheists or adherents of other religions. However, when compared to some other religious groups, such as Buddhists and Muslims, Christian populations showed lower dispositional optimism.
Conclusions
It is evident that religious involvement is linked to aspects of mental health, but comparing the effects of different religions remains a topic for further investigation, to allow a deeper understanding of their similarities and differences, as well as the mechanisms by which religion can affect mental health. This review revealed a gap in the body of knowledge regarding the relationship between religion and pessimism. Future research could examine whether dispositional pessimism varies across religious groups, as it does not necessarily have a perfectly inverse relationship with optimism.
This study investigated the association between early extubation (EE) and the degree of postoperative intensive care unit (ICU) support after the Fontan procedure, specifically evaluating the volume of postoperative intravenous fluid (IVF) and vasoactive-inotropic score (VIS).
Methods:
A retrospective analysis of patients who underwent Fontan palliation from 2008 to 2018 at a single center was completed. Patients were initially divided into cohorts before (control) and after (modern) an institutional initiative towards EE. Differences between the cohorts were assessed using t-tests, Wilcoxon tests, or chi-square tests. Following stratification by early or late extubation, the four resulting groups were compared via ANOVA or the Kruskal–Wallis test.
Results:
There was a significant difference in the rate of EE between the control and modern cohorts (42.6% versus 75.7%, p = 0.01). The modern cohort demonstrated a lower median VIS (5 versus 8, p = 0.002) but higher total mean IVF (101 ± 42 versus 82 ± 27 cc/kg, p < 0.001) versus the control cohort. Late extubated (LE) patients in the modern cohort had the highest VIS and IVF requirements. This group received 67% more IVF (140 ± 53 versus 84 ± 26 cc/kg, p < 0.001) and had a higher median VIS at 24 hours (10 (IQR 5–10) versus 4 (IQR 2–7), p < 0.001) versus all other groups. In comparison, all EE patients had a 5-point lower median VIS when compared to LE patients (3 versus 8, p = 0.001).
Conclusions:
EE following the Fontan procedure is associated with a reduced postoperative VIS. LE patients in the modern cohort received more IVF, potentially identifying a high-risk subgroup of Fontan patients deserving of further investigation.
Cardiac involvement associated with multi-system inflammatory syndrome in children has been extensively reported, but the prevalence of cardiac involvement in children with SARS-CoV-2 infection in the absence of inflammatory syndrome has not been well described. In this retrospective, single-centre cohort study, we describe the cardiac involvement found in this population and report on outcomes of patients with and without elevated cardiac biomarkers. Those with multi-system inflammatory syndrome in children, cardiomyopathy, or complex CHD were excluded. Inclusion criteria were met by 80 patients during the initial peak of the pandemic at our institution. High-sensitivity troponin T and/or N-terminal pro-brain type natriuretic peptide were measured in 27/80 (34%) patients, and abnormalities were present in 5/27 (19%), all of whom had underlying comorbidities. Advanced respiratory support was required in all patients with elevated cardiac biomarkers. Electrocardiographic abnormalities were identified in 14/38 (37%) studies. Echocardiograms were performed on 7/80 patients, and none demonstrated left ventricular dysfunction. Larger studies to determine the true extent of cardiac involvement in children with COVID-19 would be useful to guide recommendations for standard workup and management.
Introduction: In June 2019, The Ottawa Hospital launched the Epic EHR system, which transitioned all departments from a primarily paper-based system to an exclusively electronic system using a one-day “big bang” approach. All Emergency Physicians (EPs) received online module training, personalization sessions, and at-the-elbow support during the transition. We sought to evaluate EP satisfaction with the implementation process and the system's impact on clinical practice in a tertiary care academic emergency medicine setting. Methods: Email surveys were distributed during the pre-implementation and go-live phases. Questions were developed by the research team and piloted for face validity and clarity. Surveys were sent to staff EPs, residents, and fellows. Likert scales were used to evaluate agreement with statements, and the modified Maslach Burnout Inventory was used to assess burnout. Pre-post groups were compared using chi-squared tests to assess for significant differences. Future surveys will be distributed in 2020 for continued implementation evaluation. Results: Response rates were 49% (78/160) in the pre period and 48% (76/160) in the post period. The majority of respondents were staff (66% pre; 75% post) working 8-15 shifts/month. Prior to launch, 52% of EPs felt the pre-training modules provided sufficient preparation; however, only 32% felt this way after go-live (p = 0.02). Providers did not feel there were enough personalization sessions (21% pre vs. 24% post, p = 0.66) or hands-on sessions (51% pre vs. 39% post, p = 0.15) offered, and this opinion did not change after go-live. Before Epic, EPs were most concerned with productivity/efficiency, documentation time, and lack of support/training. Although documentation was reported to be easier after go-live by 69% of EPs, reviewing documents, using standardized workups/protocols, patient monitoring/follow-up, efficiency, and billing were reported by >50% of EPs to be more difficult. Overall, there was a 22% increase in feeling confident using Epic (28% pre vs. 50% post, p < 0.01); however, only 38% of providers were satisfied with the system. Notably, 82% of EPs reported experiencing moderate or high burnout in the post-implementation period. Conclusion: Despite receiving standard EHR training and support, the majority of clinicians did not feel adequately trained or confident using Epic and reported moderate to high burnout. These findings will inform optimization efforts and represent key considerations for other EDs planning future implementations.
Introduction: Point-of-care ultrasound (POCUS) has become standard practice in emergency departments ranging from remote rural hospitals to well-resourced academic centres. To facilitate quality assurance, the Canadian Association of Emergency Physicians (CAEP) recommends image archiving. Due in part to poor infrastructure and the lack of a national standard, however, archiving remains uncommon. Our objective was to establish a minimum standard archiving protocol for the core emergency department POCUS indications. Methods: A list of potential archiving standards was generated through an extensive literature review. An online, three-round, modified Delphi survey was conducted with the thirteen POCUS experts on the national CAEP Emergency Ultrasound Committee, who were tasked with representing diverse practice locations and experiences. Participants were surveyed to determine the images or clips, measurements, mode, and number of views that should comprise the minimum standard for archiving. Consensus was pre-defined as 80% agreement. Results: All thirteen experts participated fully in the three rounds. In establishing minimum image archiving standards for emergency department POCUS, complete consensus was achieved for first trimester pregnancy, hydronephrosis, cardiac activity versus standstill, lower extremity deep venous thrombosis, and ultrasound-guided central line placement. Consensus was achieved for the majority of statements regarding abdominal aortic aneurysm, extended focused assessment with sonography in trauma, pericardial effusion, left and right ventricular function, thoracic B-line assessment, and cholelithiasis and cholecystitis scans. In total, consensus was reached for 58 of 69 statements (84.1%). This included agreement on 41 of 43 statements (95.3%) describing mandatory images for archiving in the above indications. Conclusion: Our modified Delphi-derived consensus represents the first national standard for archiving requirements for emergency department POCUS. Depending on the clinical context, additional images may be required beyond this minimum standard to support a diagnosis.
Introduction: In 2018, Canadian postgraduate specialist Emergency Medicine (EM) programs began implementing a competency-based medical education (CBME) assessment system. To support improvement of this assessment program, we sought to evaluate its short-term educational outcomes nationally and within individual programs. Methods: Program-level data from the 2018 resident cohort were amalgamated and analyzed. The number of Entrustable Professional Activity (EPA) assessments (overall and for each EPA) and the timing of resident promotion through program stages were compared between programs and to the guidelines provided by the national EM specialty committee. Total EPA observations from each program were correlated with the number of EM and pediatric EM rotations. Results: Data from 15 of 17 (88.2%) EM programs, containing 9,842 EPA observations from 68 of the 77 (88.3%) Canadian EM specialist residents in the 2018 cohort, were analyzed. The average number of EPAs observed per resident in each program varied from 92.5 to 229.6 and correlated strongly with the number of blocks spent on EM and pediatric EM (r = 0.83, p < 0.001). Relative to the guidelines outlined by the specialty committee, residents were promoted later than expected and with fewer EPA observations than suggested. Conclusion: We present a new approach to the amalgamation of national and program-level assessment data. There was demonstrable variation in both EPA-based assessment numbers and promotion timelines between programs and relative to national guidelines. These evaluation data will inform the revision of local programs and national guidelines and serve as a starting point for further-reaching outcome evaluation. This process could be replicated by other national assessment programs.
Introduction: Cricothyrotomy is an intervention performed to salvage “can't intubate, can't ventilate” situations. Studies of surgeons and anesthesiologists have shown poor accuracy in landmarking the cricothyroid membrane, particularly in female patients. There are fewer data available about emergency physician performance. This study examines the perceived versus actual success rate of landmarking the cricothyroid membrane by resident and staff emergency physicians using obese and non-obese models. Methods: Five male and female volunteers were selected as models. Each model was placed supine, and a point-of-care ultrasound expert landmarked the borders of each cricothyroid membrane. Twenty residents and 15 staff emergency physicians were given one attempt to landmark each of the five models. Data were gathered on each participant's perceived likelihood of success and attempt difficulty. Overall accuracy and accuracy stratified by sex and obesity status were calculated. Results: Overall landmarking accuracy amongst all participants was 58% (SD 18%). A difference in accuracy was found for obese males (88%) versus obese females (40%) (difference = 48%, 95% CI 30–65%, p < 0.0001), and for non-obese males (77%) versus non-obese females (46%) (difference = 31%, 95% CI 12–51%, p = 0.004). There was no association between perceived difficulty and success (correlation = 0.07, 95% CI −0.081 to 0.214, p = 0.37). Confidence levels overall were higher amongst staff physicians (3.0) than residents (2.7) (difference = 0.3, 95% CI 0.1–0.6, p = 0.02), but there was no correlation between confidence in an attempt and its success (p = 0.33). Conclusion: We found that physicians demonstrate significantly lower accuracy when landmarking the cricothyroid membranes of females. Emergency physicians were unable to predict their own accuracy while landmarking, which can potentially lead to more failed attempts and a longer time to secure the airway. Improved training techniques and a modified approach to cricothyrotomy may reduce failed attempts and improve the time to secure the airway.
Introduction: Workplace-based assessments (WBAs) are integral to emergency medicine residency training. However, many biases undermine their validity, such as an assessor's personal inclination to rate learners leniently or stringently. Outlier assessors produce assessment data that may not reflect the learner's performance. Our emergency department introduced a new Daily Encounter Card (DEC) using entrustability scales in June 2018. Entrustability scales reflect the degree of supervision required for a given task and have been shown to improve assessment reliability and discrimination. It is unclear what effect they have on assessor stringency/leniency; we hypothesized that they would reduce the number of outlier assessors. We propose a novel, simple method to identify outlying assessors in the setting of WBAs. We also examine the effect of transitioning from a norm-based assessment to an entrustability scale on the population of outlier assessors. Methods: This was a prospective pre-/post-implementation study, including all DECs completed between July 2017 and June 2019 at The Ottawa Hospital Emergency Department. For each phase, we identified outlier assessors as follows: 1. An assessor is a potential outlier if the mean of the scores they awarded was more than two standard deviations away from the mean score of all completed assessments. 2. For each assessor identified in step 1, their learners' assessment scores were compared to the overall mean of all learners. This ensures that the assessor was not simply awarding outlying scores due to working with outlier learners. Results: 3927 and 3860 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases respectively. We identified 9 vs 5 outlier assessors (p = 0.16) in the pre- and post-implementation phases. Of these, 6 vs 0 (p = 0.01) were stringent, while 3 vs 5 (p = 0.67) were lenient. One assessor was identified as an outlier (lenient) in both phases. Conclusion: Our proposed method successfully identified outlier assessors and could be used to identify assessors who might benefit from targeted coaching and feedback on their assessments. The transition to an entrustability scale resulted in a non-significant trend towards fewer outlier assessors. Further work is needed to identify ways to mitigate the effects of rater cognitive biases.
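A minimal sketch of step 1 of the screening method described above, with step 2 indicated as a comment; the column names and the choice of the overall score SD as the reference spread are assumptions, since the abstract leaves the data layout implicit.

```python
import pandas as pd

def flag_outlier_assessors(df, n_sd=2.0):
    """Step 1: flag assessors whose mean awarded score lies more than
    n_sd standard deviations from the mean of all completed assessments.
    The SD of all assessment scores is used as the reference spread
    (an assumption; the abstract does not specify it)."""
    overall_mean = df["score"].mean()
    overall_sd = df["score"].std()
    assessor_means = df.groupby("assessor")["score"].mean()
    z = (assessor_means - overall_mean) / overall_sd
    # Step 2 (not shown): confirm that each flagged assessor's learners
    # score near the overall mean with *other* assessors, ruling out
    # outlier learners as the explanation.
    return z[z.abs() > n_sd].sort_values()

# Hypothetical usage with a long-format table of DECs:
# flagged = flag_outlier_assessors(pd.read_csv("decs.csv"))
```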
Introduction: A critical component of successful implementation of any innovation is an organization's readiness for change. Competence by Design (CBD) is the Royal College's major change initiative to reform the training of medical specialists in Canada. The purpose of this study was to measure readiness to implement CBD among the 2019 launch disciplines. Methods: An online survey was distributed to program directors of the 2019 CBD launch disciplines one month prior to implementation. Questions were developed based on the R = MC² framework for organizational readiness. They addressed program motivation to implement CBD, general capacity for change, and innovation-specific capacity. Questions related to motivation and general capacity were scored using a 5-point scale of agreement. Innovation-specific capacity was measured by asking participants whether they had completed 33 key pre-implementation tasks (yes/no) in preparation for CBD. Bivariate correlations were conducted to examine the relationships between motivation, general capacity, and innovation-specific capacity. Results: The survey response rate was 42% (n = 79). A positive correlation was found between all three domains of readiness (motivation and general capacity, r = 0.73, p < 0.01; motivation and innovation-specific capacity, r = 0.52, p < 0.01; general capacity and innovation-specific capacity, r = 0.47, p < 0.01). Most respondents agreed that a successful launch of CBD was a priority (74%). Fewer felt that CBD was a move in the right direction (58%) or that implementation was a manageable change (53%). While most programs indicated that their leadership (94%) and faculty and residents (87%) were supportive of change, 42% did not have experience implementing large-scale innovation and 43% indicated concerns about adequate support staff. Programs had completed an average of 72% of pre-implementation tasks. No difference was found between disciplines (p = 0.11). Activities related to curriculum mapping, competence committees, and programmatic assessment had been completed by >90% of programs, while <50% of programs had engaged off-service rotations. Conclusion: Measuring readiness for change aids in the identification of factors that promote or inhibit successful implementation. These results highlight several areas where programs struggle in preparation for CBD launch. Emergency medicine training programs can use these data to target additional implementation support and ongoing faculty development initiatives.
Introduction: Biliary colic is a frequent cause of emergency department visits. Ultrasound is the initial test of choice for gallstone disease. We evaluated the effectiveness of a brief online educational module aimed at improving Emergency Physicians' (EP) and General Surgeons' (GS) accuracy in interpreting gallbladder ultrasound. Methods: EPs and GSs (residents/fellows and attendings) from a single academic tertiary care hospital were invited to participate in a pre- and post-assessment of the interpretation of gallbladder ultrasound. Demographic information was obtained in a standardized survey. All questions developed for the pre- and post-assessment were reviewed for content and clarity by 3 EP and GS experts. Participants were asked 22 multiple-choice questions and then directed to a 7-minute video tutorial on gallbladder ultrasound interpretation. After a 3-week period, participants completed a post-intervention assessment. Following the pre- and post-assessments, participants were surveyed on their confidence in gallbladder ultrasound interpretation. Data were analyzed using descriptive statistics and paired t-tests. Results: The overall response rate was 50.9% (116/228) for the pre-intervention and 40.8% (93/228) for the post-intervention assessment. In the pre-intervention assessment, 27.7% of participants reported they were “not at all confident” in interpreting gallbladder ultrasound. This contrasted with the post-intervention confidence level, where only a minority (7.8%) reported being “not at all confident”. There was a significant increase in correct interpretations from the pre- to post-intervention assessment (75.7% to 85.4%; p < 0.01). The greatest improvement was seen in those with previous experience interpreting gallbladder ultrasound (from 79.6% to 91.1%; p < 0.01). EPs scored significantly higher than GSs in the pre-intervention assessment (78.2% vs. 71.0%; p < 0.01). This trend was also observed post-intervention, although the difference was no longer significant (88.9% vs. 82.8%; p = 0.05). There was no significant difference in performance between residents/fellows and attendings. Conclusion: This brief online intervention improved the accuracy of EPs' and GSs' interpretation of gallbladder ultrasound. It is an easily accessible tutorial that can be used as part of a comprehensive ultrasound educational program. Further studies are required to determine whether EPs' and GSs' interpretation of gallbladder ultrasound impacts patient-oriented outcomes.
Introduction: Little is known about how Royal College emergency medicine (RCEM) residency programs select their residents. This creates uncertainty regarding alignment between current selection processes and known best practices, and results in a process that is difficult to navigate for prospective candidates. We sought to describe the current selection processes of Canadian RCEM programs. Methods: An online survey was distributed to all RCEM program directors and assistant directors. The survey instrument included 22 questions consisting of both open-ended (free text) and closed-ended (Likert scale) elements. Questions sought qualitative and quantitative data from the following six domains: paper application, letters of reference, elective selection, interview, rank order, and selection process evaluation. Descriptive statistics were used. Results: We received responses from 13/14 programs, for an aggregate response rate of 92.9%. A candidate's letter of reference was identified as the single most important item in the paper application (38.5%). A high level of familiarity with the applicant was considered the most important characteristic of a reference letter author (46.2%). Respondents found that a percentile rank of the applicant was useful when reviewing candidate reference letters. Once the interview stage was reached, 76.9% of programs stated that the interview was weighted at least as heavily as the paper application; 53.8% weighted the interview more heavily. Once final candidate scores were established for both the paper application and the interview, 100% of programs indicated that further adjustment was made to the rank order list. Only 1/13 programs reported ever having completed a formal evaluation of their selection process. Conclusion: The information gained from this study helps to characterize the landscape of the RCEM residency selection process. We identified significant heterogeneity between programs with respect to which application elements were most valued. Canadian emergency medicine residency programs should re-evaluate their selection processes to achieve improved consistency and better alignment with selection best practices.
Introduction: The Emergency Medicine Specialty Committee of the Royal College of Physicians and Surgeons of Canada (RCPSC) has specified that resuscitation Entrustable Professional Activities (EPAs) can be assessed in either the workplace or simulation environments; however, there is minimal evidence that clinical performance correlates across these environments. We sought to determine the relationship between assessments in the workplace versus simulation environments among junior emergency medicine residents. Methods: We conducted a prospective observational study to compare workplace and simulation resuscitation performance among all first-year residents (n = 9) enrolled in the RCPSC-Emergency Medicine program at the University of Ottawa. All scores from Foundations EPA #1 (F1) were collected during the 2018-2019 academic year; this EPA focuses on initiating and assisting in the resuscitation of critically ill patients. Workplace performance was assessed by clinical supervisors through direct observation during clinical shifts. Simulation performance was assessed by trained simulation educators during regularly scheduled sessions. We present descriptive statistics and within-subjects analyses of variance. Results: We collected a total of 104 workplace and 36 simulation assessments. Interobserver reliability of simulation assessments was high (ICC = 0.863). We observed no correlation between mean EPA scores assigned in the workplace and simulation environments (Spearman's rho = −0.092, p = 0.813). Scores in both environments improved significantly over time (F(1,8) = 18.79, p < 0.001, ηp² = 0.70), from 2.9 (SD = 1.2) in months 1-4 to 3.5 (0.2) in months 9-12 (p = 0.002). Workplace scores (3.4 (0.1)) were consistently higher than simulation scores (2.9 (0.2)) (F(1,8) = 7.16, p = 0.028, ηp² = 0.47). Conclusion: We observed no correlation between EPA F1 ratings of resuscitation performance in the workplace and simulation environments. Further studies should seek to clarify this relationship to inform our ongoing use of simulation to assess clinical competence.
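A minimal sketch of the kind of comparison reported above: aggregate each resident's EPA scores per environment and correlate the means with Spearman's rho. The table layout and column names are assumptions, not the study's actual pipeline.

```python
import pandas as pd
from scipy.stats import spearmanr

def workplace_vs_simulation_rho(df):
    """Correlate each resident's mean EPA score in the workplace with
    their mean score in simulation. Assumes columns 'resident',
    'environment' ('workplace' or 'simulation'), and 'epa_score'."""
    means = df.pivot_table(index="resident", columns="environment",
                           values="epa_score", aggfunc="mean").dropna()
    rho, p = spearmanr(means["workplace"], means["simulation"])
    return rho, p
```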
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners at different levels and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018, we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With an ideal assessment tool, most of the score variability would be explained by variability in learners' performances; in reality, however, much of the observed variability is explained by other factors. The purpose of this study was to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs versus the O-EDShOT. Methods: This was a prospective pre-/post-implementation study, including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability is accounted for by the various factors in this study (learner, rater, form, PGY level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases respectively. Our G study revealed that 21% of total score variance was explained by a combination of post-graduate year (PGY) level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase. An average of 51 vs 27 forms per learner is required to achieve a reliability of 0.80 in the pre- and post-implementation phases respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners' performances with the O-EDShOT compared to norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of a learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs and could be adopted by other emergency medicine training programs.
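The "forms needed for a reliability of 0.80" figures follow from a decision-study projection. The sketch below shows that logic for a simplified one-facet design; the study's actual G study involves more facets (rater, form, PGY level), so the inputs here are illustrative only.

```python
import math

def forms_needed(var_learner, var_error, target_g=0.80):
    """Decision-study projection: with
        G = var_learner / (var_learner + var_error / n),
    the target is reached at
        n = (var_error / var_learner) * target_g / (1 - target_g)."""
    n = (var_error / var_learner) * target_g / (1.0 - target_g)
    return math.ceil(n)

# Made-up variance components (not the study's estimates): if learner
# variance is 10% of the total and the rest behaves as error, 36 forms
# are needed to reach G = 0.80.
print(forms_needed(var_learner=0.10, var_error=0.90))  # -> 36
```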
Introduction: Competency-based medical education (CBME) has triggered widespread utilization of workplace-based assessment (WBA) tools in postgraduate training programs. These WBAs predominantly use rating scales with entrustment anchors, such as the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE). However, little is known about the factors that influence a supervising physician's decision to assign a particular rating on scales using entrustment anchors. This study aimed to identify the factors that influence supervisors' ratings of trainees using WBA tools with entrustment anchors at the time of assessment, and to explore the experiences with and challenges of using entrustment anchors in the emergency department (ED). Methods: A convenience sample of full-time emergency medicine (EM) faculty was recruited from two sites within a single academic Canadian EM hospital system. Fifty semi-structured interviews were conducted with EM physicians within two hours of their completing a WBA for an EM trainee. Interviews were audio-recorded, transcribed verbatim, and independently analyzed by two members of the research team. Themes were stratified by trainee level, rating, and task. Results: Interviews involved 73% (27/37) of all EM staff and captured assessments completed on 83% (37/50) of EM trainees. The mean WBA rating in the studied sample was 4.34 ± 0.77 (range 2 to 5), similar to the mean rating of all WBAs completed during the study period. Overall, six major factors were identified that influenced staff WBA ratings: the amount of guidance required, perceived competence through discussion and questioning, trainee experience, clinical context, past experience working with the trainee, and perceived confidence. The majority of staff denied struggling to assign ratings. However, when they did struggle, it involved the interpretation of WBA anchors and their application to the clinical context in the ED. Conclusion: Clinical supervisors appear to take several factors into account when deciding which rating to assign a trainee on a WBA that uses entrustment anchors, and not all of these factors are specific to the clinical encounter being assessed. These results further our understanding of the use of entrustment anchors within the ED and may facilitate faculty development regarding WBA completion as CBME moves forward.
Two common approaches to identifying subgroups of patients with bipolar disorder are clustering methodology (mixture analysis) based on the age of onset, and birth cohort analysis. This study investigates whether a birth cohort effect influences the results of clustering on age of onset, using a large, international database.
Methods:
The database includes 4037 patients with a diagnosis of bipolar I disorder, previously collected at 36 collection sites in 23 countries. Generalized estimating equations (GEE) were used to adjust the data for country median age, and in some models, birth cohort. Model-based clustering (mixture analysis) was then performed on the age of onset data using the residuals. Clinical variables in subgroups were compared.
Results:
There was a strong birth cohort effect. Without adjusting for birth cohort, three subgroups were found by clustering. After adjusting for birth cohort, or when considering only those born after 1959, two subgroups were found. With either two or three subgroups, the youngest subgroup was more likely to have a family history of mood disorders and a first episode with depressed polarity. However, without adjusting for birth cohort (three subgroups), the middle and oldest subgroups could not be distinguished by family history or polarity of the first episode.
Conclusion:
These results, using international data, confirm prior findings from single-country data: there are subgroups of bipolar I disorder based on the age of onset, and there is a birth cohort effect. Including the birth cohort adjustment altered the number and characteristics of the subgroups detected when clustering by age of onset. Further investigation is needed to determine whether combining both approaches will identify subgroups that are more useful for research.
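As a sketch of the model-based clustering step, the snippet below fits Gaussian mixtures to hypothetical age-of-onset values and selects the number of subgroups by BIC; the published analysis clustered GEE-adjusted residuals rather than raw ages, and the data here are simulated.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated age-of-onset values for illustration; the published
# analysis clustered residuals from a GEE adjustment for country
# median age (and, in some models, birth cohort), not raw ages.
rng = np.random.default_rng(0)
ages = np.concatenate([rng.normal(17, 3, 300),
                       rng.normal(28, 6, 200)]).reshape(-1, 1)

# Fit 1-4 component Gaussian mixtures and select by BIC, the usual
# criterion in model-based clustering.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(ages)
          for k in range(1, 5)}
best_k = min(models, key=lambda k: models[k].bic(ages))
labels = models[best_k].predict(ages)
print(f"Best number of subgroups by BIC: {best_k}")
```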
Congenital renal and urinary tract anomalies are common, accounting for up to 21% of all congenital abnormalities [1]. The reported incidence is approximately 1:250–1:1000 pregnancies [2], and the routine use of prenatal ultrasonography allows relatively early detection, particularly of the obstructive uropathies, which account for the majority. According to the latest UK renal registry report in 2015, ‘obstructive uropathy’ was the second leading cause (19%) of chronic renal failure in children under 16 years of age, after renal dysplasia ± reflux [3]. Obstructions may occur within the upper or lower urinary tract, and their prognosis varies significantly, with obstructions at the level of the bladder neck being associated with the majority of neonatal mortality and renal failure. In untreated cases, perinatal mortality is high (up to 45%, often because of associated severe oligohydramnios and pulmonary hypoplasia) [4], and 30% of survivors suffer from end-stage renal failure (ESRF) requiring dialysis and renal transplantation before the age of 5 [5]. The overall chance of survival in childhood is lowest if renal support therapy or transplantation is commenced before 2 years of age compared with starting at 12–16 years of age (hazard ratio [HR] 4.1, 95% confidence interval [CI] 1.7–9.9, P = 0.002) [3]. Therefore, in utero intervention, by the insertion of a vesicoamniotic shunt or by therapeutic fetal cystoscopy and valvular ablation, has been attempted to attenuate the in utero progression of these pathologies (and their consequences) and to alter the natural history of congenital bladder neck obstruction in childhood. In this chapter, we discuss the etiology, pathophysiology, prenatal presentation, and diagnosis of congenital bladder neck obstruction, along with suggested algorithms for screening and prenatal prognostic evaluation in selecting candidates for in utero therapy.