Persistent discrimination and identity threats contribute to adverse health outcomes in minoritized groups, mediated by both structural racism and physiological stress responses.
Objective:
This study aims to evaluate the feasibility of recruiting African American volunteers for a pilot study of race-based stress, the acceptability of a mindfulness intervention designed to reduce racism-induced stress, and preliminary associations between race-based stress and clinical, psychosocial, and biological measures.
Methods:
A convenience sample of African Americans aged 18–50 from New York City’s Tri-state area underwent assessments for racial discrimination using the Everyday Discrimination Scale (EDS) and the Race-Based Traumatic Stress Symptom Scale. Mental health was evaluated using validated clinical scales measuring depression, anxiety, stress, resilience, mindfulness, sleep, interpersonal connection, and coping. Biomarkers were assessed through clinical laboratory tests, allostatic load assessment, and blood gene expression analysis.
Results:
Twenty participants (12 females, 8 males) completed assessments after consent. Elevated EDS scores were associated with adverse lipid profiles, including higher cholesterol/high-density lipoprotein (HDL) ratios and lower HDL levels, as well as elevated inflammatory markers (NF-κB activity) and reduced antiviral response (interferon response factor). Those with high EDS reported poorer sleep, increased substance use, and lower resilience. Mindfulness was positively associated with coping and resilience and inversely associated with sleep disturbance. Ninety percent of participants expressed interest in a mindfulness intervention targeting racism-induced stress.
Conclusions:
This study demonstrated an association between discrimination and adverse health effects among African Americans. These findings lay the groundwork for further research exploring the efficacy of mindfulness and other interventions in populations experiencing discrimination.
Globally, there is seasonal variation in tuberculosis (TB) incidence, yet the biological, behavioural, and social factors driving TB seasonality differ across countries. Understanding the season-specific risk factors particular to the UK could help shape future decision-making for TB control. We conducted a time-series analysis using data from 152,424 UK TB notifications between 2000 and 2018. Notifications were aggregated by year, month, and socio-demographic covariates, and negative binomial regression models were fitted to the aggregate data. For each covariate, we calculated the size of the seasonal effect as the incidence risk ratio (IRR) for the peak versus the trough months within the year and the timing of the peak, whilst accounting for the overall trend. There was strong evidence for seasonality (p < 0.0001), with an IRR of 1.27 (95% CI 1.23–1.30). The peak was estimated to occur at the beginning of May. Significant differences in seasonal amplitude were identified across age groups, ethnicity, site of disease, latitude and, for those born abroad, time since entry to the UK. The smaller amplitude in older adults, and the greater amplitude among South Asians and people who recently entered the UK, may indicate the role of latent TB reactivation and vitamin D deficiency in driving seasonality.
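As an illustration of this type of analysis, the sketch below (not the authors' code; the synthetic data, column names, and model settings are assumptions) fits a negative binomial GLM with annual harmonic terms to monthly counts and derives the peak-versus-trough IRR and peak timing from the harmonic coefficients.

```python
# Minimal sketch: seasonality of monthly counts via a negative binomial GLM
# with one annual harmonic, as in time-series analyses of notification data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(228)  # 2000-2018, monthly
t = months / 12.0
# Synthetic counts with a slight downward trend and a seasonal peak near May.
true_rate = np.exp(6.0 + 0.12 * np.cos(2 * np.pi * (months % 12) / 12 - 2.2)
                   - 0.01 * t)
counts = rng.negative_binomial(n=50, p=50 / (50 + true_rate))

X = pd.DataFrame({
    "trend": t,                                      # long-term trend
    "cos": np.cos(2 * np.pi * (months % 12) / 12),   # annual harmonic
    "sin": np.sin(2 * np.pi * (months % 12) / 12),
})
X = sm.add_constant(X)
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()

b_cos, b_sin = fit.params["cos"], fit.params["sin"]
amplitude = np.hypot(b_cos, b_sin)
irr_peak_vs_trough = np.exp(2 * amplitude)           # peak month vs trough month
peak_month = (np.arctan2(b_sin, b_cos) % (2 * np.pi)) / (2 * np.pi) * 12
print(f"IRR peak vs trough: {irr_peak_vs_trough:.2f}, "
      f"peak at month index {peak_month:.1f} (0 = January)")
```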
The UK Diabetes Remission Clinical Trial (DiRECT) demonstrated that a weight loss strategy consisting of: (1) 12 weeks of total diet replacement; (2) 4 to 6 weeks of food reintroduction; and (3) a longer period of weight loss maintenance, is effective in reducing body weight, improving glycaemic control, and facilitating type 2 diabetes remission(1). The DiRECT protocol is now funded for type 2 diabetes management in the UK(2). Type 2 diabetes is a growing problem in Aotearoa New Zealand(3), but the acceptability and feasibility of the DiRECT intervention in our diverse sociocultural context remain unclear. We conducted a randomised controlled trial of DiRECT within a Māori primary healthcare provider in Ōtepoti Dunedin. Forty participants with diabetes and obesity who wanted to lose weight were randomised to receive the DiRECT intervention or usual care. Both groups received the same level of individualised support from an in-house dietitian. We conducted individual, semi-structured interviews with 26 participants after 3 months. Questions explored perspectives and experiences, barriers and facilitators, and future expectations regarding dietary habits and weight loss. Interview transcripts were analysed using inductive thematic analysis(4). Participants had struggled with weight management prior to the study. Advice from doctors, friends and whānau, and the internet was abundant, yet often impractical or unclear. The DiRECT intervention was mentally and physically challenging, but rapid weight loss and an improved sense of health and wellbeing enhanced motivation. Participants identified strategies which supported adaptation and adherence. Food reintroduction beyond 3 months was an exciting milestone, but the risk of reverting to previous habits was daunting. Participants feared weight regain and felt ongoing guidance was required for a successful transition to a real-food diet. Conversely, usual care participants described a gradual and ongoing process of health-focused dietary modification. While this approach did support behaviour change, a perceived slow rate of weight loss was often frustrating. Across both interventions, self-motivation and whānau support contributed to perceived success, whereas busy lifestyles, social and cultural norms, and financial concerns presented additional challenges. The role of individualised and non-judgemental dietetic support was a central theme across both groups. In addition to nutrition education and practical guidance, the in-house dietitian offered encouragement and promoted self-acceptance among participants. At 3 months, positive shifts in perspectives surrounding food, health, and sense of self were identified, which participants largely attributed to the level of nutrition support received: a new experience for many. The DiRECT protocol appears to be an acceptable weight loss approach among New Zealanders with diabetes and obesity, but tailored dietetic and behavioural support must be prioritised in its implementation. Future research should examine the broader health benefits associated with providing greater dietetic support and the cost-effectiveness of employing nutrition-trained health professionals within the primary care workforce.
Methods to reduce obesity and type 2 diabetes in Aotearoa New Zealand are desperately needed, with obesity one of the greatest predisposing factors for type 2 diabetes, as well as heart disease and certain cancers.1 A recent New Zealand report2 identified several interventions that might benefit people with established diabetes, the most promising being a period of rapid weight loss followed by supported weight-loss maintenance. Such weight loss has been shown to achieve what was previously thought impossible, diabetes remission,3 as well as appreciably reduce the risk of cardiovascular disease and prevent diabetes-related chronic kidney disease, retinopathy, nephropathy, and lower limb amputation.2 While the findings from studies of low energy total meal replacement diets have stimulated great interest, their use in Aotearoa New Zealand has not been considered. The purpose of this primary-care-led intervention was therefore to assess the acceptability and efficacy of such a weight loss programme, DiRECT, in Aotearoa New Zealand. Te Kāika DiRECT is a 12-month study conducted within a Māori primary healthcare provider in Ōtepoti Dunedin. The DiRECT protocol is three months of total meal replacement for rapid weight loss, followed by food reintroduction and a longer period of supported weight loss maintenance. Participants were adults with prediabetes or type 2 diabetes and obesity wanting to lose weight. Twenty participants (70% female; age 46 (SD 10); BMI 41 (9); HbA1c 51 (11)) were randomised to receive the DiRECT protocol, and twenty more (70% female; age 50 (SD 8); BMI 40 (7); HbA1c 54 (14)) were randomised to receive best-practice weight loss support (usual care). All participants had the same number of visits with the in-house dietitian and free access to the onsite gym. Participants in the control group also received regular grocery vouchers to purchase the foods encouraged by healthy eating guidelines. Recruitment began in February 2022. After the initial three-month study period, DiRECT participants reported consuming 3.0 MJ (95% CI 1.2 to 4.8 MJ) less energy per day than those in usual care. Mean weight loss was 6 kg (2.3–9.6 kg) greater for DiRECT participants than usual care participants, while medication use was lower and systolic blood pressure was reduced by 12 mmHg (0–24 mmHg). Continuous glucose monitoring identified that at baseline, participants on average spent only 10% of the day with a blood glucose reading under 8 mmol/L (normoglycaemia). After three months, the usual care group spent on average 48% of the day within the normoglycaemic range, while DiRECT participants spent 78% of the day within that range. Results at 12 months will enable comment on longer term markers of blood glucose control (HbA1c) and diabetes remission rates, as well as indicate whether the body weight, medication, and blood pressure improvements observed at three months are sustained.
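For readers unfamiliar with the time-in-range metric quoted above, here is a minimal sketch of how the percentage of the day spent under 8 mmol/L might be computed from continuous glucose monitoring data. The 5-minute sampling interval, variable names, and toy data are assumptions for illustration, not the study's pipeline.

```python
# Minimal sketch: percentage of the day in the normoglycaemic range
# (glucose < 8 mmol/L) from equally spaced CGM readings.
import numpy as np

def percent_time_normoglycaemic(glucose_mmol_per_l, threshold=8.0):
    """Percentage of readings below the normoglycaemia threshold.

    With equally spaced CGM samples, the fraction of readings below the
    threshold equals the fraction of the day spent in range.
    """
    readings = np.asarray(glucose_mmol_per_l, dtype=float)
    return 100.0 * np.mean(readings < threshold)

# Example: one day of 5-minute readings (288 samples) hovering around 8.5 mmol/L.
rng = np.random.default_rng(1)
day = 8.5 + rng.normal(0, 1.2, size=288)
print(f"{percent_time_normoglycaemic(day):.0f}% of the day in range")
```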
The global trend of urbanization, coupled with increasing awareness of the importance of food systems resilience, has led to growing interest in urban agriculture as a way to sustainably feed the rapidly growing urban population and to mitigate food supply chain disruptions. While home and community gardens have long been studied, there has been relatively little empirical research focused specifically on commercial urban agriculture (CUA) operations. The purpose of this study was to characterize commercial urban farms and to identify their primary barriers to business development and expansion, their perceptions of future opportunities, and their specific informational needs. Because CUA has received relatively little attention in previous empirical research, a qualitative approach was used for this needs assessment to collect rich, contextualized information to help differentiate the specific barriers, opportunities and needs of CUA operations from those of their rural counterparts. We conducted semi-structured interviews (n = 29) of CUA producers in Florida. These interviews revealed that CUA operations face many of the same barriers that are common to establishing and growing small farms, with additional barriers due to local government regulations and tensions associated with farming on land that is not traditionally used for agriculture. Despite these difficulties, CUA operators believe their urban location is a key benefit to their operation, and they see a variety of opportunities for future business and market expansion.
Pre-eclampsia is a serious complication of pregnancy, and maternal nutritional factors may play protective roles or exacerbate risk. The tendency to focus on single nutrients as risk factors obscures the complexity of possible interactions, which may be important given the complex nature of pre-eclampsia. An evidence review was conducted to compile definite, probable, possible and indirect nutritional determinants of pre-eclampsia and to map a nutritional conceptual framework for pre-eclampsia prevention. Determinants of pre-eclampsia were first compiled through an initial consultation with experts. Second, an expanded literature review was conducted to confirm associations, elicit additional indicators and evaluate evidence. The strength of association was evaluated as definite (relative risk (RR) < 0·40 or ≥ 3·00), probable (RR 0·40–0·69 or 1·50–2·99), possible (RR 0·70–0·89 or 1·10–1·49) or not discernible (RR 0·90–1·09). The quality of evidence was evaluated using Grading of Recommendations, Assessment, Development and Evaluation. Twenty-five nutritional factors were reported in two umbrella reviews and twenty-two meta-analyses. Of these, fourteen were significantly associated with pre-eclampsia incidence. Higher serum Fe emerged as a definite nutritional risk factor for pre-eclampsia across populations, while low serum Zn was a risk factor in Asia and Africa. Maternal vitamin D deficiency was a probable risk factor, and Ca and/or vitamin D supplementation were probable protective nutritional factors. Healthy maternal dietary patterns were possibly associated with lower risk of developing pre-eclampsia. Potential indirect pathways between maternal nutritional factors and pre-eclampsia may exist through obesity, maternal anaemia and gestational diabetes mellitus. Research gaps remain on the influence of household capacities and socio-cultural, economic and political contexts, as well as interactions with medical conditions.
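The strength-of-association rule above is mechanical enough to express directly. The sketch below (function and label names are illustrative, not from the review) maps an RR to the four categories, with a fall-through for values in the narrow gaps between the published bands.

```python
# Minimal sketch of the review's strength-of-association bands for a
# relative risk (RR): definite / probable / possible / not discernible.
def classify_rr(rr: float) -> str:
    if rr < 0.40 or rr >= 3.00:
        return "definite"
    if 0.40 <= rr <= 0.69 or 1.50 <= rr <= 2.99:
        return "probable"
    if 0.70 <= rr <= 0.89 or 1.10 <= rr <= 1.49:
        return "possible"
    if 0.90 <= rr <= 1.09:
        return "not discernible"
    # Values in the gaps between published bands (e.g. RR = 1.095) are
    # not covered by the stated thresholds.
    return "unclassified"

for rr in (0.35, 0.55, 1.2, 1.0, 2.0, 3.5):
    print(rr, "->", classify_rr(rr))
```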
The purpose of this scoping review is two-fold: to assess the literature that quantitatively measures outcomes of mentorship programs designed to support research-focused junior faculty and to identify mentoring strategies that promote diversity within academic medicine mentoring programs.
Methods:
Studies were identified by searching Medline using MeSH terms for mentoring and academic medicine. Eligibility criteria included studies focused on junior faculty in research-focused positions, receiving mentorship, in an academic medical center in the USA, with outcomes collected to measure career success (career trajectory, career satisfaction, quality of life, research productivity, leadership positions). Data were abstracted using a standardized data collection form, and best practices were summarized.
Results:
Search terms resulted in 1,842 articles for title and abstract review, with 27 manuscripts meeting inclusion criteria. Two studies focused specifically on women, and four studies focused on junior faculty from racial/ethnic backgrounds underrepresented in medicine. From the initial search, few studies were designed specifically to increase diversity or to capture outcomes relevant to promotion within academic medicine. Of those that did, most captured the impact on research productivity and career satisfaction. Traditional one-on-one mentorship, structured peer mentorship facilitated by a senior mentor, and peer mentorship in combination with one-on-one mentorship were found to be effective strategies for facilitating research productivity.
Conclusion:
Efforts are needed at the mentee, mentor, and institutional level to provide mentorship to diverse junior faculty on research competencies and career trajectory, create a sense of belonging, and connect junior faculty with institutional resources to support career success.
We describe severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) IgG seroprevalence and antigenemia among patients at a medical center in January–March 2021 using residual clinical blood samples. The overall seroprevalences were 17% by infection and 16% by vaccination. Spent or residual samples are a feasible alternative for rapidly estimating seroprevalence or monitoring trends in infection and vaccination.
A crucial reckoning was initiated when the COVID-19 pandemic began to expose and intensify long-standing racial/ethnic health inequities, all while various sectors of society pursued racial justice reform. As a result, there has been a contextual shift towards broader recognition of systemic racism, and not race, as the shared foundational driver of both societal maladies. This confluence of issues is of particular relevance to Black populations disproportionately affected by the pandemic and racial injustice. In response, institutions have initiated diversity, equity, and inclusion (DEI) efforts as a way forward. This article considers how the dual pandemic climate of COVID-19-related health inequities and the racial justice movement could exacerbate the “time and effort tax” on Black faculty to engage in DEI efforts in academia and biomedicine. We discuss the impact of this “tax” on career advancement and well-being, and introduce an operational framework for considering the interconnected influence of systemic racism, the dual pandemics, and DEI work on the experience of Black faculty. If not meaningfully addressed, the “time and effort tax” could contribute to Black and other underrepresented minority faculty leaving academia and biomedicine – consequently, the very diversity, equity, and inclusion work meant to increase representation could decrease it.
Enhancing diversity in the scientific workforce is a long-standing issue. This study uses mixed methods to understand the feasibility, impact, and priority of six key strategies for promoting diverse and inclusive training, and to contextualize those strategies across Clinical and Translational Science Award (CTSA) Program institutions.
Methods:
Four breakout sessions were held at the NCATS 2020 CTSA Program annual meeting focused on diversity, equity, and inclusion (DEI) efforts. This paper focuses on the breakout session on Enhancing DEI in Translational Science Training Programs. Data were analyzed using a mixed methods convergent approach. The quantitative strand comprises the online polling results; the qualitative strand comprises the breakout session discussion and chat-box responses to the training presentation.
Results:
Across feasibility, impact, and priority questions, prioritizing representation ranked number 1. Building partnerships ranked number 2 in feasibility and priority, while making it personal ranked number 2 for impact. For each strategy, the quantitative rankings aligned with the qualitative findings: feasibility was supported through shared experiences, impact through the perceived ability to increase DEI, and priority through comparison with the other strategies. No divergence was found between the quantitative and qualitative findings.
Conclusion:
Findings provide robust support for prioritizing representation as the number one strategy for training programs to focus on. Specifically, this strategy can be operationalized by integrating community representation, engaging diversity advocates, and adopting a holistic approach to recruiting a diverse cadre of scholars into translational science training programs at the national level across CTSAs.
Diversity, equity, and inclusion (DEI) in clinical and translational science (CTS) are paramount to driving innovation and increasing health equity. One important area for improving diversity is among trainees in CTS programs. This paper reports on findings from a special session at the November 2020 Clinical and Translational Science Award (CTSA) national program meeting that focused on advancing diversity and inclusion within CTS training programs.
Methods:
Using qualitative content analysis, we identified approaches for increasing DEI in KL2 career development and other training programs aimed at early-stage CTS investigators, beyond the six strategies proposed to guide the breakout session (prioritizing representation, building partnerships, making it personal, designing program structure, improving through feedback, and winning endorsement). We used an inductive qualitative content analysis approach to identify themes from a transcript of the panel of KL2 program leaders centered on DEI in training programs.
Results:
We identified four themes for advancing DEI within CTS training programs: 1) institutional buy-in; 2) proactive recruitment efforts; 3) an equitable application process; and 4) high-quality, diverse mentorship.
Conclusion:
Implementing these strategies in CTS and other training programs will be an important step for advancing DEI. However, processes need to be established to evaluate the implementation and effectiveness of these strategies through continuous quality improvement, a key component of the CTSA program. Training programs within the CTSA are well-positioned to be leaders in this critical effort to increase the diversity of the scientific workforce.
To prioritise and refine a set of evidence-informed statements into advice messages to promote vegetable liking in early childhood, and to determine applicability for dissemination of advice to relevant audiences.
Design:
A nominal group technique (NGT) workshop and a Delphi survey were conducted to prioritise and achieve consensus (≥70 % agreement) on thirty evidence-informed maternal (perinatal and lactation stage), infant (complementary feeding stage) and early years (family diet stage) vegetable-related advice messages. Messages were validated via triangulation analysis against the strength of evidence from an Umbrella review of strategies to increase children’s vegetable liking, and gaps in advice from a Desktop review of vegetable feeding advice.
Setting:
Australia.
Participants:
A purposeful sample of key stakeholders (NGT workshop, n 8 experts; Delphi survey, n 23 end users).
Results:
Participant consensus identified the most highly ranked priority messages associated with the strategies of: ‘in-utero exposure’ (perinatal and lactation, n 56 points) and ‘vegetable variety’ (complementary feeding, n 97 points; family diet, n 139 points). Triangulation revealed two strategies (‘repeated exposure’ and ‘variety’) and their associated advice messages suitable for policy and practice, twelve for research and four for food industry.
Conclusions:
Supported by national and state feeding guideline documents and resources, the advice messages relating to ‘repeated exposure’ and ‘variety’ to increase vegetable liking can be communicated to families and caregivers by healthcare practitioners. The food industry provides a vehicle for advice promotion and product development. Additional research, where stronger evidence is needed, could further inform strategies for policy and practice, and food industry application.
Ecosystem modeling, a pillar of the systems ecology paradigm (SEP), addresses questions such as: how much carbon and nitrogen are cycled within ecological sites, landscapes, or indeed the earth system? How are human activities modifying these flows? Modeling, when coupled with field and laboratory studies, represents the essence of the SEP in that models embody accumulated knowledge and generate hypotheses to test understanding of ecosystem processes and behavior. Initially, ecosystem models were primarily used to improve our understanding of how the biophysical aspects of ecosystems operate. However, current ecosystem models are widely used to make accurate predictions about how large-scale phenomena such as climate change and management practices impact ecosystem dynamics, and to assess potential effects of these changes on economic activity and policy making. In sum, ecosystem models embedded in the SEP remain our best mechanism for integrating diverse types of knowledge regarding how the earth system functions and for making quantitative predictions that can be confronted with observations of reality. Modeling efforts discussed are the Century ecosystem model, the DayCent ecosystem model, the Grassland Ecosystem Model ELM, food web models, the Savanna model, agent-based and coupled systems modeling, and Bayesian modeling.
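As a concrete, if drastically simplified, example of the pool-and-flux structure such models share, the sketch below integrates a single soil carbon pool with constant litter input and first-order decomposition. It is a generic illustration of the modeling idea, not Century or DayCent themselves, and all parameter values are assumptions.

```python
# Minimal sketch: one-pool carbon balance, dC/dt = inputs - k * C,
# integrated by Euler stepping. Multi-pool models such as Century couple
# several pools of this form with climate- and texture-dependent rates.
import numpy as np

def soil_carbon(years, inputs=2.0, k=0.05, c0=10.0, dt=0.1):
    """Trajectory of soil carbon (kg C/m^2) under constant litter input
    `inputs` (kg C/m^2/yr) and decomposition rate `k` (1/yr)."""
    steps = int(years / dt)
    c = np.empty(steps + 1)
    c[0] = c0
    for i in range(steps):
        c[i + 1] = c[i] + dt * (inputs - k * c[i])
    return c

traj = soil_carbon(years=100)
# The pool approaches the analytic equilibrium inputs / k = 40 kg C/m^2.
print(f"After 100 yr: {traj[-1]:.1f} kg C/m^2 (equilibrium {2.0 / 0.05:.1f})")
```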
The legalization of hemp in the United States (U.S.) has created increased interest from agricultural and non-agricultural entities seeking to establish or expand hemp production and processing. As these entities begin to locate their production and processing operations, nearby residents may have concerns about these efforts. Using an online survey of residents from the southeastern U.S., concern levels and potential externalities associated with hemp production and processing were evaluated. Results show a majority of residents are concerned about hemp production and processing locating nearby, with perceived externalities ranging from the potential for illegal activity to environmental concerns.
Cognitive behavior therapy (CBT) is effective for most patients with social anxiety disorder (SAD), but a substantial proportion fails to remit. Experimental and clinical research suggests that enhancing CBT with imagery-based techniques could improve outcomes. It was hypothesized that imagery-enhanced CBT (IE-CBT) would be superior to verbally-based CBT (VB-CBT) on pre-registered outcomes.
Methods
A randomized controlled trial of IE-CBT v. VB-CBT for social anxiety was completed in a community mental health clinic setting. Participants were randomized to IE-CBT (n = 53) or VB-CBT (n = 54), with 1-month (primary end point) and 6-month follow-up assessments. Participants completed 12 weekly, 2-hour sessions of IE-CBT or VB-CBT, plus a 1-month follow-up session.
Results
Intention-to-treat analyses showed very large within-treatment effect sizes on social interaction anxiety at all time points (ds = 2.09–2.62), with no between-treatment differences on this outcome or on clinician-rated severity [1-month OR = 1.45 (0.45, 4.62), p = 0.53; 6-month OR = 1.31 (0.42, 4.08), p = 0.65], SAD remission (1-month: IE = 61.04%, VB = 55.09%, p = 0.59; 6-month: IE = 58.73%, VB = 61.89%, p = 0.77), or secondary outcomes. Three adverse events were noted (substance abuse, n = 1 in IE-CBT; temporary increase in suicide risk, n = 1 in each condition, with one participant withdrawn at 1-month follow-up).
Conclusions
Group IE-CBT and VB-CBT were safe and there were no significant differences in outcomes. Both treatments were associated with very large within-group effect sizes and the majority of patients remitted following treatment.
In response to advancing clinical practice guidelines regarding concussion management, service members, like athletes, complete a baseline assessment prior to participating in high-risk activities. While several studies have established test stability in athletes, no investigation to date has examined the stability of baseline assessment scores in military cadets. The objective of this study was to assess the test–retest reliability of a baseline concussion test battery in cadets at U.S. Service Academies.
Methods:
All cadets participating in the Concussion Assessment, Research, and Education (CARE) Consortium investigation completed a standard baseline battery that included memory, balance, symptom, and neurocognitive assessments. Annual baseline testing was completed during the first 3 years of the study. A two-way mixed-model analysis of variance (intraclass correlation coefficient, ICC(3,1)) and Kappa statistics were used to assess the stability of the metrics at 1-year and 2-year time intervals.
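For reference, ICC(3,1) can be computed directly from the two-way ANOVA mean squares. The sketch below (not the Consortium's analysis code; the toy scores are fabricated) follows the standard Shrout and Fleiss formulation for an n-subjects by k-sessions matrix.

```python
# Minimal sketch: ICC(3,1) from two-way ANOVA mean squares
# (Shrout & Fleiss), for n subjects tested across k sessions.
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1) for an n_subjects x k_sessions score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    ss_subjects = k * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_sessions = n * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ms_subjects = ss_subjects / (n - 1)
    ms_error = (ss_total - ss_subjects - ss_sessions) / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

# Toy example: 6 cadets, baseline vs 1-year retest on one neurocognitive score.
scores = np.array([[95, 92], [88, 90], [75, 80],
                   [99, 94], [83, 85], [70, 78]], dtype=float)
print(f"ICC(3,1) = {icc_3_1(scores):.2f}")
```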
Results:
ICC values for the 1-year test interval ranged from 0.28 to 0.67 and from 0.15 to 0.57 for the 2-year interval. Kappa values ranged from 0.16 to 0.21 for the 1-year interval and from 0.29 to 0.31 for the 2-year test interval. Across all measures, the observed effects were small, ranging from 0.01 to 0.44.
Conclusions:
This investigation noted less than optimal reliability for the most common concussion baseline assessments. While none of the assessments met or exceeded the accepted clinical threshold, the effect sizes were relatively small suggesting an overlap in performance from year-to-year. As such, baseline assessments beyond the initial evaluation in cadets are not essential but could aid concussion diagnosis.
Introduction: This was a prospective observational study involving a convenience sample of low-risk trauma patients presenting to a Level 1 Trauma Centre under spinal motion restriction (SMR). To our knowledge, no prior studies have objectively measured head-neck (H-N) motion in trauma patients with suspected spine injuries during emergency department (ED) care. The goal was to establish the feasibility of deploying non-invasive motion sensors on trauma patients in the ED and to provide initial estimates of H-N kinematics under SMR during different phases of treatment. Methods: Low-risk adult patients treated by the Winnipeg Fire Paramedic Service who sustained non-life-threatening trauma with the potential for spine injury were eligible for inclusion. Participants received usual pre-hospital care: application of a spine board and/or cervical collar, as determined by local practice protocol. Inertial measurement units (IMUs) were placed on the participant's forehead and sternum and on the stretcher upon arrival to the ED. Data were collected during three phases of care: patient handling (log rolls, transfers, clothing removal); stretcher movement (to imaging, etc.); and stretcher stationary. IMUs were removed upon disposition decision by the attending physician. IMUs yielded data on H-N motion in terms of linear acceleration (resultant) and angular displacement (rotation + flexion-extension + side-flexion = total). Peak (M ± SE) displacements and accelerations are reported, with comparisons across treatment phases using repeated-measures ANOVA. Results: Eleven patients were enrolled in the study (age: 49 ± 16 years; Injury Severity Score 13.4 ± 9.9; female = 2). Substantial H-N motion was observed during ED care. Total H-N displacement (28.6 ± 3.6 deg) and acceleration (7.8 ± 1.0 m/s²) were higher during patient handling compared to stretcher movement (13.0 ± 2.5 deg; 4.6 ± 0.9 m/s²; p < .05) but not while the stretcher was stationary (18.9 ± 3.4 deg; 5.4 ± 1.2 m/s²; p > .06). Similar differences were detected for side-flexion and flexion-extension (p < .05), with peak displacements of 11.4 ± 1.5 deg and 14.6 ± 2.2 deg during patient handling, respectively. Conclusion: IMU use on trauma patients was safe and captured H-N kinematics in a small sample of patients across a spectrum of illness during their ED care. Future studies utilizing IMUs could inform ED spine motion restriction protocols and compare movement of patients in specific subsets (e.g., intoxication, spinal tenderness, injury severity).
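To make the kinematic summaries concrete, the sketch below shows one plausible way to compute peak resultant acceleration and total angular displacement from IMU traces. The variable names, sampling rate, and use of peak-to-peak range as the displacement measure are illustrative assumptions, not the study's processing pipeline (which would also handle gravity removal and sensor fusion).

```python
# Minimal sketch: peak resultant linear acceleration and total angular
# displacement (rotation + flexion-extension + side-flexion) from IMU traces.
import numpy as np

def peak_resultant_acceleration(ax, ay, az):
    """Peak of sqrt(ax^2 + ay^2 + az^2) over a treatment phase (m/s^2).
    Gravity compensation is omitted here for brevity."""
    return float(np.max(np.sqrt(np.square(ax) + np.square(ay) + np.square(az))))

def total_angular_displacement(rotation, flex_ext, side_flex):
    """Sum of peak-to-peak displacements about the three anatomical
    axes (degrees), mirroring the 'total' definition above."""
    return float(np.ptp(rotation) + np.ptp(flex_ext) + np.ptp(side_flex))

# Toy 2-second phase sampled at 100 Hz.
t = np.linspace(0, 2, 200)
ax, ay, az = 2 * np.sin(3 * t), 1.5 * np.cos(2 * t), 0.5 * np.sin(t)
print(f"Peak acceleration: {peak_resultant_acceleration(ax, ay, az):.1f} m/s^2")
print(f"Total displacement: "
      f"{total_angular_displacement(10 * np.sin(t), 8 * np.sin(2 * t), 5 * np.cos(t)):.1f} deg")
```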
Introduction: As part of our audit and feedback process, Emergency Physicians (EPs) are provided feedback on flow metrics and resource utilization. We analysed the relationship between two specific metrics (adjusted workload measurement (AWM), i.e. the number of patients seen per hour adjusted according to CTAS, and the percentage of revisits within 72 hours) and diagnostic imaging (DI) use. We were unable to evaluate quality of care or the appropriateness of DI indications at this stage. Methods: We used data from 86 physicians at an academic ED, from June 1, 2015 to May 31, 2017. The Data Envelopment Analysis (DEA) model incorporated performance quality measures as outputs and efficiency measures as inputs. DEA is a method widely used in physician performance analysis. The method provides a score (optimal performance efficiency, OPE) for each EP based on maximizing performance (AWM) in proportion to the efficient use of resources, namely diagnostic imaging. OPE scores were then regressed against demographic characteristics and training. Results: The median AWM was 6.8 (quartiles Q1-Q3 = 6.4-7.4), with median diagnostic imaging use percentages of CT (median = 10.1, 8.6-11.9), US (median = 4.7, 3.6-5.6) and x-ray (median = 80, 74-84). The EPs who had the highest AWM combined with the least use of DI (OPE = 100%) provided a median AWM of 9.1 (range 8.9-9.7) with percentage CT, US and x-ray medians at 5.8% (range 5.8-6.2), 2.7% (range 2.4-3.6) and 59% (range 59-72). These provided benchmarks for optimal performance indicators. We found statistically significant differences in OPE scores based on gender (men 4.1 times higher, p < 0.001) and degree (RCPS < CCFPEM, Other < CCFPEM, p < 0.001). Overall, AWM diminishes at a rate of 14% (95% CI: 9-20%) per 100 DI tests ordered. To reach the optimal level of performance (an OPE of 100%), median CT use would need to be reduced by 6% (quartile range 3.9-7.7%), US by 2.2% (quartile range 1.5-3.4%) and x-rays by 37.2% (quartile range 26.8-44.3%). Return visit rates were not associated with DI use, possibly due to homogeneity in the percentage of return visits. Conclusion: We found significant performance variations in terms of average workload measurement in proportion to the weighted average of diagnostic imaging use, with increased use of DI being associated with decreasing AWM. Percentage of return visits does not appear to be useful as a performance indicator.
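As an illustration of the DEA machinery described above, the sketch below solves an input-oriented CCR model with one output (AWM) and three DI inputs via linear programming. The data are fabricated and the study's actual model specification and orientation may differ; this is a sketch of the technique, not the authors' implementation.

```python
# Minimal sketch: input-oriented CCR Data Envelopment Analysis via
# linear programming. For each physician (DMU), find the smallest theta
# such that a convex combination of peers uses <= theta * own inputs
# while producing >= own output.
import numpy as np
from scipy.optimize import linprog

awm = np.array([6.8, 9.1, 7.4, 6.2])          # output: adjusted workload
di = np.array([[10.1, 4.7, 80.0],             # inputs: %CT, %US, %x-ray
               [5.8, 2.7, 59.0],
               [9.0, 4.0, 70.0],
               [11.5, 5.5, 83.0]])

def dea_efficiency(o: int) -> float:
    """Efficiency score theta in (0, 1] for DMU o; 1 means on the frontier."""
    n = len(awm)
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    A_in = np.c_[-di[o], di.T]                    # sum(lam*x) - theta*x_o <= 0
    A_out = np.r_[0.0, -awm][None, :]             # -sum(lam*y) <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(di.shape[1]), -awm[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(len(awm)):
    print(f"Physician {o}: efficiency {dea_efficiency(o):.2f}")
```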