
P069: Gestalt assessment of online educational resources is unreliable and inconsistent

Published online by Cambridge University Press:  02 June 2016

K. Krishnan, S. Trueger, B. Thoma, M. Lin, T.M. Chan
Affiliation: University of Toronto, Markham, ON

Abstract


Introduction: The use of free open access medicine, particularly open educational resources (OERs), by medical educators and learners continues to increase. As OERs, especially blogs and podcasts, rise in popularity, their ease of dissemination raises concerns about their quality. While critical appraisal of primary research and journal articles is formally taught, no comparable training exists for the assessment of OERs. The ability of educators and learners to assess the quality of OERs using gestalt alone has therefore been questioned. Our goal was to determine whether gestalt is sufficient for emergency medicine (EM) learners and physicians to consistently rate and reliably recommend OERs to their colleagues. We hypothesized that EM physicians and learners would differ substantively in their assessment of the same resources.

Methods: Participants included 31 EM learners and 23 EM attending physicians from Canada and the U.S. A modified Dillman technique was used to administer four survey blocks of 10 blog posts per subject between April and August 2015. Participants were asked whether they would recommend each OER to 1) a learner or 2) an attending physician. Rating reliability was assessed using single-measures intraclass correlations (ICCs), and correlations between the groups were assessed using Spearman's rho. Family-wise adjustments for multiple comparisons were made using the Bonferroni technique.

Results: Learners demonstrated poor reliability when recommending resources for other learners (ICC = 0.21, 95% CI 0.13-0.39) and for attending physicians (ICC = 0.16, 95% CI 0.09-0.30). Similarly, attendings had poor reliability when recommending resources for learners (ICC = 0.27, 95% CI 0.18-0.41) and for other attendings (ICC = 0.22, 95% CI 0.14-0.35). Learners and attendings demonstrated moderate consistency with each other when recommending resources for learners (rs = 0.494, p < .01) and attendings (rs = 0.491, p < .01).
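The analysis described above (single-measures ICC for within-group reliability, Spearman's rho for between-group consistency, Bonferroni adjustment) can be sketched as follows. This is a minimal illustration on hypothetical ratings data, not the authors' actual analysis code; the data, rater counts, and seven-point scale are assumptions for demonstration only.

```python
# Illustrative sketch (hypothetical data): single-measures ICC and
# Spearman's rho with a Bonferroni adjustment, as described in Methods.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical ratings: rows = 40 blog posts, columns = raters
ratings = rng.integers(1, 8, size=(40, 10)).astype(float)

def icc_oneway_single(x):
    """ICC(1,1): one-way random-effects, single-measures intraclass
    correlation, computed from the one-way ANOVA mean squares."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

icc = icc_oneway_single(ratings)

# Between-group consistency: Spearman's rho on mean ratings per post
# (attending means here are simulated as noisy copies of learner means)
learner_means = ratings.mean(axis=1)
attending_means = learner_means + rng.normal(0, 1.5, size=40)
rho, p = spearmanr(learner_means, attending_means)
p_adj = min(p * 4, 1.0)  # Bonferroni for a family of 4 comparisons
print(f"ICC = {icc:.2f}, rho = {rho:.2f}, adjusted p = {p_adj:.3g}")
```

With random ratings as above, the ICC lands near zero, mirroring the poor single-measures reliability reported in Results; real analyses would use the observed recommendation ratings in place of the simulated matrix.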
Conclusion: A gestalt-based rating system is neither reliable nor consistent when recommending OERs to learners and attending physicians. Learners' gestalt ratings for recommending resources to other learners and to attendings were especially unreliable. Our findings suggest the need for structured rating systems to assess OERs.

Type
Poster Presentations
Copyright
Copyright © Canadian Association of Emergency Physicians 2016