
Can we agree on the quality of clinical supervision? Inter-rater reliability of the Short–SAGE (Supervision: Adherence and Guidance Evaluation) scale

Published online by Cambridge University Press:  17 December 2020

Maria Beckman
Affiliation:
Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden
Åsa Spännargård
Affiliation:
Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden
Sven Alfonsson*
Affiliation:
Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden Department of Women’s and Children’s Health, Uppsala University, Uppsala, Sweden
*Corresponding author. Email: sven.alfonsson@ki.se

Abstract

Clinical supervision is a cornerstone of psychotherapist training, but research in this area is hampered by a lack of validated tools for assessing supervision quality. Short–SAGE (Supervision: Adherence and Guidance Evaluation) is an observational instrument designed to evaluate supervision in cognitive behavioural therapy. The aim of this study was to evaluate the inter-rater reliability of Short–SAGE. Four experienced clinical psychologists participated in three 3-hour Short–SAGE coding training sessions, followed by an additional meeting and coding instructions. In a cross-over design, ratings of 20 supervision sessions were then assessed with intraclass correlation coefficients (ICCs), for both the 3-point and 7-point scales of the instrument. In the single-measure analyses for both scales, only one of the 14 items showed an ICC in the good range; the remaining item ICCs were in the poor to fair range. Moreover, on the 3-point scale, five of the 14 inter-rater correlations were non-significant. Validated tools for assessing supervision quality are much needed for research and training purposes, but instruments for measuring adherence and/or competence are of little value if coders cannot attain inter-rater reliability. Whether the quality of supervision is associated with improvements in supervisees’ competencies is not yet clear; Short–SAGE provides a tool that may enable empirical research in this area. Further studies are needed to assess whether more extensive training can improve the inter-rater reliability of Short–SAGE.

Key learning aims

  1. Readers will be aware of the urgent need for validated tools to assess clinical supervision quality.

  2. Readers will be familiar with some existing tools for assessing the quality of clinical supervision.

  3. Readers will be able to identify common problems in the development of instruments for assessing clinical supervision.

Information

Type
Original Research
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© British Association for Behavioural and Cognitive Psychotherapies 2020
Table 1. Mean Short–SAGE scores for each item for each coder

Table 2. Range and intra-class correlation coefficients (ICC) for the Short–SAGE 7-point scale

Table 3. Range and intra-class correlation coefficients (ICC) for the Short–SAGE 3-point scale
