
N = 1 Research Methodology in Counselling: Focusing upon the Individual Client

Published online by Cambridge University Press: 12 February 2016

Christopher F. Sharpley
Affiliation: Bond University, Australia
Correspondence: School of Health Sciences, Bond University, Gold Coast, QLD 4229, Australia

Abstract

Although the last 20 years have seen a focus upon evidence-based therapies, there are arguments that much of the so-called “evidence” produced is, in fact, irrelevant to the mental health practitioner in the field, principally because of the use of large-scale group designs in controlled clinical studies of the effectiveness of one therapy over another. By contrast, and with particular relevance to the practitioner who is both scientist and therapist, single-subject research designs and methodologies for data analysis can be applied in ways that allow for generalisation to everyday practice. To inform the readership, the rationale underlying n = 1 studies is described, with some explanation of the major designs and their application to typical cases in guidance and counselling. Issues of inferential deductions from data, variations of design, data analysis via visual and statistical procedures, and replication are discussed. Finally, a case is argued for the introduction of n = 1 reports within the Australian Journal of Guidance and Counselling to better inform the readership about clinical research findings relevant to their practices.
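To make the abstract's reference to "data analysis via visual and statistical procedures" more concrete, the sketch below (in Python) works through a toy A-B (baseline versus intervention) comparison for a single client. The weekly scores, phase lengths and the simple level-and-trend summaries are assumptions for demonstration only, not the article's own data or method; single-subject data are often serially dependent (autocorrelated), which is one reason time-series statistics are recommended alongside visual inspection of such summaries.

# Illustrative sketch only: toy A-B (baseline vs. intervention) data for one
# client's weekly self-report scores (lower = improvement). All values are
# hypothetical and chosen purely for demonstration.
from statistics import mean

baseline = [22, 24, 23, 25, 24, 23]      # Phase A: sessions before the intervention
intervention = [21, 19, 17, 16, 15, 14]  # Phase B: sessions after the intervention begins

def phase_summary(label, scores):
    # Report the simple "level" (phase mean) and "trend" (last minus first score)
    # descriptors commonly examined in visual analysis of single-subject graphs.
    trend = scores[-1] - scores[0]
    print(f"{label}: mean level = {mean(scores):.1f}, trend = {trend:+d}")

phase_summary("Baseline (A)", baseline)
phase_summary("Intervention (B)", intervention)

# A raw change-in-level comparison; with autocorrelated data this can overstate
# the effect, which is why time-series statistics are often used as well.
print(f"Change in mean level (B - A): {mean(intervention) - mean(baseline):+.1f}")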

Type: Articles

Copyright © Cambridge University Press 2003

