
Factors Affecting Visual Inference in Single-Case Designs

Published online by Cambridge University Press: 10 January 2013

Verônica M. Ximenes, Universidade Federal do Ceará (Brazil)
Rumen Manolov*, Universitat de Barcelona (Spain)
Antonio Solanas, Universitat de Barcelona (Spain)
Vicenç Quera, Universitat de Barcelona (Spain)

*Correspondence concerning this article should be addressed to Rumen Manolov, Departament de Metodologia de les Ciències del Comportament, Facultat de Psicologia, Universitat de Barcelona, Passeig de la Vall d'Hebron, 171, 08035 Barcelona (Spain). Phone: +34-93-3125844. E-mail: rrumenov13@ub.edu

Abstract

Visual inspection remains the most frequently applied method for detecting treatment effects in single-case designs. The advantages and limitations of visual inference are discussed here in relation to other procedures for assessing intervention effectiveness. The first part of the paper reviews previous research on visual analysis, paying special attention to the validation of visual analysts' decisions, inter-judge agreement, and false alarm and omission rates. The most relevant factors affecting visual inspection (i.e., effect size, autocorrelation, data variability, and analysts' expertise) are highlighted and incorporated into an empirical simulation study with the aim of providing further evidence about the reliability of visual analysis. Our results concur with previous studies that have reported a relationship between serial dependence and increased Type I error rates. Participants with greater experience appeared to be more conservative and used more consistent criteria when assessing graphed data. Nonetheless, the decisions made by both professionals and students did not sufficiently match the features of the simulated data, and we also found low intra-judge agreement, thus suggesting that visual inspection should be complemented by other methods when assessing treatment effectiveness.
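As an illustration of the kind of data the abstract describes, the following is a minimal sketch (not the authors' code) of generating a two-phase AB series in which the three manipulated data factors named above, effect size, autocorrelation, and data variability, can be varied. The first-order autoregressive error model, the level-change form of the effect, and all function and parameter names are assumptions introduced here for illustration only.

```python
import numpy as np

def simulate_ab_series(n_a=10, n_b=10, effect=2.0, phi=0.3, sigma=1.0, rng=None):
    """Generate one AB series: baseline (A) plus intervention (B) phase.

    n_a, n_b : number of observations per phase
    effect   : level change added in phase B (effect size, in raw score units)
    phi      : lag-1 autocorrelation of the AR(1) errors (serial dependence)
    sigma    : standard deviation of the white-noise innovations (variability)
    """
    if rng is None:
        rng = np.random.default_rng()
    n = n_a + n_b
    errors = np.empty(n)
    errors[0] = rng.normal(0.0, sigma)
    for t in range(1, n):                      # first-order autoregressive errors
        errors[t] = phi * errors[t - 1] + rng.normal(0.0, sigma)
    level = np.r_[np.zeros(n_a), np.full(n_b, effect)]  # intervention level shift
    return level + errors

# Example: series generated with no true effect (effect=0.0) but positive
# autocorrelation can be used to estimate visual analysts' false-alarm
# (Type I error) rates under serial dependence.
series = simulate_ab_series(effect=0.0, phi=0.6)
```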

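The abstract also reports inter- and intra-judge agreement without naming the statistic used. A chance-corrected index such as Cohen's kappa is one common way to summarize agreement between analysts' binary effect/no-effect decisions; the sketch below is illustrative only and is not presented as the authors' procedure.

```python
import numpy as np

def cohens_kappa(ratings_1, ratings_2):
    """Chance-corrected agreement between two raters' categorical decisions."""
    r1 = np.asarray(ratings_1)
    r2 = np.asarray(ratings_2)
    categories = np.union1d(r1, r2)
    observed = np.mean(r1 == r2)                      # observed proportion of agreement
    expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (observed - expected) / (1.0 - expected)   # Cohen's kappa

# Example: two analysts judging the same six graphs (1 = effect, 0 = no effect).
print(cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))  # about 0.33
```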

Type: Research Article
Copyright © Cambridge University Press 2009

