Book contents
- Frontmatter
- Contents
- List of figures
- List of tables
- List of boxes
- Introduction
- Part I Effect sizes and the interpretation of results
- Part II The analysis of statistical power
- 3 Power analysis and the detection of effects
- 4 The painful lessons of power research
- Part III Meta-analysis
- Last word: thirty recommendations for researchers
- Appendices
- Bibliography
- Index
4 - The painful lessons of power research
Published online by Cambridge University Press: 05 June 2012
Summary
Low-powered studies that report insignificant findings hinder the advancement of knowledge because they misrepresent the ‘true’ nature of the phenomenon of interest, and might thereby lead researchers to overlook an increasing number of effects.
~ Jürgen Brock (2003: 96–97)

The low power of published research
How highly would you rate a scholarly journal where the majority of articles had more than a 50% chance of making a Type II error, where one out of every eight papers mistook randomness for a genuine effect, and where replication studies falsifying these Type I errors were routinely turned away by editors uninterested in reporting nonsignificant findings? You would probably think this was a low-grade journal indeed. Yet the characteristics just described could be applied to top-tier journals in virtually every social science discipline. This is the implicit verdict of studies that assess the statistical power of published research.
Power analyses can be done both for individual studies, as described in the previous chapter, and for sets of studies linked by a common theme or published in a particular journal. Scholars typically analyze the power of published research to gauge the "power of the field" and to assess the likelihood that Type II errors have been made. They avoid the usual perils of post hoc power analysis by using the conventional alpha level rather than reported p values, and by calculating power against a range of hypothetical effect sizes (e.g., Cohen's small, medium, and large benchmarks) rather than the effect sizes actually observed.
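The survey procedure just described can be sketched in code. The function below is an illustrative approximation, not a method from the book: it estimates the power of a two-sided, two-sample test via the normal approximation, using a fixed alpha and a range of hypothetical standardized effect sizes, as a power surveyor would when auditing a journal's articles. The function name and the sample size of 30 per group are assumptions for the example.

```python
from math import sqrt
from statistics import NormalDist


def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of a
    standardized mean difference d, using the normal approximation.

    Power is computed against the conventional alpha level, not a
    reported p value, which is what survey-style power audits do.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)            # noncentrality under H1
    # Probability the test statistic lands in either rejection region
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)


# Audit a hypothetical study (n = 30 per group) against Cohen's
# small/medium/large benchmarks for d
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label:6s} d={d}: power = {power_two_sample(d, 30):.2f}")
```

With 30 subjects per group, power to detect a small effect is low (around 0.1), so a survey finding that typical samples are this size would conclude the field runs a high Type II error risk for small effects.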
The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results, pp. 73–86. Publisher: Cambridge University Press. Print publication year: 2010.