Focusing on practical application, this textbook provides clear and concise explanations of statistical tests and techniques that students can apply in real-world situations. It has a dual emphasis: first, on doing statistics and, second, on understanding statistics, to do away with the mindset that statistics is difficult. Procedural explanations show students how to apply particular statistical tests and techniques in practical research situations. Conceptual understanding is encouraged to ensure students know not only when and how to apply appropriate techniques but also why they are using them. Ancillary resources are available, including sample answers to exercises, PowerPoint teaching slides, an instructor manual, and a test bank. Illustrative figures, real-world data, practice exercises, and software instruction make this an essential resource for mastering statistics for undergraduate and graduate students in the social and behavioral sciences.
Chapter 8 examines the t-test and its assumptions as they apply to comparing means between samples and populations, and to experimental designs such as between-subjects, within-subjects, and matched designs. The t-test, or Student's t-test, is the simplest and most elegant test of significance. The chapter examines the t-test for a single sample, used for simple test designs; the t-test for independent means, used when two populations are being compared; and the t-test for dependent means, used when measurements are related or correlated.
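As an illustration of the single-sample case the chapter covers, the t statistic compares a sample mean with a hypothesized population mean, scaled by the estimated standard error. This is a minimal sketch, not taken from the book; the function name and data are invented for illustration.

```python
import math

def one_sample_t(sample, mu0):
    """Single-sample t statistic: t = (x̄ − μ0) / (s / √n)."""
    n = len(sample)
    mean = sum(sample) / n
    # sample standard deviation (n − 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return (mean - mu0) / (s / math.sqrt(n))

scores = [12, 14, 11, 13, 15, 12, 14]   # hypothetical sample
t = one_sample_t(scores, 10)            # test against μ0 = 10
```

The resulting t is compared against a critical value from the t distribution with n − 1 degrees of freedom.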
Chapter 9 introduces students to one-way (one-factor) analysis of variance (ANOVA) and to factorial designs involving two or more factors. Step-by-step calculation demonstrations are provided, though greater emphasis is placed on conceptual understanding than on computation, especially for factorial designs (multifactor ANOVA). The first part of the chapter explores the logic of ANOVA and the steps required to apply it, including those that follow rejection of the null hypothesis. The second part introduces factorial designs involving multiple factors.
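The logic of one-way ANOVA, an F ratio of between-group variance to within-group variance, can be sketched as follows. This is illustrative only; the function and data are not from the book.

```python
def one_way_anova_F(*groups):
    """F = MS_between / MS_within for a one-way ANOVA."""
    k = len(groups)                            # number of groups
    N = sum(len(g) for g in groups)            # total observations
    grand = sum(sum(g) for g in groups) / N    # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)          # between-groups mean square
    ms_within = ss_within / (N - k)            # within-groups mean square
    return ms_between / ms_within

F = one_way_anova_F([1, 2, 3], [2, 3, 4], [3, 4, 5])  # hypothetical data
```

A large F indicates that the group means differ by more than within-group variation alone would suggest.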
Chapter 2 discusses different graphic techniques for describing data, including bar graphs, histograms, and frequency polygons, as well as the shapes and patterns of distributions. Raw scores are often arranged into frequency distributions: orderly arrangements of intervals of a given size that contain frequencies. Frequency distributions organize data in ways that investigators can view and interpret. Constructing intervals of specified sizes to encompass the scores in a distribution enables investigators to capture features that suggest particular statistical techniques for data analysis. Large amounts of data collected from questionnaires, observations, or experiments can be summarized in a simple chart or graph. The value of a meaningful, relevant, and understandable graphic representation cannot be overstated.
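A frequency distribution of the kind described, intervals of a given size with a count of the scores falling in each, might be built like this. This is a hypothetical sketch assuming integer scores; nothing here is from the book.

```python
from collections import Counter

def frequency_distribution(scores, width):
    """Group integer raw scores into intervals of the given width and count them."""
    lows = [(s // width) * width for s in scores]   # lower bound of each score's interval
    counts = Counter(lows)
    # label each interval by its (low, high) bounds
    return {(low, low + width - 1): counts[low] for low in sorted(counts)}

fd = frequency_distribution([3, 7, 12, 14, 15, 21], width=10)  # invented scores
```

Here `fd` maps each interval to its frequency, e.g. three scores fall in the 10–19 interval.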
Chapter 7 introduces statistical power and effect size in hypothesis testing. Guidelines for interpreting effect size, along with other ways of increasing statistical power, are provided. Point estimation and interval estimation, and their relationship to population parameter estimates and the hypothesis-testing process, are considered. Statistical significance is highly sensitive to large sample sizes, so researchers, in addition to selecting desired significance levels (p-values), need to know the magnitude of the treatment effect, or effect size, of the behavior under consideration. Effect size helps determine the required sample size, and sample size is intimately related to statistical power, the likelihood of rejecting a false null hypothesis.
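One widely used effect-size measure for two groups is Cohen's d, the mean difference in pooled standard-deviation units. The sketch below is illustrative; the book's own formulas and examples may differ.

```python
import math

def cohens_d(a, b):
    """Cohen's d: (x̄a − x̄b) divided by the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

d = cohens_d([2, 4, 6], [1, 3, 5])   # invented data; d is in SD units
```

By common guidelines, d around 0.2 is small, 0.5 medium, and 0.8 large.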
Chapter 10 examines correlation, the statistical procedure used to measure the degree of association or relationship between variables. It is bivariate, since we typically apply this statistical technique to measure or describe the association between two variables or groups. The correlation coefficient, which measures the degree and direction of an association, is discussed, as are some of the issues regarding the application and interpretation of correlations. The chapter also outlines the many different measures of association but focuses on Pearson’s r. It emphasizes the definitional formula and z-scores for understanding and computing the correlation coefficient.
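The definitional z-score formula for Pearson's r that the chapter emphasizes, the average product of paired z-scores, can be sketched directly. The data here are invented for illustration.

```python
import math

def pearson_r(xs, ys):
    """Definitional formula: r = Σ(z_x · z_y) / (n − 1), using sample SDs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    # multiply each pair of z-scores and average over n − 1
    return sum(((x - mx) / sx) * ((y - my) / sy) for x, y in zip(xs, ys)) / (n - 1)

r = pearson_r([1, 2, 3], [2, 4, 6])   # a perfect positive linear relationship
```

r ranges from −1 to +1, with the sign giving the direction and the magnitude the strength of the association.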
Chapter 3 examines measures of central tendency and their correspondence to normality and skewness. The three measures of central tendency presented are the mode, median, and mean. The mean is typically thought of as the average; the mode is the score occurring most frequently in a distribution; and the median is the central score, the point that divides a distribution into two equal parts. The median is a robust statistic. The level-of-measurement assumption is crucial in selecting the best measure of central tendency for a given analysis.
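Python's standard library computes all three measures directly; note how an outlier pulls the mean upward while the median stays put, which is what the median's robustness means. The data are invented for illustration.

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 9]   # hypothetical scores; 9 is an outlier

m_mode = mode(scores)     # most frequent score
m_median = median(scores) # midpoint of the ordered scores
m_mean = mean(scores)     # arithmetic average, pulled toward the outlier
```

Here the mode is 3 and the median 3.5, while the mean is dragged above 4.3 by the single extreme score.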
Chapter 4 examines measures of variability. Variability is the degree to which scores in a distribution are spread out or clustered together. Scores in a distribution not only cluster around certain reference points of central tendency but also spread in particular ways around those central points. To understand the significance of any distribution of scores, we must know about both its central tendency and its variability. The index of qualitative variation (IQV), range, standard deviation, mean absolute deviation, and variance are all important measures of variability. Measures of variability allow researchers to provide a full picture of a distribution of scores.
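Several of these measures can be sketched with the standard library plus two short helpers; the helper names and data are invented for illustration.

```python
from statistics import pvariance, pstdev

def value_range(scores):
    """Range: distance between the highest and lowest score."""
    return max(scores) - min(scores)

def mean_abs_deviation(scores):
    """Average absolute distance of each score from the mean."""
    m = sum(scores) / len(scores)
    return sum(abs(x - m) for x in scores) / len(scores)

scores = [2, 4, 6, 8]          # hypothetical scores
r = value_range(scores)        # range
mad = mean_abs_deviation(scores)
var = pvariance(scores)        # population variance
sd = pstdev(scores)            # population standard deviation, √variance
```

The standard deviation is simply the square root of the variance, which puts spread back in the original score units.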
Chapter 5 examines the normal distribution, its relationship to z-scores, and its applicability to probability theory and statistical inference. z-scores, or standardized scores, are values depicting how far a particular score is from the mean in standard-deviation units. Different proportions of the area under the normal curve are associated with different z-scores. Conversions of raw scores to z-scores and of z-scores back to raw scores are illustrated. Nonnormal distributions that differ markedly from normal-curve characteristics are also described. The importance of the normal curve as a probability distribution, along with a brief introduction to probability, is discussed.
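The two conversions and their link to normal-curve proportions can be sketched with the standard library's `NormalDist`; the values μ = 70 and σ = 10 are invented for illustration.

```python
from statistics import NormalDist

def z_score(x, mu, sigma):
    """Distance of a raw score from the mean, in standard-deviation units."""
    return (x - mu) / sigma

def raw_score(z, mu, sigma):
    """Convert a z-score back to a raw score."""
    return mu + z * sigma

z = z_score(85, 70, 10)              # a score of 85 is 1.5 SDs above the mean
p_below = NormalDist().cdf(z)        # proportion of the normal curve below z
x = raw_score(z, 70, 10)             # back to the original raw score, 85
```

About 93% of the area under the standard normal curve lies below z = 1.5.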
Chapter 11 introduces students to bivariate (simple) regression and multiple regression. Students learn the importance of linear relationships and how linearity can be used to predict one variable from knowledge of another variable or of multiple variables. Interpretation and conceptual understanding of critical concepts in regression are emphasized.
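For the bivariate case, the least-squares slope and intercept that make such predictions can be computed directly; the function name and data below are invented for illustration.

```python
def simple_regression(xs, ys):
    """Least-squares fit for the line Ŷ = a + bX."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope: covariation of X and Y over variation in X
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx            # intercept: line passes through (x̄, ȳ)
    return a, b

a, b = simple_regression([1, 2, 3], [2, 4, 6])   # hypothetical data
predicted = a + b * 4                            # predict Y for X = 4
```

The slope b gives the predicted change in Y for a one-unit increase in X; multiple regression extends the same idea to several predictors.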
Chapter 6 introduces the hypothesis-testing process and the relevance of standard error in reaching statistical conclusions about whether to accept or reject the null hypothesis using the z-test statistic. Type I and Type II errors are presented, along with the types of statistical tests researchers apply in testing hypotheses, including one-tailed (directional) versus two-tailed (nondirectional) tests. Three important elements of the decision rule are the sampling distribution of means, the level of significance, and critical regions. Type I and Type II errors influence the decisions we make about predicted relationships between variables. Statistical decision-making is never error-free, but we have some control over reducing these errors.
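The z-test decision process, compute z from the standard error and compare it with the critical value for a two-tailed test at α = .05, can be sketched as follows (hypothetical numbers, not from the book):

```python
import math
from statistics import NormalDist

def z_test(sample_mean, mu, sigma, n):
    """z = (x̄ − μ) / (σ / √n), where σ/√n is the standard error of the mean."""
    se = sigma / math.sqrt(n)
    return (sample_mean - mu) / se

z = z_test(105, 100, 15, 36)          # hypothetical: x̄=105, μ=100, σ=15, n=36
crit = NormalDist().inv_cdf(0.975)    # two-tailed critical value at α=.05, ≈1.96
reject_null = abs(z) > crit           # falls in a critical region → reject H0
```

The critical regions are the tails of the sampling distribution beyond ±1.96; a z landing there is deemed too unlikely under the null hypothesis.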
Chapter 12 examines some nonparametric statistical tests designed for data at the nominal or ordinal level of measurement. Chi-square tests have few restrictive assumptions underlying their application and are used for data that violate one or more of the formal assumptions regarding the use of parametric statistics. A presentation illustrating tabular construction for one, two, and k variables is provided; chi-square tests for a single sample, two samples, and k samples are then described. The chapter also presents the chi-square test of independence, which can be applied to frequency data cross-tabulated for two or more nominal variables. This test evaluates frequency data to determine whether the variables are related in the population.
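The chi-square statistic for a cross-tabulated table compares observed cell frequencies with the frequencies expected under independence; a minimal sketch with invented counts follows.

```python
def chi_square_independence(table):
    """χ² = Σ (O − E)² / E over every cell of an r×c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    N = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected frequency if the two variables were independent
            expected = row_totals[i] * col_totals[j] / N
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# hypothetical 2×2 cross-tabulation of two nominal variables
chi2 = chi_square_independence([[10, 20], [20, 10]])
```

The statistic is then compared against the chi-square distribution with (r − 1)(c − 1) degrees of freedom; large values indicate the variables are related.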