Book contents
- Frontmatter
- Contents
- Preface to the second edition
- Preface to the first edition
- 1 Introduction
- 2 Developing a coding scheme
- 3 Recording behavioral sequences
- 4 Assessing observer agreement
- 5 Representing observational data
- 6 Analyzing sequential data: First steps
- 7 Analyzing event sequences
- 8 Issues in sequential analysis
- 9 Analyzing time sequences
- 10 Analyzing cross-classified events
- 11 Epilogue
- Appendix: A Pascal program to compute kappa and weighted kappa
- References
- Index
8 - Issues in sequential analysis
Published online by Cambridge University Press: 13 October 2009
Summary
Independence
In classical parametric statistics, we assume that our observations are independent, and this assumption underlies the sampling distributions on which our tests are based. In the sequential analysis of observational data, on the other hand, we want to detect dependence in the observations. To do this we compare observed frequencies with those we would expect if the observations were independent. Thus, dependence in the data is not a “problem.” It is what we are trying to study.
The statistical problem of an appropriate test is not difficult to solve. It was solved in a classic paper in 1957 by Anderson and Goodman (see also Goodman, 1983, for an update). Their solution is based on the likelihood-ratio chi-square test.
The likelihood-ratio test applies to the comparison of any two statistical models when one (the “little” model) is a subcase of the other (the “big” model). The null-hypothesis model is usually the little model. In our case, this model is often the assumption that the data are independent (or quasi-independent); i.e., that there is no sequential structure. Against it we compare the big, interesting model, which posits a dependent sequential structure. As discussed in section 7.6, the difference between the G2 for the big model (e.g., [01]) and the G2 for the little model (e.g., [0][1]) is distributed asymptotically as chi-square, with degrees of freedom equal to the difference in the degrees of freedom for the big and little models. “Asymptotic” means that the approximation becomes increasingly accurate as N grows, where N is the number of observations.
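The comparison above can be sketched numerically. For a two-code lag-1 transition table, the saturated big model [01] fits the observed counts exactly (its G2 is zero), so the test statistic reduces to the G2 of the little independence model [0][1], computed from expected counts based on the row and column marginals. This is a minimal sketch with hypothetical counts, not data from the text:

```python
import math

# Hypothetical lag-1 transition counts for two codes, A and B:
# rows = code at time t, columns = code at time t + 1.
counts = [[30, 10],   # A -> A, A -> B
          [12, 28]]   # B -> A, B -> B

n = sum(sum(row) for row in counts)
row_tot = [sum(row) for row in counts]
col_tot = [sum(counts[i][j] for i in range(2)) for j in range(2)]

# G2 for the little model [0][1]: expected counts come from the
# marginals, as they would if successive codes were independent.
# G2 = 2 * sum( observed * ln(observed / expected) )
g2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_tot[i] * col_tot[j] / n
        if counts[i][j] > 0:
            g2 += 2 * counts[i][j] * math.log(counts[i][j] / expected)

# Degrees of freedom = difference in df between the big and little
# models; for a 2 x 2 table this is (rows - 1)(cols - 1) = 1.
df = (len(counts) - 1) * (len(counts[0]) - 1)
print(f"G2 = {g2:.2f} on {df} df")
```

With these made-up counts, G2 is about 16.8 on 1 degree of freedom, well beyond the .05 critical value of 3.84 for chi-square with 1 df, so the independence model would be rejected in favor of a sequential structure.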
Observing Interaction: An Introduction to Sequential Analysis, pp. 136-149. Publisher: Cambridge University Press. Print publication year: 1997.