Survey methodology has a tradition in statistics of focusing on populations and samples. Samples of population units are selected according to probabilistic sampling designs. By controlling the design, the selection bias and the uncertainty of estimators and tests can be quantified, so that inference can be drawn with confidence. Early publications in the field were dedicated to explaining the benefits of probability sampling designs as opposed to convenience sampling of various sorts. Probability sampling is the term usually used when the selection probabilities are known for all samples and each population unit has a nonzero probability of being selected. The focus on controlled randomization can be contrasted with probabilistic uncertainty modeling. In many surveys, sampling variation is not the main source of uncertainty. There is variation due to measurement errors, response imperfections, observation difficulties, and other recurrent factors that can be specified by probabilistic assumptions. The superpopulation concept can also be seen as a way to include probabilistic modeling for uncertainty that is not a consequence of imposed randomization or of variation due to recurrent incidents. Modern statistical survey methodology distinguishes between design- and model-based approaches, and often uses an intermediate approach with model-assisted techniques in combination with design-based inference. A pure probabilistic model approach focuses on data and tries to imitate how the data are generated. A good model fit is important for reliable inference, but does not necessarily mean that the sampling design is an explicit part of the model's data-generating mechanism.
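The design-based idea of known, nonzero inclusion probabilities can be illustrated with the Horvitz-Thompson estimator, a standard design-unbiased estimator of a population total. The sketch below uses a toy Poisson (independent Bernoulli) sampling design; the names and values are illustrative, not taken from the text.

```python
import random

def horvitz_thompson_total(sample_values, inclusion_probs):
    """Design-unbiased estimate of a population total: each sampled
    value is weighted by the inverse of its known inclusion probability."""
    return sum(y / p for y, p in zip(sample_values, inclusion_probs))

# Toy population and a Poisson (independent Bernoulli) sampling design
# in which every unit's inclusion probability is known and nonzero.
population = [12.0, 25.0, 31.0, 8.0, 19.0]
probs = [0.5, 0.5, 0.25, 0.25, 0.5]

rng = random.Random(7)
sample, sample_probs = [], []
for y, p in zip(population, probs):
    if rng.random() < p:  # unit enters the sample with probability p
        sample.append(y)
        sample_probs.append(p)

estimate = horvitz_thompson_total(sample, sample_probs)
```

Because the weights come from the design itself, the estimator is unbiased over repeated draws of the sample, with no model assumed for the population values.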
Diffusion of innovations theory attempts to explain how new ideas and practices spread within and between communities. The theory has its roots in anthropology, economics, geography, sociology, and marketing, among other disciplines (Hägerstrand 1967; Robertson 1971; Brown 1981; Rogers 2003), and has in some ways been adapted from epidemiology (e.g., Bailey 1975; Morris 1993). The premise, confirmed by empirical research, is that new ideas and practices spread through interpersonal contacts largely consisting of interpersonal communication (Ryan and Gross 1943; Beal and Bohlen 1955; Katz, Levine, and Hamilton 1963; Rogers 1995; Valente 1995; Valente and Rogers 1995).
In their pioneering study, Ryan and Gross (1943) laid the groundwork for the diffusion paradigm by showing, among other things, that social factors rather than economic ones were important influences on adoption (Valente and Rogers 1995). Hundreds of diffusion studies were conducted in the 1950s and early 1960s to examine the diffusion process in more detail across a variety of settings (Rogers 2003). Many studies sought to understand how information created in government-sponsored or similar programs could be disseminated more effectively. Diffusion research peaked in the early 1960s, but has been reinvigorated more recently by the advent of more sophisticated network models and of technology that makes it possible to study the diffusion process more explicitly.
Most diffusion studies focus on understanding the factors that lead some members of a population to adopt a new idea while others do not.
This chapter treats statistical methods for network evolution. It is argued that it is most fruitful to consider models where network evolution is represented as the result of many (usually nonobserved) small changes occurring between the consecutively observed networks. Accordingly, the focus is on models where a continuous-time network evolution is assumed, although the observations are made at discrete time points (two or more).
Three models are considered in detail, all based on the assumption that the observed networks are outcomes of a Markov process evolving in continuous time. The independent arcs model is a trivial baseline model. The reciprocity model expresses effects of reciprocity, but lacks other structural effects. The actor-oriented model is based on a model of actors changing their outgoing ties as a consequence of myopic stochastic optimization of an objective function. This framework offers the flexibility to represent a variety of network effects. An estimation algorithm is treated, based on a Markov chain Monte Carlo (MCMC) implementation of the method of moments.
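The continuous-time setup behind the simplest of these models, the independent arcs model, can be sketched directly: every ordered pair of actors carries its own two-state Markov chain that switches the arc on and off at constant rates, independently of all other arcs. The function and rate names below are illustrative, not the chapter's notation.

```python
import random

def simulate_independent_arcs(adj, rate_on, rate_off, t_end, seed=0):
    """Evolve a binary adjacency matrix under the independent arcs model:
    each arc (i, j) is its own two-state continuous-time Markov chain,
    turning on at rate_on and off at rate_off, independently of all
    other arcs.  Returns the state at time t_end."""
    rng = random.Random(seed)
    n = len(adj)
    state = [row[:] for row in adj]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # no self-ties
            t = 0.0
            while True:
                rate = rate_off if state[i][j] else rate_on
                t += rng.expovariate(rate)  # exponential waiting time to next flip
                if t >= t_end:
                    break
                state[i][j] = 1 - state[i][j]
    return state
```

Only the network at `t_end` is returned, mirroring the situation described above: the many small changes between consecutive panel observations are themselves unobserved.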
Some Basic Ideas About Longitudinal Social Network Data
The statistical modeling of social networks is difficult because of the complicated dependence structures of the processes underlying their genesis and development. One might think that the statistical modeling of longitudinal data on social networks is more difficult than modeling single observations of social networks.
This chapter describes and illustrates methods for studying affiliation networks, with special attention to methods for spatial representations that jointly display the actors and events in the network. Although affiliation networks have been the focus of methodological research for decades (Levine 1972; Breiger 1974; Seidman 1981; McPherson 1982; Wilson 1982), more recent analyses of affiliation networks have raised a number of issues concerning appropriate methods for their study. At the same time, research has pointed to the empirical and theoretical generality of this perspective (Freeman and White 1993; Wasserman and Faust 1994; Borgatti and Everett 1997; Faust 1997; Skvoretz and Faust 1999; Breiger 2000; Mische and Pattison 2000; Roberts 2000; Brazill and Groffman 2002; Faust et al. 2002; Pattison and Breiger 2002).
Background
Representing the two modes in the affiliation network in a “joint space” in which both actors and events are depicted simultaneously is of particular interest in both earlier and more recent work on affiliation networks. Such graphic displays commonly use scaling (e.g., correspondence analysis) or algebraic approaches (e.g., lattices). An important, but often neglected, aspect of some applications is clear specification of the formal relationships embodied in the configuration and explicit description of how the result corresponds to the original data. These omissions produce rather casual depictions and consequent ambiguity in interpretation. They also contribute to misunderstanding and fuel debate about the usefulness of the approach. The following passages are typical of such descriptions for affiliation networks or similar two-mode data arrays.
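One concrete, fully specified correspondence between a two-mode array and derived one-mode data is the co-membership projection, in the spirit of Breiger's (1974) duality of persons and groups. The following Python sketch is illustrative (the function names are not from the text): it turns an actor-by-event affiliation matrix into an actor-by-actor co-attendance count and its dual event-by-event overlap count.

```python
def comembership(affiliation):
    """One-mode projection of a two-mode actor-by-event matrix:
    entry (i, k) counts the events that actors i and k attend together;
    the diagonal gives each actor's number of memberships."""
    n, m = len(affiliation), len(affiliation[0])
    return [[sum(affiliation[i][e] * affiliation[k][e] for e in range(m))
             for k in range(n)]
            for i in range(n)]

def event_overlap(affiliation):
    """Dual projection: entry (e, f) counts the actors that events
    e and f share; the diagonal gives each event's attendance."""
    n, m = len(affiliation), len(affiliation[0])
    return [[sum(affiliation[i][e] * affiliation[i][f] for i in range(n))
             for f in range(m)]
            for e in range(m)]
```

Stating the mapping this explicitly is exactly the kind of specification whose absence, as noted above, produces casual depictions and ambiguous interpretations.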
Interest in social network analysis has grown massively in recent years. This growth has been matched by an increasing sophistication in the technical tools available to users. Models and Methods in Social Network Analysis (MMSNA) presents the most important of those developments in quantitative models and methods for analyzing social network data that have appeared during the 1990s. It is a collection of original chapters by leading methodologists, commissioned by the three editors to review recent advances in their particular areas of network methods.
As is well-known, social network analysis has been used since the mid-1930s to advance research in the social and behavioral sciences, but it progressed slowly and linearly until the end of the century. Sociometry (sociograms, sociomatrices), graph theory, dyads, triads, subgroups, and blockmodels – reflecting substantive concerns such as reciprocity, structural balance, transitivity, clusterability, and structural equivalence – all made their appearances and were quickly adopted by the relatively small number of “network analysts.” It was easy to trace the evolution of network theories and ideas from professors to students, from one generation to the next. The field of network analysis was even analyzed as a network (see, for example, Mullins 1973, as well as analyses by Burt 1978 and Hummon and Carley 1993). Many users eventually became analysts, and some even became methodologists. A conference of methodologists, held at Dartmouth College in the mid-1970s, consisted of about thirty researchers (see Holland and Leinhardt 1979) and really did constitute a “who's who” of the field – an auspicious, but rather small, gathering.
Several chapters in this book outline some of the significant advances that have been made in modeling networks and network-based processes (see, for example, Chapters 6, 7, 10, and 11). These models generally presuppose a single network of interest, such as a network of acquaintance ties or a network of advice-seeking ties, and they represent the interdependence of such ties with actor characteristics and other ties in some local network neighborhood (see, for example, Chapter 10). Yet, there are compelling theoretical and methodological reasons to extend these models to the case of multiple networks, and in this chapter we discuss the rationale and nature of these extensions, as well as a number of issues to which they give rise.
From a theoretical perspective, it is more than likely that network processes involve different kinds of relational ties; indeed, some well-known hypotheses about the nature of local network processes involve multiple types of tie. Cartwright and Harary's (1956) adaptation of Heider's (1946) balance model, for example, proposes a strong form of interdependence among positive and negative ties within triadic network structures, and Granovetter's (1973) “strength of weak ties” thesis involves an interdependence between strong, weak, and null ties. In addition, there is an impressive body of empirical work that points to the importance of multiplex ties, that is, those ties in which several types of relationships come together – such as friend and coworker, or advisor and supervisor – and also to the consequences of such ties for interpersonal processes.
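The triadic balance rule mentioned here has a compact arithmetic form: code a positive tie as +1 and a negative tie as -1, and a triad is balanced exactly when the product of its three signs is positive. A minimal sketch (function name illustrative):

```python
def triad_balanced(s_ij, s_jk, s_ik):
    """Cartwright-Harary balance criterion for one signed triad:
    with ties coded +1 (positive) or -1 (negative), the triad is
    balanced when the product of the three signs is positive."""
    return s_ij * s_jk * s_ik > 0

# "My friend's friend is my friend" is balanced; a triad with exactly
# one negative tie, such as (+1, +1, -1), is unbalanced.
```

This is the strong form of interdependence among positive and negative ties that the Cartwright-Harary adaptation of Heider's model proposes within triadic structures.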
Social network analysts study the structural patterning of the ties that link social actors. For the most part, they seek to uncover two kinds of patterns: (1) those that reveal subsets of actors that are organized into cohesive social groups, and (2) those that reveal subsets of actors that occupy equivalent social positions, or roles.
To uncover patterns of those kinds, network analysts collect and examine data on actor-to-actor ties. Such data record who is connected to whom and/or how closely they are connected. Typically, the data are organized into square N-by-N matrices, where the N rows and the N columns both refer to the social actors being studied. Cell entries in these matrices indicate either the presence/absence or the strength of some social relationship linking the row actor to the column actor. In this chapter, we deal only with symmetric relationships where, given a connection from actor i to actor j, actor j is also connected to actor i in the same way.
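The symmetry restriction just stated is easy to check mechanically. The following sketch (names illustrative) tests whether an actor-by-actor matrix encodes a symmetric relation, for either binary or valued ties.

```python
def is_symmetric(ties):
    """True when the actor-by-actor matrix encodes a symmetric relation:
    the tie from i to j always equals the tie from j to i."""
    n = len(ties)
    return all(ties[i][j] == ties[j][i]
               for i in range(n) for j in range(i + 1, n))
```

Data that fail this check would need to be symmetrized (by some explicit rule) before the methods of this chapter apply.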
Network analysts sometimes use standard statistical procedures in examining their actor-by-actor matrices. Although there are several statistical modeling tools that have been developed specifically for network data (Holland and Leinhardt 1981; Wasserman and Pattison 1996), these tools were designed primarily for testing hypotheses. They do not provide a simple direct way to explore the patterning of network data – one that will permit an investigator to “see” groups and positions.