Spectral methods are global methods, where the computation at any given point depends not only on information at neighboring points but on information from the entire domain. To understand the idea of a global method, we begin by considering local methods and present the global Fourier method as a limit of local finite difference approximations of increasing orders of accuracy. We then introduce phase error analysis and use this tool to show the merits of high-order methods and, in particular, of their limit: the Fourier method. The phase error analysis leads to the conclusion that high-order methods are beneficial for problems requiring well-resolved fine details of the solution or long time integrations.
Finite difference methods are obtained by approximating a function u(x) by a local polynomial interpolant. The derivatives of u(x) are then approximated by differentiating this local polynomial. In this context, local refers to the use of nearby grid points to approximate the function or its derivative at a given point.
For slowly varying functions, the use of local polynomial interpolants based on a small number of interpolating grid points is very reasonable. Indeed, it seems to make little sense to include function values far away from the point of interest in approximating the derivative. However, using low-degree local polynomials to approximate solutions containing very significant spatial or temporal variation requires a very fine grid in order to accurately resolve the function. Clearly, the use of fine grids requires significant computational resources in simulations of interest to science and engineering. In the face of such limitations we seek alternative schemes that will allow coarser grids, and therefore fewer computational resources.
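As a concrete illustration (ours, not the book's), the sketch below applies the second-order centered difference, i.e. the derivative of the local quadratic interpolant through three neighboring grid points, to a smooth periodic function. The test function and grid sizes are arbitrary choices; note how the error decays only like h^2, so resolving fine structure forces a fine grid.

    import numpy as np

    # Second-order centered difference on a periodic grid: the stencil
    # (u[i+1] - u[i-1]) / (2h) is the derivative at x[i] of the local
    # quadratic interpolant through x[i-1], x[i], x[i+1].
    for N in (16, 32, 64, 128):
        x = 2 * np.pi * np.arange(N) / N
        h = 2 * np.pi / N
        u = np.exp(np.sin(x))                  # smooth periodic test function
        du_exact = np.cos(x) * u
        du = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
        print(N, np.max(np.abs(du - du_exact)))   # error drops by ~4 as h halves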
Up to this point, we have mainly discussed the theoretical aspects of spectral methods. We now turn to some of the computational issues surrounding these methods, and discuss the tools needed for efficient implementation of trigonometric and polynomial spectral methods. We shall also discuss the practical problems of round-off errors and aliasing errors. Finally, we address problems requiring the use of mappings.
Fast computation of interpolation and differentiation
The appearance of the fast Fourier transform (FFT) in 1965 revolutionized entire fields within science and engineering. By establishing a fast way of evaluating discrete Fourier series and their inverses, this single algorithm allowed the use of methods that were previously impractical due to excessive computational cost.
Fourier spectral methods emerged as efficient methods of computation due to the introduction of the FFT, and a significant part of the fundamental theory of Fourier spectral methods was developed in the decade immediately following its introduction. In the following, we briefly discuss the key idea behind the FFT. However, this idea is valid only when dealing with trigonometric polynomials, and by extension, Chebyshev polynomials; it is not, in general, applicable to polynomial spectral methods. Therefore, we continue by discussing alternative fast methods for the computation of interpolation and differentiation for polynomial-based methods. We conclude with a brief section on how to compute the general Gaussian quadrature points and weights needed to compute the discrete polynomial expansion coefficients.
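To make the payoff concrete, here is a minimal sketch (ours, using NumPy; the smooth test function is an arbitrary choice) of Fourier spectral differentiation: differentiating the trigonometric interpolant amounts to multiplying each Fourier coefficient by ik, and the FFT performs both transforms in O(N log N) operations.

    import numpy as np

    N = 64
    x = 2 * np.pi * np.arange(N) / N
    u = np.exp(np.sin(x))                     # smooth periodic test function
    du_exact = np.cos(x) * u

    k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers 0, 1, ..., -1
    du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

    print(np.max(np.abs(du - du_exact)))      # near machine precision

Compare this with the centered-difference table above: on the same grid, the global method reaches round-off-level accuracy where the fixed-order local stencil still has errors many orders of magnitude larger.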
In social contexts, the diffusion of information and/or behavior often exhibits features that do not match well with those of the epidemic models discussed in Chapter 3. This concerns, specifically, the transmission mechanism contemplated by those models, which was assumed to be independent of the local (neighborhood) conditions faced by the agents involved.
In the epidemic formulation of diffusion, the transmission of infection (or information) to a healthy (or uninformed) agent is governed by her total exposure, i.e. the absolute number of infected neighbors. But in the spread of many social phenomena – especially if there is a factor of persuasion or coordination involved – relative considerations tend to be important in understanding whether some new behavior or belief is adopted. Generically, we shall speak of these relative considerations operating on the diffusion process as neighborhood effects. They are the object of the present chapter, where we study their implications in a number of different setups.
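To fix ideas, the following sketch (our illustration, not the book's; the networkx library and the parameters n, p, and theta are assumptions made for the example) implements a fractional-threshold adoption process on an Erdős–Rényi random graph: an agent adopts once the fraction, rather than the absolute number, of her adopting neighbors reaches a threshold.

    import random
    import networkx as nx

    n, p, theta = 1000, 0.01, 0.25             # illustrative: nodes, link prob., threshold
    G = nx.erdos_renyi_graph(n, p, seed=1)

    adopted = {v: False for v in G}
    for v in random.Random(1).sample(list(G), 20):   # small initial seed of adopters
        adopted[v] = True

    changed = True
    while changed:                             # sweep until no agent wants to switch
        changed = False
        for v in G:
            nbrs = list(G.neighbors(v))
            if not adopted[v] and nbrs:
                frac = sum(adopted[u] for u in nbrs) / len(nbrs)
                if frac >= theta:              # relative, not absolute, exposure
                    adopted[v] = True
                    changed = True

    print(sum(adopted.values()) / n)           # final reach of the diffusion wave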
NEIGHBORHOOD-DEPENDENT DIFFUSION IN RANDOM NETWORKS
In this starting section, we revisit the epidemic models considered in Chapter 3, now under neighborhood effects. First, in Subsection 4.1.1, the focus is on a SIR-like context where diffusion is irreversible and thus one may naturally conceive of diffusion as occurring through a wave of a certain reach. Next, in Subsection 4.1.2, we turn to a setup where diffusion spreads in a way akin to SIS-epidemics, with adoption/infection being only temporary. In this latter case, the concern is the extent of long-run prevalence.
This chapter initiates our study of diffusion and models it as a process whose spreading mechanism is independent of any neighborhood considerations. This means that the procedure by which the process propagates from a certain node to any one of its neighbors is unaffected by the conditions prevailing in the neighborhoods of those two nodes – thus, in particular, it is unrelated to the states displayed by their other neighboring nodes. In this sense, one can conceive of the phenomenon as akin to biological infection, a process that is often mediated through local contact at a rate that depends on the aggregate exposure to infected neighbors. With this analogy in mind, we shall label such a process as epidemic diffusion. But, of course, diffusion in socioeconomic environments is often different, subject to neighborhood (as well as payoff-related) considerations. To study it under these conditions, therefore, we need a different framework of analysis, which is introduced in Chapter 4.
ALTERNATIVE THEORETICAL SCENARIOS
Epidemiology is an old and well-established field of research, both empirical and theoretical. Its canonical models fall into two categories (a minimal simulation contrasting their dynamics is sketched after the list):
SIR (susceptible-infected-recovered), where the life history of each node (or agent) passes from being susceptible (S), to becoming infected (I), to finally being recovered (R), always moving in a unidirectional fashion.
SIS (susceptible-infected-susceptible), where each node passes from being susceptible (S), to becoming infected (I), and then back to being susceptible (S), thus allowing for bidirectional transitions between the two possible states.
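The contrast between the two dynamics can be seen in the following minimal sketch (ours; the rates beta and gamma, the initial conditions, and the Euler step are illustrative), which integrates the classical well-mixed versions of both models: SIR diffusion burns out, leaving a final recovered share, while SIS settles into a long-run endemic prevalence.

    # Classical well-mixed SIR and SIS dynamics via forward Euler.
    beta, gamma, dt, T = 0.3, 0.1, 0.01, 200.0   # illustrative rates and horizon
    steps = int(T / dt)

    # SIR: S -> I -> R, unidirectional.
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(steps):
        new_inf, new_rec = beta * S * I * dt, gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    print(f"SIR final recovered share: {R:.3f}")

    # SIS: S -> I -> S, bidirectional; infection is only temporary.
    S, I = 0.99, 0.01
    for _ in range(steps):
        new_inf, new_rec = beta * S * I * dt, gamma * I * dt
        S, I = S - new_inf + new_rec, I + new_inf - new_rec
    print(f"SIS long-run prevalence: {I:.3f}")   # approaches 1 - gamma/beta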
The previous chapter has modeled diffusion and play as “reactive” phenomena, i.e. processes that unfold while agents respond to their current neighborhood conditions. In a polar fashion, another important (but “proactive”) phenomenon that is often mediated through the social network is search. A paradigmatic instance of it arises when a certain agent/node faces a problem (or query) whose solution (answer) is to be found somewhere else in the social network. This is reminiscent of the famous experiment set up by Milgram (1967) [200] – recall Section 1.1 – where the task was to direct a letter to a “distant” individual through a chain of social acquaintances. In present times, the internet represents a search scenario where analogous issues appear. In this case, a typical problem consists of finding a desired piece of information by searching through the hyperlinks that connect the various webpages in the huge WWW network.
The effectiveness of search in these setups is inherently affected by network considerations. In general, of course, the way agents access dispersed information must be crucially shaped by the architecture (topological characteristics) of the social network. Another consideration, equally important, is the knowledge that agents have of the social network itself. In line with our emphasis on complexity, the natural assumption to make in this respect is that such information is purely local. Sometimes, however, agents may be in a position to rely on some underlying “reference structure” (e.g. the arrangement of individuals along spatial or professional dimensions) to guide and thus improve their search.
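A minimal sketch of such locally informed search (ours, not the book's; the small-world generator and all parameters are illustrative assumptions): each node knows only its own neighbors, but node labels arranged on a ring play the role of the underlying reference structure, so a query can be greedily forwarded to whichever neighbor lies closest to the target.

    import networkx as nx

    def greedy_route(G, source, target):
        """Forward a query using only local knowledge plus ring positions."""
        n = G.number_of_nodes()
        dist = lambda u, v: min((u - v) % n, (v - u) % n)   # ring distance
        path = [source]
        while path[-1] != target:
            here = path[-1]
            nxt = min(G.neighbors(here), key=lambda u: dist(u, target))
            if dist(nxt, target) >= dist(here, target):
                return None                    # stuck: no neighbor is closer
            path.append(nxt)
        return path

    G = nx.watts_strogatz_graph(200, 4, 0.1, seed=2)   # ring lattice + shortcuts
    print(greedy_route(G, 0, 100))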
Mean-field theory has been one of the main approaches traditionally used in the study of phase transitions of physical systems. It dates back to the early 20th century, when it was first applied by Pierre Weiss and others to the analysis of the phenomenon of ferromagnetism. (See, for example, the classical monograph by H. E. Stanley (1971) [264] for a historical account of these developments and an introduction to the field of phase transitions.)
Mean-field theory is usually applied to the analysis of complex systems where the interaction among a large number of individual “particles” proceeds along many dimensions. Under these conditions, the intuitive idea underlying the approach can be simply explained as follows. If the nature of interaction is rich (i.e. highly dimensional), it should be possible to capture the overall behavior of the system through a stylized model of the situation in which the host of effects impinging on each individual entity is replaced by a suitable mean field. In such a mean-field approach, the average description of the system rests on a suitable aggregate (or average) of the large number of individual effects exerted by the population at large. The self-referential nature of the exercise is thus apparent: the average state of the system is both an explanatory variable and the very variable to be explained. This suggests that, in many cases, mean-field theory must seek a self-consistent solution. This is why it is also often labeled self-consistent field theory.
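A classical worked instance is the Weiss ferromagnet mentioned above. Each spin feels only the average magnetization m produced by its z neighbors, which yields the self-consistency condition m = tanh(beta * J * z * m). The sketch below (ours; all parameter values are illustrative) solves this condition by fixed-point iteration; a nonzero magnetization survives only when beta * J * z > 1.

    import math

    def mean_field_magnetization(beta, J=1.0, z=4, m0=0.5, tol=1e-12):
        """Solve the self-consistency condition m = tanh(beta*J*z*m) by iteration."""
        m = m0
        for _ in range(10_000):
            m_new = math.tanh(beta * J * z * m)
            if abs(m_new - m) < tol:
                break
            m = m_new
        return m

    print(mean_field_magnetization(beta=0.5))   # ordered phase: m clearly nonzero
    print(mean_field_magnetization(beta=0.2))   # disordered phase: m -> 0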
In the last decade, the study of complex networks has become a booming field of research with a marked interdisciplinary character. Many different phenomena in the physical, biological, and social worlds can be understood as network-based: they build upon some complex (as well as evolving) pattern of bilateral connections among individual entities, and the overall performance of the system is largely shaped by the intricate architecture of those connections.
A brief review of alternative domains of application should serve to illustrate the rich diversity of phenomena that are distinctly governed by complex networks. This is the task carried out in Subsection 1.1.1, where the primary aim is to convey such diversity through empirical illustrations gathered from a large number of different areas. Next, in Subsection 1.1.2, we elaborate on the idea that, given the nature of the endeavour, a genuinely interdisciplinary approach is very much in order in the field of complex networks.
Realms of Application and Empirical Evidence
We may start, as the most tangible, with transportation networks. These include the connections through which modern economies channel the physical movement of all sorts of commodities and signals. Pertaining, for example, to the conveyance of signals, a paradigmatic instance is of course the internet, the huge mesh of bilateral connections through which bit-codifying electronic impulses are transferred across computers all around the world.