All fields of science benefit from gathering and analyzing network data. This chapter summarizes a small portion of the ways networks are found in research fields thanks to increasing volumes of data and the computing resources needed to work with that data. Epidemiology, dynamical systems, materials science, and many more fields than we can discuss here use networks and network data. We'll encounter many more examples during the rest of this book.
While there are cases where it is straightforward and unambiguous to define a network given data, often a researcher must make choices in how they define the network, and those choices, preceding most of the work on analyzing the network, have outsized consequences for the subsequent analysis. Sitting between gathering the data and studying the network is the upstream task: how to define the network from the underlying or original data. Defining the network precedes all subsequent or downstream tasks, tasks we will focus on in later chapters. Often those tasks are the focus of network scientists who take the network as a given and focus their efforts on methods using those data. Envision the upstream task by asking: what are the nodes? and what are the links? The network follows from those definitions. You will find these questions a useful guiding star as you work, and you can gain new insights by reevaluating their answers from time to time.
This chapter presents mathematical details relating to the Fourier transform (FT), Fourier series, and their inverses. These details were omitted in the preceding chapters in order to enable the reader to focus on the engineering side. The material reviewed in this chapter is fundamental and of lasting value, even though from the engineer’s viewpoint the importance may not manifest in day-to-day applications of Fourier representations. First the chapter discusses the discrete-time case, wherein two types of Fourier transform are distinguished, namely, l1-FT and l2-FT. A similar distinction between L1-FT and L2-FT for the continuous-time case is made next. When such FTs do not exist, it is still possible for a Fourier transform (or inverse) to exist in the sense of the so-called Cauchy principal value or improper Riemann integral, as explained. A detailed discussion on the pointwise convergence of the Fourier series representation is then given, wherein a number of sufficient conditions for such convergence are presented. This involves concepts such as bounded variation, one-sided derivatives, and so on. Detailed discussions of these concepts, along with several illuminating examples, are presented. The discussion is also extended to the case of the Fourier integral.
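To make the l1/l2 distinction concrete, recall the standard discrete-time Fourier transform pair (standard textbook definitions, not quoted from the chapter):

```latex
\[
X(e^{j\omega}) \;=\; \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n},
\qquad
x[n] \;=\; \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega .
\]
```

The l1-FT exists when \(\sum_n |x[n]| < \infty\), in which case the sum converges absolutely and uniformly to a continuous function of \(\omega\); the l2-FT is defined for \(\sum_n |x[n]|^2 < \infty\), with the sum converging in the mean-square sense.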
This chapter introduces recursive difference equations. These equations represent discrete-time LTI systems when the so-called initial conditions are zero. The transfer functions of such LTI systems have a rational form (ratios of polynomials in z). Recursive difference equations offer a computationally efficient way to implement systems whose outputs may depend on an infinite number of past inputs. The recursive property allows the infinite past to be remembered by remembering only a finite number of past outputs. Poles and zeros of rational transfer functions are introduced, and conditions for stability are expressed in terms of pole locations. Computational graphs for digital filters, such as the direct-form structure, cascade-form structure, and parallel-form structure, are introduced. The partial fraction expansion (PFE) method for analysis of rational transfer functions is introduced. It is also shown how the coefficients of a rational transfer function can be identified by measuring a finite number of samples of the impulse response. The chapter also shows how the operation of polynomial division can be efficiently implemented in the form of a recursive difference equation.
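The recursive property described above can be sketched in a few lines (a minimal illustration, not code from the chapter; the coefficient values are arbitrary):

```python
# First-order recursive difference equation y[n] = a*y[n-1] + b*x[n].
# Although the output depends on the entire past input, the recursion
# remembers that past through a single stored output sample.

def recursive_filter(x, a=0.5, b=1.0):
    """Apply y[n] = a*y[n-1] + b*x[n] with zero initial condition."""
    y = []
    y_prev = 0.0  # zero initial condition, so the system is LTI
    for sample in x:
        y_curr = a * y_prev + b * sample
        y.append(y_curr)
        y_prev = y_curr
    return y

# Impulse response decays geometrically as a**n:
print(recursive_filter([1.0, 0.0, 0.0, 0.0]))  # [1.0, 0.5, 0.25, 0.125]
```

The corresponding transfer function is the rational form H(z) = b / (1 - a z^{-1}), with a single pole at z = a; this example is stable because |a| < 1.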
Networks exhibit many common patterns. What causes them? Why are they present? Are they universal across all networks or only certain kinds of networks? One way to address these questions is with models. In this chapter, we explore in depth the classic mechanistic models of network science. Random graph models underpin much of our understanding of network phenomena, from small-world path lengths to heterogeneous degree distributions and clustering. Mathematical tools help us understand what mechanisms or minimal ingredients may explain such phenomena, from basic heuristic treatments to combinatorial tools such as generating functions.
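As one concrete example of such a model (the Erdős–Rényi G(n, p) model, used here purely as an illustration; function names are ours):

```python
import random

def random_graph(n, p, seed=None):
    """Erdos-Renyi G(n, p): each of the n*(n-1)/2 possible edges
    appears independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = random_graph(100, 0.05, seed=1)
# The expected mean degree is (n - 1) * p = 4.95; a sample's mean
# degree fluctuates around that value.
mean_degree = 2 * len(edges) / 100
```

Even this minimal mechanism already produces a small-world path length, though not the heterogeneous degrees or clustering seen in many real networks, which is why the chapter goes on to richer models.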
Network science is a broadly interdisciplinary field, pulling from computer science, mathematics, statistics, and more. The data scientist working with networks thus needs a broad base of knowledge, as network data calls for—and is analyzed with—many computational and mathematical tools. One needs good working knowledge of programming, including data structures and algorithms, to analyze networks effectively. In addition to graph theory, probability theory is the foundation for any statistical modeling and data analysis. Linear algebra provides another foundation for network analysis and modeling because matrices are often the most natural way to represent graphs. Although this book assumes that readers are familiar with the basics of these topics, here we review the computational and mathematical concepts and notation that will be used throughout the book. You can use this chapter as a starting point for catching up on the basics, or as a reference while delving into the book.
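The link between graphs and matrices mentioned above can be made concrete with an adjacency matrix (a minimal sketch in plain Python; the example graph is illustrative):

```python
def adjacency_matrix(n, edges):
    """Build the n x n adjacency matrix of an undirected graph."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        A[j][i] = 1  # undirected, so the matrix is symmetric
    return A

# Path graph 0 - 1 - 2:
A = adjacency_matrix(3, [(0, 1), (1, 2)])
degrees = [sum(row) for row in A]  # row sums give node degrees: [1, 2, 1]
```

Once the graph is a matrix, the tools of linear algebra (eigenvalues, matrix powers counting walks, and so on) apply directly.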
As we have seen, network data are necessarily imperfect. Missing and spurious nodes and edges can create uncertainty in what the observed data tell us about the original network. In this chapter, we dive deeper into tools that allow us to quantify such effects and probe more deeply into the nature of an unseen network from our observations. The fundamental challenge of measurement error in network data is capturing the error-producing mechanism accurately and then inferring the unseen network from the (imperfectly) observed data. Computational approaches can give us clues and insights, as can mathematical models. Mathematical models can also build up methods of statistical inference, whether in estimating parameters describing a model of the network or estimating the network's structure itself. But such methods quickly become intractable without taking on some possibly severe assumptions, such as edge independence. Yet, even without addressing the full problem of network inference, in this chapter, we show valuable ways to explore features of the unseen network, such as its size, using the available data.
Gives a brief overview of the book. Notations for signal representation in continuous time and discrete time are introduced. Both one-dimensional and two-dimensional signals are introduced, and simple examples of images are presented. Examples of noise removal and image smoothing (filtering) are demonstrated. The concept of frequency is introduced, and its importance as well as its role in signal representation are explained, giving musical notes as examples. The history of signal processing, the role of theory, and the connections to real-life applications are mentioned in an introductory way. The chapter also draws attention to the impact of signal processing in digital communications (e.g., cell-phone communications), gravitational wave detection, deep space communications, and so on.
In this chapter, we focus on statistics and measures that quantify a network's structure and characterize how it is organized. These measures have been central to much of network science, and a vast array of material is available to us, spanning all scales of the network. The measures we discuss include general-purpose measures and those specialized to particular circumstances, which help us get a handle on the network data. Network science has generated a dizzying array of valuable measures over the years. For example, we can measure local structures, motifs, patterns of correlations within the network, clusters and communities, hierarchy, and more. These measures are used for exploratory and confirmatory analyses, which we discussed in the previous chapter. With the measures of this chapter, we can understand the patterns in our networks, and using statistical models, we can put those patterns on a firm foundation.
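To give one concrete example of a local measure (a sketch, not the book's code; the edge list below is illustrative), the local clustering coefficient of a node is the fraction of pairs of its neighbors that are themselves connected:

```python
def clustering(node, edges):
    """Local clustering coefficient of `node` in an undirected graph."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    nbrs = list(adj.get(node, ()))
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two neighbors: no pairs to check
    # Count edges among the node's neighbors (closed triangles).
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if nbrs[b] in adj[nbrs[a]])
    return 2 * links / (k * (k - 1))

# Triangle 0-1-2 plus a pendant node 3: only one of node 0's three
# neighbor pairs is connected, so its coefficient is 1/3.
print(clustering(0, [(0, 1), (0, 2), (0, 3), (1, 2)]))
```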
Most scientists receive training in their domain of expertise but, with the possible exception of computer science, students of science receive little training in computer programming. While software engineering has brought forth sound principles for programming, training in software engineering only translates partially to scientific coding. Simply put, coding for science is not the same as coding for software. This chapter discusses best practices for writing correct, clear, and concise scientific code. We aim to ensure code is readable to others and supports data provenance rather than hindering it. We also want the code to be a lasting record of the work performed, supporting research reproducibility. Practices to address these concerns that we cover include clear variable names and code comments, favoring simple code, carefully documenting program dependencies and inputs, and using version control and logging. Together, these practices will enable your code to work better and more reliably for yourself and your collaborators.
This chapter introduces state-space descriptions for computational graphs (structures) representing discrete-time LTI systems. They are not only useful in theoretical analysis, but can also be used to derive alternative structures for a transfer function starting from a known structure. The chapter considers systems with possibly multiple inputs and outputs (MIMO systems); systems with a single input and a single output (SISO systems) are special cases. General expressions for the transfer matrix and impulse response matrix are derived in terms of state-space descriptions. The concept of structure minimality is discussed, and related to properties called reachability and observability. It is seen that state-space descriptions give a different perspective on system poles, in terms of the eigenvalues of the state transition matrix. The chapter also revisits IIR digital allpass filters and derives several equivalent structures for them using so-called similarity transformations on state-space descriptions. Specifically, a number of lattice structures are presented for allpass filters. As a practical example of impact, if such a structure is used to implement the second-order allpass filter in a notch filter, then the notch frequency and notch quality can be independently controlled by two separate multipliers.
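The basic recursion behind such descriptions can be sketched as follows (a SISO example with illustrative 1 x 1 matrices, not taken from the chapter):

```python
def simulate(A, B, C, D, u):
    """Run x[n+1] = A x[n] + B u[n], y[n] = C x[n] + D u[n]
    with zero initial state (plain-Python matrix arithmetic, SISO)."""
    x = [0.0] * len(A)  # state vector
    y = []
    for un in u:
        y.append(sum(c * xi for c, xi in zip(C, x)) + D * un)
        x = [sum(a * xi for a, xi in zip(row, x)) + b * un
             for row, b in zip(A, B)]
    return y

# First-order example: this realizes H(z) = 1 + 0.5/(z - 0.5)
#                                         = 1/(1 - 0.5 z^{-1}),
# whose impulse response is 0.5**n.
print(simulate([[0.5]], [1.0], [0.5], 1.0, [1.0, 0.0, 0.0, 0.0]))
# [1.0, 0.5, 0.25, 0.125]
```

The pole of this transfer function appears as the eigenvalue 0.5 of the state transition matrix A, illustrating the state-space perspective on poles noted above.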
Network science has exploded in popularity since the late 1990s. But it flows from a long and rich tradition of mathematical and scientific understanding of complex systems. We can no longer imagine the world without evoking networks. And network data is at the heart of it. In this chapter, we set the stage by highlighting network science's ancestry and the exciting scientific approaches that networks have enabled, followed by a tour of the basic concepts and properties of networks.
This is a detailed chapter on digital filter design. Specific digital filters such as notch and antinotch filters, and sharp-cutoff lowpass filters such as Butterworth filters are discussed in detail. Also discussed are allpass filters and some of their applications, including the implementation of notch and antinotch filters. Computational graphs (structures) for allpass filters are presented. It is explained how continuous-time filters can be transformed into discrete time by using the bilinear transformation. A simple method for the design of linear-phase FIR filters, called the window-based method, is also presented. Examples include the Kaiser window and the Hamming window. A comparative discussion of FIR and IIR filters is given. It is demonstrated how nonlinear-phase filters can create visible phase distortion in images. Towards the end, a detailed discussion of steady-state and transient components of filter outputs is given. The dependence of transient duration on pole position is explained. The chapter concludes with a discussion of spectral factorization.
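The bilinear transformation mentioned above maps the continuous-time variable s to discrete time via the standard substitution (with T the sampling interval; this is the textbook form, not quoted from the chapter):

```latex
\[
s \;=\; \frac{2}{T}\,\frac{1 - z^{-1}}{1 + z^{-1}},
\]
```

which maps the imaginary axis \(s = j\Omega\) onto the unit circle \(z = e^{j\omega}\) with the frequency warping \(\Omega = (2/T)\tan(\omega/2)\), so a stable continuous-time filter transforms into a stable discrete-time filter.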
Much of the power of networks lies in their flexibility. Networks can successfully describe many different kinds of complex systems. These descriptions are useful in part because they allow us to organize data associated with the system in meaningful ways. These associated attributes and their connections to the network are often the key drivers behind new insights. For example, in a social network, these may be demographic features, such as the ages and occupations of members of a firm. In a protein interaction network, gene ontology terms may be gathered by biologists studying the human genome. We can gain insight by collecting data on those features and associating them with the network nodes or links. In this chapter, we study ways to associate data with the network elements, the nodes and links. We describe ways to gather and store these attributes, what analysis we can do using them, and the most crucial questions to ask about these attributes and their interplay with our networks.
Many tools exist to help scientists work computationally. In addition to general-purpose and domain-specific programming languages, a wide assortment of programs exist to accomplish specific tasks. We call attention to a number of tools in this chapter, focusing on good practices when using them, both computationally and scientifically. Important computing tools for data scientists include computational notebooks, data pipelines and file transfer tools, UNIX-style operating systems, version control systems, and data backup systems. Of course, the world of computing moves fast, and tools are always coming and going, so we conclude with advice and a brief workflow to guide you through evaluating new tools.
This chapter provides an overview of matrices. Basic matrix operations are introduced first, such as addition, multiplication, transposition, and so on. Determinants and matrix inverses are then defined. The rank and Kruskal rank of matrices are defined and explained. The connection between rank, determinant, and invertibility is elaborated. Eigenvalues and eigenvectors are then reviewed. Many equivalent meanings of singularity (non-invertibility) of matrices are summarized. Unitary matrices are reviewed. Finally, linear equations are discussed. The conditions under which a solution exists and the condition for the solution to be unique are also explained and demonstrated with examples.
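As a small illustration of the last point (a sketch in plain Python, not from the chapter), a 2 x 2 system A x = b has a unique solution exactly when det(A) ≠ 0, which Cramer's rule makes explicit:

```python
def solve_2x2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        # Singular matrix: the system has no solution or infinitely many.
        raise ValueError("singular matrix: no unique solution")
    x1 = (b[0] * a22 - a12 * b[1]) / det
    x2 = (a11 * b[1] - b[0] * a21) / det
    return x1, x2

print(solve_2x2([[2, 1], [1, 3]], [5, 10]))  # (1.0, 3.0)
```

When the determinant vanishes, the matrix is singular (non-invertible), tying together the rank, determinant, and invertibility discussion above.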