Every network has a corresponding matrix representation. This is powerful: we can leverage tools from linear algebra within network science, and doing so brings great insights. The branch of graph theory concerned with such connections is called spectral graph theory. This chapter introduces some of its central principles as we explore tools and techniques that use matrices and spectral analysis to work with network data. Many matrices arise in the study of networks, including the modularity matrix, the nonbacktracking matrix, and the precision matrix. But one matrix stands out: the graph Laplacian. Not only does it capture dynamical processes unfolding over a network's structure, its spectral properties also have deep connections to that structure. We show many relationships between the Laplacian's eigendecomposition and network problems, such as graph bisection and optimal partitioning tasks. Combining the dynamical information with the connection to partitioning also motivates spectral clustering, a powerful and successful way to find groups in data in general. This kind of technique is now at the heart of machine learning, which we'll explore soon.
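One property underlying the Laplacian's role in partitioning can be sketched in a few lines (an illustrative example, not taken from the chapter; the graph and test vector are arbitrary choices): the quadratic form x^T L x equals the sum of squared differences of x across edges, which is exactly the quantity that graph bisection and spectral clustering try to make small.

```python
# Sketch: build the graph Laplacian L = D - A for a small undirected graph
# and check its quadratic form: x^T L x = sum over edges of (x_i - x_j)^2.
# The example graph (a triangle plus a pendant node) is an arbitrary choice.

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4

# Adjacency matrix and degrees
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]

# Laplacian L = D - A
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

x = [1.0, -1.0, 2.0, 0.5]  # arbitrary test vector

# Quadratic form x^T L x
quad = sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

# Sum of squared differences across edges
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)

print(quad, edge_sum)  # both 16.25: the two quantities agree
```

A vector x of +1/-1 labels assigning nodes to two groups makes x^T L x count the edges cut by that split, which is why the Laplacian's eigenvectors appear in bisection and clustering.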
This chapter introduces structures and structural interconnections for LTI systems and then considers several examples of digital filters. Examples include moving average filters, difference operators, and ideal lowpass filters. It is then shown how to convert lowpass filters into other types, such as highpass, bandpass, and so on, by use of simple transformations. Phase distortion is explained, and linear-phase digital filters are introduced, which do not create phase distortion. The use of digital filters in noise removal (denoising) is also demonstrated for 1D signals and 2D images. The filtering of an image into low- and high-frequency subbands is demonstrated, and the motivation for subband decomposition in audio and image compression is explained. Finally, it is shown that the convolution operation can be represented as a matrix–vector multiplication, where the matrix has Toeplitz structure. The matrix representation also shows us how to undo a filtering operation through a process called deconvolution.
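The Toeplitz view of convolution mentioned above can be sketched as follows (an illustrative example, not from the chapter; the filter h and input x are arbitrary values):

```python
# Sketch: FIR filtering y = h * x written as a matrix-vector product y = H x,
# where H is a Toeplitz (constant-diagonal) convolution matrix.

h = [1.0, 2.0, 3.0]        # FIR filter impulse response (arbitrary)
x = [4.0, 5.0, 6.0, 7.0]   # input signal (arbitrary)
Ly = len(h) + len(x) - 1   # length of the full convolution

# Toeplitz convolution matrix: H[i][j] = h[i - j] (zero outside the filter)
H = [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(len(x))]
     for i in range(Ly)]

# Matrix-vector product
y_mat = [sum(H[i][j] * x[j] for j in range(len(x))) for i in range(Ly)]

# Direct convolution sum for comparison
y_dir = [sum(h[k] * x[i - k] for k in range(len(h)) if 0 <= i - k < len(x))
         for i in range(Ly)]

print(y_mat == y_dir)  # True: both give [4.0, 13.0, 28.0, 34.0, 32.0, 21.0]
```

In this view, deconvolution amounts to solving the linear system H x = y for x, which is what the matrix representation makes explicit.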
This chapter introduces different types of signals, and studies the properties of many kinds of systems that are encountered in signal processing. Signals discussed include the exponential signal, the unit step, single-frequency signals, rectangular pulses, Dirac delta signals, and periodic signals. Two-dimensional signals, especially 2D frequencies and sinusoids, are also demonstrated. Many types of systems are discussed, such as homogeneous systems, additive systems, linear systems, stable systems, time-invariant systems, and causal systems. Both continuous and discrete-time cases are discussed. Examples are presented throughout, such as music signals, ECG signals, and so on, to demonstrate the concepts. Subtle differences between discrete-time and continuous-time signals and systems are also pointed out.
This chapter introduces bandlimited signals, sampling theory, and the method of reconstruction from samples. Uniform sampling with a Dirac delta train is considered, and the Fourier transform of the sampled signal is derived. The reconstruction from samples is based on the use of a linear filter called an interpolator. When the sampling rate is not sufficiently large, the sampling process leads to a phenomenon called aliasing. This is discussed in detail and several real-world manifestations of aliasing are also discussed. In practice, the sampled signal is typically processed by a digital signal processing device, before it is converted back into a continuous-time signal. The building blocks in such a digital signal processing system are discussed. Extensions of the lowpass sampling theorem to the bandpass case are also presented. Also proved is the pulse sampling theorem, where the sampling pulse is spread out over a short duration, unlike the Dirac delta train. Bandlimited channels are discussed and it is explained how the data rate that can be transmitted over a channel is limited by channel bandwidth.
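Aliasing is easy to demonstrate numerically (an illustrative sketch, not from the chapter; the frequencies are arbitrary choices): a 7 Hz sinusoid sampled at 10 Hz lies above the 5 Hz Nyquist frequency, so its samples coincide with those of a sign-flipped 3 Hz sinusoid.

```python
# Sketch: aliasing under uniform sampling. With sampling rate fs = 10 Hz
# (Nyquist frequency 5 Hz), a 7 Hz sine aliases: since 7 = 10 - 3,
# sin(2*pi*7*n/10) = -sin(2*pi*3*n/10) at every sample index n.
import math

fs = 10.0                    # sampling rate in Hz
f_high, f_alias = 7.0, 3.0   # 7 Hz folds down to 3 Hz (with a sign flip)

for n in range(50):
    t = n / fs
    s_high = math.sin(2 * math.pi * f_high * t)
    s_alias = -math.sin(2 * math.pi * f_alias * t)
    assert abs(s_high - s_alias) < 1e-9  # samples are indistinguishable
print("7 Hz and 3 Hz sinusoids coincide at every sample")
```

From the samples alone, the two frequencies cannot be told apart, which is why an anti-aliasing filter must act before sampling, not after.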
The fundamental practices and principles of network data are presented in this book, and the preface serves as a starting point for readers to understand the goals of this text. The preface explains how the practical and fundamental aspects of network data are intertwined, and how they can be used to solve real-world problems. It also gives advice on how to use the book, including the boxes featured throughout to highlight key concepts and provide practical examples of working with network data. Readers will find this preface a valuable resource as they begin their journey into the world of network science.
This chapter introduces the continuous-time Fourier transform (CTFT) and its properties. Many examples are presented to illustrate the properties. The inverse CTFT is derived. As one example of its application, the impulse response of the ideal lowpass filter is obtained. The derivative properties of the CTFT are used to derive many Fourier transform pairs. One result is that the normalized Gaussian signal is its own Fourier transform, and constitutes an eigenfunction of the Fourier transform operator. Many such eigenfunctions are presented. The relation between the smoothness of a signal in the time domain and its decay rate in the frequency domain is studied. Smooth signals have rapidly decaying Fourier transforms. Spline signals are introduced, which have provable smoothness properties in the time domain. For causal signals it is proved that the real and imaginary parts of the CTFT are related to each other. This is called the Hilbert transform, Poisson’s transform, or the Kramers–Kronig transform. It is also shown that Mother Nature “computes” a Fourier transform when a plane wave is propagating across an aperture and impinging on a distant screen – a well-known result in optics, crystallography, and quantum physics.
This chapter presents the Laplace transform, which is as fundamental to continuous-time systems as the z-transform is to discrete-time systems. Several properties and examples are presented. Similar to the z-transform, the Laplace transform can be regarded as a generalization of the appropriate Fourier transform. In continuous time, the Laplace transform is very useful in the study of systems represented by linear constant-coefficient differential equations (i.e., rational LTI systems). Frequency responses, resonances, and oscillations in electric circuits (and in mechanical systems) can be studied using the Laplace transform. The application in electrical circuit analysis is demonstrated with the help of an LCR circuit. The inverse Laplace transformation is also discussed, and it is shown that the inverse is unique only when the region of convergence (ROC) of the Laplace transform is specified. Depending on the ROC, the inverse of a given Laplace transform expression may be causal, noncausal, two-sided, bounded, or unbounded. This is very similar to the theory of inverse z-transformation. Because of these similarities, the discussion of the Laplace transform in this chapter is brief.
In this chapter, we begin our dive into the fundamentals of network data. We delve deep into the strange world of networks by considering the friendship paradox, the apparently contradictory finding that most people (nodes) have friends (neighbors) who are more popular than themselves. How can this be? Where are all these popular friends coming from? We introduce network thinking to resolve this paradox. As we will see, it is due to constraints induced by the network structure: pick a node at random and you are much more likely to land next to a high-degree node than on one, because high-degree nodes have many neighbors. This is unexpected, almost profoundly so: a local (node-level) view of a network will not accurately reflect the global network structure. This paradox highlights the care we need to take when thinking about networks and network data, both mathematically and practically.
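The paradox can be checked on a toy graph (an illustrative sketch, not from the chapter; the hub-and-spoke graph below is an arbitrary choice): compare the plain mean degree with the mean degree of a node's neighbors, averaged over nodes.

```python
# Sketch: the friendship paradox on a small graph. On average, a node's
# neighbors have higher mean degree than the node itself, because
# high-degree nodes appear in many neighbor lists.

edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]  # hub node 0 plus one extra tie
adj = {}
for i, j in edges:
    adj.setdefault(i, []).append(j)
    adj.setdefault(j, []).append(i)

deg = {v: len(nbrs) for v, nbrs in adj.items()}

# Mean degree over nodes, vs. mean degree of neighbors averaged over nodes
mean_degree = sum(deg.values()) / len(deg)
mean_neighbor_degree = sum(
    sum(deg[u] for u in nbrs) / len(nbrs) for nbrs in adj.values()
) / len(adj)

print(mean_degree, mean_neighbor_degree)  # 2.0 vs 3.1: "your friends are more popular"
assert mean_neighbor_degree > mean_degree
```

The hub is counted once in the plain average but shows up in four neighbor lists, which is exactly the sampling bias the chapter uses to resolve the paradox.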
Network studies follow an explicit form, from framing questions and gathering data to processing those data and drawing conclusions. Data processing leads to new questions, which lead to new data, and so forth: network studies follow a repeating life-cycle. Yet along the way, many choices will confront the researcher, who must be mindful of the choices they make with their data and of the tools and techniques they use to study it. In this chapter, we describe how studies of networks begin and proceed: the life-cycle of a network study.
A number of properties relating to the inverse z-transform are discussed. The partial fraction expansion (PFE) of a rational z-transform plays a role in finding the inverse transform. It is shown that the inverse z-transform solution is not unique and depends on the region of convergence (ROC). Depending on the ROC, the solution may be causal, anticausal, two-sided, stable, or unstable. The condition for existence of a stable inverse transform is also developed. The interplay between causality, stability, and the ROC is established and illustrated with examples. The case of multiple poles is also considered. The theory and implementation of IIR linear-phase filters are discussed in detail. The connection between z-transform theory and analytic functions in complex variable theory is made evident. Based on this connection, many intriguing examples of z-transform pairs are pointed out. In particular, closed-form expressions for radii of convergence of the z-transform can be obtained from complex variable theory. The case of unrealizable digital filters and their connection to complex variable theory is also discussed.
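A minimal sketch of ROC-dependent inversion (illustrative, not from the chapter): X(z) = 1/(1 - 0.5 z^{-1}) with ROC |z| > 0.5 has the causal, stable inverse x[n] = 0.5^n u[n], which we can verify by convolving it back with the denominator coefficients to recover the unit impulse.

```python
# Sketch: causal inverse z-transform of X(z) = 1 / (1 - 0.5 z^{-1}),
# valid for the ROC |z| > 0.5. The causal solution is x[n] = 0.5**n, n >= 0.
# Convolving x[n] with the denominator coefficients [1, -0.5] should give
# back the numerator: the unit impulse delta[n].

N = 20
x = [0.5 ** n for n in range(N)]   # candidate causal inverse
den = [1.0, -0.5]                  # denominator of X(z) in powers of z^{-1}

# Convolve x with den over the finite window
y = [sum(den[k] * x[n - k] for k in range(len(den)) if 0 <= n - k < N)
     for n in range(N)]

print(y[:5])  # [1.0, 0.0, 0.0, 0.0, 0.0] — the unit impulse, as expected
assert y[0] == 1.0 and all(abs(v) < 1e-12 for v in y[1:])
```

Choosing the other ROC, |z| < 0.5, would instead give an anticausal and unbounded inverse for the same algebraic expression, illustrating why the ROC must be specified for the inverse to be unique.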
In this chapter, we introduce visualization techniques for networks, the problems we face, and the solutions we use to make those visualizations as effective as possible. Visualization is an essential tool for exploring network data, revealing patterns that may not be easily inferred from statistics alone. Although network visualization can be done in many ways, the most common approach is the two-dimensional node-link diagram. Properly laying out nodes and choosing the mapping between network and visual properties are essential to creating an effective visualization, which requires iteration and fine-tuning. For dense networks, filtering or aggregating the data may be necessary. An iterative, back-and-forth workflow is essential, trying different layout methods and filtering steps to best show the network's structure while keeping the original questions and goals in mind. Visualization is not always the endpoint of a network analysis; it can also be a useful step in the middle of an exploratory data analysis pipeline, similar to traditional statistical visualization of non-network data.
This chapter gives a brief overview of sampling based on sparsity. The idea is that a signal which is not bandlimited can sometimes be reconstructed from a sampled version if we have a priori knowledge that the signal is sparse in a certain basis. These results are very different from the results of Shannon and Nyquist, and are sometimes referred to as sub-Nyquist sampling theories. They can be regarded as generalizations of traditional sampling theory, which was based on the bandlimited property. Examples include sampling of finite-duration signals whose DFTs are sparse. Sparse reconstruction methods are closely related to the theory of compressive sensing, which is also briefly introduced. These are major topics that have emerged in the last two decades, so the chapter provides important references for further reading.
This chapter covers data provenance or data lineage: the detailed history of how data were created and manipulated, as well as the process of ensuring the validity of such data by documenting the details of their origins and transformations. Data provenance is a central challenge when working with data. Computing helps but also hinders our ability to maintain records of our work with the data. The best science will result when we adopt strategies to carefully and consistently record and track the origin of data and any changes made along the way. For instance, we want to know where (and by whom) a dataset was created and what process was used to create it. Then, if there were any changes, such as fixing erroneous entries, we need to have a good record of those changes. With these goals in mind, we discuss best practices for tracking data provenance. While such practices generally take time and effort to implement, making them seem tedious in the short term, over time your research will become more reliable, and you and your collaborators will be grateful.