This chapter covers ways to explore your network data using visual means and basic summary statistics, and how to apply statistical models to validate aspects of the data. Data analysis can generally be divided into two main approaches: exploratory and confirmatory. Exploratory data analysis (EDA) is a pillar of statistics and data mining, and we can leverage existing techniques when working with networks. However, we can also use specialized techniques for network data and uncover insights that general-purpose EDA tools, which neglect the network nature of our data, may miss. Confirmatory analysis, on the other hand, grounds the researcher in specific, preexisting hypotheses or theories, and then seeks to understand whether the given data support or refute that preexisting knowledge. Thus, complementing EDA, we can define statistical models for properties of the network, such as the degree distribution, or for the network structure itself. Fitting and analyzing these models then recapitulates effectively all of statistical inference, including hypothesis testing and Bayesian inference.
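To make the exploratory approach concrete, here is a minimal sketch in Python, assuming the networkx library and a synthetic stand-in graph (illustrative only, not the chapter's own code): it computes a few summary statistics and the empirical degree distribution.

```python
# Minimal EDA sketch for a network: summary statistics and the
# empirical degree distribution. The BA graph is a stand-in for data.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(1000, 3, seed=42)  # placeholder network

print("nodes:     ", G.number_of_nodes())
print("edges:     ", G.number_of_edges())
print("density:   ", nx.density(G))
print("clustering:", nx.average_clustering(G))

# Empirical degree distribution, a property one might later model
degrees = [k for _, k in G.degree()]
dist = Counter(degrees)
n = G.number_of_nodes()
for k in sorted(dist)[:5]:
    print(f"P(k = {k}) ~ {dist[k] / n:.3f}")
```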
This chapter discusses the Fourier series representation for continuous-time signals. This is applicable to signals which are either periodic or have a finite duration. The connections between the continuous-time Fourier transform (CTFT), the discrete-time Fourier transform (DTFT), and Fourier series are also explained. Properties of Fourier series are discussed and many examples are presented. For real-valued signals it is shown that the Fourier series can be written as a sum of a cosine series and a sine series; examples include rectified cosines, which have applications in electric power supplies. It is shown that the basis functions used in the Fourier series representation satisfy an orthogonality property. This makes the truncated version of the Fourier representation optimal in a certain sense. The so-called principal component approximation derived from the Fourier series is also discussed. Musical signals are examined in detail in the light of Fourier series theory, leading to a discussion of musical scales, consonance, and dissonance. Also explained is the connection between Fourier series and the function-approximation property of multilayer neural networks, used widely in machine learning. An overview of wavelet representations and the contrast with Fourier series representations is also given.
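For reference, the trigonometric form for a real, periodic signal uses standard textbook notation (which may differ from the chapter's own): with period $T$ and fundamental frequency $\omega_0 = 2\pi/T$,

```latex
x(t) = a_0 + \sum_{k=1}^{\infty}\bigl[\,a_k\cos(k\omega_0 t) + b_k\sin(k\omega_0 t)\,\bigr],
```

where, by the orthogonality of the basis functions,

```latex
a_0 = \frac{1}{T}\int_{T} x(t)\,dt,\qquad
a_k = \frac{2}{T}\int_{T} x(t)\cos(k\omega_0 t)\,dt,\qquad
b_k = \frac{2}{T}\int_{T} x(t)\sin(k\omega_0 t)\,dt.
```

Truncating the sum at $k = K$ gives, by this same orthogonality, the best mean-square approximation of $x(t)$ among all trigonometric polynomials of degree $K$, which is the sense in which the truncated representation is optimal.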
Realistic networks are rich in information, often too rich for all of that information to be easily conveyed. Summarizing the network then becomes useful, often necessary, for communication and understanding, keeping in mind, of course, that a summary necessarily loses information about the network. Further, networks often do not exist in isolation. Multiple networks may arise from a given dataset, or multiple datasets may each give rise to different views of the same network. In such cases and more, researchers need tools and techniques to compare and contrast those networks. In this chapter, we'll show you how to summarize a network using statistics, visualizations, and even other networks. From these summaries, we then describe ways to compare networks, for example, by defining a distance between networks. Comparing multiple networks using the techniques we describe can help researchers choose the best data processing options and unearth intriguing similarities and differences between networks in diverse fields.
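As a small illustration of a network distance, one simple choice (an illustrative assumption, not the only method available) is to compare two networks through their degree distributions:

```python
# Sketch of a simple network distance: the Jensen-Shannon distance
# between degree distributions. Illustrative; many other network
# distances exist and may suit a given comparison better.
import networkx as nx
import numpy as np
from scipy.spatial.distance import jensenshannon

def degree_dist(G, kmax):
    """Degree distribution of G as a probability vector over 0..kmax."""
    p = np.zeros(kmax + 1)
    for _, k in G.degree():
        p[k] += 1
    return p / p.sum()

G1 = nx.erdos_renyi_graph(500, 0.02, seed=1)
G2 = nx.barabasi_albert_graph(500, 5, seed=1)

kmax = max(k for G in (G1, G2) for _, k in G.degree())
d = jensenshannon(degree_dist(G1, kmax), degree_dist(G2, kmax))
print(f"degree-distribution JS distance: {d:.3f}")
```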
This chapter introduces the discrete Fourier transform (DFT), which is different from the discrete-time Fourier transform (DTFT) introduced earlier. The DFT transforms an N-point sequence x[n] in the time domain to an N-point sequence X[k] in the frequency domain by sampling the DTFT of x[n]. A matrix representation for this transformation is introduced, and the properties of the DFT matrix are studied. The fast Fourier transform (FFT), which is a fast algorithm to compute the DFT, is also introduced. The FFT makes the computation of the Fourier transforms of large sets of data practical. The digital signal processing revolution of the 1960s was possible because of the FFT. This chapter introduces the simplest form of FFT, called the radix-2 FFT, and a number of its properties. The chapter also introduces circular or cyclic convolution, which has a special place in DFT theory, and explains the connection to ordinary convolution. Circular convolution paves the way for fast algorithms for ordinary convolution, using the FFT. The chapter also summarizes the relationships between the four types of Fourier transform studied in this book: CTFT, DTFT, DFT, and Fourier series.
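A short numerical sketch (NumPy; illustrative, not taken from the chapter) checks the matrix form of the DFT against the FFT and the circular-convolution property:

```python
# The N-point DFT as an explicit matrix, checked against the FFT,
# plus the circular-convolution property of the DFT.
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix, entries W[k, n]

rng = np.random.default_rng(0)
x = rng.standard_normal(N)
h = rng.standard_normal(N)

X_direct = W @ x        # direct matrix product, O(N^2)
X_fft = np.fft.fft(x)   # FFT, O(N log N)
print(np.allclose(X_direct, X_fft))  # True

# Circular convolution in time <-> pointwise multiplication in frequency
circ_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
circ_def = [sum(x[m] * h[(k - m) % N] for m in range(N)) for k in range(N)]
print(np.allclose(circ_fft, circ_def))  # True
```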
Machine learning, especially neural network methods, is increasingly important in network analysis. This chapter will discuss the theoretical aspects of network embedding methods and graph neural networks. As we have seen, much of the success of advanced machine learning is thanks to useful representations—embeddings—of data. Embedding and machine learning are closely aligned. Translating network elements to embedding vectors and sending those vectors as features to a predictive model often leads to a simpler, more performant model than trying to work directly with the network. Embeddings help with network learning tasks, from node classification to link prediction. We can even embed entire networks and then use models to summarize and compare networks. Not only does machine learning benefit from embeddings; embeddings also benefit from machine learning. Inspired by the incredible recent progress with natural language data, embeddings created by predictive models are becoming more useful and important. Often these embeddings are produced by neural networks of various flavors, and we explore current approaches for using neural networks on network data.
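A minimal sketch of the embed-then-predict pipeline, using a simple spectral embedding and a scikit-learn classifier (illustrative choices; the methods discussed in this chapter, such as neural embeddings, are more powerful):

```python
# Embed nodes with a simple spectral method, then feed the embedding
# vectors to a classifier for node classification. Illustrative only.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy graph with two planted groups; nodes 0-99 form group 0, 100-199 group 1
G = nx.planted_partition_graph(2, 100, 0.1, 0.01, seed=7)
labels = np.array([0] * 100 + [1] * 100)

# Spectral embedding: low-frequency eigenvectors of the normalized Laplacian
L = nx.normalized_laplacian_matrix(G).toarray()
_, eigvecs = np.linalg.eigh(L)
X = eigvecs[:, 1:9]  # skip the trivial eigenvector; 8-dimensional embedding

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("node classification accuracy:", clf.score(X_te, y_te))
```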
This chapter discusses record keeping, such as maintaining a lab notebook. Historically, lab notebooks were analog, pen-and-paper affairs. With so much work now performed on the computer, and with most scientific instruments creating digital data directly, most record-keeping efforts are digital. Therefore, we focus on strategies for establishing and maintaining records of computer-based work. Keeping good records of your work is essential. These records inform your future thoughts as you reflect on the work you have already done, acting as reminders and inspiration. They also provide important details for collaborators, and scientists working in large groups often have predefined standards for group members to use when keeping lab notebooks and the like. Computational work differs from traditional bench science, and this chapter describes practices for building good record-keeping habits in the more slippery world of computer work.
Developing the theory up to the current state of the art, this book studies the minimal model of the Largest Suslin Axiom (LSA), one of the most important determinacy axioms, which features prominently in Hugh Woodin's foundational framework known as Ultimate L. The authors establish the consistency of LSA relative to large cardinals and develop methods for building models of LSA from other foundational frameworks such as Forcing Axioms. The book significantly advances the Core Model Induction method, the most successful method for building canonical inner models from various hypotheses. Also featured is a proof of the Mouse Set Conjecture in the minimal model of the LSA. It will be indispensable for graduate students as well as researchers in mathematics and philosophy of mathematics who are interested in set theory and, in particular, in descriptive inner model theory.
Drawing examples from real-world networks, this essential book traces the methods behind network analysis and explains how network data is first gathered, then processed and interpreted. The text will equip you with a toolbox of diverse methods and data modelling approaches, allowing you to quickly start making your own calculations on a huge variety of networked systems. This book sets you up to succeed, addressing the questions of what you need to know and what to do with it when beginning to work with network data. The hands-on approach adopted throughout means that beginners quickly become capable practitioners, guided by a wealth of interesting examples that demonstrate key concepts. Exercises using real-world data extend and deepen your understanding, and develop effective working patterns in network calculations and analysis. Suitable for both graduate students and researchers across a range of disciplines, this novel text provides a fast track to network data expertise.
A negative pressure wall-climbing robot is a special robot for climbing vertical walls, widely used in construction, petrochemicals, nuclear energy, shipbuilding, and other industries. The mobility and adhesion of a wheel-track wall-climbing robot with a steering-straight mode decrease significantly on cylindrical walls, especially during steering, because the suction chamber may separate from the wall and the driving force required for movement increases. In this paper, a negative pressure wall-climbing robot with an omnidirectional movement mode is developed. By introducing a compliant adjusting suction mechanism and omni-belt wheels, an omnidirectional movement mode replaces the steering-straight mode, improving both adhesion and mobility. We establish a safety adhesion model for the robot on a cylindrical wall and obtain the safety adhesion forces. Based on this analysis, we designed and manufactured an experimental prototype. Experiments showed that the robot achieves full maneuverability on cylindrical walls.
Sustainability evaluations are increasingly relevant in the design of products. Within sustainability-related frameworks, the circular economy (CE) has gained attention in recent years, and this has deeply affected design, leading, for example, to design for circularity. This article deals with the wide range of product-level CE assessment tools, some of which are applied to a case study from the building sector, namely a tiny house made with hemp bricks. Attention was specifically paid to those methods from which a single circularity indicator could be extracted. Overall, the objective of this work is to study the convergence of existing CE assessment methods in providing consistent circularity performances. The results show similarities in the overall circularity scores despite differences in the variables used to compute the final score. Thus, despite the lack of standard methods, the results suggest that many of these tools are largely interchangeable, particularly as they give consistent indications for improving the circularity of the tiny house. This means that consistent inputs are provided to anyone willing to redesign the tiny house with the objective of making it more circular, irrespective of the assessment tool used.
The National Film Board documentary Bing Bang Boom (1969) depicts Canadian composer R. Murray Schafer (1933–2021) teaching seventh-grade students in a suburban public school in Scarborough, Ontario. A close study of the film informs the larger trajectory of the composer's previous and later writings and compositions over the next several decades, while a deeper dive into archival materials and concurrent productions from Canada's National Film Board (NFB) reveals the organisation's strategy of nation-building at a crucial moment in the country's history. Together, Schafer and the NFB illuminate Canada's problematic relationship to Indigenous peoples, places and sounds.
This introduction to robotics offers a distinct and unified perspective on the mechanics, planning and control of robots. It is ideal for self-learning or for courses, as it assumes only freshman-level physics, ordinary differential equations, linear algebra and a little bit of computing background. Modern Robotics presents state-of-the-art, screw-theoretic techniques that capture the most salient physical features of a robot in an intuitive geometrical way. With numerous exercises at the end of each chapter, accompanying software written to reinforce the concepts in the book, and video lectures aimed at changing the classroom experience, this is the go-to textbook for learning about this fascinating subject.
Colourised photographs have become a popular form of social media content, and this article examines how the digital sharing of colourised colonial photographs from the Sápmi region may develop into a kind of informal visual repatriation. This article presents a case study of the decolonial photographic practices of the Sámi colouriser Per Ivar Somby, who mines digitised photo archives, colourises selected photos, and subsequently shares them on his social media profiles. The article draws on a qualitative, netnographic study of Somby's Colour Your Past profiles on Facebook and Instagram and demonstrates how Somby and his followers reclaim photos of Sámi people produced during historical encounters with non-Sámi photographers. Drawing on Hirsch's (2008, 2012) concept of affiliative postmemory, the analysis examines how historical information and affective responses become interwoven in reparative readings of colonial photos.
We propose a logic-based framework to model a system whose aim is to help provide the user with those pieces of information that are useful with respect to his/her current information need, as well as relevant to his/her query. More precisely, we propose three measures of information usefulness which take into account the fact that the user can be represented as a cognitive agent endowed with some beliefs—a partial "picture" of what it already knows—and goals—a certain state of affairs in which the agent would like to be. This paper extends a previous version of the framework by considering a more realistic hypothesis, according to which there are several ways to achieve goals. We present three different approaches: the binary approach, the ordinal approach, and the numerical approach. We take information retrieval (IR) as a particular application domain, and we compare some existing measures with the usefulness measures we introduce here.
Given a sequence of independent random vectors taking values in ${\mathbb R}^d$ and having common continuous distribution function $F$, say that the $n$th observation sets a (Pareto) record if it is not dominated (in every coordinate) by any preceding observation. Let $p_n(F) \equiv p_{n, d}(F)$ denote the probability that the $n$th observation sets a record. There are many interesting questions to address concerning $p_n$ and multivariate records more generally, but this short paper focuses on how $p_n$ varies with $F$, particularly if, under $F$, the coordinates exhibit negative dependence or positive dependence (rather than independence, a more-studied case). We introduce new notions of negative and positive dependence ideally suited for such a study, called negative record-setting probability dependence (NRPD) and positive record-setting probability dependence (PRPD), relate these notions to existing notions of dependence, and for fixed $d \geq 2$ and $n \geq 1$ prove that the image of the mapping $p_n$ on the domain of NRPD (respectively, PRPD) distributions is $[p^*_n, 1]$ (resp., $[n^{-1}, p^*_n]$), where $p^*_n$ is the record-setting probability for any continuous $F$ governing independent coordinates.
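A small Monte Carlo sketch (Python; for intuition only, not from the paper) estimates the baseline record-setting probability $p^*_n$ under independent coordinates by checking whether the last of $n$ i.i.d. points is dominated by any predecessor:

```python
# Monte Carlo estimate of the probability that the n-th observation
# sets a Pareto record, for iid uniform coordinates (the independent
# baseline p*_n). Ties have probability zero for continuous F.
import numpy as np

def estimate_record_prob(n, d, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    records = 0
    for _ in range(trials):
        X = rng.random((n, d))  # n observations, independent coordinates
        last = X[-1]
        dominated = np.all(X[:-1] >= last, axis=1).any()
        records += not dominated
    return records / trials

print(estimate_record_prob(n=10, d=2))
```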
Aging ships and offshore structures face harsh environmental and operational conditions in remote areas, leading to age-related damages such as corrosion wastage, fatigue cracking, and mechanical denting. These deteriorations, if left unattended, can escalate into catastrophic failures, causing casualties, property damage, and marine pollution. Hence, ensuring the safety and integrity of aging ships and offshore structures is paramount and achievable through innovative healthcare schemes. One such paradigm, digital healthcare engineering (DHE), initially introduced by the final coauthor, aims at providing lifetime healthcare for engineered structures, infrastructure, and individuals (e.g., seafarers) by harnessing advancements in digitalization and communication technologies. The DHE framework comprises five interconnected modules: on-site health parameter monitoring; data transmission to analytics centers; data analytics, simulation, and visualization via digital twins; artificial intelligence-driven diagnosis and remedial planning using machine and deep learning; and predictive health condition analysis for future maintenance. This article surveys recent technological advancements pertinent to each DHE module, with a focus on its application to aging ships and offshore structures. The primary objectives include identifying cost-effective and accurate techniques to establish a DHE system for lifetime healthcare of aging ships and offshore structures—a project currently in progress by the authors.
We develop and demonstrate a computationally cheap framework to identify optimal experiments for Bayesian inference of physics-based models. We develop metrics (i) to identify optimal experiments for inferring the unknown parameters of a physics-based model, (ii) to identify optimal sensor placements for parameter inference, and (iii) to identify optimal experiments for Bayesian model selection. We demonstrate the framework on thermoacoustic instability, which is an industrially relevant problem in aerospace propulsion, where experiments can be prohibitively expensive. By using an existing densely sampled dataset, we identify the most informative experiments and use them to train the physics-based model. The remaining data are used for validation. We show that, although approximate, the proposed framework can significantly reduce the number of experiments required to perform the three inference tasks we have studied. For example, we show that for task (i), we can achieve an acceptable model fit using just 2.5% of the data that were originally collected.
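For context, one standard criterion in Bayesian optimal experimental design is the expected information gain (EIG) of a candidate experiment; the sketch below (a toy model, with all functions and numbers hypothetical rather than taken from the paper) ranks candidate designs by a nested Monte Carlo estimate of EIG:

```python
# Toy nested Monte Carlo estimate of expected information gain (EIG)
# for ranking candidate experimental designs. The "physics" model and
# the prior here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, design):
    """Placeholder physics-based model: prediction given parameter and design."""
    return np.sin(design * theta)

def eig(design, n_outer=300, n_inner=300, noise=0.1):
    """EIG(design) = E_y[ log p(y|theta,design) - log p(y|design) ]."""
    thetas = rng.normal(1.0, 0.5, n_outer)                  # prior draws
    ys = model(thetas, design) + rng.normal(0.0, noise, n_outer)
    total = 0.0
    for theta, y in zip(thetas, ys):
        lik = np.exp(-0.5 * ((y - model(theta, design)) / noise) ** 2)
        inner = rng.normal(1.0, 0.5, n_inner)               # fresh prior draws
        evidence = np.mean(np.exp(-0.5 * ((y - model(inner, design)) / noise) ** 2))
        total += np.log(lik / evidence)
    return total / n_outer

designs = np.linspace(0.5, 5.0, 10)
best = max(designs, key=eig)
print("most informative design (toy):", best)
```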
We present a linearity theorem for a proof language of intuitionistic multiplicative additive linear logic, incorporating addition and scalar multiplication. The proofs in this language are linear in the algebraic sense. This work is part of a broader research program aiming to define a logic with a proof language that forms a quantum programming language.
This paper proposes and partially defends a novel philosophy of arithmetic—finitary upper logicism. According to it, the natural numbers are finite cardinalities—conceived of as properties of properties—and arithmetic is nothing but higher-order modal logic. Finitary upper logicism is furthermore essentially committed to the logicality of finitary plenitude, the principle according to which every finite cardinality could have been instantiated. Among other things, it is proved in the paper that second-order Peano arithmetic is interpretable, on the basis of the finite cardinalities' conception of the natural numbers, in a weak modal type theory consisting of the modal logic $\mathsf {K}$, negative free quantified logic, a contingentist-friendly comprehension principle, and finitary plenitude. Since finitary plenitude replaces the axiom of infinity, this result constitutes a significant improvement on Russell and Whitehead's interpretation of second-order Peano arithmetic, itself based on the finite cardinalities' conception of the natural numbers.