We investigated human understanding of different network visualizations in a large-scale online experiment. Three types of network visualizations were examined: a node-link representation and two sorting variants of a matrix representation, applied to a representative social network of either 20 or 50 nodes. Understanding of the network was quantified using task time and accuracy metrics on questions derived from an established task taxonomy. The sample size in our experiment was more than an order of magnitude larger (N = 600) than in previous research, leading to high statistical power and thus more precise estimation of detailed effects. Specifically, high statistical power allowed us to consider modern interaction capabilities as part of the evaluated visualizations, and to evaluate overall learning rates as well as ambient (implicit) learning. Findings indicate that participant understanding was best for the node-link visualization, with higher accuracy and faster task times than for the two matrix visualizations. Analysis of participant learning indicated a large initial difference in task time between the node-link and matrix visualizations, with matrix performance steadily approaching that of the node-link visualization over the course of the experiment. This research is reproducible: the web-based module and results are available at https://osf.io/qct84/.
Most network studies rely on a measured network that differs from the underlying network because it is distorted by measurement errors. It is well known that such errors can have a severe impact on the reliability of network metrics, especially on centrality measures: a more central node in the observed network might be less central in the underlying network. Previous studies have dealt either with the general effects of measurement errors on centrality measures or with the treatment of erroneous network data. In this paper, we propose a method for estimating the impact of measurement errors on the reliability of a centrality measure, given the measured network and assumptions about the type and intensity of the measurement error. This method allows researchers to estimate the robustness of a centrality measure in a specific network and can, therefore, be used as a basis for decision-making. In our experiments, we apply this method to random graphs and real-world networks. We observe that our estimation is, in the vast majority of cases, a good approximation for the robustness of centrality measures. Beyond this, we propose a heuristic to decide whether the estimation procedure should be used. We analyze, for certain networks, why eigenvector centrality is less robust than, among others, PageRank. Finally, we give recommendations on how our findings can be applied to future network studies.
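The paper's estimation procedure is not reproduced here, but its general idea, namely re-applying an assumed error model to the measured network many times and checking how stable the resulting centrality ranking is, can be sketched as a small Monte Carlo experiment. Everything below is an illustrative assumption rather than the authors' method: the function names, degree centrality as the measure, uniform edge loss as the error model, and top-k overlap as the reliability score.

```python
import random

def degree_centrality(adj):
    """Degree centrality for a graph given as {node: set(neighbours)}."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def apply_error_model(adj, p_missing, rng):
    """Assumed error model: each observed edge is independently missing
    with probability p_missing (node labels must be orderable)."""
    noisy = {v: set() for v in adj}
    for u in adj:
        for v in adj[u]:
            if u < v and rng.random() >= p_missing:
                noisy[u].add(v)
                noisy[v].add(u)
    return noisy

def top_k_overlap(c1, c2, k):
    """Fraction of shared nodes among the k most central in each ranking."""
    top1 = set(sorted(c1, key=c1.get, reverse=True)[:k])
    top2 = set(sorted(c2, key=c2.get, reverse=True)[:k])
    return len(top1 & top2) / k

def estimate_robustness(adj, p_missing=0.1, trials=200, k=3, seed=0):
    """Average top-k agreement between the measured ranking and rankings
    computed on networks re-sampled under the assumed error model."""
    rng = random.Random(seed)
    base = degree_centrality(adj)
    scores = [top_k_overlap(base,
                            degree_centrality(apply_error_model(adj, p_missing, rng)),
                            k)
              for _ in range(trials)]
    return sum(scores) / len(scores)
```

A robustness score near 1 means the ranking of the most central nodes barely changes under the assumed error intensity; a low score signals that centrality-based conclusions for this network should be treated with caution.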
The study of complex brain networks, where structural or functional connections are evaluated to create an interconnected representation of the brain, has grown tremendously over the past decade. Many of the statistical network science tools for analyzing brain networks have been developed for cross-sectional studies and for the analysis of static networks. However, with both an increase in longitudinal study designs and an increased interest in the neurological network changes that occur during the progression of a disease, sophisticated methods for longitudinal brain network analysis are needed. We propose a paradigm for longitudinal brain network analysis over patient cohorts, with the key challenge being the adaptation of Stochastic Actor-Oriented Models to the neuroscience setting. Stochastic Actor-Oriented Models are designed to capture network dynamics representing a variety of influences on network change in a continuous-time Markov chain framework. Network dynamics are characterized through both endogenous (i.e. network related) and exogenous effects, where the latter include mechanisms conjectured in the literature. We outline an application to the resting-state functional magnetic resonance imaging setting with data from the Alzheimer’s Disease Neuroimaging Initiative study. We draw illustrative conclusions at the subject level and make a comparison between elderly controls and individuals with Alzheimer’s disease.
Consumers’ choices of services and of the product platforms that deliver them, such as apps and mobile devices, or eBooks and eReaders, are becoming inextricably interrelated. Market viability demands that product–service combinations be compatible across multiple producers and service channels, and that producers account for both service and product design in their profitability. Some services may be delivered, contractually or physically, through a wider range of products than others. Thus, optimization of producers’ contingent products, services, and channel decisions becomes a combined decision problem. This article examines three common product–service design scenarios: exclusive, non-exclusive asymmetric, and non-exclusive symmetric. An enterprise-wide decision framework is proposed to optimize integrated services and products for each scenario. Optimization results provide guidelines for strategies that are mutually profitable for partner–competitor firms. The article examines an example of an eBook service and tablet, with market-level information from four firms (Amazon, Apple, Barnes & Noble, and Google) and conjoint-based product–service choice data, to illustrate the proposed framework using a scalable sequential optimization algorithm. The results suggest that firms in market equilibrium can differ markedly in the services they seek to provide via other firms’ products, and demonstrate the interrelationship among marketing, services, and product design.
In this paper, a robust geometric navigation algorithm for a quadrotor, designed on the special Euclidean group SE(3), is proposed. The equations of motion for the quadrotor are obtained using the Newton–Euler formulation. The geometric navigation considers a guidance frame designed to perform autonomous flights that converge to the contour of the task with small normal velocity. For this purpose, a super-twisting algorithm controls the nonlinear rotational and translational dynamics in a cascade structure in order to establish fast yet smooth tracking with the robustness typical of sliding modes. In this sense, the controller provides robustness against parameter uncertainty and disturbances, convergence to the sliding manifold in finite time, and asymptotic convergence of the trajectory tracking. The algorithm is validated through experimental results showing the feasibility of the proposed approach and illustrating that the tracking errors converge asymptotically to the origin.
The Hamming graph H(d, n) is the Cartesian product of d complete graphs on n vertices. Let $m = d(n-1)$ be the degree and $V = n^d$ the number of vertices of H(d, n). Let $p_c^{(d)}$ be the critical point for bond percolation on H(d, n). We show that, for $d \in \mathbb{N}$ fixed and $n \to \infty$,
$$p_c^{(d)} = \frac{1}{m} + \frac{2d^2-1}{2(d-1)^2}\frac{1}{m^2} + O(m^{-3}) + O(m^{-1}V^{-1/3}),$$
which extends the asymptotics found in [10] by one order. The term $O(m^{-1}V^{-1/3})$ is the width of the critical window. For $d = 4, 5, 6$ we have $m^{-3} = O(m^{-1}V^{-1/3})$, and so the above formula represents the full asymptotic expansion of $p_c^{(d)}$. In [16] we show that this formula is a crucial ingredient in the study of critical bond percolation on H(d, n) for $d = 2, 3, 4$. The proof uses a lace expansion for the upper bound and a novel comparison with a branching random walk for the lower bound. The proof of the lower bound also yields refined asymptotics for the susceptibility of a subcritical Erdős–Rényi random graph.
In the reinforcement learning context, a landmark is a compact piece of information that uniquely identifies a state in problems with hidden state. Landmarks have been shown to support finding good memoryless policies for Partially Observable Markov Decision Processes (POMDPs) that contain at least one landmark. SarsaLandmark, an adaptation of Sarsa(λ), promises better learning performance under the assumption that all landmarks of the problem are known in advance.
In this paper, we propose a framework built upon SarsaLandmark that automatically identifies landmarks within the problem during learning, without sacrificing solution quality and without requiring prior information about the problem structure. For this purpose, the framework fuses SarsaLandmark with a well-known multiple-instance learning algorithm, Diverse Density (DD). Through further experimentation, we also provide deeper insight into Diverse Density with Concept Filtering (DDCF), our concept-filtering heuristic for accelerating DD, which proves to be well suited to POMDPs with landmarks. DDCF outperforms its antecedent in terms of computation speed and solution quality without loss of generality.
The methods are empirically shown to be effective via extensive experimentation on a number of known and newly introduced problems with hidden state, and the results are discussed.
The large-structure tools of cohomology including toposes and derived categories stay close to arithmetic in practice, yet published foundations for them go beyond ZFC in logical strength. We reduce the gap by founding all the theorems of Grothendieck’s SGA, plus derived categories, at the level of Finite-Order Arithmetic, far below ZFC. This is the weakest possible foundation for the large-structure tools because one elementary topos of sets with infinity is already this strong.
As a starting point we study finite-state automata, which represent the simplest devices for recognizing languages. The theory of finite-state automata has been described in numerous textbooks both from a computational and an algebraic point of view. Here we immediately look at the more general concept of a monoidal finite-state automaton, and the focus of this chapter is general constructions and results for finite-state automata over arbitrary monoids and monoidal languages. Refined pictures for the special (and more standard) cases where we only consider free monoids or Cartesian products of monoids will be given later.
The aim of this chapter is twofold. First, we recall a collection of basic mathematical notions that are needed for the discussions of the following chapters. Second, we have a first, still purely mathematical, look at the central topics of the book: languages, relations and functions between strings, as well as important operations on languages, relations and functions. We also introduce monoids, a class of algebraic structures that gives an abstract view on strings, languages, and relations.
Classical finite-state automata represent the most important class of monoidal finite-state automata. Since the underlying monoid is free, this class of automaton has several interesting specific features. We show that each classical finite-state automaton can be converted to an equivalent classical finite-state automaton where the transition relation is a function. This form of ‘deterministic’ automaton offers a very efficient recognition mechanism since each input word is consumed on at most one path. The fact that each classical finite-state automaton can be converted to a deterministic automaton can be used to show that the class of languages that can be recognized by a classical finite-state automaton is closed under intersections, complements, and set differences. The characterization of regular languages and deterministic finite-state automata in terms of the ‘Myhill–Nerode equivalence relation’ to be introduced in the chapter offers an algebraic view on these notions and leads to the concept of minimal deterministic automata.
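The conversion to a deterministic automaton mentioned above is the classical powerset (subset) construction. A minimal sketch for automata over a free monoid, with DFA states represented as sets of the original automaton's states (the function and variable names are mine, not the chapter's):

```python
def determinize(alphabet, delta, start, finals):
    """Powerset construction: turn a nondeterministic transition relation
    delta[(state, symbol)] -> set of successor states into a deterministic
    transition function whose states are frozensets of original states."""
    dfa_start = frozenset([start])
    dfa_delta, seen, stack = {}, {dfa_start}, [dfa_start]
    while stack:
        src = stack.pop()
        for a in alphabet:
            # union of all successors reachable from src under symbol a
            dst = frozenset(t for s in src for t in delta.get((s, a), ()))
            dfa_delta[(src, a)] = dst
            if dst not in seen:
                seen.add(dst)
                stack.append(dst)
    dfa_finals = {S for S in seen if S & finals}
    return dfa_delta, dfa_start, dfa_finals

def dfa_accepts(dfa_delta, dfa_start, dfa_finals, word):
    """Run the determinized automaton: each input word (over the given
    alphabet) is consumed on exactly one path."""
    state = dfa_start
    for a in word:
        state = dfa_delta[(state, a)]
    return state in dfa_finals
```

The resulting automaton can be exponentially larger than the original in the worst case, but each word is now consumed on a single path, which is what makes the recognition mechanism efficient.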
This article begins with an outline of Manovich's general definition of borrowing, followed by an introduction to the theme of borrowing in music, particularly within the context of acousmatic music. Two scenarios proposed by Navas in his taxonomy of borrowing are used to further the discussion in relation to material sampling and cultural citation. With reference to material sampling, some examples of remix, appropriation and quoting/sampling within acousmatic music are highlighted. With regard to cultural citation, two levels of reference are considered: cultural citation from the sound arts, that is, intertextuality, and cultural citation from other media, that is, intermediality. The article closes with some a posteriori reflections on my own composition, Variation of Evan Parker’s Saxophone Solos, and how it relates to wider notions of musical borrowing.
A fundamental task in natural language processing is the efficient representation of lexica. From a computational viewpoint, lexica need to be represented in a way directly supporting fast access to entries, and minimizing space requirements. A standard method is to represent lexica as minimal deterministic (classical) finite-state automata. To reach such a representation it is of course possible to first build the trie of the lexicon and then to minimize this automaton afterwards. However, in general the intermediate trie is much larger than the resulting minimal automaton. Hence a much better strategy is to use a specialized algorithm to directly compute the minimal deterministic automaton in an incremental way. In this chapter we describe such a procedure.
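The chapter's procedure is not reproduced here, but the well-known core idea of such incremental constructions, namely adding words in sorted order and, as soon as a suffix can no longer change, merging it with an equivalent already-registered state, can be sketched as follows (class and helper names are illustrative):

```python
class State:
    __slots__ = ("edges", "final")
    def __init__(self):
        self.edges = {}   # symbol -> State
        self.final = False
    def signature(self):
        # Identifies a state up to equivalence, assuming children are canonical.
        return (self.final, tuple(sorted((a, id(t)) for a, t in self.edges.items())))

def build_minimal_dfa(words):
    """Incrementally build a minimal acyclic deterministic automaton for a
    sorted list of distinct words, without building the full trie first."""
    register = {}  # signature -> canonical state
    root = State()

    def replace_or_register(state):
        if not state.edges:
            return
        a = max(state.edges)        # most recently added edge (input is sorted)
        child = state.edges[a]
        replace_or_register(child)  # canonicalize the child's subtree first
        sig = child.signature()
        if sig in register:
            state.edges[a] = register[sig]   # reuse an equivalent state
        else:
            register[sig] = child

    previous = ""
    for word in words:
        # longest common prefix with the previously added word
        cp = 0
        while cp < min(len(word), len(previous)) and word[cp] == previous[cp]:
            cp += 1
        state = root
        for a in word[:cp]:
            state = state.edges[a]
        replace_or_register(state)  # the previous word's tail is now frozen
        for a in word[cp:]:
            state.edges[a] = State()
            state = state.edges[a]
        state.final = True
        previous = word
    replace_or_register(root)
    return root

def accepts(root, word):
    state = root
    for a in word:
        if a not in state.edges:
            return False
        state = state.edges[a]
    return state.final

def count_states(root):
    seen, stack = set(), [root]
    while stack:
        s = stack.pop()
        if id(s) not in seen:
            seen.add(id(s))
            stack.extend(s.edges.values())
    return len(seen)
```

For the lexicon ["tap", "taps", "top", "tops"], the intermediate trie would have eight states, while this construction yields five, since the suffix languages after "ta" and "to" coincide and the corresponding states are shared.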
Analysing electroacoustic music is a challenging task that can be approached through different strategies. In recent decades, newly emerging computer environments have enabled analysts to examine the sound spectrum in greater detail, resulting in new graphical representations of features extracted from audio recordings. In this article, we propose using representations from complex dynamical systems, such as phase-space graphics, in musical analysis to reveal emergent timbre features in granular technique-based acousmatic music. Granular techniques applied to musical composition are known to generate considerable sound flux, regardless of the adopted procedures and available technological equipment. We investigate points of convergence between different aesthetics of the so-called Granular Paradigm in electroacoustic music, considering compositions that employ different methods and techniques. We analyse three works: Concret PH (1958) by Iannis Xenakis, Riverrun (1986) by Barry Truax, and Schall (1996) by Horacio Vaggione. In our analytical methodology, we apply concepts such as volume and emergence, together with their graphical representations, to the pieces. In conclusion, we compare our results and discuss how they relate to the three composers’ specific procedures for creating sound flux, as well as to their compositional epistemologies and ontologies.
This article proposes a conception of sound as the material of artistic experimentation. It centres on a discussion of the nature of sound’s ontological status and aims to contribute to a new understanding of the role of materiality in artistic practices. A central point of discussion is Pierre Schaeffer’s notion of the sound object, which is critically examined. The phenomenological perspective that underlies the concept of the sound object depicts sound as an ideal unity constituted by a subject’s intentionality. Thus, it can barely grasp the physicality of sounds and their production or their reality beyond individual perception. This article aims to challenge the notion of the sound object as a purely perceptual phenomenon while trying to rethink experimentation as a practical form of thought that takes place through interacting with sonorous material. Against the background of recent object-oriented and materialist philosophical theories and by drawing on the Heideggerian concept of the thing and Gilbert Simondon’s theories of perception and individuation, this article strives to outline a conception of sound as a non-symbolic otherness. The proposed idea of thingness revolves around a morphogenetic conception of the becoming of sonorous forms that links their perception to their physicality.
Sonification presents challenges in communicating information, particularly because of the large gap between the space of possible data-to-sound mappings and the subset of cognitively valid ones. It is an information transmission process that can be described through the Shannon–Weaver Mathematical Theory of Communication. Musical borrowing is proposed as a method in sonification that can aid the information transmission process, since it draws on the composer's and listener's shared musical knowledge. This article describes the compositional process of Wasgiischwashäsch (2017), which uses Rossini’s William Tell Overture (1829) to sonify datasets relating to climate change in Switzerland. It concludes that audiences' familiarity with the original piece, and the humorous effect produced by the distortion of a well-known work, contribute to a more effective transmission process.