This is a sequel article to [10], where a hypersequent calculus (HC) for some temporal logics of linear frames, including Kt4.3 and its extensions for dense and serial flow of time, was investigated in detail. A distinctive feature of this approach is that hypersequents are noncommutative, i.e., they are finite lists of sequents, in contrast to other hypersequent approaches using sets or multisets. The system in [10] was proved to be a cut-free HC formalization of the respective logics by means of a semantical argument. In this article we present an equivalent variant of this calculus for which a constructive syntactical proof of cut elimination is provided.
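For orientation (the notation here is illustrative, not quoted from the article): in this noncommutative setting a hypersequent is a finite list of ordinary sequents, written for instance as $\Gamma_1 \Rightarrow \Delta_1 \mid \Gamma_2 \Rightarrow \Delta_2 \mid \cdots \mid \Gamma_k \Rightarrow \Delta_k$, and, unlike in set- or multiset-based hypersequent calculi, the order of the components is significant.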
Artificial intelligence, including machine learning, has emerged as a transformational science and engineering discipline. Artificial Intelligence: Foundations of Computational Agents presents AI using a coherent framework to study the design of intelligent computational agents. By showing how the basic approaches fit into a multidimensional design space, the book lets readers learn the fundamentals without losing sight of the bigger picture. The new edition also features expanded coverage of machine learning, as well as of the social and ethical consequences of AI and ML. The book balances theory and experiment, showing how to link them together, and develops the science of AI together with its engineering applications. Although structured as an undergraduate and graduate textbook, the book's straightforward, self-contained style will also appeal to professionals, researchers, and independent learners. The second edition is well supported by strong pedagogical features and online resources to enhance student comprehension.
Information retrieval (IR) aims at retrieving documents that are most relevant to a query provided by a user. Traditional techniques rely mostly on syntactic methods. In some cases, however, links at a deeper semantic level must be considered. In this paper, we explore a type of IR task in which documents describe sequences of events, and queries are about the state of the world after such events. In this context, successfully matching documents and queries requires considering the events’ possibly implicit, uncertain effects and side effects. We begin by analyzing the problem, then propose an action language-based formalization, and finally automate the corresponding IR task using answer set programming.
Multilayer graphs consist of several graphs, called layers, where the vertex set of all layers is the same but each layer has an individual edge set. They are motivated by real-world problems where entities (vertices) are associated via multiple types of relationships (edges in different layers). We chart the border of computational (in)tractability for the class of subgraph detection problems on multilayer graphs, including fundamental problems such as maximum-cardinality matching, finding certain clique relaxations, or path problems. Mostly encountering hardness results, sometimes even for two or three layers, we can also spot some islands of computational tractability.
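To make the setting concrete, here is a minimal Python sketch (illustrative only; the class and function names are our own, not from the paper) of a multilayer graph as one shared vertex set with a separate edge set per layer, together with a check that a candidate edge set is simultaneously a matching in every layer:

# Illustrative sketch: a multilayer graph as a shared vertex set with one edge
# set per layer, plus a check that a candidate edge set is a matching in all layers.

class MultilayerGraph:
    def __init__(self, vertices, layers):
        # vertices: iterable of vertex ids shared by all layers
        # layers: list of edge sets; layers[i] is a set of frozenset({u, v})
        self.vertices = set(vertices)
        self.layers = [set(edges) for edges in layers]

def is_simultaneous_matching(graph, edges):
    """True if `edges` lie in every layer and no two edges share a vertex."""
    if not all(e in layer for layer in graph.layers for e in edges):
        return False
    used = set()
    for e in edges:
        if used & e:        # an endpoint already covered by another edge
            return False
        used |= e
    return True

e = lambda u, v: frozenset({u, v})
g = MultilayerGraph(range(4), [{e(0, 1), e(2, 3)}, {e(0, 1), e(1, 2), e(2, 3)}])
print(is_simultaneous_matching(g, {e(0, 1), e(2, 3)}))  # True

The hardness results described above concern problems of exactly this flavor, where a single vertex or edge set must satisfy constraints in several layers at once.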
We investigated human understanding of different network visualizations in a large-scale online experiment. Three types of network visualizations were examined: node-link and two different sorting variants of matrix representations on a representative social network of either 20 or 50 nodes. Understanding of the network was quantified using task time and accuracy metrics on questions that were derived from an established task taxonomy. The sample size in our experiment was more than an order of magnitude larger (N = 600) than in previous research, leading to high statistical power and thus more precise estimation of detailed effects. Specifically, high statistical power allowed us to consider modern interaction capabilities as part of the evaluated visualizations, and to evaluate overall learning rates as well as ambient (implicit) learning. Findings indicate that participant understanding was best for the node-link visualization, with higher accuracy and faster task times than the two matrix visualizations. Analysis of participant learning indicated a large initial difference in task time between the node-link and matrix visualizations, with matrix performance steadily approaching that of the node-link visualization over the course of the experiment. This research is reproducible, as the web-based module and results have been made available at https://osf.io/qct84/.
Most network studies rely on a measured network that differs from the underlying network, which is obfuscated by measurement errors. It is well known that such errors can have a severe impact on the reliability of network metrics, especially on centrality measures: a more central node in the observed network might be less central in the underlying network. Previous studies have dealt either with the general effects of measurement errors on centrality measures or with the treatment of erroneous network data. In this paper, we propose a method for estimating the impact of measurement errors on the reliability of a centrality measure, given the measured network and assumptions about the type and intensity of the measurement error. This method allows researchers to estimate the robustness of a centrality measure in a specific network and can, therefore, be used as a basis for decision-making. In our experiments, we apply this method to random graphs and real-world networks. We observe that our estimation is, in the vast majority of cases, a good approximation for the robustness of centrality measures. Beyond this, we propose a heuristic to decide whether the estimation procedure should be used. We analyze, for certain networks, why the eigenvector centrality is less robust than, among others, PageRank. Finally, we give recommendations on how our findings can be applied to future network studies.
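As a rough illustration of the general idea (a minimal sketch under an assumed error model of uniformly missing edges, with our own function names, using networkx and scipy; this is not the estimator proposed in the paper), one can sample perturbed versions of the measured network and check how stable a centrality ranking is:

# Sketch: estimate ranking stability of a centrality measure under an assumed
# error model (each observed edge is missing independently with probability p).

import random
import networkx as nx
from scipy.stats import spearmanr

def ranking_stability(G, centrality=nx.pagerank, p_missing_edge=0.1, samples=50, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes())
    base = centrality(G)
    base_scores = [base[v] for v in nodes]
    correlations = []
    for _ in range(samples):
        H = G.copy()
        H.remove_edges_from([e for e in G.edges() if rng.random() < p_missing_edge])
        pert = centrality(H)
        rho, _ = spearmanr(base_scores, [pert[v] for v in nodes])
        correlations.append(rho)
    return sum(correlations) / len(correlations)

G = nx.karate_club_graph()
print(ranking_stability(G, centrality=nx.eigenvector_centrality_numpy))
print(ranking_stability(G, centrality=nx.pagerank))

A lower average rank correlation indicates a less robust centrality measure under the assumed error model.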
The study of complex brain networks, where structural or functional connections are evaluated to create an interconnected representation of the brain, has grown tremendously over the past decade. Many of the statistical network science tools for analyzing brain networks have been developed for cross-sectional studies and for the analysis of static networks. However, with both an increase in longitudinal study designs and an increased interest in the neurological network changes that occur during the progression of a disease, sophisticated methods for longitudinal brain network analysis are needed. We propose a paradigm for longitudinal brain network analysis over patient cohorts, with the key challenge being the adaptation of Stochastic Actor-Oriented Models to the neuroscience setting. Stochastic Actor-Oriented Models are designed to capture network dynamics representing a variety of influences on network change in a continuous-time Markov chain framework. Network dynamics are characterized through both endogenous (i.e. network related) and exogenous effects, where the latter include mechanisms conjectured in the literature. We outline an application to the resting-state functional magnetic resonance imaging setting with data from the Alzheimer’s Disease Neuroimaging Initiative study. We draw illustrative conclusions at the subject level and make a comparison between elderly controls and individuals with Alzheimer’s disease.
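In the standard Stochastic Actor-Oriented Model formulation (recalled here only for orientation; the specific effects used in the paper are the authors' choice), each actor $i$ evaluates a candidate network state $x$ through an objective function $f_i(\beta, x) = \sum_k \beta_k s_{ik}(x)$, a weighted sum of endogenous and exogenous effect statistics $s_{ik}$, while opportunities for tie changes arrive according to rate functions within the continuous-time Markov chain.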
Consumers’ choices of services and of the product platforms that deliver them, such as apps and mobile devices or eBooks and eReaders, are becoming inextricably interrelated. Market viability demands that product–service combinations be compatible across multiple producers and service channels, and that producers’ profitability account for both service and product design. Some services may be delivered, contractually or physically, through a wider range of products than others. Thus, optimization of producers’ contingent product, service, and channel decisions becomes a combined decision problem. This article examines three common product–service design scenarios: exclusive, non-exclusive asymmetric, and non-exclusive symmetric. An enterprise-wide decision framework is proposed to optimize integrated services and products for each scenario. Optimization results provide guidelines for strategies that are mutually profitable for partner–competitor firms. The article examines an example of an eBook service and tablet, with market-level information from four firms (Amazon, Apple, Barnes & Noble, and Google) and conjoint-based product–service choice data, to illustrate the proposed framework using a scalable sequential optimization algorithm. The results suggest that firms in market equilibrium can differ markedly in the services they seek to provide via other firms’ products, and demonstrate the interrelationship among marketing, services, and product design.
In this paper, a robust geometric navigation algorithm for a quadrotor, designed on the special Euclidean group SE(3), is proposed. The equations of motion for the quadrotor are obtained using the Newton–Euler formulation. The geometric navigation uses a guidance frame designed to perform autonomous flights that converge to the contour of the task with small normal velocity. For this purpose, a super-twisting algorithm controls the nonlinear rotational and translational dynamics in a cascade structure in order to establish fast yet smooth tracking with the typical robustness of sliding modes. In this sense, the controller provides robustness against parameter uncertainty and disturbances, finite-time convergence to the sliding manifold, and asymptotic convergence of the trajectory-tracking error. The algorithm is validated through experimental results showing the feasibility of the proposed approach and illustrating that the tracking errors converge asymptotically to the origin.
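For reference, the standard scalar super-twisting law (stated here only for orientation; the paper's controller is a geometric counterpart on SE(3), with sliding variable and gains chosen by the authors) reads $u = -k_1\,|s|^{1/2}\operatorname{sign}(s) + v$, $\dot{v} = -k_2\,\operatorname{sign}(s)$, where $s$ is the sliding variable and $k_1, k_2 > 0$ are the gains; the discontinuity enters only through $\dot{v}$, which is what yields finite-time convergence with a continuous control signal.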
The Hamming graph H(d, n) is the Cartesian product of d complete graphs on n vertices. Let $m=d(n-1)$ be the degree and $V = n^d$ be the number of vertices of H(d, n). Let $p_c^{(d)}$ be the critical point for bond percolation on H(d, n). We show that, for $d \in \mathbb{N}$ fixed and $n \to \infty$,
which extends the asymptotics found in [10] by one order. The term $O(m^{-1}V^{-1/3})$ is the width of the critical window. For $d=4,5,6$ we have $m^{-3} = O(m^{-1}V^{-1/3})$, and so the above formula represents the full asymptotic expansion of $p_c^{(d)}$. In [16] we show that this formula is a crucial ingredient in the study of critical bond percolation on H(d, n) for $d=2,3,4$. The proof uses a lace expansion for the upper bound and a novel comparison with a branching random walk for the lower bound. The proof of the lower bound also yields a refined asymptotic estimate for the susceptibility of a subcritical Erdős–Rényi random graph.
In the reinforcement learning context, a landmark is a compact piece of information that uniquely identifies a state in problems with hidden states. Landmarks have been shown to support finding good memoryless policies for Partially Observable Markov Decision Processes (POMDPs) that contain at least one landmark. SarsaLandmark, an adaptation of Sarsa(λ), is known to promise better learning performance under the assumption that all landmarks of the problem are known in advance.
In this paper, we propose a framework built upon SarsaLandmark that automatically identifies landmarks within the problem during learning, without sacrificing solution quality and without requiring prior information about the problem structure. For this purpose, the framework fuses SarsaLandmark with a well-known multiple-instance learning algorithm, namely Diverse Density (DD). Through further experimentation, we also provide deeper insight into our concept-filtering heuristic for accelerating DD, abbreviated as DDCF (Diverse Density with Concept Filtering), which proves to be well suited to POMDPs with landmarks. DDCF outperforms its antecedent in terms of computation speed and solution quality without loss of generality.
The methods are empirically shown to be effective via extensive experimentation on a number of known and newly introduced problems with hidden state, and the results are discussed.
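For readers unfamiliar with the base learner, the following is a minimal tabular Sarsa(λ) sketch over observations, i.e., a memoryless policy for a POMDP (the environment interface, parameter values, and helper names are assumptions for illustration; this is the update SarsaLandmark builds on, not the paper's full algorithm):

# Sketch: tabular Sarsa(lambda) with replacing traces over observations.
# Assumed environment interface: env.reset() -> obs, env.step(a) -> (obs, reward, done),
# env.actions -> list of actions.

import random
from collections import defaultdict

def epsilon_greedy(Q, obs, actions, epsilon):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(obs, a)])

def sarsa_lambda(env, episodes=500, alpha=0.1, gamma=0.95, lam=0.9, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(observation, action)]
    for _ in range(episodes):
        traces = defaultdict(float)             # eligibility traces
        obs = env.reset()
        act = epsilon_greedy(Q, obs, env.actions, epsilon)
        done = False
        while not done:
            next_obs, reward, done = env.step(act)
            next_act = epsilon_greedy(Q, next_obs, env.actions, epsilon)
            delta = reward + (0.0 if done else gamma * Q[(next_obs, next_act)]) - Q[(obs, act)]
            traces[(obs, act)] = 1.0            # replacing trace for the visited pair
            for key in list(traces):
                Q[key] += alpha * delta * traces[key]
                traces[key] *= gamma * lam
            obs, act = next_obs, next_act
    return Q

SarsaLandmark's refinement, as described above, concerns how eligibility is handled at landmark observations, which are assumed to identify the hidden state uniquely.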
The large-structure tools of cohomology, including toposes and derived categories, stay close to arithmetic in practice, yet the published foundations for them go beyond ZFC in logical strength. We reduce the gap by founding all the theorems of Grothendieck’s SGA, plus derived categories, at the level of Finite-Order Arithmetic, far below ZFC. This is the weakest possible foundation for the large-structure tools, because a single elementary topos of sets with infinity is already this strong.
As a starting point we study finite-state automata, which represent the simplest devices for recognizing languages. The theory of finite-state automata has been described in numerous textbooks both from a computational and an algebraic point of view. Here we immediately look at the more general concept of a monoidal finite-state automaton, and the focus of this chapter is general constructions and results for finite-state automata over arbitrary monoids and monoidal languages. Refined pictures for the special (and more standard) cases where we only consider free monoids or Cartesian products of monoids will be given later.
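As a rough illustration of the generalization (the names and the bounded-length enumeration are our own simplifications, not the book's definitions), a monoidal finite-state automaton can be sketched as transitions labelled by elements of an arbitrary monoid, where the accepted set consists of the products of the labels along successful paths:

# Sketch: an automaton over an arbitrary monoid, given by an identity element
# and a binary operation. We enumerate accepted monoid elements up to a bounded
# path length, since the accepted set may be infinite.

def accepted_elements(start, finals, transitions, identity, op, max_len):
    # transitions: list of (source_state, monoid_element, target_state)
    frontier = {(start, identity)}
    found = {identity} if start in finals else set()
    for _ in range(max_len):
        next_frontier = set()
        for state, value in frontier:
            for src, label, dst in transitions:
                if src == state:
                    next_frontier.add((dst, op(value, label)))
        found |= {v for s, v in next_frontier if s in finals}
        frontier = next_frontier
    return found

# Free monoid over {'a', 'b'}: identity is the empty string, op is concatenation.
trans = [(0, 'a', 0), (0, 'b', 1)]
print(sorted(accepted_elements(0, {1}, trans, '', lambda x, y: x + y, 3)))
# ['aab', 'ab', 'b'] — the words a^n b with n <= 2

Instantiating the identity and operation with a free monoid recovers the classical word automaton; Cartesian products of monoids yield the transducer-like cases treated later.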
The aim of this chapter is twofold. First, we recall a collection of basic mathematical notions that are needed for the discussions of the following chapters. Second, we have a first, still purely mathematical, look at the central topics of the book: languages, relations and functions between strings, as well as important operations on languages, relations and functions. We also introduce monoids, a class of algebraic structures that gives an abstract view on strings, languages, and relations.