Several authors have investigated whether canonical logic-based accounts of belief revision, and especially the theory of AGM revision operators, are compatible with the dynamics of Bayesian conditioning. Here we show that Leitgeb’s stability rule for acceptance, which has been offered as a possible solution to the Lottery paradox, makes it possible to bridge AGM revision and Bayesian update: using the stability rule, we prove that AGM revision operators emerge from Bayesian conditioning by an application of the principle of maximum entropy. In situations of information loss, or whenever the agent relies on a qualitative description of her information state—such as a plausibility ranking over hypotheses, or a belief set—the dynamics of AGM belief revision are compatible with Bayesian conditioning; indeed, through the maximum entropy principle, conditioning naturally generates AGM revision operators. This mitigates an impossibility theorem of Lin and Kelly for tracking Bayesian conditioning with AGM revision, and suggests an approach to the compatibility problem that highlights the information loss incurred by acceptance rules in passing from probabilistic to qualitative representations of belief.
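Leitgeb’s stability rule, referenced above, accepts a proposition when it stays probable under conditioning on any compatible evidence: a set X of worlds is P-stable when P(X | Y) > 1/2 for every event Y that overlaps X and has positive probability. A brute-force sketch in Python of that definition (an illustration only, not the paper’s maximum-entropy construction; the worlds and probabilities below are made up):

```python
from itertools import chain, combinations

def p_stable_sets(worlds, P, r=0.5):
    """Enumerate P-stable sets: X is P-stable iff P(X | Y) > r for every
    event Y with positive probability that overlaps X (Leitgeb's rule).
    Brute force over all subsets; only feasible for small world sets."""
    events = [frozenset(s) for s in chain.from_iterable(
        combinations(worlds, k) for k in range(1, len(worlds) + 1))]
    prob = lambda E: sum(P[w] for w in E)
    stable = []
    for X in events:
        # P(X | Y) > r  <=>  P(X & Y) > r * P(Y)
        if all(prob(X & Y) > r * prob(Y)
               for Y in events if prob(Y) > 0 and X & Y):
            stable.append(sorted(X))
    return stable

# Lottery-like toy distribution over four worlds.
P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
print(p_stable_sets(list(P), P))
# -> [['w1', 'w2', 'w3'], ['w1', 'w2', 'w3', 'w4']]
```

On this toy distribution the strongest P-stable set is {w1, w2, w3}, so the stability rule accepts exactly the propositions entailed by it.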
In this paper, a novel dynamic navigational planning strategy is proposed for single as well as multiple humanoids in intricate environments, using a glowworm-based optimization method. Sensory information about obstacle distances and the target location is supplied as input to the navigational model. The controller outputs the required turning angle, allowing the humanoid to avoid obstacles in the environment and reach the target location with ease. The proposed model is validated in the V-REP simulation software, and the simulation results are verified in a real-time experimental setup arranged under testing conditions.
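For readers unfamiliar with the glowworm metaphor, the following minimal Python sketch shows the core of a generic glowworm swarm optimization step: agents carry a luciferin level updated from the objective and move toward probabilistically chosen brighter neighbours. This is a textbook-style skeleton under simplifying assumptions (fixed sensing radius, no obstacle terms), not the paper’s navigational controller:

```python
import numpy as np

rng = np.random.default_rng(0)

def gso_step(pos, luc, J, rho=0.4, gamma=0.6, step=0.03, radius=1.0):
    """One bare-bones glowworm swarm optimization step: update luciferin
    from the objective J, then move each agent toward a probabilistically
    chosen brighter neighbour within its sensing radius."""
    luc = (1 - rho) * luc + gamma * J(pos)
    new_pos = pos.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((dist < radius) & (luc > luc[i]))[0]
        if nbrs.size:
            w = luc[nbrs] - luc[i]
            j = rng.choice(nbrs, p=w / w.sum())   # brighter => more likely
            d = pos[j] - pos[i]
            new_pos[i] = pos[i] + step * d / np.linalg.norm(d)
    return new_pos, luc

# Toy run: brightness peaks at the target, so the swarm drifts toward it.
target = np.array([2.0, 1.0])
J = lambda p: -np.linalg.norm(p - target, axis=1)
pos, luc = rng.uniform(-1.0, 1.0, (20, 2)), np.zeros(20)
for _ in range(300):
    pos, luc = gso_step(pos, luc, J)
```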
Docking simulators are important ground-test equipment for aerospace projects, and the fidelity of a docking simulation depends strongly on its kinematic accuracy. This paper investigates the kinematic accuracy of the developed docking simulator and presents a novel kinematic calibration method that reduces the number of parameters in the error model. The principle of parameter separation is studied, and a simplified error model is derived from a Taylor series expansion. The method leads to a simpler error model, fewer measurements, and easier convergence during parameter identification. A calibration experiment validates the method for further accuracy enhancement.
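The abstract does not spell out the identification step, but linearized kinematic calibration generically reduces to stacking first-order error equations and solving a least-squares problem. A schematic sketch (generic Taylor-series calibration, not the paper’s separated, reduced parameter set; the Jacobians J_k and pose errors e_k are assumed supplied by the user’s error model and measurements):

```python
import numpy as np

def identify_parameters(J_list, e_list):
    """Generic linearized calibration: given per-pose error Jacobians J_k
    and measured pose errors e_k, stack the first-order Taylor model
    e_k ~ J_k @ dp over all poses and solve for the corrections dp."""
    J = np.vstack(J_list)          # stacked design matrix
    e = np.concatenate(e_list)     # stacked measured errors
    dp, *_ = np.linalg.lstsq(J, e, rcond=None)
    return dp
```

Reducing the number of identifiable parameters, as the paper proposes, shrinks the columns of J and typically improves the conditioning of this system.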
Measurement errors are omnipresent in network data. Most studies observe an erroneous network instead of the desired error-free network. It is well known that such errors can have a severe impact on network metrics, especially on centrality measures: a central node in the observed network might be less central in the underlying, error-free network. Robustness is a common concept for quantifying these effects. Studies have shown that robustness primarily depends on the centrality measure, the type of error (e.g., missing edges or missing nodes), and the network topology (e.g., tree-like, core-periphery). Previous findings regarding the influence of network size on robustness are, however, inconclusive. We present empirical evidence and analytical arguments indicating that there exist arbitrarily large robust and non-robust networks and that the average degree is well suited to explain robustness. We demonstrate that networks with a higher average degree are often more robust. For degree centrality and Erdős–Rényi (ER) graphs, we present explicit formulas for computing robustness, based mainly on the joint distribution of node degrees and degree changes, which allows us to analyze the robustness of ER graphs with a constant or increasing average degree.
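To make the robustness notion concrete, here is a small simulation sketch in the spirit of the description above (rank correlation between true and observed centralities under uniform edge deletion is a common operationalization in this literature; the paper’s explicit formulas are not reproduced here):

```python
import networkx as nx
import numpy as np
from scipy.stats import spearmanr

def degree_robustness(G, frac_missing=0.1, trials=20, seed=0):
    """Estimate robustness of degree centrality as the mean Spearman rank
    correlation between degrees in G and in copies of G with a fraction
    of edges removed uniformly at random (one common error model)."""
    rng = np.random.default_rng(seed)
    nodes, edges = list(G), list(G.edges)
    corr = []
    for _ in range(trials):
        keep = rng.random(len(edges)) >= frac_missing
        H = nx.Graph([e for e, k in zip(edges, keep) if k])
        H.add_nodes_from(nodes)
        d_true = [G.degree(v) for v in nodes]
        d_obs = [H.degree(v) for v in nodes]
        corr.append(spearmanr(d_true, d_obs)[0])
    return float(np.mean(corr))

# Consistent with the claim above, denser ER graphs tend to score higher.
for avg_deg in (2, 8, 32):
    G = nx.gnp_random_graph(1000, avg_deg / 999, seed=1)
    print(avg_deg, round(degree_robustness(G), 3))
```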
Algorithms are a fundamental building block of artificial intelligence - and, increasingly, society - but our legal institutions have largely failed to recognize or respond to this reality. The Cambridge Handbook of the Law of Algorithms, which features contributions from US, EU, and Asian legal scholars, discusses the specific challenges algorithms pose not only to current law, but also - as algorithms replace people as decision makers - to the foundations of society itself. The work includes wide coverage of the law as it relates to algorithms, with chapters analyzing how human biases have crept into algorithmic decision-making about who receives housing or credit, the length of sentences for defendants convicted of crimes, and many other decisions that impact constitutionally protected groups. Other issues covered in the work include the impact of algorithms on the law of free speech, intellectual property, and commercial and human rights law.
Understand both uncoded and coded caching techniques in future wireless network design. Expert authors present new techniques that will help you to minimize backhaul load, reduce deployment cost, and improve security, energy efficiency, and the quality of the user experience. Covering topics from high-level architectures to specific requirement-oriented caching design and analysis, including big-data-enabled caching, caching in cloud-assisted 5G networks, and security, this is an essential resource for academic researchers, postgraduate students, and engineers working in wireless communications.
We define representations for downward-closed subsets of a rich family of well-quasi-orders, and more generally for closed subsets of an even richer family of Noetherian topological spaces; notable instances include finite words, multisets, and finite trees. These representations are given as finite unions of ideals, or more generally of irreducible closed subsets. All the representations we explore are computable, in the sense that we exhibit algorithms that decide inclusion and compute finite unions and finite intersections. The origin of this work lies in the need to compute finite representations of the set of successors of the downward closure of one state, or more generally of a downward-closed set of states, in a well-structured transition system, and this is where we start: we define adequate notions of completions of well-quasi-orders and, more generally, of Noetherian spaces. For verification purposes, we argue that the required completions must be ideal completions, or more generally sobrifications, that is, spaces of irreducible closed subsets.
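For the simplest instance, vectors of naturals under the product ordering (where Dickson’s lemma applies), ideals can be encoded as tuples over ℕ ∪ {ω}, and the advertised operations take only a few lines. A minimal sketch (this illustrative encoding covers only ℕ^k; words, multisets, and trees require the richer representations developed in the paper):

```python
# An ideal of N^k is encoded as a k-tuple over N ∪ {ω}; here ω is None.
OMEGA = None

def leq(a, b):
    """Ideal inclusion: componentwise order with OMEGA on top."""
    return all(y is OMEGA or (x is not OMEGA and x <= y)
               for x, y in zip(a, b))

def included(D1, D2):
    """Inclusion of downward-closed sets given as finite unions of ideals:
    every ideal of D1 must lie below some ideal of D2."""
    return all(any(leq(a, b) for b in D2) for a in D1)

def intersect(D1, D2):
    """Intersection is again a finite union of ideals (componentwise min)."""
    mn = lambda x, y: y if x is OMEGA else (x if y is OMEGA else min(x, y))
    return [tuple(mn(x, y) for x, y in zip(a, b)) for a in D1 for b in D2]

D1 = [(2, OMEGA)]                 # all (x, y) with x <= 2
D2 = [(OMEGA, 3), (1, OMEGA)]     # y <= 3, or x <= 1
print(included(D1, D2))           # False: neither ideal dominates (2, ω)
print(intersect(D1, D2))          # [(2, 3), (1, None)]
```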
The validity of network observations is sometimes of concern in empirical studies, since observed networks are prone to error and may not represent the population of interest. This lack of validity is not just a result of random measurement error, but often due to systematic bias that can lead to the misinterpretation of actors’ preferences in network selection. These issues in network observations can bias the estimation of common network models (such as those pertaining to influence and selection) and lead to erroneous statistical inferences. In this study, we propose a simulation-based sensitivity analysis method that evaluates the robustness of inferences made in social network analysis to six forms of selection mechanisms that can bias network observations: random, homophily, anti-homophily, transitivity, reciprocity, and preferential attachment. We then apply this sensitivity analysis to test the robustness of inferences about social influence effects, and we derive two sets of analytical solutions that account for biases in network observations due to random, homophily, and anti-homophily selection.
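While the paper’s six mechanisms and analytical solutions go beyond an abstract, the simulation-based idea can be miniaturized: perturb the observed network under a hypothesized selection mechanism and watch how a model estimate moves. A toy sketch (the homophily perturbation, the naive peer-influence regression, and all parameters here are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_homophilous_edges(A, attr, n_add, tol=0.5):
    """Toy selection mechanism: add n_add spurious edges between nodes
    with similar attribute values, mimicking homophily bias."""
    A = A.copy()
    n, added = len(A), 0
    while added < n_add:
        i, j = rng.integers(n, size=2)
        if i != j and not A[i, j] and abs(attr[i] - attr[j]) < tol:
            A[i, j] = A[j, i] = 1
            added += 1
    return A

def influence_coef(A, y_prev, y):
    """Naive influence estimate: regress y on the lagged mean outcome of
    network neighbours (intercept plus peer term)."""
    peer = A @ y_prev / np.maximum(A.sum(axis=1), 1)
    X = np.column_stack([np.ones_like(y), peer])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Simulate data with a true influence effect of 0.4, then check how the
# estimate drifts as homophilous measurement error is dialled up.
n = 200
A = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
A = A + A.T
attr = rng.random(n)
y_prev = rng.normal(size=n)
peer = A @ y_prev / np.maximum(A.sum(axis=1), 1)
y = 0.4 * peer + rng.normal(scale=0.5, size=n)
for n_add in (0, 200, 1000):
    A_obs = add_homophilous_edges(A, attr, n_add)
    print(n_add, round(influence_coef(A_obs, y_prev, y), 3))
```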
Aircraft performance models play a key role in airline operations, especially in planning a fuel-efficient flight. In practice, manufacturers provide guidelines that are slightly adjusted throughout the aircraft life cycle by tuning a single factor, enabling better fuel predictions. This approach has limitations, however: in particular, a single factor cannot reflect the evolution of each feature impacting the aircraft performance. Our goal here is to overcome this limitation. The key contribution of the present article is to foster the use of machine learning to leverage the massive amounts of data continuously recorded during flights performed by an aircraft and provide models reflecting its actual and individual performance. We illustrate our approach by focusing on the estimation of the drag and lift coefficients from recorded flight data. As these coefficients are not directly recorded, we resort to aerodynamics approximations. As a safety check, we provide bounds to assess the accuracy of both the aerodynamics approximation and the statistical performance of our approach. We provide numerical results on a collection of machine learning algorithms. We report excellent accuracy on real-life data and exhibit empirical evidence to support our modeling, in coherence with aerodynamics principles.
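To give a flavour of the aerodynamics approximations involved, the simplest one ties the lift coefficient to recorded quantities via the quasi-steady level-flight balance $L \approx mg$, i.e. $C_L \approx mg / (\tfrac12 \rho V^2 S)$. A hedged one-liner (the numbers are an invented sample with a wing area typical of a single-aisle aircraft; real pipelines also correct for flight-path angle and load factor):

```python
def lift_coefficient(mass_kg, tas_ms, rho, wing_area_m2, g=9.81):
    """Zeroth-order estimate: in quasi-steady level flight, lift balances
    weight, so C_L = m g / (0.5 * rho * V^2 * S)."""
    return mass_kg * g / (0.5 * rho * tas_ms**2 * wing_area_m2)

# Hypothetical cruise sample: 60 t aircraft, 230 m/s TAS, rho at ~36,000 ft.
print(lift_coefficient(60_000, 230.0, 0.38, 122.6))  # ~0.48
```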
We study and classify proper $q$-colourings of the $\mathbb{Z}^d$ lattice, identifying three regimes where different combinatorial behaviour holds. (1) When $q\le d+1$, there exist frozen colourings, that is, proper $q$-colourings of $\mathbb{Z}^d$ which cannot be modified on any finite subset. (2) We prove a strong list-colouring property which implies that, when $q\ge d+2$, any proper $q$-colouring of the boundary of a box of side length $n \ge d+2$ can be extended to a proper $q$-colouring of the entire box. (3) When $q\geq 2d+1$, the latter holds for any $n \ge 1$. Consequently, we classify the space of proper $q$-colourings of the $\mathbb{Z}^d$ lattice by their mixing properties.
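A quick computational check of the frozen regime in item (1): the “diagonal” colouring $c(x,y) = (x+y) \bmod 3$ of $\mathbb{Z}^2$ (so $q = d+1 = 3$) leaves no vertex individually recolourable, since each vertex sees both other colours among its neighbours. The sketch below verifies only this single-vertex rigidity on a finite window; frozenness is the stronger statement that no finite subset can be modified:

```python
q = 3
colour = lambda x, y: (x + y) % q
for x in range(-5, 6):
    for y in range(-5, 6):
        nbrs = {colour(x + 1, y), colour(x - 1, y),
                colour(x, y + 1), colour(x, y - 1)}
        # the neighbours' colours are exactly the colours != c(x, y)
        assert nbrs == set(range(q)) - {colour(x, y)}
print("no single vertex of the diagonal colouring can be recoloured")
```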
Let G be a graph on n vertices with maximum degree Δ, and let k be an integer. The k-recolouring graph of G is the graph whose vertices are the k-colourings of G and where two k-colourings are adjacent if they differ at exactly one vertex. It is well known that the k-recolouring graph is connected for $k\geq \Delta+2$. Feghali, Johnson and Paulusma (J. Graph Theory 83 (2016) 340–358) showed that the (Δ + 1)-recolouring graph consists of a unique connected component of size at least 2 together with (possibly many) isolated vertices.
In this paper, we study the proportion of isolated vertices (also called frozen colourings) in the (Δ + 1)-recolouring graph. Our first contribution is to show that if G is connected, the number of frozen colourings of G is exponentially smaller in n than the total number of colourings. This motivates the use of the Glauber dynamics to approximate the number of (Δ + 1)-colourings of a graph. In contrast to the conjectured mixing time of $O(n\log n)$ for $k\geq \Delta+2$ colours, we show that the mixing time of the Glauber dynamics for (Δ + 1)-colourings restricted to non-frozen colourings can be $\Omega(n^2)$. Finally, we prove some results about the existence of graphs with large girth and frozen colourings, and study frozen colourings in random regular graphs.
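For reference, a single step of the Glauber dynamics on proper colourings is easy to state in code (a standard heat-bath formulation; `adj` is an adjacency list, and the chain simply stays put at a frozen colouring, which is why the restriction to non-frozen colourings matters):

```python
import random

def glauber_step(adj, colouring, k, rng=random):
    """Heat-bath Glauber step for proper k-colourings: pick a uniform
    vertex and resample its colour uniformly among the colours absent
    from its neighbourhood. At a frozen colouring, no vertex has an
    alternative colour, so the chain never moves."""
    v = rng.randrange(len(adj))
    forbidden = {colouring[u] for u in adj[v]}
    allowed = [c for c in range(k) if c not in forbidden]
    if allowed:
        colouring[v] = rng.choice(allowed)
    return colouring

# Example: a 4-cycle with k = 3 colours.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
col = [0, 1, 0, 1]
for _ in range(100):
    col = glauber_step(adj, col, k=3)
```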
We introduce a non-increasing tree growth process $((T_n,{\sigma}_n),\, n\ge 1)$, where $T_n$ is a rooted labelled tree on n vertices and $\sigma_n$ is a permutation of the vertex labels. The construction of $(T_n, \sigma_n)$ from $(T_{n-1}, \sigma_{n-1})$ involves rewiring a random (possibly empty) subset of edges in $T_{n-1}$ towards the newly added vertex; as a consequence, $T_{n-1} \not\subset T_n$ with positive probability. The key feature of the process is that the shape of $T_n$ has the same law as that of a random recursive tree, while the degree distribution of any given vertex is not monotone in the process.
We present two applications. First, while couplings between Kingman’s coalescent and random recursive trees were known for any fixed n, this new process provides a non-standard coupling of all finite Kingman’s coalescents. Second, we use the new process and the Chen–Stein method to extend the well-understood properties of the degree distribution of random recursive trees to extremal-range cases. Namely, we obtain convergence rates on the number of vertices with degree at least $c\ln n$, $c \in (1, 2)$, in trees with n vertices. Further avenues of research are discussed.
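Since the shape of $T_n$ is distributed as a random recursive tree, the degree counts in question can be simulated directly from the classical construction (a simulation of the quantity being studied only, not of the rewiring process itself; the threshold $c = 1.5$ is an arbitrary point of the range $(1, 2)$):

```python
import numpy as np

rng = np.random.default_rng(1)

def recursive_tree_degrees(n):
    """Degrees of a uniform random recursive tree: vertex v attaches to
    a parent chosen uniformly among the existing vertices 0..v-1."""
    deg = np.zeros(n, dtype=int)
    for v in range(1, n):
        p = rng.integers(v)
        deg[p] += 1
        deg[v] += 1
    return deg

n, c = 100_000, 1.5
deg = recursive_tree_degrees(n)
print(int((deg >= c * np.log(n)).sum()))  # vertices of degree >= c ln n
```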
Isolation is a concept originally conceived in the context of clique enumeration in static networks, mostly used to model communities that do not have much contact with the outside world. Herein, a clique is considered isolated if it has few edges connecting it to the rest of the graph. Motivated by recent work on enumerating cliques in temporal networks, we carry the isolation concept over to the temporal setting. We discover that the addition of the time dimension leads to six distinct natural isolation concepts. Our main contribution is the development of parameterized clique enumeration algorithms for five of these six isolation types, employing the parameter “degree of isolation.” In a nutshell, this means that the more isolated these cliques are, the faster we can find them. On the empirical side, we implemented and tested these algorithms on (temporal) social network data, obtaining encouraging results.
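The static notion being transported is easy to state in code. A minimal check (following the usual c-isolation convention from the static-clique literature, under which a clique of size |C| may have fewer than c·|C| outgoing edges; the paper’s temporal variants differ in how these outgoing edges are counted over time):

```python
def is_isolated(graph, clique, c):
    """A clique C in a static graph is c-isolated if it has fewer than
    c * |C| edges leaving it (graph is an adjacency-set dictionary)."""
    members = set(clique)
    outgoing = sum(1 for v in clique for u in graph[v] if u not in members)
    return outgoing < c * len(clique)

# Triangle {0, 1, 2} with a single edge leaving it to vertex 3.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(is_isolated(graph, [0, 1, 2], c=1))  # True: 1 outgoing edge < 3
```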
X-ray tomography has applications in various industrial fields such as the sawmill industry, the oil and gas industry, and chemical, biomedical, and geotechnical engineering. In this article, we study Bayesian methods for X-ray tomography reconstruction. In Bayesian methods, the inverse problem of tomographic reconstruction is solved with the help of a statistical prior distribution which encodes the possible internal structures by assigning probabilities for smoothness and edge distribution of the object. We compare Gaussian random field priors, which favor smoothness, to non-Gaussian total variation (TV), Besov, and Cauchy priors, which promote sharp edges and high- and low-contrast areas in the object. We also present computational schemes for solving the resulting high-dimensional Bayesian inverse problem with 100,000–1,000,000 unknowns. We study the applicability of a no-U-turn variant of Hamiltonian Monte Carlo (HMC) and of a more classical adaptive Metropolis-within-Gibbs (MwG) algorithm for full uncertainty quantification of the reconstructions, and we compute maximum a posteriori (MAP) estimates with the limited-memory BFGS (Broyden–Fletcher–Goldfarb–Shanno) optimization algorithm. As a first industrial application, we consider sawmill-industry X-ray log tomography. The logs have knots, rotten parts, and possibly even metallic pieces, making them good examples for non-Gaussian priors. Second, we study drill-core rock sample tomography, an example from the oil and gas industry; in that case, we compare the priors without uncertainty quantification. We show that Cauchy priors produce fewer artefacts than the other choices, especially with sparse high-noise measurements, and that choosing HMC enables systematic uncertainty quantification, provided that the posterior is not pathologically multimodal or heavy-tailed.
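As a pocket-sized analogue of the MAP computation mentioned above, consider the linear model y = Ax + noise with a Gaussian prior, minimized with L-BFGS (a deliberately simplified stand-in: the paper’s TV, Besov, and Cauchy priors and its MwG/HMC samplers are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate(A, y, noise_std=0.05, prior_std=0.1):
    """MAP reconstruction for y = A x + e with a Gaussian likelihood and
    an i.i.d. Gaussian prior on x, computed with L-BFGS."""
    def neg_log_post(x):
        r = A @ x - y
        return r @ r / (2 * noise_std**2) + x @ x / (2 * prior_std**2)
    def grad(x):
        return A.T @ (A @ x - y) / noise_std**2 + x / prior_std**2
    res = minimize(neg_log_post, np.zeros(A.shape[1]),
                   jac=grad, method="L-BFGS-B")
    return res.x

# Tiny synthetic test: recover a signal from noisy linear projections.
rng = np.random.default_rng(0)
x_true = rng.normal(scale=0.1, size=50)
A = rng.normal(size=(200, 50))
y = A @ x_true + rng.normal(scale=0.05, size=200)
print(np.linalg.norm(map_estimate(A, y) - x_true))
```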
Starting from this chapter, we discuss the main research topics of sentiment analysis and their state-of-the-art algorithms. Document sentiment classification (or document-level sentiment analysis) is perhaps the most extensively studied topic in the field of sentiment analysis so far, especially in its early days (see the surveys by Pang and Lee, 2008a; Liu, 2012). It aims to classify an opinion document (e.g., a product review) as expressing a positive or a negative opinion (or sentiment), which are called sentiment orientations or polarities. This task is referred to as document-level analysis because it considers each document as a whole and does not study entities or aspects inside the document or determine sentiments expressed about them. Arguably, this task is the one that popularized sentiment analysis research. Its limitations also motivated the fine-grained task of aspect-based sentiment analysis (Hu and Liu, 2004) (Chapters 5 and 6), which is widely used in practice today.
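A minimal supervised baseline for this task, in the bag-of-words tradition the chapter builds on (toy data invented for illustration, not the book’s own code; modern systems use neural models, but the task formulation is the same):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Document-level sentiment classification as supervised text
# classification: TF-IDF unigrams/bigrams plus a linear classifier.
docs = ["great phone, love the battery life",
        "terrible screen and awful support",
        "excellent value, would buy again",
        "broke after a week, very disappointed"]
labels = ["pos", "neg", "pos", "neg"]  # toy labels for illustration only

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["love the screen, great value"]))  # likely ['pos']
```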
Opinion documents come in many different forms. So far, we have implicitly assumed that individual documents are independent of each other or have no relationships. In this chapter, we move on to two forms of social media that involve extensive interactions among their participants and are also full of expressions of sentiments and opinions: debates/discussions and comments. The key characteristic of documents in such media is that they are not independent of each other, in contrast to stand-alone documents such as reviews and blog posts. The interactive exchanges and discussions among participants make these media forms much richer targets for analysis. Such interactions can be seen as relationships or links both among participants and among posts. Thus, we can not only perform sentiment analysis, as discussed in previous chapters, but also carry out other types of analyses that are characteristic of interactions – for example, grouping people into camps, discovering contentious issues of debate, mining agreement and disagreement expressions, and characterizing the arguing nature of pairwise interactions. Because debates are exchanges of arguments and reasoning among participants who may be engaged in some kind of deliberation to achieve a common goal, it is interesting to study whether each participant in online debate forums gives reasoned arguments with justifiable claims via constructive debates, or whether a participant merely exhibits dogmatism and egotistical clashes of ideologies. These tasks are important for many fields of social science, such as political science and communications. Central to them are the sentiments of agreement and disagreement, which are instrumental to these analyses. These additional types of analyses are the focus of this chapter.