The exponential growth in digital technology is leading us to a future in which all things and all people are connected all the time, something we refer to as The Infinite Network (TIN), which will cause profound changes in every industry. Here, we focus on the impact it will have on healthcare. TIN will change the essence of healthcare to a data-driven continuous approach, as opposed to the event-driven discrete approach used today. At a micro or individual level, smart sensing will play a key role, in the form of embedded sensors, wearable sensors, and sensing from smart medical devices. At a macro or aggregate level, healthcare will be provided by Intelligent Telehealth Networks that evolve from the telehealth networks available today. Traditional telemedicine has delivered remote care to patients in areas where doctors are not readily available, but has not been deployed at large scale. New advanced networks will deliver care at a much larger scale. The long-term future requires intelligent hybrid networks that combine artificial intelligence with human intelligence to provide continuity of care at higher quality and lower cost than is possible today.
In a paper entitled Binary lambda calculus and combinatory logic, John Tromp presents a simple way of encoding lambda calculus terms as binary sequences. In what follows, we study the number of binary strings of a given size that represent lambda terms and derive results from their generating functions, in particular that the number of terms of size n grows roughly like 1.963447954…^n. In the second part we use this approach to generate random lambda terms using Boltzmann samplers.
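Tromp's encoding itself is short enough to sketch. Below is an illustrative Python version (the tuple-based term representation is our own assumption, not the paper's notation): a de Bruijn variable with index i is written as i ones followed by a zero, an abstraction as 00 followed by the body, and an application as 01 followed by the two subterms.

```python
# Tromp's binary encoding of de Bruijn lambda terms.
# Term representation (an assumption made for this sketch):
#   ('var', i)      -- variable with de Bruijn index i >= 1
#   ('lam', body)   -- abstraction
#   ('app', f, a)   -- application
def encode(term):
    tag = term[0]
    if tag == 'var':
        return '1' * term[1] + '0'           # index i -> 1^i 0
    if tag == 'lam':
        return '00' + encode(term[1])        # abstraction -> 00 <body>
    if tag == 'app':
        return '01' + encode(term[1]) + encode(term[2])  # application -> 01 <M> <N>
    raise ValueError(f'unknown term: {term!r}')

identity = ('lam', ('var', 1))               # \x. x
k_comb = ('lam', ('lam', ('var', 2)))        # \x. \y. x
print(encode(identity))   # 0010    (size 4 bits)
print(encode(k_comb))     # 0000110 (size 7 bits)
```

The "size" of a term counted by the generating functions is just the length of this bit string.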
We relate the so-called powercone models of mixed non-deterministic and probabilistic choice proposed by Tix, Keimel, Plotkin, Mislove, Ouaknine, Worrell, Morgan and McIver, to our own models of previsions. Under suitable topological assumptions, we show that they are isomorphic. We rely on Keimel's cone-theoretic variants of the classical Hahn–Banach separation theorems, using functional analytic methods, and on the Schröder–Simpson Theorem.
A k-uniform hypergraph H = (V, E) is called ℓ-orientable if there is an assignment of each edge e ∈ E to one of its vertices v ∈ e such that no vertex is assigned more than ℓ edges. Let $H_{n,m,k}$ be a hypergraph drawn uniformly at random from the set of all k-uniform hypergraphs with n vertices and m edges. In this paper we establish the threshold for the ℓ-orientability of $H_{n,m,k}$ for all k ⩾ 3 and ℓ ⩾ 2, that is, we determine a critical quantity $c^*_{k,\ell}$ such that with probability 1 − o(1) the hypergraph $H_{n,cn,k}$ has an ℓ-orientation if c < $c^*_{k,\ell}$, but fails to do so if c > $c^*_{k,\ell}$.
Our result has various applications, including sharp load thresholds for cuckoo hashing, load balancing with guaranteed maximum load, and massive parallel access to hard disk arrays.
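Deciding ℓ-orientability of a concrete instance (as opposed to locating the random threshold) reduces to bipartite matching between edges and vertex "slots", with ℓ slots per vertex. A minimal sketch using Kuhn's augmenting-path algorithm; function and variable names are ours, not the paper's:

```python
def is_l_orientable(n, hyperedges, l):
    """Decide whether each hyperedge can be assigned to one of its vertices
    so that no vertex of 0..n-1 receives more than l edges.
    Reduction: bipartite matching from edges to (vertex, slot) pairs,
    solved with Kuhn's augmenting-path algorithm."""
    match = {}  # (vertex, slot) -> edge index currently assigned to it

    def try_assign(e, seen):
        # attempt to place edge e, recursively evicting edges if needed
        for v in hyperedges[e]:
            for s in range(l):
                slot = (v, s)
                if slot in seen:
                    continue
                seen.add(slot)
                if slot not in match or try_assign(match[slot], seen):
                    match[slot] = e
                    return True
        return False

    return all(try_assign(e, set()) for e in range(len(hyperedges)))

# three copies of the edge {0,1,2} are 1-orientable, four copies are not
print(is_l_orientable(3, [(0, 1, 2)] * 3, 1))  # True
print(is_l_orientable(3, [(0, 1, 2)] * 4, 1))  # False
```

The same assignment viewpoint underlies the cuckoo hashing application: edges are keys, vertices are table cells, and ℓ is the cell capacity.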
This paper presents a method of motion planning and implementation for the self-recovery of an overturned six-legged robot. Previous studies have aimed at the static and dynamic stabilization of robots to prevent them from overturning. However, overturning accidents cannot be ruled out during the various applications of robots; the problems involving overturning should therefore be considered and solved during robot design and control. The design inspirations of multi-legged robots come from nature, especially insects and mammals, and the self-recovery approach of an insect can likewise be imitated by robots. In this paper, such a self-recovery mechanism is reported: the inertial forces of the dangling legs are used to bias some legs to touch the ground, and the ground reaction forces exerted on the feet of the landing legs are used to support and push the body to enable recovery without additional help. Employing this mechanism, a self-recovery approach named SSR (Sidewise Self-Recovery) is presented and applied to multi-legged robots. Experiments on the NOROS robot validate the effectiveness of the self-recovery motions. The results show that SSR is a suitable method for multi-legged robots and that a hemispherical shell can help robots perform self-recovery.
The book Das Interpretationsproblem der Formalisierten Zahlentheorie und ihre Formale Widerspruchsfreiheit by Erik Stenius, published in 1952, contains a consistency proof for infinite ω-arithmetic based on a semantical interpretation. Despite the proof’s reference to semantics, the truth definition is in fact equivalent to a syntactical derivability or reduction condition. Based on this reduction condition, Stenius proves that the complexity of formulas in a derivation can be limited by the complexity of the conclusion. This independent result can also be proved by cut elimination for ω-arithmetic, as was done by Schütte in 1951.
In this paper we interpret the syntactic reduction in Stenius’ work as a method for cut elimination based on invertibility of the logical rules. Through this interpretation the constructivity of Stenius’ proof becomes apparent. This improvement was explicitly requested from Stenius by Paul Bernays in private correspondence (in a letter from Bernays begun on the 19th of September 1952 (Stenius & Bernays, 1951–75)). Bernays, who took a deep interest in Stenius’ manuscript, applied the described method in a proof of Herbrand’s theorem. In this paper we prove Herbrand’s theorem, as an application of Stenius’ work, based on lecture notes of Bernays (Bernays, 1961). The main result completely resolves Bernays’ suggestions for improvement by eliminating references to Stenius’ semantics and by showing the constructive nature of the proof. A comparison with Schütte’s cut elimination proof shows how Stenius’ simplification of the reduction of universal cut formulas, which in Schütte’s proof requires duplication and repositioning of the cuts, shifts the problematic case of the reduction to implications.
According to propositional contingentism, it is contingent what propositions there are. This paper presents two ways of modeling contingency in what propositions there are using two classes of possible worlds models. The two classes of models are shown to be equivalent as models of contingency in what propositions there are, although they differ as to which other aspects of reality they represent. These constructions are based on recent work by Robert Stalnaker; the aim of this paper is to explain, expand, and, in one aspect, correct Stalnaker’s discussion.
Triadic closure has been conceptualized and measured in a variety of ways, most famously the clustering coefficient. Existing extensions to affiliation networks, however, are sensitive to repeat group attendance, which does not reflect common interpersonal interpretations of triadic closure. This paper proposes a measure of triadic closure in affiliation networks designed to control for this factor, which manifests in bipartite models as biclique proliferation. To avoid arbitrariness, the paper introduces a triadic framework for affiliation networks, within which a range of measures can be defined; it then presents a set of basic axioms that suffice to narrow this range to a single measure. An instrumental assessment compares the proposed and two existing measures for reliability, validity, redundancy, and practicality. All three measures then take part in an investigation of three empirical social networks, which illustrates their differences.
For positive integers n and q and a monotone graph property $\mathcal{A}$, we consider the two-player, perfect information game WC(n, q, $\mathcal{A}$), which is defined as follows. The game proceeds in rounds. In each round, the first player, called Waiter, offers the second player, called Client, q + 1 edges of the complete graph Kn which have not been offered previously. Client then chooses one of these edges which he keeps and the remaining q edges go back to Waiter. If, at the end of the game, the graph which consists of the edges chosen by Client satisfies the property $\mathcal{A}$, then Waiter is declared the winner; otherwise Client wins the game. In this paper we study such games (also known as Picker–Chooser games) for a variety of natural graph-theoretic parameters, such as the size of a largest component or the length of a longest cycle. In particular, we describe a phase transition type phenomenon which occurs when the parameter q is close to n and is reminiscent of phase transition phenomena in random graphs. Namely, we prove that if q ⩾ (1 + ϵ)n, then Client can avoid components of order $c\epsilon^{-2}\ln n$ for some absolute constant c > 0, whereas for q ⩽ (1 − ϵ)n, Waiter can force a giant, linearly sized component in Client's graph. In the second part of the paper, we prove that Waiter can force Client's graph to be pancyclic for every q ⩽ cn, where c > 0 is an appropriate constant. Note that this behaviour is in stark contrast to the threshold for pancyclicity and Hamiltonicity of random graphs.
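The game protocol is easy to simulate. The toy sketch below uses naive stand-in strategies of our own (Waiter offers random unused edges; Client greedily keeps the edge that least enlarges its largest component), purely to illustrate the rules, not the strategies analysed in the paper:

```python
import random

def play_waiter_client(n, q, rounds, rng=None):
    """Toy Waiter-Client game on K_n: each round Waiter (here: random, not
    adversarial) offers q+1 fresh edges; Client keeps the edge minimizing the
    largest component of the kept graph (tracked with union-find).
    Returns the size of Client's largest component at the end."""
    rng = rng or random.Random(0)
    unused = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(unused)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def joined_size(e):                     # component size if Client kept e
        a, b = find(e[0]), find(e[1])
        return size[a] if a == b else size[a] + size[b]

    for _ in range(rounds):
        if len(unused) < q + 1:
            break
        offer, unused = unused[:q + 1], unused[q + 1:]
        pick = min(offer, key=joined_size)  # Client's greedy choice
        a, b = find(pick[0]), find(pick[1])
        if a != b:
            parent[a] = b
            size[b] += size[a]
        # the remaining q edges go back to Waiter and leave play
    return max(size[find(v)] for v in range(n))
```

Against a real (adversarial) Waiter strategy, the theorem says this greedy Client still keeps components down to order $c\epsilon^{-2}\ln n$ only when q ⩾ (1 + ϵ)n; the sketch above merely makes the round structure concrete.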
How many strict local maxima can a real quadratic function on {0, 1}n have? Holzman conjectured a maximum of $\binom{n }{ \lfloor n/2 \rfloor}$. The aim of this paper is to prove this conjecture. Our approach is via a generalization of Sperner's theorem that may be of independent interest.
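For small n the objects in question can be checked by brute force. A sketch of our own (not the paper's proof technique) that enumerates {0, 1}^n and counts points whose value strictly exceeds that of all Hamming neighbours:

```python
from itertools import product
from math import comb

def strict_local_maxima(f, n):
    # points of {0,1}^n whose f-value strictly exceeds all Hamming neighbours
    def neighbours(x):
        for i in range(n):
            y = list(x)
            y[i] ^= 1                     # flip one coordinate
            yield tuple(y)
    return [x for x in product((0, 1), repeat=n)
            if all(f(x) > f(y) for y in neighbours(x))]

# a quadratic attaining the conjectured bound for n = 2:
# f(x1, x2) = x1 + x2 - 2*x1*x2 has strict local maxima exactly at 01 and 10
f = lambda x: x[0] + x[1] - 2 * x[0] * x[1]
assert len(strict_local_maxima(f, 2)) == comb(2, 1)  # bound binom(n, n//2) = 2
```

The example quadratic generalizes the sense in which the bound $\binom{n}{\lfloor n/2 \rfloor}$ is attained: maxima sit on a middle layer of the cube, which is an antichain of that size.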
The objective of this note is to study the distribution of the running maximum of the level in a level-dependent quasi-birth-death process. By considering this running maximum at an exponentially distributed “killing epoch” T, we devise a technique to accomplish this, relying on elementary arguments only; importantly, it yields the distribution of the running maximum jointly with the level and phase at the killing epoch. We also point out how our procedure can be adapted to facilitate the computation of the distribution of the running maximum at a deterministic (rather than an exponential) epoch.
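In the phase-free special case of a plain level-dependent birth–death process, the joint quantity can be estimated by Monte Carlo simulation. A sketch with hypothetical rate functions, offered only as a numerical cross-check, not the note's analytical procedure:

```python
import random
from collections import Counter

def running_max_at_killing(lam, mu, q, level0=0, trials=20000, seed=1):
    """Simulate a level-dependent birth-death process killed at rate q;
    return the empirical joint distribution of (running maximum, level)
    at the exponentially distributed killing epoch."""
    rng = random.Random(seed)
    dist = Counter()
    for _ in range(trials):
        level, mx = level0, level0
        while True:
            b = lam(level)                       # birth rate at current level
            d = mu(level) if level > 0 else 0.0  # no deaths below level 0
            u = rng.random() * (b + d + q)       # race of exponential clocks
            if u < q:                            # killing epoch reached
                dist[(mx, level)] += 1
                break
            if u < q + b:                        # birth: level up, update max
                level += 1
                mx = max(mx, level)
            else:                                # death: level down
                level -= 1
    return dist

# illustrative constant rates (our assumption): births 1.0, deaths 2.0, kill 0.5
dist = running_max_at_killing(lambda l: 1.0, lambda l: 2.0, q=0.5)
```

Each iteration exploits the competing-exponentials view of the killed chain: at every jump, the next event is a birth, death, or killing with probability proportional to its rate.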
Utilising both key mathematical tools and state-of-the-art research results, this text explores the principles underpinning large-scale information processing over networks and examines the crucial interaction between big data and its associated communication, social and biological networks. Written by experts in the diverse fields of machine learning, optimisation, statistics, signal processing, networking, communications, sociology and biology, this book employs two complementary approaches: first analysing how the underlying network constrains the upper-layer of collaborative big data processing, and second, examining how big data processing may boost performance in various networks. Unifying the broad scope of the book is the rigorous mathematical treatment of the subjects, which is enriched by in-depth discussion of future directions and numerous open-ended problems that conclude each chapter. Readers will be able to master the fundamental principles for dealing with big data over large systems, making it essential reading for graduate students, scientific researchers and industry practitioners alike.
The ever-increasing use of smartphones, multimedia applications, and social networking, along with the demand for higher data rates, ubiquitous coverage, and better quality of service, poses new challenges to the traditional mobile wireless network paradigm that depends on macro cells for service delivery. Small cell networks (SCNs) have emerged as an attractive paradigm and hold great promise for future wireless communication systems (5G systems). SCNs encompass a broad variety of cell types, such as micro, pico, and femto cells, as well as advanced wireless relays and distributed antenna systems. SCNs co-exist with the macro cellular network and bring the network closer to the user equipment. SCNs require low power, incur low cost, and provide increased spatial reuse. Data traffic offloading eases the load on the expensive macro cells, with significant savings expected for network operators using small cells.
As the demand for increased bandwidth has continued to grow, SCNs first emerged in dense urban areas mainly to provide coverage and capacity. They have now gained momentum and are expected to dominate in the coming years, with large-scale rollouts – either planned or ad hoc – and the development of 5G systems with many small-cell components. Already, the number of “small cells” in the world exceeds the total number of traditional mobile base stations. SCNs are also envisioned to pave the way for new services. However, there are many challenges in the design and deployment of small cell networks, which have to be addressed for them to be technically and commercially successful. This book presents various concepts in the design, analysis, optimization, and deployment of small cell networks, using a treatment suitable for both pedagogical and practical purposes.
This book is an excellent source for understanding small cell network concepts, associated problems, and potential solutions in next-generation wireless networks. It covers topics ranging from fundamentals to advanced material, including deployment issues, environmental concerns, optimized solutions, and standards activities in emerging small cell networks. New trends, challenges, and research results are also provided. Written by leading experts in the field from academia and industry around the world, it is a valuable resource covering both the core and the specialized issues in these areas. It offers wide coverage of topics, while balancing the treatment to suit the needs of first-time learners of the concepts and specialists in the field alike.
We present a set of small-cell field trial experiments conducted at 2.6 GHz and focused on coverage and capacity within multi-floor office buildings. LTE pico cells deployed indoors as well as LTE small cells deployed outdoors are considered. The latter rely on low emission power levels coupled with intelligent ways of generating transmission beams with various directivity levels by means of adaptive antenna arrays. Furthermore, we introduce an analytical three-dimensional (3D) performance prediction framework, which we calibrate and validate against field measurements. The framework provides detailed performance levels at any point of interest within a building; it allows us to determine the minimum number of small cells required to deliver the desired coverage and capacity levels, their most suitable locations subject to deployment constraints, transmission power levels, antenna characteristics (beam shapes), and antenna orientations (azimuth, tilt) to serve a targeted geographical area. In addition, we present specialized solutions for the deployment of LTE small cells within hotspot traffic venues, such as stadiums, through design and deployment feasibility analysis.
Introduction
Small cells are low-cost, low-power base stations designed to improve the coverage and capacity of wireless networks. By deploying small cells alongside, and in complement to, the traditional macro cellular networks, operators are in a much better position to provide end users with a more uniform and improved quality of experience (QoE). Small-cell deployment is subject to service delivery requirements, as well as to the actual constraints of the targeted areas. For good uniformity of service in populated areas, where the presence of buildings is the main cause of significant radio signal attenuation, small cells may need to be closely spaced, e.g., within a couple of hundred meters of each other. Naturally, the performance of small cells is highly dependent on environment-specific characteristics, such as the materials used in building construction, their specific propagation properties, and the surroundings. It is therefore particularly important to properly characterize the environment in which small cells are deployed.
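The environment dependence can be made concrete with a toy link-budget calculation. The sketch below uses a generic log-distance path-loss model with a fixed per-wall penetration loss; all parameter values are illustrative assumptions of ours and do not represent the chapter's calibrated 3D prediction framework:

```python
import math

def rx_power_dbm(tx_dbm, d_m, f_mhz=2600.0, n_exp=3.0, walls=0, wall_loss_db=7.0):
    """Received power under a log-distance model: free-space loss at the 1 m
    reference distance, distance power-law exponent n_exp, and a fixed
    penetration loss per intervening wall (all values illustrative)."""
    # free-space path loss at 1 m: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)
    fspl_1m = 32.44 + 20 * math.log10(f_mhz) + 20 * math.log10(1e-3)
    path_loss = fspl_1m + 10 * n_exp * math.log10(d_m) + walls * wall_loss_db
    return tx_dbm - path_loss

# a 24 dBm small cell received 50 m away, through two interior walls
p = rx_power_dbm(24.0, 50.0, walls=2)
```

Even this crude model shows why wall materials matter: each additional wall shifts the whole coverage contour by its penetration loss, which is exactly the kind of effect a calibrated 3D framework must capture per building.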
This chapter focuses on the in-building performance and feasibility of LTE small cells through measurements, taking as reference both outdoor small cell and indoor pico cell deployments. We created scenarios where wireless connectivity within a target building is offered either by small cells located on the exterior of other buildings (small cells with outdoor characteristics) or by small cells located within the target building (pico cells with indoor characteristics).