Robert Aumann presents his Agreement Theorem as the key conditional: “if two people have the same priors and their posteriors for an event A are common knowledge, then these posteriors are equal” (Aumann, 1976, p. 1236). This paper focuses on four assumptions which are used in Aumann’s proof but are not explicit in the key conditional: (1) that agents commonly know, of some prior μ, that it is the common prior; (2) that agents commonly know that each of them updates on the prior by conditionalization; (3) that agents commonly know that if an agent knows a proposition, she knows that she knows that proposition (the “KK” principle); (4) that agents commonly know that they each update only on true propositions. It is shown that natural weakenings of any one of these strong assumptions can lead to countermodels to Aumann’s key conditional. Examples are given in which agents who have a common prior and commonly know what probability they each assign to a proposition nevertheless assign that proposition unequal probabilities. To alter Aumann’s famous slogan: people can “agree to disagree”, even if they share a common prior. The epistemological significance of these examples is presented in terms of their role in a defense of the Uniqueness Thesis: If an agent whose total evidence is E is fully rational in taking doxastic attitude D to P, then necessarily, any subject with total evidence E who takes a different attitude to P is less than fully rational.
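In symbols, the key conditional can be rendered as follows (a standard textbook formulation of the Aumann structure, with state space $\Omega$, prior $\mu$, and information partitions $\mathcal{P}_i$; the notation is not drawn from this paper):

```latex
% State space \Omega; common prior \mu; information partitions \mathcal{P}_1, \mathcal{P}_2.
% C_\omega(\varphi) abbreviates "\varphi is common knowledge at state \omega".
\mu_1 = \mu_2 = \mu
\;\wedge\;
C_\omega\!\bigl(\,\mu(A \mid \mathcal{P}_1) = q_1 \,\wedge\, \mu(A \mid \mathcal{P}_2) = q_2\,\bigr)
\;\Longrightarrow\;
q_1 = q_2
```

The four implicit assumptions listed above are exactly where the countermodels attack: weakening any one of them breaks a step in the proof of this implication.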
In order to achieve competitive performance, abstract machines for Prolog and related languages end up being large and intricate, and incorporate sophisticated optimizations, both at the design and at the implementation levels. At the same time, efficiency considerations make it necessary to use low-level languages in their implementation. This makes them laborious to code, optimize, and, especially, maintain and extend. Writing the abstract machine (and ancillary code) in a higher-level language can help tame this inherent complexity. We show how the semantics of most basic components of an efficient virtual machine for Prolog can be described using (a variant of) Prolog. These descriptions are then compiled to C and assembled to build a complete bytecode emulator. Thanks to the high level of the language used and its closeness to Prolog, the abstract machine description can be manipulated using standard Prolog compilation and optimization techniques with relative ease. We also show how, by applying program transformations selectively, we obtain abstract machine implementations whose performance can match and even exceed that of state-of-the-art, highly-tuned, hand-crafted emulators.
In recent years, numerous proof systems have improved enough to be used for formally verifying non-trivial mathematical results. They, however, have different purposes, and it is not always easy to choose which one is best suited to a given formalization effort. In this survey, we focus on properties related to real analysis: real numbers, arithmetic operators, limits, differentiability, integrability, and so on. We have chosen to look into the formalizations provided as standard by the following systems: Coq, HOL4, HOL Light, Isabelle/HOL, Mizar, ProofPower-HOL, and PVS. We have also accounted for large developments that play a similar role or extend standard libraries: ACL2(r) for ACL2, C-CoRN/MathClasses for Coq, and the NASA PVS library. This survey presents how real numbers have been defined in these various provers and how the notions of real analysis described above have been formalized. We also look at the methods of automation these systems provide for real analysis.
We consider a calculus for multiparty sessions enriched with security levels for messages. We propose a monitored semantics for this calculus, which blocks the execution of processes as soon as they attempt to leak information. We illustrate the use of this semantics with various examples, and show that the induced safety property is compositional and that it is strictly included between a typability property and a security property proposed for an extended calculus in previous work.
The N-3 Revolute-Prismatic-Spherical (N-3RPS) manipulator is a kind of serial-parallel manipulator and has higher stiffness and accuracy compared with serial mechanisms, and a larger workspace compared with parallel mechanisms. The locking mechanism in each joint allows the manipulator to be controlled by only three wires. Modeling the dynamics of this manipulator presents an inherent complexity due to its closed-loop structure and kinematic constraints. In the first part of this paper, the inverse kinematics of the manipulator, which consists of position, velocity, and acceleration, is studied. In the second part, the inverse and forward dynamics of the manipulator are formulated based on the principle of virtual work and link Jacobian matrices. Finally, numerical examples are presented for some trajectories.
The languages accepted by finite automata are precisely the languages denoted by regular expressions. In contrast, finite automata may exhibit behaviours that cannot be described by regular expressions up to bisimilarity. In this paper, we consider extensions of the theory of regular expressions with various forms of parallel composition and study the effect on expressiveness. First we prove that adding pure interleaving to the theory of regular expressions strictly increases its expressiveness modulo bisimilarity. Then, we prove that replacing the operation for pure interleaving by ACP-style parallel composition gives a further increase in expressiveness, still insufficient, however, to facilitate the expression of all finite automata up to bisimilarity. Finally, we prove that the theory of regular expressions with ACP-style parallel composition and encapsulation is expressive enough to express all finite automata up to bisimilarity. Our results extend the expressiveness results obtained by Bergstra, Bethke and Ponse for process algebras with (the binary variant of) Kleene's star operation.
Concerns with global warming have prompted many governments to mandate an increased proportion of electricity generation from renewable sources. This, together with the desire for more efficient and secure power generation and distribution, has driven research into the next-generation power grid, namely, the smart grid. Through integrating advanced information and communication technologies with power electronic and electric power technologies, the smart grid will be highly reliable, efficient, and environmentally friendly. A key component of the smart grid is the communication system. This paper explores the design goals and functions of the smart grid communication system, followed by an in-depth investigation of the communication requirements. Some of the recent developments related to smart grid communication systems are also discussed.
The recent successes of over-the-top (OTT) video services have intensified the competition between the traditional broadcasting video and OTT video. Such competition has pushed the traditional video service providers to accelerate the transition of their video services from the broadcasting video to the carrier-grade IP video streaming. However, there are significant challenges in providing large-scale carrier-grade IP video streaming services. For a compressed video sequence, central to the guaranteed real-time delivery are the issues of video rate, buffering, and timing as compressed video pictures are transmitted over an IP network from the encoder output to the decoder input. Toward the understanding and eventual resolution of these issues, a mathematical theory of compressed video buffering is developed to address IP video traffic regulation for the end-to-end video network quality of service. In particular, a comprehensive set of theoretical relationships is established for decoder buffer size, network transmission rate, network delay and jitter, and video source characteristics. As an example, the theory is applied to measure and compare the burstiness and delay of video streams coded with MPEG-2, advanced video coding, and high-efficiency video coding standards. The applicability of the theory to IP networks that consist of a specific class of routers is also demonstrated.
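The interplay of rate, buffering, and timing can be illustrated with a minimal discrete-time sketch (an illustrative model under simple constant-bit-rate assumptions; the function and parameter names are mine, not the paper's): the decoder buffer fills at the network transmission rate and drains by one coded picture per display interval, and a negative fullness signals an underflow.

```python
def decoder_buffer_occupancy(picture_bits, rate_bps, fps, initial_delay_s):
    """Track decoder buffer fullness (in bits) over a CBR channel.

    picture_bits: coded size of each picture, in decode order.
    rate_bps: constant network transmission rate (bits per second).
    fps: picture decode/display rate.
    initial_delay_s: time the decoder waits before removing the first picture.
    Returns the fullness just after each picture is removed; a negative
    value means the picture was not fully received in time (underflow).
    """
    bits_per_interval = rate_bps / fps
    fullness = rate_bps * initial_delay_s  # bits accumulated before decoding starts
    trace = []
    for size in picture_bits:
        fullness -= size                # instantaneous removal of the coded picture
        trace.append(fullness)
        fullness += bits_per_interval   # bits arriving during the next interval
    return trace
```

Larger pictures or a smaller initial delay push the trace toward underflow, which is the trade-off a buffering theory of this kind quantifies.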
In this paper, we propose a tracking algorithm to detect power lines in millimeter-wave radar video. We develop a general framework of cascaded particle filters that naturally captures the temporal correlation of the power line objects, and a power-line-specific feature is embedded into the conditional likelihood measurement process of the particle filter. Because of the fusion of multiple information sources, power line detection is more effective than in the previous approach. Both the accuracy and the recall of power line detection are improved from around 68% to over 92%.
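A minimal bootstrap particle filter conveys the temporal-correlation idea (a generic 1-D sketch, not the paper's cascaded, power-line-specific design; the random-walk motion model and Gaussian measurement likelihood are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_track(observations, n_particles=500, proc_std=1.0, meas_std=2.0):
    """Track a 1-D state (e.g. a line's image position) across frames."""
    # Initialize particles around the first observation.
    particles = observations[0] + rng.normal(0.0, meas_std, n_particles)
    estimates = []
    for z in observations:
        # Predict: a random-walk motion model captures frame-to-frame correlation.
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Update: weight particles by a Gaussian likelihood of the measurement.
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample to concentrate particles in high-likelihood regions.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates
```

In the paper's setting, the likelihood step is where a power-line-specific feature would enter in place of the plain Gaussian used here.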
Technologies for coding non-camera-captured video content have received great interest lately due to the rapid growth of application areas such as wireless display and screen sharing. In response to the market demand, the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group have jointly launched a new standardization project, namely the High-Efficiency Video Coding (HEVC) extensions on screen content coding (HEVC SCC). Several new video coding tools, including intra block copy, palette coding, adaptive color transform, and adaptive motion resolution, have been developed and adopted into the HEVC SCC draft standard. This paper reviews the main features and coding technologies in the current HEVC SCC draft standard, with discussions of the performance and complexity aspects compared with prior art.
Mobile terminals have become the most familiar communication tool we use, and people of all kinds now use them in a wide variety of environments. Accordingly, situations in which we talk over the telephone in noisy surroundings or with someone who speaks fast have become more common. However, it is sometimes difficult to hear the other person's voice in these cases. To make the voice received through mobile terminals easier to hear, the authors have developed two technologies. One is a voice enhancement technology that emphasizes a caller's voice according to the noise surrounding the recipient, and the other is a speech rate conversion technology that slows speech while maintaining voice quality. In this paper, we explain the trends and features of these technologies and discuss ways to implement their algorithms on mobile terminals.
Standard techniques in matrix factorization (MF) – a popular method for latent factor model-based design – result in dense matrices for both users and items. Users are likely to have some affinity toward all the latent factors – making a dense user matrix plausible – but it is not possible for the items to possess all the latent factors simultaneously; hence the item matrix is more likely to be sparse. Therefore, we propose to factor the rating matrix into a dense user matrix and a sparse item matrix, leading to the blind compressed sensing (BCS) framework. To further enhance the prediction quality of our design, we incorporate user and item metadata into the BCS framework. The additional information helps in reducing the underdetermined nature of the rating prediction problem caused by the extreme sparsity of the rating dataset. Our design is based on the belief that users sharing a similar demographic profile have similar preferences and thus can be described by similar latent factor vectors. We also use item metadata (genre information) to group together similar items. We modify our BCS formulation to include item metadata under the assumption that items belonging to a common genre share a similar sparsity pattern. We also design an efficient algorithm to solve our formulation. Extensive experimentation conducted on the MovieLens dataset validates our claim that our modified MF framework utilizing auxiliary information improves upon the existing state-of-the-art techniques.
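In a generic rendering (symbols mine, not the paper's), the dense-user/sparse-item factorization solves min over U, V of ||M ∘ (R − U Vᵀ)||²_F + λ||V||₁, where M masks the observed ratings. A minimal proximal-gradient (soft-thresholding) update for the item matrix, with the user matrix held fixed, sketches the sparsity mechanism; it is not the paper's algorithm:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrinks entries toward zero,
    # zeroing any whose magnitude is below t (this induces sparsity).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_item_step(R, mask, U, V, lam, step):
    """One proximal-gradient step on the sparse item matrix V.

    Decreases ||mask * (R - U @ V.T)||_F^2 + lam * ||V||_1 in V
    for a sufficiently small step size.
    """
    residual = mask * (U @ V.T - R)    # error over observed entries only
    grad = residual.T @ U              # gradient of the squared loss (up to a factor of 2)
    return soft_threshold(V - step * grad, step * lam)
```

Alternating this step with an unpenalized least-squares update of U yields a dense user matrix and a sparse item matrix.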
This paper addresses robust wavelet-based speech enhancement for automatic speech recognition in reverberant and noisy conditions. We propose a novel scheme for improving the speech, late reflection, and noise power estimates obtained from the observed contaminated signal. The improved estimates are used to calculate the Wiener gain for filtering the late reflections and additive noise. In the proposed scheme, optimization of the wavelet family and its parameters is conducted using an acoustic model (AM). In the offline mode, the optimal wavelet family is selected separately for the speech, late reflections, and background noise based on the AM likelihood. Then, the parameters of the selected wavelet family are optimized specifically for each signal subspace. As a result, we can use wavelets sensitive to the speech, late reflections, and additive noise, which can independently and accurately estimate these signals directly from an observed contaminated signal. For speech recognition, the most suitable wavelet is identified from the pre-stored wavelets, and wavelet-domain filtering is applied to the noisy and reverberant speech signal. Experimental evaluations using real reverberant data demonstrate the effectiveness and robustness of the proposed method.
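The Wiener gain can be sketched in a generic single-channel form (symbols chosen here, not the paper's: $\hat{S}$, $\hat{R}_{\mathrm{late}}$, and $\hat{N}$ are the estimated power spectra of speech, late reflections, and additive noise at frequency bin $k$ and frame $l$, $Y$ the contaminated input, and $\hat{X}$ the enhanced output):

```latex
G(k, l) \;=\; \frac{\hat{S}(k, l)}{\hat{S}(k, l) + \hat{R}_{\mathrm{late}}(k, l) + \hat{N}(k, l)},
\qquad
\hat{X}(k, l) \;=\; G(k, l)\, Y(k, l)
```

The wavelet optimization described above serves to sharpen the three power estimates that enter this gain.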
In some security applications, it is important to transmit just enough information to take the right decisions. Traditional video codecs try to maximize global quality, irrespective of the pertinence of the video content for certain tasks. To better preserve the semantics of the scene, some approaches allocate more bitrate to the salient information. In this paper, a semantic video compression scheme based on seam carving is proposed. The idea is to suppress non-salient parts of the video by seam carving. The reduced sequence is encoded with H.264/AVC, while the seams are encoded with our approach. The main contributions of this paper are (1) an algorithm that segments the sequence into groups of pictures depending on the content, (2) a spatio-temporal seam clustering method, (3) an isolated seam discarding technique that improves the seam encoding, (4) a new seam model that avoids geometric distortion and gives better control of the seam shapes, and (5) a new encoder that reduces the overall bit rate. A full-reference object-oriented quality metric is used to assess the performance of the approach. Our approach outperforms traditional H.264/AVC intra encoding, with a Bjontegaard rate improvement between 7.02% and 21.77%, while maintaining the quality of the salient objects.
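The base seam operation is the classic dynamic program: an 8-connected, top-to-bottom path of minimum cumulative energy. A minimal sketch of that step (the energy map and names here are illustrative; the paper's contributions concern how seams are grouped, clustered, modeled, and encoded, not this base search):

```python
import numpy as np

def min_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam.

    energy: 2-D array of per-pixel energies.
    Returns one column index per row (an 8-connected path, top to bottom).
    """
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        # Each pixel extends the cheapest of its three upper neighbors.
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    seam.reverse()
    return seam
```

Removing the returned seam from every row shrinks the frame by one column while preserving the high-energy (salient) content.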
The enormous economic loss caused by power quality problems (more than $150 billion per year in the USA) makes power quality monitoring an important component of the power grid. With highly connected, fragile digital equipment and appliances, the Smart Grid places more stringent timeliness and reliability requirements on power quality monitoring. In this work, we propose a power quality monitoring scheme based on change-point detection theory to detect the most detrimental power quality events, such as voltage sags, transients, and swells, in a quick and reliable manner. We first present a method for the single-sensor detection scenario. Based on that, we extend the scheme to a multi-sensor joint detection scheme which further enhances the detection performance. A group of conventional power quality monitoring schemes (i.e. root-mean-square, short-time Fourier transform, MUSIC, and MBQCUSUM) are compared with the proposed scheme. Experimental results assert the superiority of the proposed scheme in terms of detection latency and robustness.
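Change-point detection for such events is often built on CUSUM-style statistics. A minimal two-sided CUSUM sketch for mean shifts (a textbook detector; parameters and names are illustrative, and the paper's single- and multi-sensor schemes are not reproduced here):

```python
def cusum_detect(samples, target_mean, drift, threshold):
    """Two-sided CUSUM change-point detector.

    Accumulates deviations of the signal from target_mean (less a drift
    allowance) in both directions, and flags the first sample at which
    either accumulator exceeds threshold. Returns the detection index,
    or None if no change is flagged.
    """
    g_pos = g_neg = 0.0
    for i, x in enumerate(samples):
        dev = x - target_mean
        g_pos = max(0.0, g_pos + dev - drift)   # tracks upward shifts (e.g. swells)
        g_neg = max(0.0, g_neg - dev - drift)   # tracks downward shifts (e.g. sags)
        if g_pos > threshold or g_neg > threshold:
            return i
    return None
```

The drift parameter trades false alarms against detection latency, which is the axis along which the paper compares schemes.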
Being connected “anywhere anytime” has become a way of life for much of the world's population. Thanks to major technological advances in the Internet, wireless communication, video technology, silicon manufacturing, and related fields, our mobile devices have become not only faster and more powerful, but also smaller and sleeker. With the popularity of rich media on the rise, video now accounts for the largest share of data traffic over mobile networks. That is why we evoke the title of Freeman Dyson's book “From Eros to Gaia.” Equipped with rich media capabilities, our mobile devices enable a completely different storytelling experience, unlike anything the human race has experienced before. In this paper, we review the latest technological evolutions in the wireless space and in the video space, namely long-term evolution and high-efficiency video coding, respectively. We then discuss how these advanced technologies impact our way of life at present and in years to come.
Low-cost, compact, and accurate systems for in-home rehabilitation are needed in aging, aged, and hyper-aged societies. In this study, we developed an in-home rehabilitation system for patients with balance disorders that provides visual feedback of postural information in real time. Our system measures the user's whole-body motion and the center of pressure (COP) using a Kinect and a Wii Balance Board (WBB). The accuracy of the body-motion estimates of the anterior folding and lateral bending angles was validated experimentally by comparing them with the angles given by an optical motion capture system. Additional experiments showed that the COP has only a small correlation with these angles, suggesting that the WBB is necessary for measuring the COP.
For the transmission of aerial surveillance videos taken from unmanned aerial vehicles (UAVs), region of interest (ROI)-based coding systems are of growing interest in order to cope with the limited channel capacities available. We present a fully automatic detection and coding system which is capable of transmitting high-resolution aerial surveillance videos at very low bit rates. Our coding system is based on the transmission of ROI areas only. We assume two different kinds of ROIs: in order to limit the transmission bit rate while simultaneously retaining a high-quality view of the ground, we only transmit newly emerging areas (ROI-NA) for each frame instead of the entire frame. At the decoder side, the surface of the earth is reconstructed from the transmitted ROI-NA by means of global motion compensation (GMC). In order to retain the movement of moving objects not conforming with the motion of the ground (like moving cars and their previously occluded ground), we additionally consider regions containing such objects as interesting (ROI-MO). Finally, both ROIs are used as input to an externally controlled video encoder. While we use GMC for the reconstruction of the ground from ROI-NA, we use mesh-based motion compensation to generate the pel-wise difference in the luminance channel (difference image) between the mesh-based motion-compensated image and the current input image in order to detect the ROI-MO. High spots of energy within this difference image are used as seeds to select corresponding superpixels from an independent (temporally consistent) superpixel segmentation of the input image in order to obtain accurate shape information for the ROI-MO. For a false positive detection rate (regions falsely classified as containing local motion) of less than 2%, we detect more than 97% true positives (correctly detected ROI-MOs) in challenging scenarios. Furthermore, we propose to use a modified high-efficiency video coding (HEVC) video encoder.
Retaining full HDTV video resolution at 30 fps and subjectively high quality, we achieve bit rates of about 0.6–0.9 Mbit/s, which is a bit rate saving of about 90% compared to an unmodified HEVC encoder.
Fifth-generation mobile communications (5G) are expected to accommodate rapidly increasing mobile traffic, aiming at the realization of a “Hyper Connected World” in which all people and surrounding things are connected and information is exchanged between them, and to serve as a basis for the Internet of Everything. Many research projects have focused on 5G since around 2013, and some agreement is forming about the vision and candidate technologies of 5G. The 5G wireless network is expected to become a “Heterogeneous Network” in which new wireless access technologies incompatible with 4G, and wireless access technologies for unlicensed bands, are incorporated alongside enhanced 4G technology (e.g. IMT-Advanced). This paper introduces the vision and technology trends of 5G, shows key directions for future 5G research and development, and introduces some of our studies on 5G.
This paper revisits the solution of the word problem for ${\it\omega}$-terms interpreted over finite aperiodic semigroups, obtained by J. McCammond. The original proof of correctness of McCammond’s algorithm, based on normal forms for such terms, uses McCammond’s solution of the word problem for certain Burnside semigroups. In this paper, we establish a new, simpler, correctness proof of McCammond’s algorithm, based on properties of certain regular languages associated with the normal forms. This method leads to new applications.