The broad availability of tools for the acquisition and processing of multimedia signals has recently led to the concern that images and videos cannot be considered trustworthy evidence, since they can be altered rather easily. This possibility raises the need to verify whether multimedia content, whether downloaded from the internet, acquired by a video surveillance system, or received from a digital TV broadcaster, is original or not. To cope with these issues, signal processing experts have been investigating effective video forensic strategies aimed at reconstructing the processing history of the video data under investigation and validating its origin. The key assumption of these techniques is that most alterations are not reversible and leave "footprints" in the reconstructed signal, which can be analyzed in order to identify the previous processing steps. This paper presents an overview of the video forensic techniques proposed in the literature, focusing on acquisition, compression, and editing operations, and highlighting the strengths and weaknesses of each solution. It also reviews simple processing chains that combine different operations. Anti-forensic techniques are also considered, in order to outline current limitations and highlight open research issues.
FTV (free-viewpoint television) is an innovative visual medium that allows users to view a three-dimensional (3D) scene while freely changing their viewpoint. It thus enables realistic viewing and free navigation of 3D scenes. FTV is the ultimate form of 3DTV, with an infinite number of views, and ranks at the top of the hierarchy of visual media. FTV is not a conventional pixel-based system but a ray-based one. New types of ray capture, processing, and display technologies have been developed for FTV, and these technologies have also been used to realize an all-around ray-reproducing 3DTV. The international standardization of FTV has been promoted in MPEG: the first phase of FTV is multi-view video coding and the second phase is 3D video. In this paper, the FTV system and its technologies are reviewed.
We present an improved algorithm for the computation of Zariski chambers on algebraic surfaces. The new algorithm significantly outperforms the currently available method and therefore allows us to treat surfaces of high Picard number, where huge numbers of chambers occur. As an application, we efficiently compute the number of chambers supported by the lines on the Segre–Schur quartic.
We introduce the computer algebra package PyCox, written entirely in the Python language. It implements a set of algorithms, in a spirit similar to the older CHEVIE system, for working with Coxeter groups and Hecke algebras. This includes a new variation of the traditional algorithm for computing Kazhdan–Lusztig cells and W-graphs, which works efficiently for all finite groups of rank ≤8 (except E8). We also discuss the computation of Lusztig’s leading coefficients of character values and distinguished involutions (which works for E8 as well). Our experiments suggest a re-definition of Lusztig’s ‘special’ representations which, conjecturally, should also apply to the unequal parameter case. Supplementary materials are available with this article.
Many portable devices, such as smartphones, portable multimedia players (PMPs), and digital single-lens reflex (DSLR) cameras, are capable of capturing high-resolution images (e.g. 10 megapixels for a DSLR) or even video. The limited battery power of portable devices often prevents these systems from using a large, high-power liquid crystal display (LCD). Instead, portable devices typically have an LCD screen of small physical size (e.g. 3 cm × 2 cm for a smartphone or DSLR) and much lower pixel resolution (e.g. 0.15 megapixels for a 480 × 320 display) than the actual image/video resolution. Thus, high-resolution images and video are down-sampled before being displayed. Unfortunately, the anti-aliasing filter often leads to rather severe blurring. Although the blurring may be minor when the viewing distance is large, it can be rather disturbing in portable applications because of the short viewing distance. One possible solution to the blurring problem is to use an LCD screen with higher resolution, but such a hardware solution tends to be expensive and is often unwelcome to consumer electronics companies. Another is to keep the low-resolution LCD screen but use a software technique to enhance the apparent image/video resolution. In this paper, we discuss a novel way to improve the apparent resolution of down-sampled images/video using a technique called subpixel rendering, which controls individual subpixels, each smaller than a pixel, with high precision.
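The core idea of subpixel rendering can be illustrated with a minimal sketch (our own illustration, not the paper's actual algorithm): on an RGB-stripe LCD, each of the three horizontal subpixel samples of a grayscale row is routed to one colour channel of the target pixel, so horizontal luminance detail survives at subpixel granularity instead of being averaged away.

```python
def subpixel_downsample_row(row):
    """Map a grayscale row of width 3*W onto W RGB pixels by routing
    each sample to the R, G or B subpixel (RGB-stripe layout), instead
    of averaging each triple into a single grey value."""
    assert len(row) % 3 == 0
    return [(row[i], row[i + 1], row[i + 2]) for i in range(0, len(row), 3)]

# A sharp vertical edge survives at subpixel accuracy:
hi_res = [0, 0, 0, 0, 255, 255, 255, 255, 255]
pixels = subpixel_downsample_row(hi_res)   # [(0,0,0), (0,255,255), (255,255,255)]
naive = [sum(hi_res[i:i + 3]) // 3 for i in range(0, 9, 3)]   # [0, 170, 255]
```

In practice each channel would first be low-pass filtered, trading some of this sharpness against the colour-fringing artifacts the raw routing introduces.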
The paper examines the problem of dynamic ship positioning using Kalman filter- and particle filter-based sensor fusion algorithms. The proposed approach makes it possible to estimate the ship's state vector accurately by fusing the vessel's position and heading measurements from on-board sensors with distance measurements from sensors located on the coast (e.g. radar). The estimated state vector is then used, in a control loop, to regulate the horizontal position and heading of the vessel. The performance of dynamic positioning based on Kalman and particle filtering is evaluated through simulation experiments.
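The fusion step can be sketched in its simplest scalar form (a toy illustration with made-up measurement variances, not the paper's full ship model): one Kalman measurement update per sensor combines two noisy readings of the same position coordinate, weighting each by its precision.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: fuse measurement z with
    variance r into the current estimate x with variance p."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

# Fuse two noisy readings of one position coordinate (variances made up):
x, p = 0.0, 1e6                          # diffuse prior
x, p = kalman_update(x, p, 10.2, 4.0)    # on-board sensor, variance 4
x, p = kalman_update(x, p, 9.9, 1.0)     # coastal sensor, variance 1
# the fused estimate is pulled toward the more precise coastal reading
```

The same update, in vector form and interleaved with a motion-model prediction step, is what the control loop consumes at each sampling instant.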
We consider a mixture of one exponential distribution and one gamma distribution with increasing failure rate. For the right choice of parameters, it is shown that the mixture's failure rate has an upside-down bathtub shape. We also consider a mixture of a family of exponentials and a family of gamma distributions and obtain a similar result.
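A quick numerical check makes the shape visible (a sketch with arbitrarily chosen parameters, not those of the paper): mix an Exp(0.2) with a Gamma(shape 2, rate 1), whose own failure rate is increasing, and evaluate the mixture's failure rate f(t)/S(t).

```python
import math

def mixture_hazard(t, p=0.1, lam=0.2):
    """Failure rate of a mixture placing weight p on Exp(lam) and
    weight 1-p on Gamma(shape=2, rate=1), whose pdf is t*exp(-t)
    and survival function (1+t)*exp(-t)."""
    f = p * lam * math.exp(-lam * t) + (1 - p) * t * math.exp(-t)
    s = p * math.exp(-lam * t) + (1 - p) * (1 + t) * math.exp(-t)
    return f / s

h = [mixture_hazard(t / 10) for t in range(0, 301)]   # t from 0 to 30
# h first increases, peaks, then decreases: an upside-down bathtub shape
```

Intuitively, the rate climbs while the gamma subpopulation ages, then falls back toward the exponential rate 0.2 as the survivors are increasingly drawn from the exponential subpopulation.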
Taking our inspiration from modal correspondence theory, we present the idea of correspondence analysis for many-valued logics. As a benchmark case, we study truth-functional extensions of the Logic of Paradox (LP). First, we characterize each of the possible truth table entries for unary and binary operators that could be added to LP by an inference scheme. Second, we define a class of natural deduction systems on the basis of these characterizing inference schemes and a natural deduction system for LP. Third, we show that each of the resulting natural deduction systems is sound and complete with respect to its particular semantics.
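The benchmark LP semantics is small enough to check by brute force. The sketch below (our own illustration, not the paper's deduction systems) uses the standard three values with both 1 and 1/2 designated and the strong Kleene tables.

```python
from itertools import product

V = (0.0, 0.5, 1.0)            # LP truth values; 0.5 ("both") and 1 are designated
DESIGNATED = {0.5, 1.0}

def neg(a): return 1 - a       # strong Kleene connectives
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def lp_valid(premises, conclusion, nvars):
    """LP-validity: every valuation designating all premises
    must also designate the conclusion."""
    return all(conclusion(*v) in DESIGNATED
               for v in product(V, repeat=nvars)
               if all(p(*v) in DESIGNATED for p in premises))

# Excluded middle is LP-valid, but explosion (A, not-A |= B) is not:
lem_ok = lp_valid([], lambda a: disj(a, neg(a)), 1)    # True
efq_ok = lp_valid([lambda a, b: a, lambda a, b: neg(a)],
                  lambda a, b: b, 2)                   # False
```

Explosion fails because the valuation assigning A the value 1/2 designates both A and not-A while leaving B undesignated, which is exactly the paraconsistent behaviour LP is designed for.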
This paper introduces two statistics that assess whether (or not) a sequence sampled from a stationary time-homogeneous Markov chain on a finite state space is reversible. The test statistics are based on observed deviations of transition sample counts between each pair of states in the chain. First, the joint asymptotic normality of these sample counts is established. This result is then used to construct two chi-squared-based tests for reversibility. Simulations assess the power and type one error of the proposed tests.
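One simple statistic in this spirit (a sketch of the general idea, not necessarily the exact statistics of the paper) compares, for each pair of states, how often the chain moves i to j versus j to i; under reversibility these counts are nearly symmetric.

```python
from collections import Counter
import random

def reversibility_stat(chain):
    """T = sum over state pairs of (n_ij - n_ji)^2 / (n_ij + n_ji),
    approximately chi-squared under reversibility, with one degree
    of freedom per pair of states observed in either direction."""
    n = Counter(zip(chain, chain[1:]))
    pairs = {(min(i, j), max(i, j)) for (i, j) in n if i != j}
    t = sum((n[(i, j)] - n[(j, i)]) ** 2 / (n[(i, j)] + n[(j, i)])
            for (i, j) in pairs)
    return t, len(pairs)

# A clamped symmetric random walk on {0, 1, 2} is a birth-death chain,
# hence reversible, so T should be small relative to chi-squared(df).
rng = random.Random(0)
x, chain = 1, []
for _ in range(10000):
    chain.append(x)
    x = max(0, min(2, x + rng.choice((-1, 1))))
t, df = reversibility_stat(chain)
```

For an irreversible chain (e.g. one cycling 0 to 1 to 2 to 0 with high probability), the i-to-j and j-to-i counts diverge and T blows up.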
A cascading failure is a failure in a system of interconnected parts in which the breakdown of one element can lead to the subsequent collapse of the others. The aim of this paper is to introduce a simple combinatorial model for the study of cascading failures. In particular, with particle systems and Markov random fields in mind, we consider a network of interacting urns arranged over a lattice. Every urn is Pólya-like, and its reinforcement matrix is a function not only of time (time contagion) but also of the behavior of the neighboring urns (spatial contagion) and of a random component, which can represent either pure chance or the impact of exogenous factors. In this way a non-trivial dependence structure among the urns is built, and it is used to study default avalanches over the lattice. Thanks to its flexibility and its interesting probabilistic properties, the construction can be used to model various phenomena characterized by cascading failures, such as power grids and financial networks.
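A minimal simulation sketch (our own simplification, with invented reinforcement parameters rather than the paper's reinforcement matrices) captures the three ingredients: Pólya-type reinforcement, spatial contagion from lattice neighbours, and a random exogenous shock.

```python
import random

def step(urns, prev_failed, contagion=2.0, shock=0.1, rng=random):
    """One round on a circular lattice: urn i draws 'fail' with
    probability r_i/(r_i+g_i); its failure-side reinforcement grows
    with the neighbours' failures in the previous round (spatial
    contagion) plus an occasional random shock (exogenous factor)."""
    failed = []
    for i, (r, g) in enumerate(urns):
        fail = rng.random() < r / (r + g)
        failed.append(fail)
        neigh = prev_failed[i - 1] + prev_failed[(i + 1) % len(urns)]
        bonus = contagion * neigh + (shock if rng.random() < 0.5 else 0.0)
        urns[i] = (r + 1 + bonus, g) if fail else (r, g + 1)
    return failed

rng = random.Random(1)
urns = [(1.0, 1.0) for _ in range(20)]   # (failure balls, survival balls)
failed = [False] * 20
for _ in range(50):
    failed = step(urns, failed, rng=rng)
```

Because a failing urn's reinforcement depends on its neighbours' past failures, failures tend to cluster spatially, which is the avalanche mechanism the model is built to study.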
In this paper, we introduce generating networks of splicing processors (GNSPs for short), a formal language generating model related to networks of evolutionary processors and to accepting networks of splicing processors. We show that all recursively enumerable languages can be generated by GNSPs with only nine processors. We also show, by direct simulation, that two other variants of this computing model, in which the communication between processors is conducted in different ways, have the same computational power.
We analyze the asymptotic number of items chosen in a selection procedure. The procedure selects items whose rank among all previous applicants is within the best 100p percent of the number of previously selected items. We use analytic methods to obtain a succinct formula for the first-order asymptotic growth of the expected number of items chosen by the procedure.
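The procedure is easy to simulate (a sketch under the standard assumption that the relative rank of the t-th applicant among the first t is uniform, and assuming the first applicant is always selected to start the process).

```python
import random

def run_selection(n, p, rng):
    """Select the t-th applicant when its rank among the first t
    applicants is within the best 100p% of the number already
    selected; the first applicant is selected unconditionally."""
    selected = 0
    for t in range(1, n + 1):
        rank = rng.randint(1, t)      # relative rank is uniform on 1..t
        if selected == 0 or rank <= p * selected:
            selected += 1
    return selected

few = run_selection(10000, 0.2, random.Random(7))
many = run_selection(10000, 0.8, random.Random(7))   # same arrival ranks
```

With identical rank sequences a larger p can only select more items, and for p < 1 the count grows sublinearly in n, since each new selection requires beating a threshold proportional to the current count.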
A class of Markov chains that we call successively lumpable is specified, for which it is shown that the stationary probabilities can be obtained by successively computing the stationary probabilities of a propitiously constructed sequence of Markov chains. Each of the latter chains has a (typically much) smaller state space, which yields significant computational improvements. We discuss how the results for discrete-time Markov chains extend to semi-Markov processes and continuous-time Markov processes. Finally, we study applications of successively lumpable Markov chains to classical reliability and queueing models.
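Ordinary strong lumpability already illustrates the state-space reduction being exploited (the paper's successive lumping is more general; below is a hedged sketch with a hand-made 4-state chain that collapses to 2 lumped states).

```python
def stationary(P, iters=5000):
    """Approximate the stationary distribution by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def lump(P, partition):
    """Aggregate P over a strongly lumpable partition: every state in
    a block has the same total transition probability into each other
    block, so one representative row per block suffices."""
    return [[sum(P[block[0]][j] for j in target) for target in partition]
            for block in partition]

# 4 states, lumpable into blocks {0, 1} and {2, 3}:
P = [[0.20, 0.20, 0.30, 0.30],
     [0.20, 0.20, 0.30, 0.30],
     [0.25, 0.25, 0.25, 0.25],
     [0.25, 0.25, 0.25, 0.25]]
pi = stationary(P)
piQ = stationary(lump(P, [[0, 1], [2, 3]]))   # a 2-state problem instead of 4
```

The lumped chain delivers the block probabilities directly: pi[0] + pi[1] equals piQ[0], so the large-chain computation is replaced by a smaller one, which is the source of the computational gains.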
One of the most fundamental results in inventory theory is the optimality of the (s, S) policy for inventory systems with setup cost. This result is established under the key assumption of infinite ordering/production capacity. Several studies have shown that, when the ordering/production capacity is finite, the optimal policy for an inventory system with setup cost is very complicated and, indeed, only a partial characterization of the optimal policy is possible. In this paper, we consider a continuous-review production/inventory system with finite capacity and setup cost. Demand follows a Poisson process, and demand that cannot be satisfied upon arrival is backlogged. We show that the optimal control policy has a very simple structure when the holding/shortage cost rate is quasi-convex. We also develop efficient algorithms to compute the optimal control parameters.
We consider the problem of dynamic multi-skill routing in call centers. Calls from different customer classes are offered to the call center according to a Poisson process. The agents are grouped into pools according to their heterogeneous skill sets that determine the calls that they can handle. Each pool of agents serves calls with independent exponentially distributed service times. Arriving calls that cannot be served directly are placed in a buffer that is dedicated to the customer class. We obtain nearly optimal dynamic routing policies that are scalable with the problem instance and can be computed online. The algorithm is based on approximate dynamic programming techniques. In particular, we perform one-step policy improvement using a polynomial approximation to relative value functions. We compare the performance of this method with decomposition techniques. Numerical experiments demonstrate that our method outperforms leading routing policies and has close to optimal performance.
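The improvement step itself is standard and can be sketched on a toy MDP (a discounted two-state example of our own, with exact policy evaluation in place of the paper's polynomial approximation of relative value functions for the average-cost routing problem).

```python
def evaluate(policy, P, R, gamma=0.9, iters=500):
    """Iterative policy evaluation for a discounted MDP:
    V(s) <- R[s][a] + gamma * sum_t P[s][a][t] * V(t), a = policy[s]."""
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        V = [R[s][policy[s]]
             + gamma * sum(P[s][policy[s]][t] * V[t] for t in range(n))
             for s in range(n)]
    return V

def one_step_improvement(policy, P, R, gamma=0.9):
    """Act greedily against the base policy's value function."""
    V = evaluate(policy, P, R, gamma)
    n = len(P)
    return [max(range(len(R[s])),
                key=lambda a: R[s][a]
                + gamma * sum(P[s][a][t] * V[t] for t in range(n)))
            for s in range(n)]

# State 1 pays reward 2 forever; from state 0, action 1 pays -1 to reach it.
P = [[[1, 0], [0, 1]],   # P[state][action][next_state]
     [[0, 1], [0, 1]]]
R = [[0, -1], [2, 2]]
improved = one_step_improvement([0, 0], P, R)   # state 0's action changes to 1
```

In the call-center setting the same greedy step is taken online against an approximate value function, which is what makes the method scalable.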
The purpose of this study is twofold. First, we investigate further properties of second-order regular variation (2RV), including the preservation of 2RV under the composition operation and the generalized inverse transform, among others. Second, we derive second-order expansions of the tail probabilities of convolutions of non-independent and identically distributed (non-i.i.d.) heavy-tailed random variables, and establish second-order expansions of risk concentration under mild assumptions. The main results extend several results in the literature from the i.i.d. case to the non-i.i.d. case.
A class of labelled graphs is bridge-addable if, for all graphs G in the class and all vertices u and v in distinct connected components of G, the graph obtained by adding an edge between u and v is also in the class; the class is monotone if, for all G in the class and all subgraphs H of G, the graph H is also in the class. We show that for any bridge-addable, monotone class whose elements have vertex set {1, . . ., n}, the probability that a graph chosen uniformly at random from the class is connected is at least (1 − o_n(1))e^{−1/2}, where o_n(1) → 0 as n → ∞. This establishes the special case of a conjecture of McDiarmid, Steger and Welsh in which the condition of monotonicity is added. This result has also been obtained independently by Kang and Panagiotou.
The normalized importance sampling estimator allows the target density f to be known only up to a multiplicative constant. We indicate how it can be derived by a delta-method-based approximation of a Rao–Blackwellized acceptance–rejection estimator. Using additional terms in the delta method then results in a new estimator that likewise requires f to be known only up to a multiplicative constant. Numerical examples indicate that the new estimator usually outperforms the normalized importance sampling estimator in terms of mean squared error.
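The baseline estimator can be sketched in a few lines (a self-contained illustration with an arbitrary target and proposal of our choosing, not the paper's new estimator): since the weights w_i = f(x_i)/g(x_i) appear in both numerator and denominator, the normalizing constant of f cancels.

```python
import math
import random

def normalized_is(h, f_unnorm, sample_g, g_pdf, n, rng):
    """Self-normalized importance sampling:
    E_f[h(X)] ~= sum_i w_i h(x_i) / sum_i w_i, w_i = f_unnorm(x_i)/g(x_i)."""
    xs = [sample_g(rng) for _ in range(n)]
    ws = [f_unnorm(x) / g_pdf(x) for x in xs]
    return sum(w * h(x) for w, x in zip(ws, xs)) / sum(ws)

# Target: f proportional to exp(-x) on (0, inf), i.e. Exp(1) with the
# constant treated as unknown; proposal g = Exp(0.5).  True mean is 1.
rng = random.Random(42)
est = normalized_is(h=lambda x: x,
                    f_unnorm=lambda x: 3.7 * math.exp(-x),   # arbitrary constant
                    sample_g=lambda r: r.expovariate(0.5),
                    g_pdf=lambda x: 0.5 * math.exp(-0.5 * x),
                    n=20000, rng=rng)
# est is close to the true mean 1 despite the unknown normalizing constant
```

Changing the constant 3.7 to any other positive value leaves the estimate unchanged, which is exactly the property the abstract refers to.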