System adaptivity is increasingly demanded in high-performance embedded systems, particularly in multimedia systems-on-chip (SoC), owing to growing quality-of-service requirements. This paper presents a reactive control model introduced in Gaspard, our framework dedicated to SoC hardware/software co-design. The model aims to express adaptivity and reconfigurability in systems performing data-intensive computations. It is generic enough to describe different parts of an embedded system: for example, specifying at the functional level how different data-intensive algorithms are chosen according to computation modes, or expressing how hardware components are selected from a library of intellectual properties according to execution performance. We also present the transformation of this model into synchronous languages, enabling automatic code generation for formal verification based on techniques such as model checking and controller synthesis, as illustrated in the paper. This work, based on Model-Driven Engineering and the standard UML MARTE profile, has been implemented in Gaspard.
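As a minimal illustration of the kind of reactive control described above (mode-based selection among data-intensive algorithms), the following Python sketch uses invented mode names, events, and kernels; it is not Gaspard's actual notation or API:

```python
# Hypothetical kernels selected by computation mode (names are illustrative).
KERNELS = {
    "low_power": lambda frame: f"downscaled({frame})",
    "high_quality": lambda frame: f"full_res({frame})",
}

class ModeController:
    """Reactive controller: events trigger mode transitions, and the current
    mode determines which data-intensive kernel processes each frame."""

    def __init__(self, mode="low_power"):
        self.mode = mode

    def on_event(self, event):
        # Reactive transition: an environment event reconfigures the system.
        if event == "battery_low":
            self.mode = "low_power"
        elif event == "plugged_in":
            self.mode = "high_quality"

    def process(self, frame):
        return KERNELS[self.mode](frame)
```

A controller of this shape is what makes the configuration space finite and explicit, which is precisely what enables model checking and controller synthesis downstream.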
Video streaming over mobile wireless networks has become popular in recent years. High video quality relies on large bandwidth provisioning, which in turn reduces the number of users a wireless network can support. Effective bandwidth utilization is therefore a crucial issue, as bandwidth in wireless environments is precious and limited. NGN quality-of-service mechanisms should be designed to reduce the impact of traffic burstiness on buffer management. For this reason, we propose an active dropping mechanism for effective bandwidth utilization. We use the scalable video coding extension of the H.264/AVC standard to provide different video quality to users of different service levels. In the proposed mechanism, when the network load exceeds a threshold, data from the enhancement layers of low-service-level users is dropped, with a dropping probability that varies with the network load. With this mechanism, the base station increases system capacity, and users obtain better service quality under heavy load. We also design several methods to adjust the threshold dynamically. The proposed mechanism thus provides better quality when the network is congested.
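The load-dependent dropping rule can be sketched as follows. This is a generic interpretation of the abstract, assuming a linear ramp of the dropping probability above the threshold; the packet fields and threshold value are illustrative, not the paper's:

```python
import random

def drop_probability(load, threshold, max_load=1.0):
    """Dropping probability is 0 below the threshold and ramps linearly
    with network load above it (an assumed, illustrative policy)."""
    if load <= threshold:
        return 0.0
    return min(1.0, (load - threshold) / (max_load - threshold))

def should_drop(packet, load, threshold=0.7):
    """Only enhancement-layer data of low-service-level users is eligible
    for dropping; base-layer data is always delivered."""
    if packet["layer"] == "base" or packet["service_level"] != "low":
        return False
    return random.random() < drop_probability(load, threshold)
```

Protecting the base layer is what keeps minimal decodable quality for every user while the enhancement layers absorb the congestion.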
The emerging Grid is extending the scope of resources to mobile devices and sensors connected through loosely coupled networks. The number of mobile device users is increasing dramatically, and mobile devices provide capabilities, such as location awareness, that fixed Grid resources normally lack. Nevertheless, compared with fixed Grid resources, mobile devices exhibit inferior characteristics such as poor performance, limited battery life, and unreliable communication. In particular, intermittent disconnection from the network owing to user movement degrades performance and makes synchronous message delivery inefficient and troublesome to adopt in a mobile Grid. This paper presents a mobile Grid system architecture based on mobile agents that supports location management and asynchronous message delivery in a multi-domain proxy environment. We propose a novel balanced scheduling algorithm that takes user mobility into account. We analyzed user mobility patterns to quantitatively measure resource availability, which we classify into three types: full availability, partial availability, and unavailability. We also propose an adaptive load-balancing technique that classifies mobile devices into nine groups depending on availability and uses a multi-level feedback queue to handle job-type changes adaptively. The experimental results show that our scheduling algorithm provides superior execution times compared with scheduling that ignores mobility and adaptive load-balancing.
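One plausible reading of the nine-group classification is a 3 × 3 grid of availability type against load tier. The sketch below assumes that structure and invents the numeric thresholds; the paper's actual grouping criteria may differ:

```python
def classify_availability(connected_fraction):
    """Map a device's observed connectivity (fraction of time reachable)
    to the three availability types named in the abstract.
    The numeric cutoffs here are illustrative assumptions."""
    if connected_fraction >= 0.9:
        return "full"
    if connected_fraction > 0.1:
        return "partial"
    return "unavailable"

def device_group(connected_fraction, load):
    """One of 3 x 3 = 9 scheduling groups: availability type x load tier."""
    tier = "light" if load < 1 / 3 else ("medium" if load < 2 / 3 else "heavy")
    return (classify_availability(connected_fraction), tier)
```

A scheduler can then prefer ("full", "light") devices for long-running jobs and route short or interruptible work to partially available ones.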
Let H be a k-uniform hypergraph on n vertices, where n is a sufficiently large integer not divisible by k. We prove that if the minimum (k − 1)-degree of H is at least ⌊n/k⌋, then H contains a matching with ⌊n/k⌋ edges. This confirms a conjecture of Rödl, Ruciński and Szemerédi [13], who proved that minimum (k − 1)-degree n/k + O(log n) suffices. More generally, we show that H contains a matching of size d if its minimum codegree is at least d, where d < n/k; this is also best possible.
The integer a_n appears as the main term in a weighted average of the number of orbits in a particular quasihyperbolic automorphism of a 2n-torus, which has applications to ergodic and analytic number theory. The combinatorial structure of a_n is also of interest, as the ‘signed’ number of ways in which 0 can be represented as the sum of ε_j·j for −n ≤ j ≤ n (with j ≠ 0), with ε_j ∈ {0, 1}. Our result answers a question of Thomas Ward (no relation to the fourth author) and confirms a conjecture of Robert Israel and Steven Finch.
Bergelson and Tao have recently proved that if G is a D-quasi-random group, and x, g are drawn uniformly and independently from G, then the quadruple (g, x, gx, xg) is roughly equidistributed in the subset of G⁴ defined by the constraint that the last two coordinates lie in the same conjugacy class. Their proof gives only a qualitative version of this result. The present note gives a rather more elementary proof which improves this to an explicit polynomial bound in D⁻¹.
The size-Ramsey number $\hat{r}(F)$ of a graph F is the smallest integer m such that there exists a graph G on m edges with the property that every colouring of the edges of G with two colours yields a monochromatic copy of F. In 1983, Beck provided a beautiful argument showing that $\hat{r}(P_n)$ is linear, solving a problem of Erdős. In this note, we provide another proof of this fact that actually gives a better bound, namely $\hat{r}(P_n) < 137n$ for n sufficiently large.
We establish an improved upper bound for the number of incidences between m points and n circles in three dimensions. The previous best known bound, originally established for the planar case and later extended to any dimension ≥ 2, is O*(m^{2/3}n^{2/3} + m^{6/11}n^{9/11} + m + n), where the O*(⋅) notation hides polylogarithmic factors. Since all the points and circles may lie on a common plane (or sphere), it is impossible to improve the bound in ℝ³ without first improving it in the plane.
Nevertheless, we show that if the set of circles is required to be ‘truly three-dimensional’ in the sense that no sphere or plane contains more than q of the circles, for some q ≪ n, then for any ϵ > 0 the bound can be improved to
\[O\bigl(m^{3/7+\epsilon}n^{6/7} + m^{2/3+\epsilon}n^{1/2}q^{1/6} + m^{6/11+\epsilon}n^{15/22}q^{3/22} + m + n\bigr).\]
For various ranges of the parameters (e.g., when m = Θ(n) and q = o(n^{7/9})), this bound is smaller than the lower bound Ω*(m^{2/3}n^{2/3} + m + n), which holds in two dimensions.
We present several extensions and applications of the new bound.
(i) For the special case where all the circles have the same radius, we obtain the improved bound O(m^{5/11+ϵ}n^{9/11} + m^{2/3+ϵ}n^{1/2}q^{1/6} + m + n).
(ii) We present an improved analysis that removes the subpolynomial factors from the bound when m = O(n^{3/2−ϵ}) for any fixed ϵ > 0.
(iii) We use our results to obtain the improved bound O(m^{15/7}) for the number of mutually similar triangles determined by any set of m points in ℝ³.
Our result is obtained by applying the polynomial partitioning technique of Guth and Katz using a constant-degree partitioning polynomial (as was also recently used by Solymosi and Tao). We also rely on various additional tools from analytic, algebraic, and combinatorial geometry.
Though the study of grounding is still in its early stages, Kit Fine, in “The Pure Logic of Ground”, has made a seminal attempt at formalization. Formalization of this sort is supposed to bring clarity and precision to our theorizing, as it has to the study of other metaphysically important phenomena, like modality and vagueness. Unfortunately, as I will argue, Fine ties the formal treatment of grounding to the obscure notion of a weak ground. The obscurity of weak ground, together with its centrality in Fine’s system, threatens to undermine the extent to which this formalization offers clarity and precision. In this paper, I show how to overcome this problem. I describe a system, the logic of strict ground (LSG), and demonstrate its adequacy; I specify a translation scheme for interpreting Fine’s weak grounding claims; I show that the interpretation verifies all of the principles of Fine’s system; and I show that derivability in Fine’s system can be exactly characterized in terms of derivability in LSG. I conclude that Fine’s system is reducible to LSG.
is said to be in lexicographic order if its columns are in lexicographic order (where character significance decreases from top to bottom, i.e., either a_k < a_{k+1}, or b_k ≤ b_{k+1} when a_k = a_{k+1}). A length-ℓ (strictly) increasing subsequence of α_n is a set of indices i_1 < i_2 < ⋯ < i_ℓ such that a_{i_1} < a_{i_2} < ⋯ < a_{i_ℓ} and b_{i_1} < b_{i_2} < ⋯ < b_{i_ℓ}. We are interested in the statistics of the length of a longest increasing subsequence of α_n chosen according to ${\cal D}_n$, for different families of distributions ${\cal D} = ({\cal D}_{n})_{n\in\mathbb{N}}$, and when n goes to infinity. This general framework encompasses well-studied problems such as the so-called longest increasing subsequence problem, the longest common subsequence problem, and problems concerning directed bond percolation models, among others. We define several natural families of distributions and characterize the asymptotic behaviour of the length of a longest increasing subsequence chosen according to them. In particular, we consider generalizations to d-row arrays as well as symmetry-restricted two-row arrays.
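Concretely, the length statistic studied here can be computed for any sampled two-row array by a standard O(n²) dynamic program over the columns (generic code, not the paper's method; columns are represented as (a_i, b_i) pairs):

```python
def longest_increasing_subsequence(pairs):
    """Length of the longest subsequence of columns (taken left to right)
    that is strictly increasing in BOTH rows, via O(n^2) dynamic programming:
    best[j] = longest such chain ending at column j."""
    n = len(pairs)
    best = [1] * n
    for j in range(n):
        for i in range(j):
            if pairs[i][0] < pairs[j][0] and pairs[i][1] < pairs[j][1]:
                best[j] = max(best[j], best[i] + 1)
    return max(best, default=0)
```

For large n one would replace this with an O(n log n) patience-sorting variant, but the quadratic version makes the definition of the statistic explicit.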
Knowing the symmetries of a polyhedron can be very useful for the analysis of its structure as well as for practical polyhedral computations. In this note, we study symmetry groups preserving the linear, projective and combinatorial structure of a polyhedron. In each case we give algorithmic methods to compute the corresponding group and report on practical experience. For practical purposes the linear symmetry group is the most important, as its computation can be directly translated into a graph automorphism problem. We indicate how to compute integral subgroups of the linear symmetry group that are used, for instance, in integer linear programming.
We give the complete list of possible torsion subgroups of elliptic curves with complex multiplication over number fields of degree 1–13. Additionally, we describe the algorithm used to compute these torsion subgroups and its implementation.
This paper presents the use of data clustering methods applied to the analysis results of a design-stage, functional failure reasoning tool. A system simulation using qualitative descriptions of component behaviors and a functional reasoning tool are used to identify the functional impact of a large set of potential single and multiple fault scenarios. The impact of each scenario is collected as the set of categorical function “health” states for each component-level function in the system. This data represents the space of potential system states. The clustering and statistical tools presented in this paper are used to identify patterns in this system state space. These patterns reflect the underlying emergent failure behavior of the system. Specifically, two data analysis tools are presented and compared. First, a modified k-means clustering algorithm is used with a distance metric of functional effect similarity. Second, a statistical approach known as latent class analysis is used to find an underlying probability model of potential system failure states. These tools are used to reason about how the system responds to complex fault scenarios and to assist in identifying potential design changes for fault mitigation. As computational power increases, the ability to reason with large sets of data becomes as critical as the analysis methods used to collect that data. The goal of this work is to provide complex system designers with a means of using early design simulation data to identify and mitigate potential emergent failure behavior.
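Since the states are categorical, a k-means variant over them typically replaces the mean with a per-position mode and the Euclidean metric with a mismatch count. The sketch below assumes that standard "k-modes" construction as a stand-in for the paper's modified algorithm; the health-state labels are invented:

```python
import random
from collections import Counter

def functional_distance(u, v):
    """Functional-effect similarity as a distance: the number of
    component-level functions whose categorical health state differs."""
    return sum(a != b for a, b in zip(u, v))

def k_modes(states, k, iters=20, seed=0):
    """k-means adapted to categorical health-state vectors: cluster centres
    are per-function modes rather than means (a generic sketch)."""
    rng = random.Random(seed)
    centroids = rng.sample(states, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in states:
            nearest = min(range(k), key=lambda c: functional_distance(s, centroids[c]))
            clusters[nearest].append(s)
        # Recompute each centre as the column-wise mode; keep an empty
        # cluster's old centre unchanged.
        centroids = [
            tuple(Counter(col).most_common(1)[0][0] for col in zip(*cl)) if cl else centroids[c]
            for c, cl in enumerate(clusters)
        ]
    return centroids, clusters
```

Each resulting cluster then groups fault scenarios with similar functional effects, which is the pattern structure the paper reasons over.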
During the design of complex systems, a design process may be subjected to stochastic disruptions, interruptions, and changes, which can be described broadly as “design impulses.” These impulses can have a significant impact on the transient response and converged equilibrium for the design system. We distinguish this research by focusing on the interactions between local and architectural impulses in the form of designer mistakes and dissolution, division, and combination impulses, respectively, for a distributed design case study. We provide statistical support for the “parallel character hypothesis,” which asserts that parallel arrangements generally best mitigate dissolution and division impulses. We find that local impulses tend to slow convergence, but systems also subjected to dissolution or division impulses still favor parallel arrangements. We statistically uphold the conclusion that the strategy to mitigate combination impulses is unaffected by the presence of local impulses.
The systems engineering V (SE-V) is an established process model to guide the development of complex engineering projects (INCOSE, 2011). The SE-V process involves decomposition and integration of system elements through a sequence of tasks that produce both a system design and its testing specifications, followed by successive levels of build, integration, and test activities. This paper presents a method to improve SE-V implementation by mapping multilevel data into design structure matrix (DSM) models. DSM is a representation methodology for identifying interactions between either components or tasks associated with a complex engineering project (Eppinger & Browning, 2012). Multilevel refers to SE-V data on complex interactions that are germane at multiple levels of analysis (e.g., component versus subsystem), within a single phase or across multiple time phases (e.g., early or late in the SE-V process). This method extends conventional DSM representation schema by incorporating multilevel test coverage data as vectors into the off-diagonal cells. These vectors provide a richer description of potential interactions between product architecture and SE-V integration test tasks than conventional domain mapping matrices. We illustrate this method with data from a complex engineering project in the offshore oil industry. Data analysis identifies potential for unanticipated outcomes based on incomplete coverage of SE-V interactions during integration tests. In addition, assessment of multilevel features using maximum and minimum function queries isolates all the interfaces that are associated with either early or late revelations of integration risks based on the planned suite of SE-V integration tests.
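The vector-valued off-diagonal cells and the min-function query can be sketched as follows. The interface names, coverage values, and test ordering are illustrative assumptions, not the paper's data:

```python
# DSM where each off-diagonal cell holds a vector of test-coverage values,
# one entry per planned SE-V integration test, ordered from earliest to latest.
dsm = {
    ("pump", "valve"): [0, 2, 1],    # interface first covered by test index 1
    ("valve", "sensor"): [1, 0, 0],  # interface covered by the earliest test
    ("pump", "sensor"): [0, 0, 0],   # interface never covered: integration risk
}

def uncovered_interfaces(dsm):
    """Interfaces with no coverage in any planned test (incomplete coverage)."""
    return [iface for iface, cov in dsm.items() if not any(cov)]

def late_risk_interfaces(dsm, phase):
    """Min-function query: interfaces whose earliest covering test occurs at
    or after `phase`, i.e., whose integration risks are revealed late."""
    late = []
    for iface, cov in dsm.items():
        covering = [i for i, c in enumerate(cov) if c > 0]
        if covering and min(covering) >= phase:
            late.append(iface)
    return late
```

The symmetric max-function query (latest covering test) would isolate interfaces whose risks surface early and are never revisited.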