Ambient intelligence (AmI) proposes pervasive information systems composed of autonomous agents embedded within the environment that, in orchestration, complement human activity in an intelligent manner. As such, it is an interesting and challenging application area for many computer science fields and approaches. A critical issue in such application scenarios is that the agents must be able to acquire, exchange, and evaluate knowledge about the environment, its users, and their activities. Knowledge shared among the agents in such systems may be contextually dependent, ambiguous, and incomplete, so conflicts naturally arise that the agents need to resolve autonomously. In this survey, we relate AmI to the area of knowledge representation and reasoning (KR), where conflict resolution has been studied for a long time. We examine a number of applicable KR approaches: context modelling, multi-context systems, belief revision, ontology evolution and debugging, argumentation, preferences, and paraconsistent reasoning. Our main goal is to describe the state of the art in these fields and to draw researchers' attention to the important theoretical issues and practical challenges that must still be resolved before the results from KR can be reused in AmI systems and similarly complex, demanding applications.
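To make the role of preferences in conflict resolution concrete, the following minimal Python sketch (our illustration, not a method from the survey) resolves conflicting agent observations by a trust ordering over sources; the agent names, trust levels, and the most-trusted-then-most-recent tie-breaking rule are illustrative assumptions.

```python
# Minimal sketch (not from the survey): preference-based conflict
# resolution between agents' observations. Agent names, trust levels,
# and the tie-breaking rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    agent: str        # reporting agent
    fact: str         # proposition, e.g. "room_occupied"
    value: bool       # the agent's claim
    trust: int        # preference level of the source (higher wins)
    timestamp: float  # when the observation was made

def resolve(observations):
    """For each fact, keep the claim from the most trusted source,
    breaking ties by recency."""
    resolved = {}
    for obs in observations:
        best = resolved.get(obs.fact)
        if best is None or (obs.trust, obs.timestamp) > (best.trust, best.timestamp):
            resolved[obs.fact] = obs
    return {fact: obs.value for fact, obs in resolved.items()}

if __name__ == "__main__":
    reports = [
        Observation("motion_sensor", "room_occupied", True, trust=1, timestamp=10.0),
        Observation("camera", "room_occupied", False, trust=2, timestamp=9.5),
    ]
    print(resolve(reports))  # {'room_occupied': False} -- the camera is preferred
```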
A large number of data providers publish and connect their structured data on the Web as linked data; the Web of data thus becomes a global data space. In this paper, we first give an overview of query processing approaches used in this interlinked and distributed environment, and then focus on federated query processing on linked data. We provide detailed and clear insight into the data source selection, join methods, and query optimization methods of existing query federation engines. Furthermore, we present a qualitative comparison of these engines, complemented by a comparison of the measured metrics of each engine, in order to point out the major strengths of each one. Finally, we discuss the major challenges of federated query processing on linked data.
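As an illustration of the data source selection step, the sketch below uses SPARQL ASK probes to decide which endpoints can contribute to each triple pattern, a technique employed by several federation engines; the endpoint URLs and triple patterns are placeholders, and caching and error handling are omitted.

```python
# Minimal sketch (not tied to any specific engine): ASK-based data source
# selection, a common first step in federated SPARQL query processing.
# Endpoint URLs and triple patterns below are placeholders.

import json
import urllib.parse
import urllib.request

def ask(endpoint: str, triple_pattern: str) -> bool:
    """Send a SPARQL ASK query and report whether the endpoint can
    contribute bindings for the given triple pattern."""
    query = f"ASK {{ {triple_pattern} }}"
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("boolean", False)

def select_sources(endpoints, triple_patterns):
    """For each triple pattern, keep only the endpoints that answer ASK true."""
    return {tp: [ep for ep in endpoints if ask(ep, tp)] for tp in triple_patterns}

# Example (placeholder endpoints, so this is left commented out):
# sources = select_sources(
#     ["https://example.org/sparql/a", "https://example.org/sparql/b"],
#     ["?film <http://example.org/director> ?d", "?d <http://example.org/name> ?n"],
# )
```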
Statistical machine translation (SMT) is gaining interest because it can easily be adapted to any language pair. One of the main challenges in SMT is domain adaptation: translation performance drops when testing conditions deviate from training conditions. A growing body of research addresses this challenge, focusing on exploiting whatever material is available. This paper provides an overview of research that copes with the domain adaptation challenge in SMT.
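One standard domain adaptation technique in this field (a common method, not necessarily one highlighted by this overview) is linear interpolation of in-domain and out-of-domain models; the sketch below mixes two toy unigram language models, with the corpora and the weight lam = 0.7 chosen purely for illustration.

```python
# Minimal sketch of a standard domain adaptation technique: linear
# interpolation of an in-domain and an out-of-domain language model,
#   p(w) = lam * p_in(w) + (1 - lam) * p_out(w).
# The toy corpora and the weight lam = 0.7 are illustrative assumptions.

from collections import Counter

def unigram_model(corpus):
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    return lambda w: counts[w] / total if total else 0.0

def interpolate(p_in, p_out, lam=0.7):
    """Mix two models; lam is typically tuned on held-out in-domain data."""
    return lambda w: lam * p_in(w) + (1 - lam) * p_out(w)

p_in = unigram_model(["the patient received a dose", "the dose was adjusted"])
p_out = unigram_model(["the parliament adopted the resolution"])
p = interpolate(p_in, p_out)
print(p("dose"), p("resolution"))  # in-domain word scores higher under p
```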
Folksonomy gives its users the liberty to freely assign keywords of their choosing as tags, and this is the main reason behind its popularity. Beyond this freedom, the system also reflects the collective intelligence of the crowd. However, the same freedom can degrade the quality of a folksonomy, and that quality must remain consistently high rather than degrade with the passage of time. In this paper, we present a brief survey of the research efforts intended to maintain a quality-protected folksonomy. We organize the paper around four aspects of the problem: selection of quality tags, tag management features provided by folksonomy applications, folksonomy cleaning, and interoperability of tags across platforms. We conclude our review with some interesting research topics that need to be explored further. Our conclusions will be relevant and beneficial for engineers and designers who aim to design and maintain a quality-protected folksonomy.
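As a small illustration of the folksonomy cleaning aspect, the sketch below normalizes raw tags and filters rare one-off tags; the normalization rules, the synonym map, and the frequency threshold are illustrative assumptions, not a method from the surveyed work.

```python
# Minimal sketch of folksonomy cleaning: normalize raw user tags so that
# trivial variants ("Web 2.0", "web_2.0", "WEB-2.0") collapse to one tag.
# The normalization rules, synonym map, and threshold are illustrative.

import re
from collections import Counter

SYNONYMS = {"folksonomies": "folksonomy", "tagging-systems": "tagging"}

def normalize(tag: str) -> str:
    tag = tag.strip().lower()
    tag = re.sub(r"[\s_]+", "-", tag)      # unify separators
    tag = re.sub(r"[^a-z0-9.-]", "", tag)  # drop stray punctuation
    return SYNONYMS.get(tag, tag)

def clean(raw_tags, min_count=2):
    """Normalize tags and keep only those used at least min_count times;
    rare one-off tags are a common source of folksonomy noise."""
    counts = Counter(normalize(t) for t in raw_tags)
    return {t: c for t, c in counts.items() if c >= min_count}

print(clean(["Web 2.0", "web_2.0", "WEB-2.0", "Folksonomies", "folksonomy", "typos!"]))
# {'web-2.0': 3, 'folksonomy': 2}
```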
We propose a new class of modular-robotic structures intended to produce forces that scale with the number of modules. We adopt the concept of a spherical catom and extend it with a new connection type that is relatively strong but static. We examine analytically and numerically the mechanical properties of two collective-actuator designs. The simulations are based on the discrete element method (DEM), with friction and elastic deformations taken into account. One of the actuators is shown to generate forces proportional to its volume. This property seems necessary for building modular structures of useful strength and dimensions.
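To give a flavour of the DEM machinery involved, the sketch below computes the normal contact force between two spheres under a linear spring-dashpot model; the stiffness, damping, and geometry are illustrative assumptions, and the paper's actual contact model, which includes friction and elastic deformations, is richer than this.

```python
# Minimal sketch of a DEM normal-contact force between two spheres,
# using a linear spring-dashpot model. Stiffness, damping, radii, and
# positions are illustrative assumptions.

import math

def normal_contact_force(p1, p2, v1, v2, r1, r2, stiffness=1e4, damping=5.0):
    """Force on sphere 1 due to overlap with sphere 2 (zero if apart)."""
    dx = [a - b for a, b in zip(p1, p2)]           # vector from 2 to 1
    dist = math.sqrt(sum(d * d for d in dx))
    overlap = (r1 + r2) - dist
    if overlap <= 0 or dist == 0:
        return [0.0, 0.0, 0.0]
    n = [d / dist for d in dx]                     # unit normal, 2 -> 1
    rel_vn = sum((a - b) * ni for a, b, ni in zip(v1, v2, n))
    fmag = stiffness * overlap - damping * rel_vn  # spring minus dashpot
    return [max(fmag, 0.0) * ni for ni in n]

# Two unit spheres overlapping by 0.1 along x, moving towards each other:
print(normal_contact_force([0, 0, 0], [1.9, 0, 0],
                           [0.1, 0, 0], [-0.1, 0, 0], 1.0, 1.0))
```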
Proving bounds on the resource consumption of a program by statically analyzing its source code is an important and well-studied problem. Automatic approaches for numeric programs with side effects usually apply abstract interpretation-based invariant generation to derive bounds on loops and recursion depths of function calls. This article presents an alternative approach to resource-bound analysis for numeric and heap-manipulating programs that uses type-based amortized resource analysis. As a first step towards the analysis of imperative code, the technique is developed for a first-order ML-like language with unsigned integers and arrays. The analysis automatically derives bounds that are multivariate polynomials in the numbers and the lengths of the arrays in the input. Experiments with example programs demonstrate two main advantages of amortized analysis over current abstract interpretation-based techniques. For one thing, amortized analysis can handle programs with non-linear intermediate values like f((n + m)²). For another thing, amortized analysis is compositional and works naturally for compound programs like f(g(x)).
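To see why non-linear intermediate values defeat simple interval-style invariants, consider the following illustrative program (ours, not one of the article's benchmarks): the cost of g is (n + m)², a multivariate polynomial in the inputs, which is exactly the shape of bound the amortized analysis derives, and the f(g(x))-style composition is handled by the same mechanism.

```python
# Illustrative program (ours, not from the article's benchmarks) whose
# cost is the multivariate polynomial (n + m)**2: the kind of bound the
# amortized analysis derives automatically. Interval-based invariant
# generation struggles here because the intermediate value is non-linear.

def f(k: int) -> int:
    """Cost linear in its argument: k loop iterations ('ticks')."""
    ticks = 0
    for _ in range(k):
        ticks += 1           # one unit of resource per iteration
    return ticks

def g(n: int, m: int) -> int:
    square = (n + m) ** 2    # non-linear intermediate value
    return f(square)         # compositional use, as in f(g(x))

print(g(3, 4))  # 49 ticks, matching the bound (n + m)**2
```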
In Interactive realizability for second-order Heyting Arithmetic with EM₁ and SK₁ (the excluded middle and Skolem axioms restricted to Σ⁰₁-formulas), realizers are written in a classical version of Girard's System F. Since the usual reducibility semantics does not apply to such a system, we introduce a constructive forcing/reducibility semantics: though realizers are not computable functionals in the sense of Girard, they can be forced to be computable. We apply this semantics to show how to extract witnesses for realizable Π⁰₂-formulas. In particular, a constructive and efficient method is introduced. It is based on a new ‘(state-extending-continuation)-passing-style translation’ whose properties are described with the constructive forcing/reducibility semantics.
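For orientation, witness extraction for a Π⁰₂-formula has the following standard shape (general background, not the paper's specific construction):

```latex
% Standard shape of witness extraction for a \Pi^0_2-formula
% (general background, not the paper's specific construction):
% from a realizer r of \forall x\,\exists y\, P(x, y), with P decidable,
% one extracts a computable f such that \forall x\, P(x, f(x)).
\[
  r \Vdash \forall x\, \exists y\, P(x, y)
  \quad\leadsto\quad
  f \text{ computable, } \forall x\, P(x, f(x)).
\]
```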
We prove the conservation theorem for differential nets – the graph-theoretical syntax of the differential extension of Linear Logic (Ehrhard and Regnier's DiLL). The conservation theorem states that the property of having infinite reductions (here, infinite chains of cut-elimination steps) is preserved by non-erasing steps. This turns the quest for strong normalisation (SN) into one for non-erasing weak normalisation (WN), and indeed we use this result to prove SN of simply typed DiLL (with promotion). Along the way to the theorem we obtain a number of additional results of independent interest, such as a standardisation theorem and a slightly modified system of nets, DiLL∂ϱ.
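In symbols, the theorem has the following shape (our paraphrase of the statement above):

```latex
% Shape of the conservation theorem (our paraphrase of the statement
% above): writing \pi \in \infty for "\pi has an infinite chain of
% cut-elimination steps" and \to_{\neg e} for a non-erasing step,
\[
  \pi \in \infty \ \text{ and }\ \pi \to_{\neg e} \pi'
  \;\Longrightarrow\;
  \pi' \in \infty,
  \qquad \text{hence} \qquad
  \mathrm{WN}_{\neg e} \subseteq \mathrm{SN}.
\]
```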
‘Innovation’ has become a buzzword in academic papers, news articles, and book titles, but it is variously defined and is often conflated with ‘invention’ or ‘design’. A consensus understanding of the interrelationships among the concepts and activities pertaining to innovation is needed to guide collective action for innovation. This paper proposes a united view of the innovation process, which advocates uniting the complementary (1) science, (2) design, and (3) entrepreneurship sub-processes of innovation. The shared creative, uncertain, and costly nature of these three processes also implies an opportunity to leverage design science to understand and guide the science and entrepreneurship processes. This paper describes the benefits, major challenges, and actionable strategies for uniting science, design, and entrepreneurship as sub-processes of innovation, with a few detailed real-life examples. The variety of the cases and examples shows that the science, design, and entrepreneurship sub-processes can be effectively united to different extents, within and across organizations and innovation ecosystems.
For each of the notions of hypergraph quasirandomness that have been studied, we identify a large class of hypergraphs F so that every quasirandom hypergraph H admits a perfect F-packing. An informal statement of a special case of our general result for 3-uniform hypergraphs is as follows. Fix an integer r ⩾ 4 and 0 < p < 1. Suppose that H is an n-vertex triple system with r|n and the following two properties:
• for every graph G with V(G) = V(H), the proportion of triangles in G that are also edges of H is at least p,
• for every vertex x of H, the link graph of x is a quasirandom graph with density at least p.
Then H has a perfect K_r^(3)-packing. Moreover, we show that neither of the hypotheses above can be weakened, so in this sense our result is tight. A similar conclusion for this special case can be proved using Keevash's Hypergraph Blow-up Lemma, under a slightly stronger hypothesis on H.
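Stated compactly (our symbolic rendering of the special case above, not a verbatim statement from the paper):

```latex
% Our symbolic rendering of the special case stated above.
% H is an n-vertex 3-uniform hypergraph, r >= 4 is fixed, 0 < p < 1,
% and r divides n. Hypotheses:
%   (i)  for every graph G with V(G) = V(H), at least a p-proportion
%        of the triangles of G are edges of H;
%   (ii) the link graph of every vertex of H is quasirandom with
%        density at least p.
\[
  \text{(i)} \;\wedge\; \text{(ii)}
  \;\Longrightarrow\;
  H \text{ admits a perfect } K_r^{(3)}\text{-packing}.
\]
```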
Image inpainting is a mathematically highly ill-posed process. Once something is completely lost, we can never know for sure what was there – unless we knew the image before it was damaged or we can travel back in time and have a look. This means that image inpainting does not, in general, have a unique solution. Since we are intelligent human beings, we can narrow down our search by using roughly two main strategies: our natural visual perception and experience, and our historical knowledge (almost like travelling back in time) about the image and its hole. Both strategies are crucial for modelling the desired inpainting process and will determine the inpainting methods that we consider in subsequent chapters.
Historical knowledge of the image can take many forms and depends very much on the kind of application. In art restoration, the known characteristics and techniques of the painter, intact illustrations of a similar scene by different artists (as is the case for many religious paintings from the eighteenth century), and knowledge about general painting techniques, colour mixtures, and materials used at the time the painting was created have all been used. In video inpainting, knowledge from earlier frames can be used to extrapolate in time into later frames where damage appears. In medical imaging applications, anatomical knowledge of a template human body is used, and similarly, there are many other applications in which geometrical knowledge about the shapes of the objects one is looking for can be used (compare the application to the inpainting of satellite images of roads in Section 9.2). This part of the interpolation process depends on the context of the interpolation.
Visual perception, however, gives a context-free continuation of an incomplete scene. It constitutes our – either natural or learned – ability to automatically interpolate broken or occluded structures. This automatic continuation follows certain laws.
Our purpose in writing this book is to provide a gentle introduction to a subject that is enjoying a surge in interest. We believe that the subject is fascinating in its own right, but the increase in interest can be attributed to several factors. One factor is the realization that networks are “everywhere.” From social networks such as Facebook, the World Wide Web, and the Internet to the complex interactions between proteins in the cells of our bodies, we face the challenge of understanding their structure and development. By and large, natural networks grow in an unpredictable manner, and this is often modeled by a random construction. Another factor is the realization by computer scientists that NP-hard problems are often easier to solve than their worst-case complexity suggests, and that an analysis of running times on random instances can be informative.
History
Random graphs were used by Erdős [274] to give a probabilistic construction of a graph with large girth and large chromatic number. It was only later that Erdős and Rényi began a systematic study of random graphs as objects of interest in their own right. Early on they defined the random graph G_{n,m} and founded the subject. Often neglected in this story is the contribution of Gilbert [367], who introduced the model G_{n,p}, but clearly the credit for getting the subject started goes to Erdős and Rényi. Their seminal series of papers [275], [277], [278], [279] and, in particular, [276] on the evolution of random graphs laid the groundwork for other mathematicians to become involved in studying properties of random graphs.
In the early eighties the subject was beginning to blossom, and it received a boost from two sources. First was the publication of the landmark book of Béla Bollobás [130] on random graphs. Around the same time, in 1983, the Discrete Mathematics group at Adam Mickiewicz University began a series of conferences. The series continues biennially to this day and attracts more and more participants.
The next important event in the subject was the start of the journal Random Structures and Algorithms in 1990 followed by Combinatorics, Probability and Computing a few years later. These journals provided a dedicated outlet for work in the area and are flourishing today.