Most books on coding and information theory are written for readers who already have a good background in probability and random processes. It is therefore hard to find a ready-to-use textbook on these two subjects suitable for engineering students at the freshman level, or for non-engineering majors who are interested in knowing, at least conceptually, how information is encoded and decoded in practice and the theories behind it. Since communications have become part of modern life, such knowledge is of ever greater practical significance. For this reason, when our school asked us to offer a preliminary course in coding and information theory for students without any engineering background, we saw this as an opportunity and initiated the plan to write a textbook.
In preparing this material, we hope that, in addition to the aforementioned purpose, the book can also serve as a beginner's guide that inspires and attracts students to enter this interesting area. The material covered in this book has been carefully selected to keep the amount of background mathematics and electrical engineering to a minimum. At most, simple calculus plus a little probability theory is used here, and anything beyond that is developed as needed. Its first version was used as the textbook for the 2009 summer freshman course Conversion Between Information and Codes: A Historical View at National Chiao Tung University, Taiwan. The course was attended by 47 students, including 12 from departments other than electrical engineering.
In the previous chapters, we discussed the history and application of modern positioning systems that enable the delivery of location-based services (LBS). In this chapter, we shift our attention to the fundamental positioning principles used in these systems. We begin by presenting the location stack, a model of location-aware systems, and identify the focus of this book (Section 3.1). We then proceed to discuss the most commonly used techniques for computing the position of mobile receivers. Much as in celestial navigation, modern positioning systems often employ a set of references with known locations for position computation. We discuss the different positioning methods, differentiated by the type of references and signal measurements used (Sections 3.2 to 3.4). In addition to these techniques, which generally employ wireless signals, we also briefly review dead reckoning (Section 3.6) and computer-based positioning (Section 3.7) methods. These two techniques employ modalities complementary to wireless measurements and as such offer a promising direction for hybrid positioning systems that combine multiple measurements to improve the accuracy and reliability of positioning. Finally, we conclude the chapter by discussing the advantages and disadvantages of each positioning method (Section 3.8).
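As a concrete illustration of the idea of computing a position from a set of references with known locations, the following Python sketch estimates a 2D position from distance measurements to three hypothetical anchor points using a linearized least-squares solution. The anchor coordinates and range values are invented for illustration and are not taken from this book.

    import numpy as np

    # Hypothetical reference points with known 2D coordinates, and the
    # distances measured from the unknown receiver to each of them.
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    ranges = np.array([7.07, 7.07, 7.07])   # invented measurements

    # Subtracting the first range equation from the others linearizes the
    # problem; solve the resulting system in a least-squares sense.
    x1, y1 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - x1 ** 2 - y1 ** 2
         + ranges[0] ** 2 - ranges[1:] ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated position:", position)   # close to (5, 5)

With more than the minimum number of references, or with noisy measurements, the same least-squares formulation simply becomes overdetermined; this is only a sketch of the principle, not of any particular system discussed in the chapter.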
The location stack
To position this book within the wealth of information available on positioning systems used in LBS, we review a model of location-aware systems proposed by Hightower et al. [35].
In this chapter we will consider a new type of coding. So far we have concentrated on codes that can help detect or even correct errors; we now would like to use codes to represent information more efficiently, i.e. to represent the same information using fewer digits on average. Hence, instead of protecting data from errors, we try to compress it so as to use less storage space.
To achieve such a compression, we will assume that we know the probability distribution of the messages being sent. If some symbols are more probable than others, we can take advantage of this by assigning shorter codewords to the more frequent symbols and longer codewords to the rarer symbols. Hence, we see that such a code has codewords that are not of fixed length.
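To make this concrete, the following Python sketch compares the average codeword length of a fixed-length code with that of a variable-length code that gives the most frequent symbol the shortest codeword. The four-symbol distribution and the two codes are invented for illustration and are not taken from this chapter.

    # Hypothetical source distribution (not taken from this chapter).
    probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

    fixed_code = {"a": "00", "b": "01", "c": "10", "d": "11"}
    variable_code = {"a": "0", "b": "10", "c": "110", "d": "111"}

    def average_length(code):
        """Expected number of binary digits used per source symbol."""
        return sum(probs[s] * len(code[s]) for s in probs)

    print(average_length(fixed_code))     # 2.0 digits per symbol
    print(average_length(variable_code))  # 1.75 digits per symbol

The variable-length code saves a quarter of a digit per symbol on average, precisely because the most frequent symbol receives the shortest codeword.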
Unfortunately, variable-length codes bring with them a fundamental problem: at the receiving end, how do you recognize the end of one codeword and the beginning of the next? To gain a better understanding of this question and to learn how to design a good code with a short average codeword length, we start with a motivating example.
A motivating example
You would like to set up your own telephone system that connects you to your three best friends. The question is how to design efficient binary phone numbers. Table 4.1 shows six different ways in which you could choose them.
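Table 4.1 itself is not reproduced here, but the decodability issue it illustrates can be checked mechanically. The following Python sketch tests whether a candidate set of binary phone numbers is prefix-free, i.e. whether no number is a prefix of another; a prefix-free set can be decoded digit by digit without ambiguity. The two candidate sets below are invented examples, not the entries of Table 4.1.

    def is_prefix_free(codewords):
        """True if no codeword is a proper prefix of another codeword."""
        return not any(a != b and b.startswith(a)
                       for a in codewords for b in codewords)

    # Two hypothetical choices of binary phone numbers.
    print(is_prefix_free(["0", "10", "11"]))   # True: decodable on the fly
    print(is_prefix_free(["0", "01", "11"]))   # False: "0" is a prefix of "01"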
For thousands of years, the ability to explore the world has significantly impacted human civilization. Human explorations have enabled the interaction of cultures for the purposes of geographic expansion (for example, through war and colonization) and economic development through trade. These interactions have also played a pivotal role in an exchange of knowledge that has supported the advancement of science, the development of religion, and the flourishing of the arts throughout the world.
World exploration is largely enabled by the ability to control the movement of a vessel from one position to another. This process, known as navigation, requires the knowledge of the locations of the source and destination points. The process of determining the location of points in space is known as positioning. In this book, we use the terms location and position interchangeably to refer to the point in physical space occupied by a person or object.
Throughout history, various positioning methods have been developed, including methods using the relation of a point to various reference points such as celestial bodies and the Earth's magnetic pole. More recently, the advent of wireless communications has led to the development of a number of additional positioning systems that enable not only navigation, but also the delivery of additional value-added services. The focus of this book is one such positioning method that employs wireless local area network signals to determine the location of wireless devices.
Traditionally, the application scope of positioning systems was limited to target tracking and navigation in civilian and military applications. This has changed in recent decades with the advent of mobile computing. In particular, the maturation of wireless communication and advances in microelectronics have given birth to mobile computing devices, such as laptops and smartphones, which are equipped with sensing and computing capabilities. The mobility of these computing devices in wireless networks means that users' communication, resource, and information needs now change with their physical location. More specifically, location information is now part of the context in which users access and consume wireless services. This, together with the availability of positioning information (for example, through the Global Positioning System), has both necessitated and enabled the development of services that cater to the changing needs of mobile users [34]. This need has sparked a new generation of applications for positioning known as location-based services (LBS) or location-aware systems. Formally, LBS have been defined in many ways [24, 50, 80]. In this book, the term LBS is used to indicate services that use the position of a user to add value to a service [50].
In this chapter, we will discuss the economic and ethical implications of LBS. We begin with an assessment of the market potential for these services (Section 2.1). This is followed by a discussion of application areas where LBS can be employed (Section 2.2). Finally, we discuss the ethical implications of LBS (Section 2.3).
Let k_r(n, δ) be the minimum number of r-cliques in graphs with n vertices and minimum degree at least δ. We evaluate k_r(n, δ) for δ ≤ 4n/5 and some other cases. Moreover, we give a construction which we conjecture to give all extremal graphs (subject to certain conditions on n, δ and r).
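As an illustration of the quantity being studied (not of the paper's construction), the following Python sketch computes k_r(n, δ) by brute force for very small n: it enumerates all labelled graphs on n vertices, discards those whose minimum degree is below δ, and counts r-cliques in the remaining ones.

    from itertools import combinations

    def min_cliques(n, delta, r):
        """Brute-force k_r(n, delta): the minimum number of r-cliques over
        all labelled graphs on n vertices with minimum degree >= delta."""
        vertices = range(n)
        possible_edges = list(combinations(vertices, 2))
        best = None
        for mask in range(1 << len(possible_edges)):
            edges = {e for i, e in enumerate(possible_edges) if mask >> i & 1}
            degrees = [sum(v in e for e in edges) for v in vertices]
            if min(degrees) < delta:
                continue
            cliques = sum(
                all(pair in edges for pair in combinations(s, 2))
                for s in combinations(vertices, r))
            best = cliques if best is None else min(best, cliques)
        return best

    # The 4-cycle has minimum degree 2 and no triangle, so this prints 0.
    print(min_cliques(4, 2, 3))

The enumeration is exponential in the number of possible edges and only serves to make the definition concrete.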
This special issue of TPLP commemorates the 25th edition of the annual conference organized by GULP (Gruppo Ricercatori e Utenti Logic Programming), the Italian group of researchers and users of logic programming. The first event in this series was held in Genoa in 1986, one year after the foundation of the user group, and the conference has been held annually ever since. In 1994, the conference joined forces with the Spanish conference PRODE (on Declarative Programming), and in 1996 with the Portuguese APPIA (on Artificial Intelligence). This collaboration continued until 2003. Starting from 2004, the event became known as CILC (Convegno Italiano di Logica Computazionale, Italian Conference on Computational Logic), broadening its topics to general computational logic while becoming a national Italian event again. As one of the oldest and largest national events of its kind, over the years the conference has been an important networking opportunity and catalyst for people with different backgrounds, from theory and practice, and from research and industry, to exchange their visions, achievements, and challenges in logic programming. For a more detailed historical account of GULP and its annual conferences, we refer to Rossi (2010).
The distribution semantics is one of the most prominent approaches for the combination of logic programming and probability theory. Many languages follow this semantics, such as Independent Choice Logic, PRISM, pD, Logic Programs with Annotated Disjunctions (LPADs), and ProbLog. When a program contains function symbols, the distribution semantics is well-defined only if the set of explanations for a query is finite and so is each explanation. Well-definedness is usually either explicitly imposed or achieved by severely limiting the class of allowed programs. In this paper, we identify a larger class of programs for which the semantics is well-defined, together with an efficient procedure for computing the probability of queries. Since Logic Programs with Annotated Disjunctions offer the most general syntax, we present our results for them, but they are applicable to all languages under the distribution semantics. We present the algorithm “Probabilistic Inference with Tabling and Answer subsumption” (PITA), which computes the probability of queries by transforming a probabilistic program into a normal program and then applying SLG resolution with answer subsumption. PITA has been implemented in XSB and tested on six domains: two with function symbols and four without. The execution times are compared with those of ProbLog, cplint, and CVE. PITA was almost always able to solve larger problems in a shorter time, on domains with and without function symbols.
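As a minimal illustration of the distribution semantics itself (not of PITA), the following Python sketch computes the probability of a query for a toy, function-free program by enumerating all worlds, i.e. all truth-value choices for the independent probabilistic facts, and summing the probabilities of the worlds in which the query is derivable. The program and its probabilities are invented for illustration.

    from itertools import product

    # Toy probabilistic facts: each is independently true with its probability.
    prob_facts = {"burglary": 0.4, "earthquake": 0.2}

    def consequences(true_facts):
        """Deterministic rules:  alarm :- burglary.   alarm :- earthquake."""
        derived = set(true_facts)
        if "burglary" in derived or "earthquake" in derived:
            derived.add("alarm")
        return derived

    def query_probability(query):
        """Sum the probabilities of the worlds in which the query holds."""
        total = 0.0
        for choices in product([True, False], repeat=len(prob_facts)):
            world = {f for f, chosen in zip(prob_facts, choices) if chosen}
            weight = 1.0
            for (f, p), chosen in zip(prob_facts.items(), choices):
                weight *= p if chosen else 1.0 - p
            if query in consequences(world):
                total += weight
        return total

    print(query_probability("alarm"))   # 0.52 = 1 - 0.6 * 0.8

Enumerating worlds is exponential in the number of probabilistic facts and breaks down entirely in the presence of function symbols; avoiding this blow-up is exactly the kind of inefficiency the algorithm described in the abstract is designed to address.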
In this paper, we combine Answer Set Programming (ASP) with Dynamic Linear Time Temporal Logic (DLTL) to define a temporal logic programming language for reasoning about complex actions and infinite computations. DLTL extends propositional temporal logic of linear time with regular programs of propositional dynamic logic, which are used for indexing temporal modalities. The action language allows general DLTL formulas to be included in domain descriptions to constrain the space of possible extensions. We introduce a notion of Temporal Answer Set for domain descriptions, based on the usual notion of Answer Set. Also, we provide a translation of domain descriptions into standard ASP and use Bounded Model Checking (BMC) techniques for the verification of DLTL constraints.
We present a method for the automated verification of temporal properties of infinite state systems. Our verification method is based on the specialization of constraint logic programs (CLP) and works in two phases: (1) in the first phase, a CLP specification of an infinite state system is specialized with respect to the initial state of the system and the temporal property to be verified, and (2) in the second phase, the specialized program is evaluated by using a bottom-up strategy. The effectiveness of the method strongly depends on the generalization strategy which is applied during the program specialization phase. We consider several generalization strategies obtained by combining techniques already known in the field of program analysis and program transformation, and we also introduce some new strategies. Then, through many verification experiments, we evaluate the effectiveness of the generalization strategies we have considered. Finally, we compare the implementation of our specialization-based verification method to other constraint-based model checking tools. The experimental results show that our method is competitive with the methods used by those other tools.
This article is devoted to the study of methods to change defeasible logic programs (de.l.p.s), which are the knowledge bases used by the Defeasible Logic Programming (DeLP) interpreter. DeLP is an argumentation formalism that allows reasoning over potentially inconsistent de.l.p.s. Argument Theory Change (ATC) studies certain aspects of belief revision in order to make them suitable for abstract argumentation systems. In this article, abstract arguments are rendered concrete by using the particular rule-based defeasible logic adopted by DeLP. The objective of our proposal is to define prioritized argument revision operators à la ATC for de.l.p.s, in such a way that the newly inserted argument ends up undefeated after the revision, thus warranting its conclusion. In order to ensure this warrant, the de.l.p. has to be changed in accordance with a minimal change principle. To this end, we discuss different minimal change criteria that could be adopted. Finally, an algorithm implementing the argument revision operations is presented.
Simple families of increasing trees were introduced by Bergeron, Flajolet and Salvy. They include random binary search trees, random recursive trees and random plane-oriented recursive trees (PORTs) as important special cases. In this paper, we investigate the number of subtrees of size k on the fringe of some classes of increasing trees, namely generalized PORTs and d-ary increasing trees. We use a complex-analytic method to derive precise expansions of mean value and variance as well as a central limit theorem for fixed k. Moreover, we propose an elementary approach to derive limit laws when k is growing with n. Our results have consequences for the occurrence of pattern sizes on the fringe of increasing trees.
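For a concrete feel for the quantity being analysed, the following Python sketch estimates by simulation the mean number of fringe subtrees of size k in a random recursive tree, one of the special cases mentioned above. The tree size, values of k, and number of trials are chosen arbitrarily for illustration; the simulation is not the paper's analytic method.

    import random

    def mean_fringe_count(n, k, trials=2000, seed=1):
        """Monte Carlo estimate of the mean number of fringe subtrees of
        size k in a random recursive tree on n nodes."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            # Node i attaches to a uniformly random earlier node.
            parent = [None] + [rng.randrange(i) for i in range(1, n)]
            size = [1] * n
            for v in range(n - 1, 0, -1):    # children have larger labels
                size[parent[v]] += size[v]
            total += sum(s == k for s in size)
        return total / trials

    n = 200
    for k in (1, 2, 3):
        print(k, mean_fringe_count(n, k))
    # For k = 1 the count is the number of leaves, whose mean is close to n / 2.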
A data integration system provides transparent access to different data sources by suitably combining their data and providing the user with a unified view of them, called the global schema. However, source data are generally not under the control of the data integration process; thus, integrated data may violate global integrity constraints even in the presence of locally consistent data sources. In this scenario, it may still be interesting to retrieve as much consistent information as possible. The process of answering user queries under global constraint violations is called consistent query answering (CQA). Several notions of CQA have been proposed, e.g., depending on whether integrated information is assumed to be sound, complete, exact, or a variant of these. This paper provides a contribution in this setting: it unifies solutions coming from different perspectives under a common Answer-Set Programming (ASP) based core, and provides query-driven optimizations designed for isolating and eliminating inefficiencies of the general approach for computing consistent answers. Moreover, the paper introduces some new theoretical results enriching existing knowledge on the decidability and complexity of the considered problems. The effectiveness of the approach is evidenced by experimental results.
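As a toy illustration of consistent query answering (independent of the ASP encoding used in the paper), the following Python sketch takes a small relation that violates a key constraint, enumerates its repairs (consistent subsets obtained by keeping one tuple per key value), and returns only the answers that hold in every repair. The relation and constraint are invented for illustration.

    from itertools import groupby, product

    # Toy relation person(id, city) with key "id"; the two tuples for id 1
    # violate the key constraint.
    person = [(1, "rome"), (1, "milan"), (2, "turin")]

    def repairs(tuples):
        """All repairs under the key constraint: keep exactly one tuple
        from each group of tuples sharing the same key value."""
        groups = [list(g) for _, g in groupby(sorted(tuples), key=lambda t: t[0])]
        return [set(choice) for choice in product(*groups)]

    def consistent_answers(query):
        """Tuples satisfying the query in every repair of the relation."""
        per_repair = [{t for t in rep if query(t)} for rep in repairs(person)]
        return set.intersection(*per_repair)

    print(consistent_answers(lambda t: t[0] == 2))  # {(2, 'turin')}
    print(consistent_answers(lambda t: t[0] == 1))  # set(): no consistent answer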
We provide a Hilbert-style axiomatization of the logic of ‘actually’, as well as a two-dimensional semantics with respect to which our logics are sound and complete. Our completeness results are quite general, pertaining to all such actuality logics that extend a normal and canonical modal basis. We also show that our logics have the strong finite model property and permit straightforward first-order extensions.
Answer-Set Programming (ASP) is a powerful logic-based programming language, which is enjoying increasing interest within the scientific community and (very recently) in industry. The evaluation of Answer-Set Programs is traditionally carried out in two steps. In the first step, an input program undergoes the so-called instantiation (or grounding) process, which produces a semantically equivalent program that does not contain any variables; in the second step, this ground program is evaluated using a backtracking search algorithm. It is well known that instantiation is important for the efficiency of the whole evaluation, might become a bottleneck in common situations, is crucial in several real-world applications, and is particularly relevant when huge input data have to be dealt with. At the time of this writing, the available instantiator modules are not able to satisfactorily exploit the latest hardware, featuring multi-core/multi-processor Symmetric MultiProcessing technologies. This paper presents some parallel instantiation techniques, including load-balancing and granularity control heuristics, which allow for the effective exploitation of the processing power offered by modern Symmetric MultiProcessing machines. This is confirmed by an extensive experimental analysis reported herein.
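To make the instantiation step concrete, here is a minimal Python sketch of naive grounding for a single rule: every variable is replaced by every combination of constants, producing variable-free instances. The rule, constants, and data representation are invented for illustration; real grounders, including the parallel instantiator discussed above, are far more selective about which instances they generate.

    from itertools import product

    # Toy rule "reach(X, Y) :- edge(X, Y)." over a small set of constants.
    constants = ["a", "b", "c"]
    rule_head = ("reach", ("X", "Y"))
    rule_body = [("edge", ("X", "Y"))]

    def instantiate(head, body, constants):
        """Naive grounding: substitute every combination of constants for
        the variables occurring in the rule."""
        atoms = [head] + body
        variables = sorted({a for _, args in atoms for a in args if a.isupper()})

        def apply(atom, sub):
            name, args = atom
            return (name, tuple(sub.get(a, a) for a in args))

        ground_rules = []
        for values in product(constants, repeat=len(variables)):
            sub = dict(zip(variables, values))
            ground_rules.append((apply(head, sub), [apply(b, sub) for b in body]))
        return ground_rules

    for ghead, gbody in instantiate(rule_head, rule_body, constants):
        print(ghead, ":-", gbody)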
There is an interesting logical/semantic issue with some mathematical languages and theories. In the language of (pure) complex analysis, the two square roots of −1 are indiscernible: anything true of one of them is true of the other. So how does the singular term ‘i’ manage to pick out a unique object? This is perhaps the most prominent example of the phenomenon, but there are some others. The issue is related to matters concerning the use of definite descriptions and singular pronouns, such as donkey anaphora and the problem of indistinguishable participants. Taking a cue from some work in linguistics and the philosophy of language, I suggest that i functions like a parameter in natural deduction systems. This may require some rethinking of the role of singular terms, at least in mathematical languages.
An agent-centered, goal-directed, resource-bound logic of human reasoning would do well to note that individual cognitive agency is typified by the comparative scantness of available cognitive resources—information, time, and computational capacity, to name just three. This motivates individual agents to set their cognitive agendas proportionately, that is, in ways that carry some prospect of success with the resources on which they are able to draw. It also puts a premium on cognitive strategies which make economical use of those resources. These latter I call scant-resource adjustment strategies, and they supply the context for an analysis of abduction. The analysis is Peircian in tone, especially in the emphasis it places on abduction’s ignorance-preserving character. My principal purpose here is to tie abduction’s scarce-resource adjustment capacity to its ignorance preservation.