This paper presents an improved Rao-Blackwellized particle filtering framework for FastSLAM that takes the characteristics of the particle swarm into account, called Relational FastSLAM, or R-FastSLAM. R-FastSLAM seeks to cope with the inherent problems of FastSLAM, namely particle depletion and error accumulation in large environments. It exploits the particle swarm characteristics both in calculating the importance weights and in maintaining the particle formation. We assign more accurate weights to particles by clustering them with the Expectation-Maximization (EM) algorithm according to an adaptive weight compensation scheme. In addition, the particles constitute an adaptive triangular mesh formation that maintains multiple data association hypotheses without any resampling step. The resulting gains in accuracy are verified in simulations and in a test on the Victoria Park dataset, comparing R-FastSLAM against the standard FastSLAM 2.0 and a particle swarm optimization based FastSLAM.
A distributed control mechanism for ground moving nonholonomic robots is proposed. It enables a group of mobile robots to autonomously manage formation shapes while navigating through environments with obstacles. The mechanism consists of two stages. The first is formation control, which allows basic formation shapes to be maintained without the need for any inter-robot communication. The second is obstacle avoidance, which is designed with formation keeping in mind. Every robot is capable of performing basic obstacle avoidance by itself; however, to ensure that the formation shape is maintained, formation scaling is implemented. If the formation cannot hold its shape while navigating among obstacles, formation morphing is invoked to preserve the interconnectivity of the robots, thus reducing the possibility of losing robots from the formation.
The algorithm has been implemented on a nonholonomic multi-robot system for empirical analysis. Experimental results demonstrate formations completing an obstacle course within 12 s with zero collisions. Furthermore, the system is capable of withstanding up to 25% sensor noise.
Let $i_t(G)$ be the number of independent sets of size t in a graph G. Engbers and Galvin asked how large $i_t(G)$ could be in graphs with minimum degree at least δ. They further conjectured that when n ⩾ 2δ and t ⩾ 3, $i_t(G)$ is maximized by the complete bipartite graph $K_{\delta,n-\delta}$. This conjecture has recently drawn the attention of many researchers. In this short note, we prove this conjecture.
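For context (a standard observation, not part of the abstract itself): in a complete bipartite graph any independent set with at least two vertices lies entirely within one side, so the conjectured extremal value is simply

$$i_t(K_{\delta,n-\delta}) = \binom{\delta}{t} + \binom{n-\delta}{t} \qquad (t \geq 2).$$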
Let $\mathcal{F} = \{F_1, F_2, \ldots, F_n\}$ be a family of n sets on a ground set S, such as a family of balls in $\mathbb{R}^d$. For every finite measure μ on S such that the sets of $\mathcal{F}$ are measurable, the classical inclusion–exclusion formula asserts that

$$\mu\Bigl(\,\bigcup_{i=1}^{n} F_i\Bigr) = \sum_{\emptyset \neq I \subseteq \{1,\ldots,n\}} (-1)^{|I|+1}\, \mu\Bigl(\,\bigcap_{i \in I} F_i\Bigr);$$
that is, the measure of the union is expressed using measures of various intersections. The number of terms in this formula is exponential in n, and a significant amount of research, originating in applied areas, has been devoted to constructing simpler formulas for particular families $\mathcal{F}$. We provide an upper bound valid for an arbitrary $\mathcal{F}$: we show that every system $\mathcal{F}$ of n sets with m non-empty fields in the Venn diagram admits an inclusion–exclusion formula with $m^{O(\log^2 n)}$ terms and with ±1 coefficients, and that such a formula can be computed in $m^{O(\log^2 n)}$ expected time. For every $\epsilon > 0$ we also construct systems with Venn diagram of size m for which every valid inclusion–exclusion formula has the sum of absolute values of the coefficients at least $\Omega(m^{2-\epsilon})$.
The authors would like to rectify a mistake made in Theorem 1.1 of their article (Behrisch, Coja-Oghlan & Kang 2014), published in issue 23(3). The text below explains the changes required.
We generalize and improve recent results by Bóna and Knopfmacher and by Banderier and Hitczenko concerning the joint distribution of the sum and number of parts in tuples of restricted compositions. Specifically, we generalize the problem to general combinatorial classes and relax the requirement that the sizes of the compositions be equal. We extend the main explicit results to enumeration problems whose counting sequences are Riordan arrays. In this framework, we give an alternative method for computing asymptotics in the supercritical case, which avoids explicit diagonal extraction.
We find new properties of the topological transition polynomial of embedded graphs, Q(G). We use these properties to explain the striking similarities between certain evaluations of Bollobás and Riordan's ribbon graph polynomial, R(G), and the topological Penrose polynomial, P(G). The general framework provided by Q(G) also leads to several other combinatorial interpretations of these polynomials. In particular, we express P(G), R(G), and the Tutte polynomial, T(G), as sums of chromatic polynomials of graphs derived from G, show that these polynomials count k-valuations of medial graphs, show that R(G) counts edge 3-colourings, and reformulate the Four Colour Theorem in terms of R(G). We conclude with a reduction formula for the transition polynomial of the tensor product of two embedded graphs, showing that it leads to additional relations among these polynomials and to further combinatorial interpretations of P(G) and R(G).
Generally, the multi-armed bandit problem has been studied under the setting that at each time step over an infinite horizon a controller chooses to activate a single process or bandit out of a finite collection of independent processes (statistical experiments, populations, etc.) for a single period, receiving a reward that is a function of the activated process, and in doing so advancing the chosen process. Classically, rewards are discounted by a constant factor β ∈ (0, 1) per round.
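As a point of reference (not stated explicitly in the abstract, but standard for this setting), the classical objective is to choose an activation policy π maximizing the expected total discounted reward,

$$\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \beta^{t} R_{t}\right], \qquad \beta \in (0,1),$$

where $R_t$ denotes the reward collected from the bandit activated at round t.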
In this paper, we present a solution to the problem, with potentially non-Markovian, uncountable state space reward processes, under a framework in which, first, the discount factors may be non-uniform and vary over time, and second, the periods of activation of each bandit may not be fixed or uniform, being subject instead to a possibly stochastic duration of activation before a change to a different bandit is allowed. The solution is based on generalized restart-in-state indices, and it utilizes a view of the problem not as “decisions over state space” but rather as “decisions over time”.
We study the network structure of Wikipedia (restricted to its mathematical portion), MathWorld, and DLMF. We approach these three online mathematical libraries from the perspective of several global and local network-theoretic features, providing for each one the appropriate value or distribution, along with comparisons that, where possible, also include the whole of the Wikipedia or the Web. We identify some distinguishing characteristics of all three libraries, most of them presumably traceable to the libraries' shared focus on a very specialized domain. Among these characteristics are the presence of a very large strongly connected component in each of the corresponding directed graphs, the complete absence of any clear power laws describing the distribution of local features, and the rise to prominence of some local features (e.g., stress centrality) that can be used to effectively search for keywords in the libraries.
The present book is a completely rewritten version of the second edition of my Introduction to Functional Programming using Haskell (Prentice Hall). The main changes are: a reorganisation of some introductory material to reflect the needs of a one or two term lecture course; a fresh set of case studies; and a collection of over 100 exercises that now come with answers. As before, no knowledge of computers or programming is assumed, so the material is suitable as a first course in computing.
Every author has his or her own drum to beat when writing a textbook, and the present one is no different. While there are now numerous books, tutorials, articles and blogs devoted to Haskell, few of them emphasise what seems to me the main reason why functional programming is the best thing since sliced bread: the ability to think mathematically about functional programs. And the mathematics involved is neither new nor difficult. Any student who has come to grips with, say, high-school trigonometry and has applied simple trigonometric laws and identities to simplify expressions involving sines and cosines (a typical example: express sin 3α in terms of sin α) will quickly appreciate that a similar activity is being proposed for programming problems. And the payoff is there at the terminal: faster computations.
• Functional programming is a method of program construction that emphasises functions and their application rather than commands and their execution.
• Functional programming uses simple mathematical notation that allows problems to be described clearly and concisely.
• Functional programming has a simple mathematical basis that supports equational reasoning about the properties of programs.
Our aim in this book is to illustrate these three key points, using a specific functional language called Haskell.
Functions and types
We will use the Haskell notation
f :: X → Y
to assert that f is a function taking arguments of type X and returning results of type Y. For example,
sin :: Float → Float
age :: Person → Int
add :: (Integer,Integer) → Integer
logBase :: Float → (Float → Float)
Float is the type of floating-point numbers, things like 3.14159, and Int is the type of limited-precision integers, integers n that lie in a restricted range such as −2²⁹ ≤ n < 2²⁹. The restriction is lifted with the type Integer, which is the type of unlimited-precision integers. As we will see in Chapter 3, numbers in Haskell come in many flavours.
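To see the difference in practice, here is a hedged sketch of a short GHCi exchange (the exact bound reported for Int depends on the installation; on a typical 64-bit system it is far larger than 2²⁹):

ghci> maxBound :: Int
9223372036854775807
ghci> 2^100 :: Integer
1267650600228229401496703205376
ghci> product [1..25] :: Integer
15511210043330985984000000

Integer values never overflow; they simply grow as large as memory allows.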
In mathematics one usually writes f(x) to denote the application of the function f to the argument x. In Haskell, application is written simply as f x, without parentheses around the argument, and application binds more tightly than any other operator.
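For instance, here is a small sketch using the signatures above (the definition of add is my own illustration, and log2 is a hypothetical name introduced only to show partial application):

add :: (Integer, Integer) → Integer
add (x, y) = x + y              -- takes a single argument, the pair (x, y)

log2 :: Float → Float
log2 = logBase 2                -- partial application of the curried logBase

ghci> add (3, 4)
7
ghci> log2 1024
10.0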
Lists are the workhorses of functional programming. They can be used to fetch and carry data from one function to another; they can be taken apart, rearranged and combined with other lists to make new lists. Lists of numbers can be summed and multiplied; lists of characters can be read and printed; and so on. The list of useful operations on lists is a long one. This chapter describes some of the operations that occur most frequently, though one particularly important class will be introduced only in Chapter 6.
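To make this concrete, here is a hedged sketch of the sort of everyday list manipulations the chapter has in mind (sum, product, (++) and reverse are standard prelude functions):

ghci> sum [1,2,3,4]
10
ghci> product [1,2,3,4]
24
ghci> [1,2] ++ [3,4,5]
[1,2,3,4,5]
ghci> reverse "drawer"
"reward"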
List notation
As we have seen, the type [a] denotes lists of elements of type a. The empty list is denoted by []. We can have lists over any type but we cannot mix different types in the same list. As examples,
[undefined,undefined] :: [a]
[sin, cos, tan] :: Floating a ⇒ [a → a]
[[1,2,3],[4,5]] :: Num a ⇒ [[a]]
["tea", "for", 2] not valid
List notation, such as [1,2,3], is in fact an abbreviation for a more basic form
1:2:3:[]
The operator (:) :: a → [a] → [a], pronounced ‘cons’, is a constructor for lists. It associates to the right so there is no need for parentheses in the above expression. It has no associated definition, which is why it is a constructor. In other words, there are no rules for simplifying an expression such as 1:2:[].
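Since (:) is a constructor, all we can do with it is build lists and take them apart again by pattern matching. The following small sketch (my own illustration, not an example from the text) shows both uses:

xs :: [Integer]
xs = 1 : 2 : 3 : []            -- exactly the same list as [1,2,3]

len :: [a] → Int
len []       = 0               -- one equation for each constructor
len (_:rest) = 1 + len rest

ghci> xs
[1,2,3]
ghci> len xs
3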
The question of efficiency has been an ever-present undercurrent in recent discussions, and the time has come to bring this important subject to the surface. The best way to achieve efficiency is, of course, to find a decent algorithm for the problem. That leads us into the larger topic of Algorithm Design, which is not the primary focus of this book. Nevertheless we will touch on some fundamental ideas later on. In the present chapter we concentrate on a more basic question: functional programming allows us to construct elegant expressions and definitions, but do we know what it costs to evaluate them? Alan Perlis, a US computer scientist, once inverted Oscar Wilde's definition of a cynic to assert that a functional programmer was someone who knew the value of everything and the cost of nothing.
Lazy evaluation
We said in Chapter 2 that, under lazy evaluation, an expression such as
sqr (sqr (3+4))
where sqr x = x*x, is reduced to its simplest possible form by applying reduction steps from the outside in. That means the definition of the function sqr is installed first, and its argument is evaluated only when needed. The following evaluation sequence follows this prescription, but is not lazy evaluation:
sqr (sqr (3+4))
= sqr (3+4) * sqr (3+4)
= ((3+4)*(3+4)) * ((3+4)*(3+4))
= …
= 2401
The ellipsis in the penultimate line hides no fewer than four evaluations of 3+4 and two of 7*7.
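Lazy evaluation avoids this duplication by sharing: each expression is evaluated at most once and the result reused. The following hedged sketch uses Debug.Trace, which is not part of the book's presentation, purely to make the sharing observable; the trace message is printed only once even though the argument's value is needed more than once:

import Debug.Trace (trace)

sqr :: Integer → Integer
sqr x = x * x

shared :: Integer
shared = sqr (sqr (trace "evaluating 3+4" (3+4)))

ghci> shared
evaluating 3+4
2401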
We have seen a lot of laws in the previous two chapters, though perhaps the word ‘law’ is a little inappropriate because it suggests something that is given to us from on high and which does not have to be proved. At least the word has the merit of being short. All of the laws we have encountered so far assert the equality of two functional expressions, possibly under subsidiary conditions; in other words, laws have been equations or identities between functions, and calculations have been point-free calculations (see Chapter 4, and the answer to Exercise K for more on the point-free style). Given suitable laws to work with, we can then use equational reasoning to prove other laws. Equational logic is a simple but powerful tool in functional programming because it can guide us to new and more efficient definitions of the functions and other values we have constructed. Efficiency is the subject of the following chapter. This one is about another aspect of equational reasoning, proof by induction. We will also show how to shorten proofs by introducing a number of higher-order functions that capture common patterns of computations. Instead of proving properties of similar functions over and over again, we can prove more general results about these higher-order functions, and appeal to them instead.
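As a foretaste of what such a proof looks like, here is a standard induction over lists (my own example, not one taken from this chapter) establishing that length distributes over concatenation; the defining equations are given in Haskell and the equational steps appear as comments:

import Prelude hiding (length, (++))

length :: [a] → Int
length []     = 0
length (x:xs) = 1 + length xs

(++) :: [a] → [a] → [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)

-- Claim: length (xs ++ ys) = length xs + length ys
--
-- Case []:
--   length ([] ++ ys)   = length ys                     -- definition of (++)
--                       = 0 + length ys                 -- arithmetic
--                       = length [] + length ys         -- definition of length
--
-- Case (x:xs), assuming the claim holds for xs:
--   length ((x:xs) ++ ys) = length (x : (xs ++ ys))     -- definition of (++)
--                         = 1 + length (xs ++ ys)       -- definition of length
--                         = 1 + (length xs + length ys) -- induction hypothesis
--                         = length (x:xs) + length ys   -- definition of length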
In Haskell every well-formed expression has, by definition, a well-formed type. Each well-formed expression has, by definition, a value. Given an expression for evaluation,
• GHCi checks that the expression is syntactically correct, that is, it conforms to the rules of syntax laid down by Haskell.
• If it is, GHCi infers a type for the expression, or checks that the type supplied by the programmer is correct.
• Provided the expression is well-typed, GHCi evaluates the expression by reducing it to its simplest possible form to produce a value. Provided the value is printable, GHCi then prints it at the terminal.
In this chapter we continue the study of Haskell by taking a closer look at these processes.
A session with GHCi
One way of finding out whether or not an expression is well-formed is of course to use GHCi. There is a command :type expr which, provided expr is well-formed, will return its type. Here is a session with GHCi (with some of GHCi's responses abbreviated):
ghci> 3 +4)
<interactive>:1:5: parse error on input `)'
GHCi is complaining that on line 1 the character ')' at position 5 is unexpected; in other words, the expression is not syntactically correct.
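Continuing the session, the :type command mentioned above reports the type of a well-formed expression; here is a hedged sketch of what such an exchange might look like (with GHCi's responses again abbreviated):

ghci> 3 + 4
7
ghci> :type not
not :: Bool → Bool
ghci> :type not True
not True :: Bool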
Back in Chapter 2 we described the function putStrLn as being a Haskell command, and IO a as being the type of input–output computations that interact with the outside world and deliver values of type a. We also mentioned some syntax, called do-notation, for sequencing commands. This chapter explores what is really meant by these words, and introduces a new style of programming called monadic programming. Monadic programs provide a simple and attractive way to describe interaction with the outside world, but are also capable of much more: they provide a simple sequencing mechanism for solving a range of problems, including exception handling, destructive array updates, parsing and state-based computation. In a very real sense, a monadic style enables us to write functional programs that mimic the kind of imperative programs one finds in languages such as Python or C.
The IO monad
The type IO a is an abstract type in the sense described in the previous chapter, so we are not told how its values, which are called actions or commands, are represented. But you can think of this type as being
type IO a = World → (a, World)
Thus an action is a function that takes a world and delivers a value of type a and a new world.
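To see why this reading makes sense, here is a hedged sketch of a toy version of the same idea; the real IO type is abstract and built into the compiler, and the names World, Action, run and putLine below are my own inventions for illustration only:

type World = String                      -- pretend the world is just a log of output

newtype Action a = Action (World → (a, World))

run :: Action a → World → (a, World)
run (Action f) = f

instance Functor Action where
  fmap f (Action g) = Action (\w → let (a, w') = g w in (f a, w'))

instance Applicative Action where
  pure a = Action (\w → (a, w))
  Action f <*> Action g = Action (\w →
    let (h, w1) = f w
        (a, w2) = g w1
    in  (h a, w2))

instance Monad Action where
  Action g >>= k = Action (\w →
    let (a, w') = g w in run (k a) w')

putLine :: String → Action ()            -- an analogue of putStrLn
putLine s = Action (\w → ((), w ++ s ++ "\n"))

ghci> snd (run (putLine "hello" >> putLine "world") "")
"hello\nworld\n"

Each action consumes the current world and hands back a value together with the new world, which is exactly the reading suggested by the type above.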
This chapter is devoted to an example of how to build a small library in Haskell. A library is an organised collection of types and functions made available to users for carrying out some task. The task we have chosen to discuss is pretty-printing, the idea of taking a piece of text and laying it out over a number of lines in such a way as to make the content easier to view and understand. We will ignore many of the devices for improving the readability of a piece of text, devices such as a change of colour or size of font. Instead we concentrate only on where to put the line breaks and how to indent the contents of a line. The library won't help you to lay out bits of mathematics, but it can help in presenting tree-shaped information, or in displaying lists of words as paragraphs.
Setting the scene
Let's begin with the problem of displaying conditional expressions. In this book we have used three ways of displaying such expressions:
if p then expr1 else expr2
if p then expr1
else expr2
if p
then expr1
else expr2
These three layouts, which occupy one, two or three lines, respectively, are considered acceptable, but the following two are not:
if p then
expr1 else expr2
if p
then expr1 else expr2
The decision as to what is or is not acceptable is down to me, the author.
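Before developing the library proper, it may help to see a deliberately tiny sketch of the kind of interface such a library provides. The representation and names below (Doc, text, nest, <$$>, render) are my own simplification and are not the interface developed in this chapter:

newtype Doc = Doc [(Int, String)]         -- a document is a list of indented lines

text :: String → Doc
text s = Doc [(0, s)]

(<$$>) :: Doc → Doc → Doc                 -- vertical composition
Doc xs <$$> Doc ys = Doc (xs ++ ys)

nest :: Int → Doc → Doc                   -- indent every line by a given amount
nest i (Doc xs) = Doc [(i + j, s) | (j, s) ← xs]

render :: Doc → String
render (Doc xs) = unlines [replicate i ' ' ++ s | (i, s) ← xs]

cond :: String → String → String → Doc    -- the three-line conditional layout
cond p e1 e2 = text ("if " ++ p)
          <$$> nest 2 (text ("then " ++ e1))
          <$$> nest 2 (text ("else " ++ e2))

ghci> putStr (render (cond "p" "expr1" "expr2"))
if p
  then expr1
  else expr2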
Coding in a new metric space, called the Enomoto–Katona space, has recently been considered in connection with the study of implication structures of functional dependencies and their generalizations in relational databases. The central problem is the determination of C(n, k, d), the size of an optimal code of length n, weight k, and distance d in the Enomoto–Katona space. The value of C(n, k, d) was known only for some congruence classes of n when (k, d) ∈ {(2,3), (3,5)}. In this paper, we obtain new infinite families of optimal codes in the Enomoto–Katona space and verify a conjecture of Brightwell and Katona in certain instances. In particular, C(n, k, 2k − 1) is determined for all sufficiently large n satisfying either n ≡ 1 mod k and n(n − 1) ≡ 0 mod 2k², or n ≡ 0 mod k. We also give complete solutions for k = 2 and determine C(n, 3, 5) for certain congruence classes of n with finite exceptions.
Numbers in Haskell are complicated because in the Haskell world there are many different kinds of number, including:
Int        limited-precision integers, in at least the range [−2²⁹, 2²⁹); integer overflow is not detected
Integer    arbitrary-precision integers
Rational   arbitrary-precision rational numbers
Float      single-precision floating-point numbers
Double     double-precision floating-point numbers
Complex    complex numbers (defined in Data.Complex)
Most programs make use of numbers in one way or another, so we have to get at least a working idea of what Haskell offers us and how to convert between the different kinds. That is what the present chapter is about.
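As a taster of the conversions discussed later in the chapter, here is a hedged sketch using standard prelude functions (these particular examples are mine, not the chapter's):

ghci> fromIntegral (3 :: Int) :: Double
3.0
ghci> round (3.7 :: Double) :: Integer
4
ghci> truncate (3.7 :: Double) :: Int
3
ghci> toRational (0.5 :: Double)
1 % 2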
The type class Num
In Haskell all numbers are instances of the type class Num:
class (Eq a, Show a) ⇒ Num a where
(+), (-), (*) :: a → a → a
negate :: a → a
abs, signum :: a → a
fromInteger :: Integer → a
The class Num is a subclass of both Eq and Show. That means every number can be printed and any two numbers can be compared for equality. Any number can be added to, subtracted from or multiplied by another number. Any number can be negated. Haskell allows -x to denote negate x; this is the only prefix operator in Haskell.
The functions abs and signum return the absolute value of a number and its sign.
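To illustrate what declaring a new kind of number involves, here is a hedged sketch of a Num instance for a made-up type of two-dimensional vectors (my own example; the class methods are exactly the ones listed above):

data Vec = Vec Float Float deriving (Eq, Show)

instance Num Vec where
  Vec a b + Vec c d = Vec (a + c) (b + d)
  Vec a b - Vec c d = Vec (a - c) (b - d)
  Vec a b * Vec c d = Vec (a * c) (b * d)   -- componentwise, for illustration only
  negate (Vec a b)  = Vec (negate a) (negate b)
  abs (Vec a b)     = Vec (abs a) (abs b)
  signum (Vec a b)  = Vec (signum a) (signum b)
  fromInteger n     = Vec (fromInteger n) (fromInteger n)

ghci> Vec 1 2 + Vec 3 4
Vec 4.0 6.0
ghci> negate (Vec 1 2)
Vec (-1.0) (-2.0)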