We investigate in this chapter the case of linear neural networks, named associative memories by T. Kohonen (Figure 3.1). We begin by specializing the heavy algorithm we have studied in the general case of adaptive systems to the case of neural networks, where the controls are matrices. This algorithm shows how to modify the last synaptic matrix that has learned a set of patterns so that it learns a new pattern without forgetting the previous ones.
Because right-inverses of tensor products are tensor products of right-inverses, the heavy algorithm has a Hebbian character: the correction of a synaptic matrix during learning is the product of the activities of the presynaptic and postsynaptic neurons (see the sketch below). This feature, which plain vectors do not enjoy, justifies treating systems controlled by matrices separately from systems controlled by vectors.
We then proceed with associative memories with postprocessing, and with multilayer and continuous-layer associative memories. We conclude this chapter with associative memories with gates, where the synaptic matrices link conjuncts (i.e., subsets) of presynaptic neurons with each postsynaptic neuron; such gated memories allow the computation of any Boolean function and require a short presentation of fuzzy sets.
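To make the Hebbian character of this correction concrete, here is a minimal numerical sketch (Python; all names are illustrative and not taken from the text). It implements a rank-one, Hebbian-style correction in which the synaptic matrix is updated by the outer product of the postsynaptic error and the presynaptic activity, so that a new pattern is learned exactly and, when the presynaptic patterns are orthogonal, previously stored patterns are preserved. It is a sketch of the general idea only, not of the heavy algorithm as defined in the chapter.

    import numpy as np

    def hebbian_correction(W, x, y):
        # Rank-one correction so that the corrected matrix maps x to y exactly.
        # The correction is the outer product of the postsynaptic error (y - W x)
        # and the presynaptic activity x, scaled by the squared norm of x.
        error = y - W @ x
        return W + np.outer(error, x) / (x @ x)

    # Learn a second pattern without destroying the first one
    # (exact retention requires orthogonal presynaptic patterns).
    x1, x2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    y1, y2 = np.array([0.5, -1.0]), np.array([2.0, 0.3])
    W = np.zeros((2, 3))
    W = hebbian_correction(W, x1, y1)
    W = hebbian_correction(W, x2, y2)
    assert np.allclose(W @ x1, y1) and np.allclose(W @ x2, y2)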
We present in this appendix the tests of the external and internal algorithms conducted by Nicolas Seube at Thomson-SINTRA to control the tracking of an exosystem by an autonomous underwater vehicle (AUV). This system has three degrees of freedom (planar motion), six state variables (positions, heading, and their derivatives), and three controls (thruster forces). The dynamics of an AUV are highly nonlinear, coupled, and sometimes fully interacting, making the vehicle difficult to control by the usual methods. Moreover, the dynamics are poorly known, because only approximate hydrodynamic models are available for real-world vehicles. Finally, we must take into account the marine currents, which can significantly perturb the dynamics of the AUV.
In addition, the problem of controlling an AUV cannot be linearized about a single velocity axis, because all vehicle velocities usually have the same range; conventional linear control techniques are therefore clearly unable to provide adequate control performance.
We shall present three different learning rules that address the problems of uniform minimization and adaptive learning by a set-valued feedback control map. The three classes of algorithms presented here have been tested in the case of the Japanese Dolphin AUV.
In particular, it is shown that the gradient step size is critical for the external rule but not for the uniform external algorithm. The latter could also be applied to pattern-classification problems and may provide a plausible alternative to stochastic gradient algorithms.
We propose in this chapter a speculative dynamical description of an abstract cognitive system that goes beyond neural networks to attempt to take into account some features of nervous systems and, in particular, adaptations to environmental constraints. This personal viewpoint of the author is but one of the several attempts to model cognitive processes mathematically. It is presented primarily for the purpose of stirring up reaction and prompting further research involving other techniques and other approaches to this wide field.
Before we look at the evolution of nervous systems for useful suggestions regarding the means they have used to master more and more complex cognitive faculties, we shall start from the fact that an organism must adapt to environmental constraints by perceiving them and recognizing them through “metaphors” with what we shall call “conceptual controls.” This problem of adaptation is not dealt with explicitly in most studies of neural networks. This chapter is devoted to highlighting the roles of cognitive systems in this process.
The variables of the cognitive system are described by its state and a regulatory control (the conceptual control). The state of the system (henceforth called the sensorimotor state) is described by
– the state and the variations of the environment on which the cognitive system acts, and
– the state of the cerebral motor activity of the cognitive system, which guides the individual's action on the environment.
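As a purely illustrative reading of this description, the following minimal sketch (Python; all names are hypothetical and not taken from the text) groups the variables just listed into a sensorimotor state paired with a conceptual control.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SensorimotorState:
        # State of the environment on which the cognitive system acts.
        environment: np.ndarray
        # Variations (time derivatives) of that environmental state.
        environment_variation: np.ndarray
        # Cerebral motor activity guiding the individual's action on the environment.
        motor_activity: np.ndarray

    @dataclass
    class CognitiveSystem:
        # Sensorimotor state together with its regulatory (conceptual) control.
        state: SensorimotorState
        conceptual_control: np.ndarray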
In the thirty years since it was proved that 0, 1 and 144 are the only perfect squares in the Fibonacci sequence [1, 9], several generalisations have been proved, but many problems remain. Thus it has been shown that 0, 1 and 8 are the only Fibonacci cubes [6], but there seems to be no method available to prove the conjecture that 0, 1, 8 and 144 are the only perfect powers in the sequence.
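As a quick numerical illustration of these statements (a sketch added here, not part of the original note), one can list the perfect squares and cubes among the first few hundred Fibonacci numbers:

    from math import isqrt

    def fibonacci(n):
        # Yield the first n Fibonacci numbers: 0, 1, 1, 2, 3, 5, ...
        a, b = 0, 1
        for _ in range(n):
            yield a
            a, b = b, a + b

    def is_square(m):
        r = isqrt(m)
        return r * r == m

    def is_cube(m):
        # Integer cube-root test; the loops correct any floating-point rounding.
        r = round(m ** (1 / 3))
        while r ** 3 > m:
            r -= 1
        while (r + 1) ** 3 <= m:
            r += 1
        return r ** 3 == m

    print([f for f in fibonacci(300) if is_square(f)])  # [0, 1, 1, 144]
    print([f for f in fibonacci(300) if is_cube(f)])    # [0, 1, 1, 8]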
The singular limit as one diffusion coefficient approaches zero is considered for travelling wave solutions to a pair of reaction-diffusion equations. An explicit criterion determining the sign of the wave speed is obtained. The limit behaviour turns out to be of a different nature for positive and negative wave speed. Different techniques, which may be applicable to a range of examples, are needed in the two cases.
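For orientation only, a generic form of such a system (assumed here for illustration; the paper's specific equations and nonlinearities f, g are not reproduced) is

    \[
      u_t = u_{xx} + f(u,v), \qquad v_t = \varepsilon\, v_{xx} + g(u,v).
    \]
    A travelling wave \(u = U(x - ct)\), \(v = V(x - ct)\) with speed \(c\) satisfies
    \[
      U'' + c\,U' + f(U,V) = 0, \qquad \varepsilon\, V'' + c\,V' + g(U,V) = 0,
    \]
    and the singular limit is \(\varepsilon \to 0\), in which the second equation
    degenerates to the first-order equation \(c\,V' + g(U,V) = 0\); the sign of \(c\)
    then determines how this reduced problem must be analysed.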
A quasilinear elliptic equation in ℝ^N of Hamilton-Jacobi-Bellman type is studied. An optimal criterion for uniqueness, which involves only a lower bound on the functions, is given. The unique solution in this class is identified as the value function of the associated stochastic control problem.
In this paper a one-parameter class of four-dimensional, reversible vector fields is investigated near an equilibrium. We call the parameter μ and place the equilibrium at 0. The differential at 0 is supposed to have ±iq, q > 0, as simple eigenvalues and 0 as a double, non-semisimple eigenvalue. Our ultimate goal is to construct homoclinic connections to periodic orbits of arbitrarily small size; in fact, we shall show that the oscillations of the homoclinic orbits at infinity are bounded by a flat function of μ. This result receives its significance from the still unsolved question of whether solutions exist which are homoclinic to the equilibrium or whether the amplitudes of the oscillations at infinity have a positive infimum. First we construct the periodic solutions. In contrast to previous work, we find these in a full rectangle [0, K_0] × ]0, μ_0], where K measures the amplitude of the periodic orbits. Then we show that for each n ∈ ℕ there is a μ_n and a family of periodic solutions X(μ), μ ∈ ]0, μ_n[, of size μ^n. To each of these solutions we can find two homoclinic orbits, which are distinguished by their phase shift at infinity. One example of such a vector field occurs when describing the flow of an inviscid, irrotational fluid layer under the influence of gravity and small surface tension (Bond number b < ⅓), for a Froude number F close to 1. In this context a homoclinic solution to a periodic orbit is called a generalised solitary wave. Our work shows that there exist solitary waves with oscillations at infinity of order less than |μ|^n for every n.
The most classical sufficient condition for the fixed point property (FPP) of non-expansive mappings in Banach spaces is normal structure (see [6] and [10]; definitions are given below). Although normal structure is preserved under finite l_p-products of Banach spaces (1 < p ≤ ∞) (see Landes [12], [13]), not many positive results are known about the normal structure of an l_1-product of two Banach spaces with this property. In fact, this question was explicitly raised by T. Landes [12], and M. A. Khamsi [9] and T. Domínguez Benavides [1] proved partial affirmative answers. Here we give wider conditions yielding normal structure for the product X_1 ⊗_1 X_2.