Neural representations are distributed. This means that more information can be gleaned from neural ensembles than from single cells. Modern recording technology allows large neural ensembles (more than 100 cells) to be recorded simultaneously from awake, behaving animals. Historically, the principal means of analyzing the representations encoded within large ensembles has been to measure the immediate accuracy of the encoding of behavioral variables (“reconstruction”). In this chapter, we will argue that measuring immediate reconstruction only touches the surface of what can be gleaned from these ensembles. We will discuss the implications of distributed representation, in particular the usefulness of measuring the self-consistency of the representation within neural ensembles. Because representations are distributed, neurons in a population can agree or disagree on the value being represented. Measuring the extent to which a firing pattern matches expectations can provide an accurate assessment of the self-consistency of a representation. Dynamic changes in the self-consistency of a representation are potentially indicative of cognitive processes. We will also discuss the implications of the representation of non-local (non-immediate) values for cognitive processes. Because cognition occurs at fast timescales, changes must be detectable at correspondingly fast (millisecond to tens-of-milliseconds) timescales.
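As a concrete illustration of this idea, the sketch below decodes position from simulated place-cell activity and scores self-consistency as the agreement between the observed firing vector and the firing expected at the decoded value. The Gaussian tuning curves, Poisson spiking model, and correlation score are illustrative assumptions, not the specific method used in the chapter.

```python
# Sketch: assessing the self-consistency of a distributed population code.
# Hypothetical setup: Gaussian place fields on a linear track, Poisson spiking.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_pos = 60, 200
positions = np.linspace(0.0, 1.0, n_pos)
centers = rng.uniform(0.0, 1.0, n_cells)
# Tuning curves: expected spike counts per time bin at each position.
tuning = 0.1 + 5.0 * np.exp(-((positions[None, :] - centers[:, None]) ** 2)
                            / (2 * 0.05 ** 2))

def decode(counts):
    """Maximum-likelihood position under independent Poisson firing."""
    loglik = counts @ np.log(tuning) - tuning.sum(axis=0)
    return loglik.argmax()

def self_consistency(counts):
    """Correlation between the observed counts and the counts expected
    at the decoded position: high when the ensemble 'agrees'."""
    expected = tuning[:, decode(counts)]
    return np.corrcoef(counts, expected)[0, 1]

# A coherent pattern (all cells reporting the same position) scores high;
# a scrambled pattern, in which cells disagree, typically scores much lower.
coherent = rng.poisson(tuning[:, n_pos // 2])
scrambled = rng.permutation(coherent)
print(self_consistency(coherent), self_consistency(scrambled))
```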
Representation
As an animal interacts with the world, it encounters various problems for which it must find a solution. The description of the world, and of the problems encountered within it, plays a fundamental role in how the animal behaves and arrives at a solution.
The hippocampus lies at the apex of the hierarchical organization of cortical connectivity, receiving convergent multimodal inputs that are funneled through the adjacent entorhinal cortex (Fig. 2.1). The output of the hippocampus is relayed back through the entorhinal cortex, and thus these structures are ideally placed to both store novel associations and detect predictive errors (Lavenex and Amaral, 2000; Witter et al., 2000). Indeed, while memories are likely to be stored across distributed brain regions, the learning and consolidation of explicit memories appear to depend upon the hippocampus and surrounding parahippocampal regions (Morris et al., 2003; Squire et al., 2004). However, while the anatomical substrate of such learning is becoming increasingly well defined, it remains unclear how cells act collectively within these neuronal networks to extract and store salient input correlations.
Over 50 years ago, Donald Hebb postulated a simple cellular learning rule, whereby the strength of the synaptic connection between two neurons would be increased if activity in the presynaptic neuron persistently contributed to discharging the postsynaptic neuron (Hebb, 1949). It has since been shown that such repeated pairings of synaptic events with postsynaptic action potentials (spikes), within a window of tens of milliseconds, can produce long-term changes in synaptic efficacy in many different neuronal systems, both in vitro and in vivo (Paulsen and Sejnowski, 2000; Bi and Poo, 2001).
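A minimal sketch of one standard formalization of this pairing rule, pair-based spike-timing-dependent plasticity (STDP), is given below; the exponential window and its parameter values are illustrative assumptions rather than figures from the studies cited above.

```python
# Sketch of a pair-based STDP rule: one common formalization of the
# Hebbian pairing described above. Window parameters are illustrative.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant of the window, in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pairing (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre leads post: potentiation (the Hebbian case)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post leads pre: depression
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

# Repeated pairings within tens of milliseconds accumulate into a
# long-term change; pairings far outside the window contribute ~nothing.
w = 0.5
for _ in range(60):
    w += stdp_dw(t_pre=0.0, t_post=10.0)   # pre fires 10 ms before post
print(w)  # increased relative to the initial 0.5
```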
The cellular basis of theta-band oscillation and synchrony
The limbic cortex comprises multiple synchronizing systems (Bland and Colom, 1993). Populations of cells in these structures display membrane potential oscillations as a result of intrinsic properties of membrane currents. These cells also receive inputs from other cells in the same structure and inputs from cells extrinsic to the structure, many of the latter from nuclei contributing to the ascending brainstem hippocampal synchronizing pathways. Theta-band oscillation and synchrony in the hippocampal formation (HPC) and related limbic structures are recorded as an extracellular field potential consisting of a sinusoidal-like waveform with an amplitude of up to 2 mV and a narrow-band frequency range of 3–12 Hz in mammals. The asynchronous activity termed large-amplitude irregular activity (LIA) is an irregular waveform with a broadband frequency range of 0.5–25 Hz (Leung et al., 1982). Kramis et al. (1975) were the first to formally propose the existence of two types of hippocampal theta activity, in both the rabbit and the rat (see review by Bland, 1986). One type was termed atropine-sensitive theta, since it was abolished by the administration of atropine sulfate. Atropine-sensitive theta occurred during immobility in rabbits in the normal state and occurred in both rabbits and rats during immobility produced by ethyl ether or urethane treatment. The other type of theta was termed atropine-resistant, since it was not sensitive to treatment with atropine sulfate but was abolished by anesthetics.
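To make these band definitions concrete, the following sketch classifies a simulated field-potential segment as theta-like or LIA-like from the ratio of 3–12 Hz power to broadband 0.5–25 Hz power; the synthetic signals, the use of Welch's method, and the implied threshold are assumptions for illustration only.

```python
# Sketch: distinguishing theta from LIA by relative band power.
import numpy as np
from scipy.signal import welch

FS = 1000.0  # sampling rate in Hz (illustrative)

def band_power(freqs, psd, lo, hi):
    """Sum of PSD bins in [lo, hi]; proportional to band power."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def theta_ratio(lfp):
    """Fraction of broadband (0.5-25 Hz) power lying in theta (3-12 Hz)."""
    freqs, psd = welch(lfp, fs=FS, nperseg=1024)
    return band_power(freqs, psd, 3.0, 12.0) / band_power(freqs, psd, 0.5, 25.0)

rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1.0 / FS)
theta_like = np.sin(2 * np.pi * 8.0 * t) + 0.3 * rng.standard_normal(t.size)
lia_like = rng.standard_normal(t.size)   # broadband irregular activity
print(theta_ratio(theta_like), theta_ratio(lia_like))  # high vs. low ratio
```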
Oscillatory synchronization in the gamma-band range (~30–100 Hz) has been proposed as a possible solution to the “binding problem,” i.e. the question of how the brain integrates perceptual features that are processed in distant cortical regions to generate a coherent object representation. Intracortical recordings in animals have demonstrated stimulus-specific synchronous oscillations of spatially distributed, feature-selective neurons (Eckhorn et al., 1988; Gray et al., 1989) that may provide a general mechanism for the temporal coordination of activity patterns in spatially separate regions of the cortex (Gray and Singer, 1989; Singer et al., 1997). In addition to visual feature binding, fast oscillations have been found to reflect modulations of arousal (Munk et al., 1996), perceptual integration (Fries et al., 1997), and attentional selection processes (Fries et al., 2001), and have even been proposed as a potential neural correlate of consciousness (Engel and Singer, 2001; Singer, 2001). In the mid-1990s, the first studies of gamma-band activity (GBA) in the human electroencephalogram (EEG) relied on paradigms analogous to the early animal work (Lutzenberger et al., 1995; Müller et al., 1996). Since then, investigations using scalp EEG, magnetoencephalography (MEG), and intracranial recordings have supported the functional significance of fast oscillatory activity for a wide range of human cognitive functions. The present chapter will first provide a brief overview of the current state of human GBA research related to visual perception, selective attention, and memory.
The different chapters discuss a wide range of concepts, techniques, and strategies of how to investigate the issue of information encoding in neuronal populations. The diversity clearly shows that the question is of central interest, that there is a lively competition of ideas and concepts, and that it is now possible to address the issue adequately by using new technology, the lack of which hampered discoveries in the past. The multileveled concepts described here show that there will not be a simple model or coding mechanism that can adequately explain how the brain works.
Part II: Organization of neuronal activity in neuronal populations
In this section, discussion focused on the general rules and physiological processes that govern information encoding, processing, network formation, and the laying down of memory traces.
In Chapter 2, Edward Mann and Ole Paulsen gave a detailed overview of the cellular mechanisms that underlie the establishment of oscillating networks. The fact that a multitude of specialized ion channels, interneuron subtypes, and neuronal projections are in place to establish defined oscillations in a controlled way clearly supports their concept that this mechanism is important for organizing brain processes. This point was also well illustrated by Brian Bland in Chapter 12, where he showed which “purpose-built” basal brain nuclei and projections are involved in the induction and control of theta oscillations in areas throughout the brain.
Distributed representations are the inevitable consequence of devoting large neuronal circuits to a detailed and adaptive analysis of complex information. The neocortical sheet, with its extensive cortico-cortical connectivity, is characterized by ubiquitous massive divergence and convergence and by the sparseness and reciprocity of the vast majority of connections. It therefore appears to be a dynamical structure optimized for detailed and adaptive analysis on the one hand and, on the other, for the operation of the multiple parallel neuronal processes required to optimize the speed and accuracy of information processing. The established concepts of information coding in the cortex are based on the tuning functions of many individual neurons, each thought to express its stimulus specificity independently. A second level of organization is usually attributed to the spatial relations of neurons in topographically organized representations, like those in sensory areas, and to the convergence of neuronal signals carrying information from different modalities onto “higher” areas more involved in executive functions or the formation of complex memory representations. It has been argued that the collection of neuronal signals from consecutive recording sessions can be used to reconstruct population codes, as has been done with the “population vector” analysis in motor, sensory, and memory areas of the cortex. It is clear that the success of this method relies on fixed neuronal response properties, consolidated in cortical circuits over long periods of time, such that the spatial pattern and the mixture of neurons contributing to the population response are reasonably stable.
Sensory information progresses centrally from the primary sensors in the periphery to the central neural structures that derive relevant environmental information from these sensory data and determine appropriate physiological and behavioral responses. In this chapter, I present a general theory of early olfactory sensory processing in the primary olfactory epithelium and olfactory bulb (OB). The theory depicts olfactory sensory processing as a cascade of representations, each of which exhibits characteristic physical properties and is sampled by appropriate neural mechanisms in order to construct the subsequent representation. The primary olfactory representation is mediated by the activation pattern across the population of primary olfactory sensory neurons (OSNs) in the sensory epithelium. The secondary olfactory representation is similarly mediated by the activation pattern across the population of principal neurons immediately postsynaptic to the OSNs, known as mitral cells. (Mitral cell axons diverge dramatically, projecting to roughly ten different central structures within the brain; the resulting tertiary and subsequent olfactory representations are constructed outside the olfactory bulb and are not discussed at length herein.) The transformation between the primary and secondary representations is a robust, intricate, two-stage process that corrects for artefacts that can hinder the recognition of odor qualities, regulates stimulus selectivity, and transduces the underlying mechanics from a robust but costly rate-coding scheme on a slow respiratory (theta-band) timescale to a sparse dynamical representation operating on the beta- and gamma-band timescales and suitable for integration with other central neural processes.
Information representation in neuronal populations: what is the “machine language” of the brain?
Research in the area of neuroscience and brain function has made extraordinary progress in the last 50 years, in particular with the advent of novel methods that enable us to look at neuroanatomy and neurophysiology in much finer detail, and even at the activity of living brains during the performance of tasks. However, the question of how information is actually represented and encoded by neurons remains one of the “final frontiers” of neuroscience, and surprisingly little progress has been made here. How information is encoded in the brain has captivated physicians, scientists, and philosophers for centuries. Scholars such as Leonardo da Vinci and René Descartes already had an astonishingly detailed knowledge of the anatomy of the brain, and suggested that it is the brain that processes information, and even that it harbors the seat of the personality or of the soul. However, whenever suggestions are brought forward as to how information might be processed and represented in the brain, these often turn out to be simplistic and idealistic. They rarely add up to more than a kind of “homunculus” that somehow receives the information arriving via the eyes or the ears. Such a model merely transfers the problem of information representation from the brain to the homunculus.
One problem with research on information encoding is that it is completely counter-intuitive.
Auditory cortex function beyond bottom–up feature detection
Until the 1980s, the auditory cortex was mainly conceptualized as the neuronal structure implementing the top hierarchy level of bottom–up processing of the physical characteristics (features) of auditory stimuli. In that respect, plastic changes in anatomical and functional principles were considered relevant only for developmental processes leading towards an otherwise stable adult brain. This view has since been replaced by a conceptualization of auditory cortex as a structure holding a strategic position in the interaction between bottom–up and top–down processing (for review see Irvine, 2007; Scheich et al., 2007), in particular in auditory learning (for review see Weinberger, 2004; Irvine and Wright, 2005; Ohl and Scheich, 2005).
In this chapter we review experimental evidence from gerbil and macaque auditory cortex that has led to this change of view about auditory cortex function. It will be argued that a fundamental understanding of the role of auditory cortex in learning has required moving beyond the study of simple classical conditioning and feature detection learning, for which auditory cortex does not seem to be a generally necessary structure (see below). Specifically, it will be elaborated that abstraction from the particular trained stimuli, as epitomized in the phenomenon of category learning (concept formation), is a complex but fundamental learning phenomenon for which auditory cortex is a relevant structure harboring the necessary functional organization.
Pioneering studies of motor cortex by Georgopoulos and colleagues (e.g. Georgopoulos et al., 1982) established that “population vectors,” constructed from weighted averages of the responses of single neurons, can accurately predict behavioral variables, such as movement direction. This approach has been used to study population coding in a number of cortical systems and has led to the view that cortical neurons act as independent processors of information (e.g. Gochin et al., 1994). However, some recent work has challenged this interpretation of neural population activity. For example, Schneidman et al. (2003) proposed interpreting neural ensemble activity by comparing ensemble information with the information represented by the single neurons that comprise the ensemble. In a synergistic coding scheme, ensembles encode more than the sum of the component neurons. The advantage of synergy is that there can be a massive gain in information from the activity of multiple neurons. In a redundant coding scheme, the removal of individual neurons has little effect on encoding, and thus the ensembles can be less noisy and less prone to errors. In Narayanan et al. (2005), we adapted the information-theoretic framework proposed by Schneidman et al. (2003) to measures of decoding the performance of a delayed-response task from activity in the rodent motor cortex. The predictive relationship between neural firing rates and a categorical measure of behavior, e.g. correct vs. error performance of a reaction-time task, was quantified using statistical classifiers.
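A minimal sketch of the population vector approach, assuming cosine-tuned cells whose preferred directions are averaged with rate-dependent weights, is given below; the tuning parameters and the 100-ms counting window are illustrative, not taken from the studies cited.

```python
# Sketch of a Georgopoulos-style population vector: each cell is cosine
# tuned to movement direction, and the ensemble estimate is the sum of
# preferred-direction vectors weighted by normalized firing rates.
import numpy as np

rng = np.random.default_rng(2)
n_cells = 100
preferred = rng.uniform(0, 2 * np.pi, n_cells)   # preferred directions
baseline, gain = 20.0, 15.0                      # spikes/s (illustrative)

def rates(direction):
    """Cosine tuning: peak firing when movement matches preference."""
    return baseline + gain * np.cos(direction - preferred)

def population_vector(r):
    """Weighted vector average of preferred directions."""
    w = r - r.mean()                 # weight by deviation from mean rate
    x = (w * np.cos(preferred)).sum()
    y = (w * np.sin(preferred)).sum()
    return np.arctan2(y, x) % (2 * np.pi)

true_dir = np.deg2rad(135.0)
observed = rng.poisson(rates(true_dir) * 0.1) / 0.1   # 100-ms counts -> rates
print(np.rad2deg(population_vector(observed)))  # close to 135 degrees
```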
In this paper we present a dual approximation scheme for the class-constrained shelf bin packing problem. In this problem, we are given bins of capacity 1 and n items of Q different classes, each item e with class cₑ and size sₑ. The problem is to pack the items into bins, such that two items of different classes packed in the same bin must be in different shelves. Items in the same shelf are packed consecutively. Moreover, items in consecutive shelves must be separated by shelf divisors of size d. In the shelf bin packing problem, we have to obtain a shelf packing such that the total size of the items and shelf divisors in any bin is at most 1. A dual approximation scheme must obtain a shelf packing of all items into N bins, such that the total size of all items and shelf divisors packed in any bin is at most 1 + ε for a given ε > 0, where N is the number of bins used in an optimum shelf packing. Shelf divisors are used to avoid contact between items of different classes, and each shelf can hold a set of items up to a given maximum weight. We also present a dual approximation scheme for the class-constrained bin packing problem. In this problem there is no use of shelf divisors, but each bin uses at most C different classes.
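To make the feasibility constraints concrete, here is a naive first-fit heuristic that respects them. It is emphatically not the dual approximation scheme presented in the paper; it also simplifies by giving each class a single shelf per bin and by ignoring any per-shelf weight limit.

```python
# Sketch: a naive first-fit heuristic for shelf bin packing, only to make
# the constraints concrete. NOT the paper's dual approximation scheme.
D = 0.1  # shelf divisor size d (illustrative value)

def bin_load(shelves):
    """Total size used in a bin: item sizes plus one divisor between
    each pair of consecutive shelves."""
    items = sum(sum(sizes) for sizes in shelves.values())
    divisors = D * max(len(shelves) - 1, 0)
    return items + divisors

def first_fit_shelf(items):
    """items: list of (class_label, size). Returns bins as
    {class: [sizes]} dicts, one shelf per class in each bin."""
    bins = []
    for cls, size in items:
        for shelves in bins:                       # try existing bins
            trial = {k: list(v) for k, v in shelves.items()}
            trial.setdefault(cls, []).append(size)
            if bin_load(trial) <= 1.0:             # capacity-1 constraint
                shelves.setdefault(cls, []).append(size)
                break
        else:                                      # open a new bin
            bins.append({cls: [size]})
    return bins

print(first_fit_shelf([("a", 0.4), ("b", 0.4), ("a", 0.3), ("b", 0.2)]))
```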
We describe the main structural results on number rings, that is, integral domains for which the field of fractions is a number field. Whenever possible, we avoid the algorithmically undesirable hypothesis that the number ring in question is integrally closed.
The ring ℤ of ‘ordinary’ integers lies at the very root of number theory, and when studying its properties, the concept of divisibility of integers naturally leads to such basic notions as primality and congruences. By the ‘fundamental theorem of arithmetic’, ℤ admits unique prime factor decomposition of nonzero integers. Though one may be inclined to take this theorem for granted, its proof is not completely trivial: it usually employs the Euclidean algorithm to show that the prime numbers, which are defined as irreducible elements having only ‘trivial’ divisors, are prime elements that only divide a product of integers if they divide one of the factors.
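The role of the Euclidean algorithm here can be made concrete: the extended version below produces the Bézout coefficients from which Euclid's lemma (an irreducible p dividing ab must divide a or b) follows. This is a standard textbook sketch, not material specific to this chapter.

```python
# Sketch: the extended Euclidean algorithm, the classical ingredient in
# proving that irreducibles in Z are prime elements (Euclid's lemma):
# if p | ab and p does not divide a, then gcd(p, a) = 1, so 1 = x*p + y*a,
# hence b = x*p*b + y*a*b is divisible by p.
def ext_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Example: p = 7 divides 21 * 10 = 210 and does not divide 10,
# so by the lemma it must divide 21.
g, x, y = ext_gcd(7, 10)
assert g == 1 and 7 * x + 10 * y == 1
```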
Let p be a prime number and n a positive integer, and let q = pⁿ. Let 𝔽q be the field of q elements and denote by 𝔽q* the multiplicative group of nonzero elements of 𝔽q. Assume t and u are elements of 𝔽q* with the property that u is in the subgroup generated by t. The discrete logarithm of u with respect to the base t, written logₜ u, is the least non-negative integer x such that tˣ = u.
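As a concrete, if generic, example of computing such logarithms, here is a sketch of the baby-step giant-step algorithm for the prime-field case q = p. It runs in roughly √q group operations and is unrelated to the NFS-derived methods developed below.

```python
# Sketch: baby-step giant-step discrete logarithms in F_p (prime field).
from math import isqrt

def discrete_log(t, u, p):
    """Least x >= 0 with t^x = u (mod p), or None if u is not in <t>."""
    m = isqrt(p) + 1
    # Baby steps: table of t^j for 0 <= j < m, keeping the smallest j.
    table = {}
    tj = 1
    for j in range(m):
        table.setdefault(tj, j)
        tj = tj * t % p
    # Giant steps: look for u * (t^-m)^i in the table.
    t_inv_m = pow(t, -m, p)        # modular inverse power (Python 3.8+)
    gamma = u % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * t_inv_m % p
    return None

# Example in F_101 with generator 2: find x with 2^x = 3 (mod 101).
x = discrete_log(2, 3, 101)
assert pow(2, x, 101) == 3
```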
In this paper we describe two methods to compute discrete logarithms, both of which derive from the number field sieve (NFS) factoring algorithm described in [Stevenhagen 2008] and [Lenstra and Lenstra 1993].
The analysis of many number-theoretic algorithms turns on the role played by integers which have only small prime factors; such integers are known as “smooth numbers”. To determine which of two algorithms is faster, it has turned out to be important to have accurate estimates for the number of smooth numbers in various sequences. In this chapter, we will first survey the estimates important for applications to computational number theory, both proven results and conjectures, before moving on to sketch the proofs of many of the most important results. After this, we will describe applications of smooth numbers to various problems in different areas of number theory. More complete surveys, with many more references, though with a different focus, were given by Norton [1971] and Hildebrand and Tenenbaum [1993a].
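For readers who want the definition in executable form, the sketch below counts the y-smooth numbers up to x by trial division, i.e. it directly computes the quantity commonly written Ψ(x, y). It is a brute-force illustration of what is being estimated, not one of the algorithms surveyed.

```python
# Sketch: counting y-smooth numbers up to x by trial division.
# Fine for small x; the point is the definition, not the speed.
def is_smooth(n, y):
    """True if every prime factor of n is <= y."""
    for p in range(2, y + 1):
        while n % p == 0:
            n //= p
        if n == 1:
            return True
    return n == 1

def psi(x, y):
    """Psi(x, y) = #{1 <= n <= x : n is y-smooth}."""
    return sum(1 for n in range(1, x + 1) if is_smooth(n, y))

# For x = y^u, Psi(x, y)/x tends to Dickman's rho(u); with u = 2 about
# 30% of the integers up to 10^4 are 10^2-smooth (rho(2) ~ 0.3069).
print(psi(10**4, 10**2) / 10**4)
```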
This article has two target audiences. For those primarily interested in computational number theory, I have tried to write this paper so that they can better understand the main tools used in analyzing algorithms. For those primarily interested in analytic problems, I have tried to give concise introductions to simplified versions of various key computational number theory algorithms, and to highlight applications and open counting questions. Besides the danger of never quite getting it right for either reader, I have had to confront the difficulty of the differences in notation between the two areas, and to work with some standard concepts in one area that might be puzzling to people in the other. Please consult the appendix for notation that is non-standard for one of the two fields.
This article is not meant to be a complete survey of all progress in this very active field. Thus I have not referred to many excellent works that are not entirely pertinent to my view of the subject, nor to several impressive works that have been superseded in the aspects in which I am interested.
We illustrate recent developments in computational number theory by studying their implications for solving the Pell equation. We shall see that, if the solutions to the Pell equation are properly represented, the traditional continued fraction method for solving the equation can be significantly accelerated. The most promising method depends on the use of smooth numbers. As with many algorithms depending on smooth numbers, its run time can at present be established only conjecturally; giving a rigorous analysis is one of the many open problems surrounding the Pell equation.
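For reference, the traditional continued fraction method itself is short. The sketch below computes the fundamental solution of x² − dy² = 1 using the standard recurrence for the continued fraction of √d; this is the baseline that the smooth-number techniques accelerate, not the accelerated algorithm itself.

```python
# Sketch: solving the Pell equation x^2 - d*y^2 = 1 by scanning the
# convergents of the continued fraction expansion of sqrt(d).
from math import isqrt

def solve_pell(d):
    """Smallest positive solution (x, y) of x^2 - d*y^2 = 1,
    for non-square d."""
    a0 = isqrt(d)
    m, c, a = 0, 1, a0                   # state of the CF recurrence
    p_prev, p = 1, a0                    # convergent numerators
    q_prev, q = 0, 1                     # convergent denominators
    while p * p - d * q * q != 1:
        m = a * c - m                    # m_{k+1} = a_k*c_k - m_k
        c = (d - m * m) // c             # c_{k+1} = (d - m^2)/c_k
        a = (a0 + m) // c                # next partial quotient
        p, p_prev = a * p + p_prev, p    # update convergents
        q, q_prev = a * q + q_prev, q
    return p, q

# d = 61 is the classic hard small case (Fermat's challenge):
print(solve_pell(61))   # (1766319049, 226153980)
```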
The English mathematician John Pell (1611–1685) has nothing to do with the equation. Euler (1707–1783) mistakenly attributed to Pell a solution method that had in fact been found by another English mathematician, William Brouncker (1620–1684), in response to a challenge by Fermat (1601–1665); but attempts to change the terminology introduced by Euler have always proved futile.