State space models for neural population spike trains

Neural computations at all scales of evolutionary and behavioural complexity are carried out by recurrently connected networks of neurons that communicate with each other, with neurons elsewhere in the brain, and with muscles through the firing of action potentials or “spikes.” To understand how nervous tissue computes, it is therefore necessary to understand how the spiking of neurons is shaped both by inputs to the network and by the recurrent action of existing network activity. Whereas most historical spike data were collected one neuron at a time, new techniques including silicon multielectrode array recording and scanning 2-photon, light-sheet or light-field fluorescence calcium imaging increasingly make it possible to record spikes from dozens, hundreds and potentially thousands of individual neurons simultaneously. These new data offer unprecedented empirical access to network computation, promising breakthroughs both in our understanding of neural coding and computation (Stevenson & Kording 2011) and in our ability to build prosthetic neural interfaces (Santhanam et al. 2006). Fulfillment of this promise will require powerful methods for data modeling and analysis, able to capture the structure of statistical dependence of network activity across neurons and time.
Probabilistic latent state space models (SSMs) are particularly well-suited to this task. Neural activity often appears stochastic, in that repeated trials under the same controlled experimental conditions can evoke quite different patterns of firing. Some part of this variation may reflect differences in the way the computation unfolds on each trial. Another part might reflect noisy creation and transmission of neural signals. Yet more may come from chaotic amplification of small perturbations. As computational signals are thought to be distributed across the population (in a “population code”), variation in the computation may be distinguished by its common impact on different neurons and the systematic evolution of these common effects in time.
An SSM is able to capture such structured variation through the evolution of its latent state trajectory. This latent state provides a summary description of all factors modulating neural activity that are not observed directly. These factors could include processes such as arousal, attention, cortical state (Harris & Thiele 2011) or behavioural states of the animal (Niell & Stryker 2010; Maimon 2011).
Physiological control systems typically involve multiple interacting variables operating in feedback loops that enhance an organism's ability to self-regulate and respond to internal and external disturbances. The resulting multivariate time series often exhibit rich dynamical patterns that are altered under pathological conditions, and are therefore informative of health and disease (Ivanov et al. 1996; Costa et al. 2002; Stein et al. 2005; Nemati et al. 2011). Previous studies using nonlinear indices of heart rate (HR) variability (i.e., beat-to-beat fluctuations in HR) (Ivanov et al. 1996; Costa et al. 2002) have shown that subtle changes in the dynamics of HR may act as an early sign of adverse cardiovascular outcomes (e.g., mortality after myocardial infarction (Stein et al. 2005)) in large patient cohorts. However, these studies fall short of assessing the multivariate dynamics of the vital signs (such as HR, blood pressure, respiration, etc.), and do not yield any mechanistic hypotheses for the observed deteriorations of normal variability. This shortcoming is in part due to the inherent difficulty of parameter estimation in physiological time series, where one is confronted by nonlinearities (including rapid regime changes), measurement artifacts, and/or missing data, which are particularly prominent in ambulatory recordings (due to patient movements) and bedside monitoring (due to equipment malfunction).
Chapter 11 describes a framework for unsupervised discovery of shared dynamics in multivariate physiological time series from large patient cohorts. A central premise of our approach is that even within heterogeneous cohorts (with respect to demographics, genetic factors, etc.) there are common phenotypic dynamics that a patient's vital signs may exhibit, reflecting underlying pathologies (e.g., deterioration of the baroreflex system) or temporary physiological state changes (e.g., postural changes or sleep/wake related changes in physiology). We used a switching state space model, in particular a switching vector autoregressive (VAR) model, to automatically segment the time series into regions with similar dynamics, i.e., time-dependent rules describing the evolution of the system state. The state space modeling approach allows for incorporation of physiologically constrained linear models (e.g., via linearization of the nonlinear dynamics around equilibrium points of interest) to derive mechanistic explanations of the observed dynamical patterns, for instance, in terms of directional influences among the interacting variables (e.g., baroreflex gain or chemoreflex sensitivity).
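Written out in generic form (the notation here is illustrative, not taken from the chapter), a switching VAR model of order p selects its dynamics through a discrete label s_t ∈ {1, …, K}:

\[
y_t = \sum_{i=1}^{p} A_i^{(s_t)} y_{t-i} + e_t, \qquad e_t \sim \mathcal{N}\bigl(0, Q^{(s_t)}\bigr),
\]

so that each regime contributes its own autoregressive matrices A_i^{(s_t)} and noise covariance Q^{(s_t)}, and segmenting the time series amounts to inferring the label sequence s_1, …, s_T.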
Loss of function is a common problem in the real world. Human beings often suffer from various diseases and traumas that involve visual, auditory and motor impairments, for example blindness, deafness, brainstem stroke, ALS (amyotrophic lateral sclerosis) and spinal cord injury. Current investigations seek an interface between the brain and external devices to restore these functions. Technical terms such as “neural prostheses,” “brain–machine interface (BMI)” and “brain–computer interface (BCI)” now appear broadly in up-to-date research articles as well as other media.
Rapid developments in biotechnology have given us the ability to measure and record population neuronal activity with more precision and accuracy than ever before, allowing researchers to perform detailed analyses that would have been impossible just a few years ago. In particular, with this advancement in technology, it is now possible to construct an interface that bridges the gap between neuronal spiking activity and external devices controlling real-world applications (Lebedev & Nicolelis 2006; Schwartz et al. 2006; Homer et al. 2013). The primary goal of this research is to restore motor function to physically disabled patients (Hochberg et al. 2006): spike recordings would be “decoded” to provide an external prosthetic device with a neurally controlled signal, in the hope that movement can be restored in its original form.
Pioneering work conducted over the past decade in several research groups demonstrated that hand movement can be represented by the neural activity of a population of cells in the monkey motor cortex (Schwartz & Moran 1999; Wessberg et al. 2000; Serruya et al. 2002). These groups also developed various mathematical algorithms to infer the hand motion with real-time performance. The results were noteworthy: the inference was fast and the estimation was accurate enough to perform a neural control task. Their work also demonstrated that direct neural control tasks can be successfully accomplished using the proposed algorithms (Wessberg et al. 2000; Serruya et al. 2002; Taylor et al. 2002). However, there are still various issues that need to be addressed, such as long-term stability of the microelectrode array implants, efficacy and safety, low power consumption, and mechanical reliability (Donoghue 2002; Chestek et al. 2007; Homer et al. 2013).
In this chapter we discuss predictive modeling of time series obtained by functional magnetic resonance imaging (fMRI), representing an important case of spatiotemporal data. Following its development in the early 1990s, fMRI has become a well-established approach to investigating brain activity in vivo (Huettel et al. 2004), providing temporally and spatially resolved recordings of the “blood oxygen level dependent” (BOLD) signal of neural tissue. fMRI time series consist of a temporal sequence of scans of the brain and the surrounding space (discretized into voxels), such that the resulting data sets may be stored as vector time series.
The practical work with fMRI time series poses considerable challenges in many aspects, including the huge dimensionality of the data, which is usually recorded from several tens of thousands of voxels, and the plethora of artifacts and contaminations disturbing the data (Strother 2006). Further difficulties arise from the low temporal sampling frequency, typically well below 1 Hz, and the indirect relationship between the BOLD signal and the underlying neural processes.
Currently available approaches to fMRI time series analysis may be broadly classified into three groups:
• exploratory methods, such as cluster analysis (Goutte et al. 1999), principal component analysis (PCA) (Anderson et al. 1999) and independent component analysis (ICA) (McKeown 2000);
• massively univariate (voxel-wise) regression methods, implemented in software packages such as statistical parametric mapping (SPM) (Friston et al. 1994), the FMRIB Software Library (FSL) (Smith et al. 2004), or the Analysis of Functional NeuroImages (AFNI) package (Cox 1996);
• generative dynamic models, based on specific assumptions regarding the properties of the underlying neural masses and the biophysical processes which produce the experimental data; as examples we mention dynamic causal modeling (DCM) (Friston et al. 2003) and the hemodynamic state space model (SSM) of Riera et al. (2004a).
Among these three groups of methods, the third may be interpreted as an example of predictive modeling, while for the second group this is possible only in a very limited sense, and essentially impossible for the first group.
In the modern digital age, gigantic amounts of data are recorded and collected, and processing and analyzing such “big data” remains a great challenge. Many neurophysiological, physiological, clinical and behavioral data are dynamic by the nature of the experiments or the way the data are collected. These signals can be complex, noisy, and often multivariate and multimodal. How to develop efficient statistical methods that characterize these data and extract information revealing the underlying biological or physiological mechanisms remains an active and important research topic. In recent years, numerous advanced computational statistics, signal processing, and machine-learning methods have been developed, and there is rapidly growing interest in applying these methods to data analysis in neuroscience, physiology and medicine.
The state space model (SSM) refers to a class of probabilistic graphical models (Koller & Friedman 2009) that describe the probabilistic dependence between a latent state variable and an observed measurement. The state or the measurement can be either continuous or discrete. The term “state space” originated in the 1960s in the area of control engineering (Kalman 1960). The SSM provides a general framework for analyzing deterministic and stochastic dynamical systems that are measured or observed through a stochastic process. The SSM framework has been successfully applied in engineering, statistics, computer science and economics to solve a broad range of dynamical systems problems. The most celebrated examples of the SSM are the linear dynamical system, with its associated inference algorithm the Kalman filter (Kalman 1960), and the hidden Markov model (HMM) (Rabiner 1989). Despite plenty of successful examples of applying state space analyses to neural and clinical data, many challenges remain, whether in developing new mathematical theories and statistical models, in developing efficient algorithms tuned for large-scale data sets, or in catering for highly complex (multimodal or multiscale) and nonstationary data. In order to pave the way for further advancement in these research areas, it is important to recognize these challenges and exchange new ideas among researchers and practitioners.
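As a concrete illustration of the linear dynamical system and the Kalman filter mentioned above, the following minimal Python sketch filters a one-dimensional noisy random walk; the model matrices and the toy data are assumptions made purely for illustration, not taken from any chapter:

```python
import numpy as np

# Linear-Gaussian SSM:
#   state:       x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
#   observation: y_t = C x_t + v_t,      v_t ~ N(0, R)

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Return the filtered state mean at every time step."""
    x, P = x0, P0
    means = []
    for yt in y:
        # predict step: propagate mean and covariance through the dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # update step: correct the prediction with the new observation
        S = C @ P @ C.T + R               # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (yt - C @ x)
        P = P - K @ C @ P
        means.append(x.copy())
    return np.array(means)

# Toy example: a scalar random walk observed in heavy noise.
rng = np.random.default_rng(1)
A = C = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[1.0]])
truth = np.cumsum(rng.normal(0.0, 0.1, size=200))
y = truth[:, None] + rng.normal(0.0, 1.0, size=(200, 1))
x_hat = kalman_filter(y, A, C, Q, R, x0=np.zeros(1), P0=np.eye(1))
```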
It is important to point out that the modeling and analysis principles discussed in this book are general and equally valuable for time series analyses in many other disciplines, such as climatology, politics, finance, chemical engineering, consumer marketing and computer networking.
In probability theory and statistics, a random variable is a variable whose value is subject to variations due to chance. A random variable can take on a set of possible different values. The mathematical function describing the possible values of a random variable and their associated probabilities is known as a probability distribution. Random variables can be discrete, that is, taking any of a specified finite or countable list of values, endowed with a probability mass function (pmf); or continuous, taking any numerical value in an interval or collection of intervals, via a probability density function (pdf); or a mixture of both types. From either the pmf or the pdf, one can characterize the cumulant or moment statistics of the random variables, such as the mean, variance, covariance, skewness and kurtosis.
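For instance, for a continuous random variable X with pdf p(x), the first two of these statistics are

\[
\mathbb{E}[X] = \int x\, p(x)\, dx, \qquad \operatorname{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2,
\]

with the integral replaced by a sum over the pmf in the discrete case.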
To represent the evolution of a random variable over time, a random process is further introduced. A stochastic process is a collection of random variables, and is the probabilistic counterpart to a deterministic process. Whereas a deterministic process is governed by, say, an ordinary differential equation, there is some indeterminacy in a stochastic process: given identical initial conditions, the evolution of the process may vary due to the presence of noise. In discrete time, a stochastic process involves a sequence of random variables and the time series associated with these random variables. A stochastic process is said to be strictly stationary if its joint probability distribution does not change when shifted in time, whereas it is said to be wide-sense stationary (WSS) if its first moment and covariance statistics do not vary with respect to time. Any strictly stationary process which has a mean and a covariance is also WSS.
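In symbols, a discrete-time process {x_t} is WSS when

\[
\mathbb{E}[x_t] = \mu \quad \text{and} \quad \operatorname{Cov}(x_t, x_{t+\tau}) = C(\tau) \quad \text{for all } t,
\]

i.e., the mean is constant and the covariance depends only on the lag τ, not on absolute time.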
A Markov chain (or Markov process), named after the Russian mathematician Andrey Markov (1856–1922), is a random process that undergoes transitions from one state to another on a state space. The Markov chain is memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of memorylessness is called the Markov property.
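In symbols, the Markov property for a discrete-time chain reads

\[
P(x_{t+1} \mid x_t, x_{t-1}, \dots, x_1) = P(x_{t+1} \mid x_t).
\]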
Traditional approaches to physiological signal processing have focused on highly sensitive but less specific detection techniques, generally with the expectation that an expert will overread the results and deal with the false positives. However, acquisition of physiological signals has become increasingly routine in recent years, and clinicians are often fed large flows of data which can rapidly become unmanageable and lead to missed important events and alarm fatigue (Aboukhalil et al. 2008).
Ignoring for now the nascent world of the quantified self, which itself has the potential to swamp medical practitioners with all manner of noise, there are two obvious examples of this paradigm. First, each intensive care unit (ICU) generates an enormous quantity of physiological data, up to 1 GB per person per day (Clifford et al. 2009). More than 5 million patients are admitted annually to ICUs in the United States, with Europe and the rest of the world rapidly catching up (Mullins et al. 2013; Rhodes et al. 2012). This can be attributed in part to the aging global population. ICU patients are a heterogeneous population, but all require a high level of acute care, with numerous bedside monitors. Patients in the ICU often require mechanical ventilation or cardiovascular support, and invasive monitoring modalities and treatments (e.g., hemodialysis, plasmapheresis and extracorporeal membrane oxygenation). With an ever increasing reliance on technology to keep critically ill patients alive, the number of ICU beds in the US has grown significantly, to an estimated 6000 or more (Rhodes et al. 2012). Assuming an average ICU bed occupancy of 68.2% (Wunsch et al. 2013), the sum total of all bedside data generated in the US is over a petabyte each year. Multi-terabyte ICU databases are therefore becoming available (Saeed et al. 2011), and include parameters such as the ECG, the photoplethysmogram, arterial blood pressure and respiratory effort.
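As a rough back-of-the-envelope check, taking the figures above at face value: 6000 beds × 0.682 occupancy × 365 days × 1 GB per bed per day ≈ 1.5 × 10^6 GB, or roughly 1.5 PB per year, consistent with the petabyte-scale estimate.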
A time series is an ordered collection of observations y1:T ≡ {y1, …, yT}. Typical tasks in time series analysis are the prediction of future observations (for example in weather forecasting) or the extraction of lower-dimensional information embedded in the observations (for example in automatic speech recognition). In neuroscience, common applications are related to the latter, for example the detection of epileptic events or artifacts from EEG recordings (Boashash & Mesbah 2001; Rohalova et al. 2001; Tarvainen et al. 2004; Chiappa & Barber 2005, 2006), or the detection of intention in a collection of neural recordings for the purpose of control (Wu et al. 2003). Time series models commonly make the assumptions that the recent past is more informative than the distant past and that the observations are obtained from a noisy measuring device or from an inherent stochastic system. Often, in models of physical systems, additional knowledge about the properties of the time series is built into the model, including any known physical laws or constraints; other forms of prior knowledge may relate to whether the process underlying the time series is discrete or continuous. Markov models are classical models which allow one to build in such assumptions within a probabilistic framework.
A graphical depiction
A probabilistic model of a time series y1:T is a joint distribution p(y1:T). Commonly, the structure of the model is chosen to be consistent with the causal nature of time. This is achieved with Bayes' rule, which states that the distribution of the variable x, given knowledge of the state of the variable y, is given by p(x|y) = p(x, y)/p(y), see for example Barber (2012). Here p(x, y) is the joint distribution of x and y, while p(y) = ∫ p(x, y)dx is the marginal distribution of y (i.e., the distribution of y not knowing the state of x).
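Consistency with the causal nature of time is usually made explicit by factorizing the joint distribution with the chain rule of probability,

\[
p(y_{1:T}) = p(y_1) \prod_{t=2}^{T} p(y_t \mid y_{1:t-1}),
\]

and a first-order Markov model then encodes the assumption that the recent past is most informative by truncating each conditional to p(y_t | y_{t−1}).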
During the process of learning, the brain undergoes changes that can be observed at both the cellular and systems levels. Being able to accurately track simultaneous changes in behavior and neural activity is key to understanding how the brain learns new tasks and information.
Learning is studied in a large number of experimental paradigms involving, for example, testing effects on learning of brain lesions (Whishaw & Tomie 1991; Dias et al. 1997; Dusek & Eichenbaum 1997; Wise & Murray 1999; Fox et al. 2003; Kim & Frank 2009; Kayser & D'Esposito 2013), attentional modulation (Cook & Maunsell 2002; Hudson et al. 2009), optogenetic manipulation (Warden et al. 2012) and pharmacological interventions (Stefani et al. 2003). Studies are also performed to understand how learning is affected by aging (Harris & Wolbers 2012), stroke (Panarese et al. 2012) and psychological conditions including autism (Solomon et al. 2011) and synesthesia (Brang et al. 2013). The learning process is also studied in relation to changes in neural activity in specific brain regions (Jog et al. 1999; Wirth et al. 2003; Suzuki & Brown 2005; Brovelli et al. 2011; Mattfeld & Stark 2011).
In most cases, the response accuracy of a subject is binary, with a one representing a correct response and a zero representing an incorrect response. In its raw form, binary response accuracy can be difficult to visualize, especially if the time series is long, and the exact time when learning occurs can be difficult to identify. Typically, an experimenter is interested in deriving two things from the learning data: a learning trial and a learning curve. The first item is the time point at which responses significantly change relative to a baseline value such as chance performance. The second is estimation of a curve that defines the probability of a correct response as a function of trial. From these estimates, it is possible to compare changes in learning with other measurements such as, for example, localized brain oxygen consumption (via fMRI) or electrical activity.
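To make the second item concrete, the sketch below estimates a learning curve from simulated binary responses using a random-walk latent state with Bernoulli observations and a crude Gaussian approximate filter. It is an illustrative simplification in the spirit of state space analyses of learning, not the exact algorithm of any chapter; the function names and parameter values are invented for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def filter_learning_curve(y, sigma2_e=0.05, x0=0.0, v0=1.0):
    """Approximate forward filter for a random-walk state x_t with
    Bernoulli observations y_t ~ Bern(sigmoid(x_t)); returns the
    estimated probability of a correct response at each trial."""
    T = len(y)
    x = np.zeros(T)
    xp, vp = x0, v0
    for t in range(T):
        vp_t = vp + sigma2_e                     # predict: variance grows by the random walk
        p = sigmoid(xp)                          # predicted P(correct) before seeing y[t]
        x[t] = xp + vp_t * (y[t] - p)            # approximate posterior mode update
        v_t = 1.0 / (1.0 / vp_t + p * (1 - p))   # approximate posterior variance
        xp, vp = x[t], v_t
    return sigmoid(x)

# Simulated session: near-chance performance early, learned behavior late.
rng = np.random.default_rng(0)
p_true = sigmoid(np.linspace(-1.5, 2.5, 100))
y = rng.binomial(1, p_true)
p_hat = filter_learning_curve(y)                 # smooth learning curve estimate
```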
Modeling physiological systems by control systems theory, advanced signal processing, and parametric modeling and estimation approaches has been of focal importance in biomedical engineering (Khoo 1999; Marmarelis 2004; Xiao et al. 2005; Porta et al. 2009). In computerized cardiology, various types of data such as the electrocardiogram (ECG), arterial blood pressure (ABP, measured by invasive arterial line catheters or noninvasive finger cuffs), and respiratory effort (RP; e.g., measured by plethysmography or by piezoelectric respiratory belt transducers) are recorded, digitized and saved to a computer to be available for off-line analysis. Modeling autonomic cardiovascular control using mathematical approaches is helpful for understanding and assessing autonomic cardiovascular functions in healthy or pathological subjects (Berntson et al. 1997; Parati et al. 2001; Stauss 2003; Eckberg 2008). Continuous quantification of heartbeat dynamics, as well as their interactions with other cardiovascular measures, has been widely studied in the past decades (Baselli et al. 1988; Saul & Cohen 1994; Chon et al. 1996; Barbieri et al. 2001; Porta et al. 2002).
A central goal in biomedical engineering applied to cardiovascular control is to develop quantitative measures and informative indices that can be extracted from physiological measurements. Assessing and monitoring informative physiological indices is an important task in both clinical practice and laboratory research. Specifically, a major challenge in cardiovascular engineering is to develop statistical models and apply signal processing tools to investigate various cardiovascular-respiratory functions, such as heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreflex sensitivity (BRS). In the literature, numerous methods have been proposed for quantitative HRV analysis (Malpas 2002). Two types of standard methods are widely used: one is time-domain analysis based on heartbeat intervals; the other is frequency-domain analysis (Baselli et al. 1985). In addition, nonlinear system identification methods have also been applied to heartbeat interval analysis (Chon et al. 1996; Christini et al. 1995; Zhang et al. 2004; Xiao et al. 2005). Examples of higher-order characterization for cardiovascular signals include nonlinear autoregressive (AR) models, Volterra–Wiener series expansion, and Volterra–Laguerre models (Korenberg 1991; Marmarelis 1993; Akay 2000).
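As a small illustration of the time-domain approach, the following snippet computes three standard HRV indices (SDNN, RMSSD and pNN50) from a series of R-R intervals; the synthetic data and helper names here are our own, for illustration only.

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of all R-R (NN) intervals, in ms."""
    return np.std(rr_ms, ddof=1)

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return np.sqrt(np.mean(diffs ** 2))

def pnn50(rr_ms):
    """Percentage of successive differences exceeding 50 ms."""
    diffs = np.abs(np.diff(rr_ms))
    return 100.0 * np.mean(diffs > 50.0)

# Example with synthetic R-R intervals around 800 ms (75 bpm).
rng = np.random.default_rng(2)
rr = 800 + rng.normal(0, 40, 300)
print(sdnn(rr), rmssd(rr), pnn50(rr))
```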
A fundamental challenge in neuroscience is to understand how decisions are computed in neural circuits. One popular approach to this problem is to record from single neurons in brain regions that lie between primary sensory and motor regions while an animal performs a perceptual decision-making task. Typical tasks require the animal to integrate noisy sensory evidence over time in order to make a binary decision about the stimulus. Such experiments have the tacit goal of characterizing the dynamics governing the transformation of sensory information into a representation of the decision. However, recorded spike trains do not reveal these dynamics directly; they represent noisy, incomplete emissions that reflect the underlying dynamics only indirectly.
This dissociation between observed spike trains and the unobserved dynamics governing neural population activity has posed a key challenge for using neural measurements to gain insight into how the brain computes decisions. Recording decision-related neural activity has certainly shed much light upon which parts of the brain are involved in forms of decision making and what sorts of roles each area plays. But without direct access to the dynamics underlying single-trial decision formation, most analyses of decision-related neural data rely on estimating spike rates by averaging over trials (and sometimes over neurons as well). Although the central tendency is of course a reasonable starting point in data analysis, sole reliance on the mean can obscure single-trial dynamics when substantial stochastic components are present. For example, as discussed in depth in this chapter, averaging a set of step functions whose steps occur at different times on different trials will yield a trace that ramps continuously, masking the presence of discrete dynamics. Although the majority of averaging and regression-based analyses used in the field are straightforward to conceptualize and easy to apply to data, they provide limited insight into the dynamics that may govern how individual decisions are made. State space methods, on the other hand, are particularly well suited for analyzing the neural representation of decisions (or other cognitive variables). The latent state can account for unobserved, trial-varying dynamics, and the dynamics placed on the state can be directly linked to models of decision making.
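This step-averaging effect is easy to reproduce in simulation. The short sketch below (all parameters invented for illustration) builds single-trial step functions with trial-varying step times and shows that their average ramps smoothly:

```python
import numpy as np

# Each trial is a discrete step from 0 to 1 at a trial-specific time;
# the trial average nevertheless ramps gradually upward.
rng = np.random.default_rng(3)
n_trials, n_time = 500, 100
step_times = rng.integers(20, 80, size=n_trials)  # step time varies by trial
trials = np.zeros((n_trials, n_time))
for i, s in enumerate(step_times):
    trials[i, s:] = 1.0                           # single-trial step function
trial_average = trials.mean(axis=0)               # smooth, continuous ramp
```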
Analysis of noninvasive electrophysiological time series (for example, using electroencephalography, EEG) typically culminates in the report of certain data features. These features, namely event-related potentials and spectral compositions, have together provided a rich characterization of observable brain dynamics: the N100, the mismatch negativity, the P300, the alpha wave. But what do these features actually represent? What unobservable neural processes generate these different types of features?
Interestingly, the field of functional magnetic resonance imaging (fMRI) has had to grapple with a similar type of question. There, equipped with a very indirect, blood-oxygen-level dependent (BOLD) measure of neural function, scientists have used animal preparations and conjoint electrophysiology to show that BOLD responses reflect synaptic input with a particular transmission preference for fast-frequency signals (Logothetis et al. 2001). So if electrophysiological responses serve as “ground truth” for fMRI, what serves as “ground truth” for EEG? Biophysically, EEG represents the effects of summed currents around a population (hundreds of thousands) of active neurons. When measured at the scalp, EEG is specifically thought to reflect the average depolarization of pyramidal cells, owing to their regularly oriented dendrites perpendicular to the cortical surface. So the question then is: from where do these currents arise? Dynamic causal models (DCM) (David & Friston 2003; Kiebel et al. 2008; Moran et al. 2007, 2008) formalize this question using a state space representation of depolarizing and hyperpolarizing current inputs. These states are biophysical, time-dependent descriptions of what is likely occurring in a population of interacting neurons. The models are supposed to embody “ground truth”, or in other words represent a plausible and empirically informed set of operations that our neural wet-ware performs.
In animal studies, cellular physiological investigations such as microdialysis, single-cell or patch-clamp recordings can inform macroscopic electrophysiological measurements of population activity (Fellous et al. 2001). These types of experiments thus link low-level synaptic activity and the emergent dynamics of the cell population. In humans, these experiments are usually not feasible since, with rare exceptions (e.g., pre-surgical evaluation in epilepsy patients), experimental measurements must be noninvasive. One goal of DCM for EEG is to enable inference about low-level physiological processes using noninvasive recordings from the human brain.
Elucidating how the human brain is structured and how it functions is a fundamental aim of human neuroscience. To achieve this aim, the activity of the human brain has been measured using noninvasive neuroimaging techniques, the most popular of which is functional magnetic resonance imaging (fMRI) (Ogawa et al. 1990). fMRI signals are obtained at a spatial resolution of typically 3 mm and measure changes in blood flow and blood oxygen consumption, whose temporal dynamics are slower than those of neuronal electrical activity, resulting in a poor temporal resolution of the order of seconds. In contrast, magnetoencephalography (MEG) and electroencephalography (EEG) can detect changes in neuronal activity through millisecond measurement of magnetic and electric fields, respectively, outside the skull (Hämäläinen et al. 1993; Nunez & Srinivasan 2006). The high temporal resolution of MEG (and EEG) is useful, especially for studying the dynamic integration of functionally specialized brain regions, which is a subject of growing interest in human neuroscience (de Pasquale et al. 2012).
The major problem of MEG is that spatial brain activity patterns are not easily interpretable from sensor measurements. This is because the magnetic fields produced by neuronal current sources are superimposed, forming rather uninterpretable spatial patterns of signals on the sensors. Estimating the position and intensity of these current sources from the sensor measurements is called source reconstruction, or source localization. Solving the source reconstruction problem allows the mapping of temporally dynamic electrical activities in the human brain (Baillet et al. 2001). Since how brain regions are dynamically integrated to produce a variety of functions is of great interest in human neuroscience research, the mission of MEG source reconstruction is not only to localize the positions of the current sources, but also to identify directed interactions between these sources. A possible approach to this involves constructing a dynamic model of brain electrical activities, as well as developing an estimation algorithm for the source positions and interactions that are parametrized in this model.
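In its most common form the forward model underlying source reconstruction is linear; in generic notation (not specific to this chapter),

\[
y(t) = G\, j(t) + \varepsilon(t),
\]

where y(t) collects the sensor measurements at time t, j(t) the unknown source current amplitudes, G the lead field matrix obtained from a biophysical forward model of the head, and ε(t) the sensor noise. Source reconstruction is then the ill-posed inverse problem of recovering j(t) from y(t), and placing a dynamic model on j(t) turns it into a state space estimation problem.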
In the neocortex, information is represented by patterns of spike activity occurring over populations of neurons. A fundamental task in neuroscience is to understand how information is encoded and transmitted in neural population activity. In comparison with single unit activity, population activity is more information-rich and robust in representation. With the advancement of multielectrode array and imaging technologies, neuroscientists have been able to record large populations of neurons at fine temporal and spatial resolution. In the past few decades, probabilistic modeling and Bayesian methods have become increasingly popular in the analysis of neural codes (Ma et al. 2006; Yu et al. 2007, 2009; Kemere et al. 2008; Gerwinn et al. 2009; Pillow et al. 2011).
State space analyses (Chen et al. 2010, 2013) provide a powerful framework for modeling temporal neuronal dynamics and behavior. The state space model (SSM) consists of two basic equations. The state equation characterizes the dynamics of the latent state variable, which is either known or modeled by prior knowledge. The observation equation captures the likelihood of the observations conditional on the latent state and other observed variables. Chapters 1 and 2 of this volume provide a detailed account of the mathematical framework.
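In generic notation (illustrative, not tied to a particular chapter), the two equations read

\[
x_t = f(x_{t-1}) + w_t \quad \text{(state equation)}, \qquad y_t \sim p(y_t \mid x_t) \quad \text{(observation equation)},
\]

where x_t denotes the latent state, y_t the observation at time t, and w_t the state noise.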
In this chapter, we present two examples of state space analysis of rat hippocampal population codes. The first example aims to decode unsorted neuronal spikes, and the second aims to uncover hippocampal population codes using a hidden Markov model (HMM). The common idea is to use probabilistic modeling and Bayesian inference to discover spatiotemporal structures of hippocampal ensemble spike activity.
Decoding unsorted neuronal spikes from the rat hippocampus
Overview
Despite rapid progress in the field of neural decoding, several challenges remain. First, it is not clear how the spiking activity of individual neurons can reliably represent information; this is often reflected by complex neuronal tuning curves, which are poorly described by simple parametric models. Second, most population decoding methods are based on sorted single units, which inevitably suffer from various spike-sorting errors, especially when only a few wires or probes are available (Wehr et al. 1999; Harris et al. 2000; Wood et al. 2004; Won et al. 2007).