
Introduction

Published online by Cambridge University Press:  05 October 2015

Zhe Chen
Affiliation: New York University

Summary

A brief overview of state space analysis

Mathematical background

In probability theory and statistics, a random variable is a variable whose value is subject to variation due to chance. A random variable can take on any of a set of possible values. The mathematical function describing the possible values of a random variable and their associated probabilities is known as a probability distribution. Random variables can be discrete, taking any of a specified finite or countable list of values and endowed with a probability mass function (pmf); continuous, taking any numerical value in an interval or collection of intervals and described by a probability density function (pdf); or a mixture of both types. From either the pmf or the pdf, one can characterize the moment or cumulant statistics of the random variable, such as the mean, variance, covariance, skewness, and kurtosis.
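
For example, for a discrete random variable X with pmf p(x), or a continuous one with pdf f(x), the first two moments take the standard forms

\[
\mathbb{E}[X] = \sum_x x\,p(x) \ \text{(discrete)}, \qquad \mathbb{E}[X] = \int x\,f(x)\,dx \ \text{(continuous)},
\]
\[
\operatorname{Var}(X) = \mathbb{E}\big[(X-\mathbb{E}[X])^2\big], \qquad \operatorname{Cov}(X,Y) = \mathbb{E}\big[(X-\mathbb{E}[X])(Y-\mathbb{E}[Y])\big].
\]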

To represent the evolution of a random variable over time, a random process is introduced. A stochastic process is a collection of random variables indexed by time, and it is the probabilistic counterpart to a deterministic process. Whereas a deterministic process is governed by, say, an ordinary differential equation, a stochastic process contains some indeterminacy: given the identical initial condition, the evolution of the process may vary because of the presence of noise. In discrete time, a stochastic process consists of a sequence of random variables and the time series associated with them. A stochastic process is said to be strictly stationary if its joint probability distribution does not change when shifted in time, whereas it is said to be wide-sense stationary (WSS) if its first moment and covariance statistics do not vary with time. Any strictly stationary process that has a finite mean and covariance is also WSS.
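
In symbols, a process \(\{X_t\}\) is strictly stationary if, for every k, every set of time points \(t_1, \ldots, t_k\), and every shift \(\tau\),

\[
(X_{t_1}, \ldots, X_{t_k}) \;\overset{d}{=}\; (X_{t_1+\tau}, \ldots, X_{t_k+\tau}),
\]

and it is WSS if

\[
\mathbb{E}[X_t] = \mu \quad \text{for all } t, \qquad \operatorname{Cov}(X_t, X_{t+\tau}) = C(\tau) \quad \text{for all } t,
\]

that is, the mean is constant and the covariance depends only on the lag \(\tau\).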

A Markov chain (or Markov process), named after the Russian mathematician Andrey Markov (1856–1922), is a random process that undergoes transitions from one state to another on a state space. A Markov chain is memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of memorylessness is called the Markov property.
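
For a discrete-time chain, the Markov property reads

\[
P(X_{n+1} = x \mid X_n = x_n, X_{n-1} = x_{n-1}, \ldots, X_0 = x_0) \;=\; P(X_{n+1} = x \mid X_n = x_n).
\]

A minimal simulation sketch in Python makes this memorylessness explicit; the two-state transition matrix below is a hypothetical example rather than anything taken from the chapter.

import numpy as np

# Hypothetical two-state transition matrix (illustrative values only);
# P[i, j] is the probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

rng = np.random.default_rng(0)
state = 0
trajectory = [state]
for _ in range(20):
    # The next state is sampled from row P[state] alone; the earlier
    # history stored in `trajectory` is never consulted.
    state = rng.choice(2, p=P[state])
    trajectory.append(state)
print(trajectory)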

Information
Type: Chapter
Book: Advanced State Space Methods for Neural and Clinical Data
Edited by: Zhe Chen, New York University
Publisher: Cambridge University Press
Print publication year: 2015
Online publication: 05 October 2015
Chapter DOI: https://doi.org/10.1017/CBO9781139941433.002


