On Stationary Distributions of Stochastic Neural Networks

Published online by Cambridge University Press:  30 January 2018

K. Borovkov (The University of Melbourne)
G. Decrouez (The University of Melbourne)
M. Gilson (RIKEN Brain Science Institute and The University of Melbourne)

Postal address: Department of Mathematics and Statistics, The University of Melbourne, Parkville, VIC 3010, Australia.
Current address: Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona 08018, Spain.

Abstract

The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, together with the current strengths of the synaptic connections; the state spaces of our models are hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of the corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved explicitly only in the simplest cases. Approaches to the approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that a neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
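To illustrate the kind of model the abstract describes, the following is a minimal discrete-time simulation sketch of a nonlinear Poisson spiking network with bounded memory and the spike-count truncation used for the ‘truncated’ network. The exponential post-synaptic kernel, the affine rate function, and all parameter names (`tau`, `memory`, `base_rate`, `weight`, `max_spikes`) are illustrative assumptions, not the specific choices of the paper.

```python
import math
import random

def simulate(n_neurons=3, t_max=5.0, dt=0.001, tau=0.2, memory=1.0,
             base_rate=5.0, weight=0.5, max_spikes=10, seed=0):
    """Toy Euler-type simulation of a Poisson neuron network with
    bounded memory and a spike-count truncation threshold.
    Returns a list of spike-time lists, one per neuron."""
    rng = random.Random(seed)
    spikes = [[] for _ in range(n_neurons)]  # spike times per neuron
    t = 0.0
    while t < t_max:
        for i in range(n_neurons):
            # 'truncated' network: a neuron cannot fire once the number of
            # its own spikes within the memory span reaches the threshold
            recent_own = [s for s in spikes[i] if t - s < memory]
            if len(recent_own) >= max_spikes:
                continue
            # post-synaptic input: exponential kernel applied to the
            # other neurons' spikes within the memory span
            drive = sum(weight * math.exp(-(t - s) / tau)
                        for j in range(n_neurons) if j != i
                        for s in spikes[j] if t - s < memory)
            rate = base_rate + drive  # a nonlinear rate function could go here
            # Bernoulli approximation of Poisson firing on a small step dt
            if rng.random() < rate * dt:
                spikes[i].append(t)
        t += dt
    return spikes

counts = [len(train) for train in simulate()]
print(counts)
```

With a fixed seed the run is reproducible; raising `max_spikes` makes the truncated network approach the unrestricted one, mirroring (very loosely) the convergence result stated above.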

Type: Research Article
Copyright: © Applied Probability Trust
