
STOCHASTIC GRADIENT LEARNING AND INSTABILITY: AN EXAMPLE

Published online by Cambridge University Press:  28 January 2016

Sergey Slobodyan*
Affiliation:
CERGE-EI
Anna Bogomolova
Affiliation:
CERGE-EI and Novosibirsk State University
Dmitri Kolyuzhnov
Affiliation:
CERGE-EI and Institute of Economics and Industrial Engineering of the Siberian Branch of the Russian Academy of Sciences
*
Address correspondence to: Sergey Slobodyan, CERGE-EI, a joint workplace of Charles University in Prague and the Economics Institute of the Academy of Sciences of the Czech Republic, Politickych veznu 7, 111 21 Prague, Czech Republic; e-mail: Sergey.Slobodyan@cerge-ei.cz.

Abstract

In this paper, we investigate the real-time behavior of constant-gain stochastic gradient (SG) learning, using the Phelps model of monetary policy as a testing ground. We find that whereas the self-confirming equilibrium is stable under the mean dynamics in a very large region, real-time learning diverges for all but the very smallest gain values. We employ a stochastic Lyapunov function approach to demonstrate that the SG mean dynamics are easily destabilized by the noise associated with real-time learning, because their Jacobian contains stable but very small eigenvalues. We also caution against the use of perpetual learning algorithms with such small eigenvalues, as the real-time dynamics might diverge from an equilibrium that is stable under the mean dynamics.
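To fix ideas, the sketch below illustrates constant-gain SG learning on a toy linear forecasting model. It is not the paper's Phelps-model setup: the regressor process, true coefficients, noise scale, and gain value are illustrative assumptions. The update moves the coefficient estimate along the noisy gradient of the squared forecast error; unlike recursive least squares, no second-moment matrix scales the step, which is the feature behind the small Jacobian eigenvalues the abstract refers to.

```python
import numpy as np

# Minimal sketch of constant-gain stochastic gradient (SG) learning on a
# toy linear model y_t = phi' x_t + noise.  All parameter choices here
# (gain, coefficients, noise scale) are hypothetical, for illustration only.

rng = np.random.default_rng(0)

T = 20_000                       # number of periods
gamma = 0.01                     # constant gain
phi_true = np.array([1.0, -0.5]) # "true" coefficients of the toy model

phi = np.zeros(2)                # agent's coefficient estimate
path = np.empty((T, 2))

for t in range(T):
    x = rng.normal(size=2)                   # regressors
    y = phi_true @ x + 0.1 * rng.normal()    # observed outcome
    # SG update: step along the noisy gradient of the squared forecast
    # error.  Unlike recursive least squares, there is no R_t^{-1}
    # (inverse second-moment matrix) scaling the step.
    phi = phi + gamma * x * (y - phi @ x)
    path[t] = phi

print("final estimate:", phi)
```

The associated mean dynamics replace the noisy update term with its expectation, giving an ODE whose Jacobian (here, minus the regressors' second-moment matrix) governs local stability; the paper's point is that stability of this ODE need not carry over to the noisy real-time iteration above when the stable eigenvalues are very small.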

Information

Type
Articles
Copyright
Copyright © Cambridge University Press 2016 
