Lumpings of Markov Chains, Entropy Rate Preservation, and Higher-Order Lumpability

  • Bernhard C. Geiger∗ and Christoph Temmel∗∗

Abstract

A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
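As a minimal numerical illustration of the objects in the abstract (not taken from the paper; the transition matrix P and the lumping g below are hypothetical), the following Python sketch builds a small irreducible, aperiodic Markov chain X, lumps it with a function g, and compares the exact entropy rate of X with the stationary first-order bound H(Y_2 | Y_1) on the entropy rate of the lumped process Y = g(X). This conditional-entropy bound is the kind of stationary entropic bound referred to above; it is tight when the stationary lumped process is itself a first-order Markov chain.

```python
import numpy as np

# Hypothetical 3-state chain; P and g are illustrative, not from the paper.
P = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5],
              [0.6, 0.2, 0.2]])    # irreducible, aperiodic transition matrix
g = np.array([0, 0, 1])            # lumping: states 0,1 -> block 0; state 2 -> block 1


def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()


def entropy_rate(P):
    """Entropy rate of the chain, H(X) = -sum_i pi_i sum_j P_ij log2 P_ij (bits/step)."""
    pi = stationary(P)
    logP = np.zeros_like(P)
    np.log2(P, out=logP, where=P > 0)   # leave 0*log(0) terms at zero
    return float(-np.sum(pi[:, None] * P * logP))


def lumped_first_order_bound(P, g):
    """Stationary bound H(Y_2 | Y_1) on the entropy rate of the lumped process Y = g(X)."""
    pi = stationary(P)
    k = int(g.max()) + 1
    joint = np.zeros((k, k))            # joint law of (Y_1, Y_2) under stationarity
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            joint[g[i], g[j]] += pi[i] * P[i, j]
    cond = joint / joint.sum(axis=1, keepdims=True)
    logc = np.zeros_like(cond)
    np.log2(cond, out=logc, where=cond > 0)
    return float(-np.sum(joint * logc))


print("entropy rate of X      :", entropy_rate(P))
print("H(Y_2 | Y_1) for Y=g(X):", lumped_first_order_bound(P, g))
```

For this particular P, the rows within each lumping block have identical block-transition probabilities, so the chain is strongly 1-lumpable in the classical Kemeny–Snell sense and the printed bound equals the entropy rate of Y; the lumping nevertheless does not preserve the entropy rate, since the bound is visibly smaller than the entropy rate of X.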

Corresponding author

∗ Postal address: Institute for Communications Engineering, Technische Universität München, Theresienstrasse 90, 80333 Munich, Germany. Email address: geiger@ieee.org
∗∗ Postal address: Department of Mathematics, VU University Amsterdam, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands. Email address: math@temmel.me

