1.Algoet, P.H. & Cover, T.M. (1988). A sandwich proof of the Shannon-McMillan-Breiman theorem. Annals of Probability 16: 899–909.

2.Arimoto, S. (1973). On the converse to the coding theorem for discrete memoryless channels. IEEE Transactions on Information Theory IT-19: 357–359.

3.Barron, A.R. (1985). The strong ergodic theorem for densities: Generalized Shannon-McMillan-Breiman theorem. Annals of Probability 13: 1292–1303.

4.Bender, C.M. & Orszag, S.A. (1987). Advanced mathematical methods for scientists and engineers. New York: McGraw-Hill.

5.Blahut, R.E. (1972). Computation of channel capacity and rate distortion functions. IEEE Transactions on Information Theory IT-18: 460–473.

6.Blahut, R.E. (1987). Principles and practice of information theory. Reading, MA: Addison-Wesley.

7.Bleistein, N. & Handelsman, R.A. (1975). Asymptotic expansions of integrals. New York: Holt, Rinehart and Winston.

8.Breiman, L. (1957). The individual ergodic theorem of information theory. Annals of Mathematical Statistics 28: 809–811 (corrected in 31: 809–810).

9.Covo, Y. (1992). Error bounds for noiseless channels by an asymptotic large deviations theory. M.Sc. thesis, Tel-Aviv University.

10.Covo, Y. & Schuss, Z. (1991). Error bounds for noiseless channels by an asymptotic large deviations theory. Preliminary report.

11.Csiszár, I. & Longo, G. (1971). On the error exponent for source coding and for testing simple statistical hypotheses. First published by the Hungarian Academy of Sciences, Budapest.

12.Davisson, L.D., Longo, G., & Sgarro, A. (1981). The error exponent for the noiseless encoding of finite ergodic Markov sources. IEEE Transactions on Information Theory IT-27: 431–438.

13.Dueck, G. & Körner, J. (1979). Reliability function of a discrete memoryless channel at rates above capacity. IEEE Transactions on Information Theory IT-25: 82–85.

14.Gardiner, C.W. (1985). Handbook of stochastic methods for physics, chemistry and the natural sciences. Springer-Verlag.

15.Gray, R.M. (1975). Sliding block source coding. IEEE Transactions on Information Theory IT-21: 357–368.

16.Gray, R.M. (1990). Entropy and information theory. Springer-Verlag.

17.Gray, R.M., Neuhoff, D.L., & Omura, J.K. (1975). Process definitions of distortion rate functions and source coding theorems. IEEE Transactions on Information Theory IT-21: 524–532.

18.Knessl, C., Matkowsky, B.J., Schuss, Z., & Tier, C. (1985). An asymptotic theory of large deviations for Markov jump processes. SIAM Journal of Applied Mathematics 46(6): 1006–1028.

19.Longo, G. & Sgarro, A. (1979). The source coding theorem revisited: A combinatorial approach. IEEE Transactions on Information Theory IT-25: 544–548.

20.Mackenthun, K.M. & Pursley, M.B. (1978). Variable rate universal block source coding subject to a fidelity constraint. IEEE Transactions on Information Theory IT-24(3): 340–360.

21.Marton, K. (1974). Error exponent for source coding with a fidelity criterion. IEEE Transactions on Information Theory IT-20: 197–199.

22.Omura, J. (1973). A coding theorem for discrete-time sources. IEEE Transactions on Information Theory IT-19: 490–498.

23.Orey, S. (1985). On the Shannon-Perez-Moy theorem. Contemporary Mathematics 41: 319–327.

24.Ornstein, D.S. & Shields, P.C. (1990). Universal almost sure data compression. Annals of Probability 18: 441–452.

25.Sadeh, I. (1996). Universal data compression algorithm based on approximate string matching. Probability in the Engineering and Informational Sciences 10: 465–486.

26.Shannon, C.E. (1948). A mathematical theory of communication. Bell System Technical Journal 27: 379–423, 623–656.

27.Shannon, C.E. (1959). Coding theorems for a discrete source with a fidelity criterion. IRE National Convention Record, Part 4: 142–163.

28.Ziv, J. (1972). Coding of sources with unknown statistics. Part 1: Probability of encoding error; Part 2: Distortion relative to a fidelity criterion. IEEE Transactions on Information Theory IT-18: 384–394.