Book contents
- Frontmatter
- Contents
- Editor's foreword
- Preface
- Part I Principles and elementary applications
- Part II Advanced applications
- 11 Discrete prior probabilities: the entropy principle
- 12 Ignorance priors and transformation groups
- 13 Decision theory, historical background
- 14 Simple applications of decision theory
- 15 Paradoxes of probability theory
- 16 Orthodox methods: historical background
- 17 Principles and pathology of orthodox statistics
- 18 The Ap distribution and rule of succession
- 19 Physical measurements
- 20 Model comparison
- 21 Outliers and robustness
- 22 Introduction to communication theory
- Appendix A Other approaches to probability theory
- Appendix B Mathematical formalities and style
- Appendix C Convolutions and cumulants
- References
- Bibliography
- Author index
- Subject index
18 - The Ap distribution and rule of succession
from Part II - Advanced applications
Published online by Cambridge University Press: 05 September 2012
Summary
Inside every Non-Bayesian, there is a Bayesian struggling to get out.
Dennis V. Lindley

Up to this point, we have given our robot fairly general principles by which it can convert information into numerical values of prior probabilities, and convert posterior probabilities into definite final decisions; so it is now able to solve many problems. But in one respect it still operates rather inefficiently. When we give it a new problem, it has to go back into its memory (the proposition that we have denoted by X or I, which represents everything it has ever learned) and scan its entire archives for anything relevant to the problem before it can start working on it. As the robot grows older, this becomes a more and more time-consuming process.
Now, human brains do not work this way. We have machinery built into us which summarizes our past conclusions and allows us to forget the details that led to them. We want to see whether it is possible to give the robot a definite mechanism by which it can store general conclusions rather than isolated facts.
Memory storage for old robots
Note another point, which we will see is closely related to this problem. Suppose you have a penny and are allowed to examine it carefully, convincing yourself that it is an honest coin; i.e. accurately round, with head and tail, and a center of gravity where it ought to be.
- Type: Chapter
- Book: Probability Theory: The Logic of Science, pp. 553 - 588
- Publisher: Cambridge University Press
- Print publication year: 2003