It is now believed that our universe originated around 13.8 billion years ago. Our planet Earth came into existence around 4.5 billion years ago. For roughly the first billion years after it formed, the Earth was stark and bereft of life. The matter it contained was inorganic, consisting of relatively small molecules. There were endless rock formations, oceans, and an atmosphere.
And then there was life.
The problem of how life arose is a fascinating one. Our focus is on looking at life on earth and asking how it works. The lessons we learn provide hints toward the answer to the deep and fundamental question pondered by the ancients: Was life on earth inevitable? Then there are the questions posed by Henderson: Is the nature of our physical world biocentric? Is there a need for fine-tuning in biochemistry to provide for the fitness of life in the cosmos, or, less ambitiously, for life here on earth? Surprisingly, as we will show, a physics approach turns out to be valuable for thinking about these questions.
All living organisms have a genetic map consisting of a one-dimensional string of information encoded in the DNA molecule. An essential question that one seeks to answer is how an organism converts that information into a three-dimensional living being.
Life has many common patterns. All living cells follow certain simple “universal” themes.
ABSTRACT. Decisions under uncertainty depend not only on the degree of uncertainty but also on its source, as illustrated by Ellsberg's observation of ambiguity aversion. In this article we propose the comparative ignorance hypothesis, according to which ambiguity aversion is produced by a comparison with less ambiguous events or with more knowledgeable individuals. This hypothesis is supported in a series of studies showing that ambiguity aversion, present in a comparative context in which a person evaluates both clear and vague prospects, seems to disappear in a noncomparative context in which a person evaluates only one of these prospects in isolation.
One of the fundamental problems of modern decision theory is the analysis of decisions under ignorance or ambiguity, where the probabilities of potential outcomes are neither specified in advance nor readily assessed on the basis of the available evidence. This issue was addressed by Knight, who distinguished between measurable uncertainty or risk, which can be represented by precise probabilities, and unmeasurable uncertainty, which cannot. Furthermore, he suggested that entrepreneurs are compensated for bearing unmeasurable uncertainty as opposed to risk. Contemporaneously, Keynes distinguished between probability, representing the balance of evidence in favor of a particular proposition, and the weight of evidence, representing the quantity of evidence supporting that balance. He then asked, “If two probabilities are equal in degree, ought we, in choosing our course of action, to prefer that one which is based on a greater body of knowledge?” [p. 313].
ABSTRACT. Decision theory distinguishes between risky prospects, where the probabilities associated with the possible outcomes are assumed to be known, and uncertain prospects, where these probabilities are not assumed to be known. Studies of choice between risky prospects have suggested a nonlinear transformation of the probability scale that overweights low probabilities and underweights moderate and high probabilities. The present article extends this notion from risk to uncertainty by invoking the principle of bounded subadditivity: An event has greater impact when it turns impossibility into possibility, or possibility into certainty, than when it merely makes a possibility more or less likely. A series of studies provides support for this principle in decision under both risk and uncertainty and shows that people are less sensitive to uncertainty than to risk. Finally, the article discusses the relationship between probability judgments and decision weights and distinguishes relative sensitivity from ambiguity aversion.
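The nonlinear transformation of the probability scale described above can be made concrete. The following sketch uses the one-parameter weighting function proposed by Tversky and Kahneman (1992); the parameter value gamma = 0.61 is their estimate for gains, used here purely for illustration, not as part of the present article's analysis.

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Transform a stated probability p into a decision weight.

    Uses the form w(p) = p^g / (p^g + (1 - p)^g)^(1/g), which
    overweights low probabilities and underweights moderate and
    high ones when g < 1.
    """
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# Low probabilities receive more weight than they merit; high
# probabilities receive less, consistent with the certainty effect.
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f}  ->  w(p) = {weight(p):.3f}")
```

Note that the endpoints are respected, w(0) = 0 and w(1) = 1: the function distorts intermediate probabilities while leaving impossibility and certainty intact, which is what gives events near those endpoints their disproportionate impact.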
Decisions are generally made without definite knowledge of their consequences. The decisions to invest in the stock market, to undergo a medical operation, or to go to court are generally made without knowing in advance whether the market will go up, the operation will be successful, or the court will decide in one's favor. Decision under uncertainty, therefore, calls for an evaluation of two attributes: the desirability of possible outcomes and their likelihood of occurrence. Indeed, much of the study of decision making is concerned with the assessment of these values and the manner in which they are, or should be, combined.
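The classical rule for combining the two attributes is expectation: weight each outcome's utility by its probability and sum. A minimal sketch, using invented numbers for the stock-market example above (the probabilities and utilities are hypothetical, chosen only to show the combination rule):

```python
def expected_utility(prospect):
    """prospect: list of (probability, utility) pairs over all outcomes."""
    return sum(p * u for p, u in prospect)

# Hypothetical investment: 60% chance the market goes up (utility 100),
# 40% chance it goes down (utility -50).
invest = [(0.6, 100.0), (0.4, -50.0)]
stay_out = [(1.0, 0.0)]  # doing nothing: a sure outcome of utility 0

# 0.6 * 100 + 0.4 * (-50) = 40, so the rule favors investing here.
print(expected_utility(invest))
print(expected_utility(stay_out))
```

Much of the descriptive literature discussed in these articles concerns how actual behavior departs from this rule, for example by weighting probabilities nonlinearly rather than using them as-is.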
ABSTRACT. We develop a belief-based account of decision under uncertainty. This model predicts decisions under uncertainty from (i) judgments of probability, which are assumed to satisfy support theory; and (ii) decisions under risk, which are assumed to satisfy prospect theory. In two experiments, subjects evaluated uncertain prospects and assessed the probability of the respective events. Study 1 involved the 1995 professional basketball playoffs; Study 2 involved the movement of economic indicators in a simulated economy. The results of both studies are consistent with the belief-based account, but violate the partition inequality implied by the classical theory of decision under uncertainty.
KEY WORDS: decision making; risk; uncertainty; expected utility; prospect theory; support theory; decision weights; judgment; probability
It seems obvious that the decisions to invest in the stock market, undergo a medical treatment, or settle out of court depend on the strength of people's beliefs that the market will go up, that the treatment will be successful, or that the court will decide in their favor. It is less obvious how to elicit and measure such beliefs. The classical theory of decision under uncertainty derives beliefs about the likelihood of uncertain events from people's choices between prospects whose consequences are contingent on these events. This approach, first advanced by Ramsey (1931), gives rise to an elegant axiomatic theory that yields simultaneous measurement of utility and subjective probability, thereby bypassing the thorny problem of how to interpret direct expressions of belief.