Whatever the original intent, the introduction of the term ‘thought experiment’ has proved to be one of the great public relations coups of science writing. For generations of readers of scientific literature, the term has planted the seed of hope that the fragment of text they have just read is more than mundane. Because it was a thought experiment, does it not tap into that infallible font of all wisdom in empiricist science, the experiment? And because it was conducted in thought, does it not miraculously escape the need for the elaborate laboratories and bloated budgets of experimental science?
These questions in effect pose the epistemological problem of thought experiments in the sciences:
Thought experiments are supposed to give us information about our physical world. From where can this information come?
One enticing response to the problem is to imagine that thought experiments draw from some special source of knowledge of the world that transcends our ordinary epistemic resources.
Dyslexia, or difficulty in learning to read that is not caused by a sensory deficit or by a lack of effort or education, affects readers of all languages (Caravolas, 2005). By this definition, dyslexia cannot be diagnosed until children have demonstrated trouble with reading acquisition. Ideally, however, we would identify which children will go on to develop reading problems before they struggle or fail to learn to read. Children who are identified early and who receive early intervention are likely to have better reading outcomes (Bowyer-Crane et al., 2008; Torgesen, 2004; Schatschneider & Torgesen, 2004; Vellutino, Scanlon, & Tanzman, 1998) and may suffer fewer of the negative consequences associated with poor reading. Further, an understanding of which children are at greatest risk for reading difficulties would allow educators and clinicians to allocate limited intervention resources to the students who need them most. Extensive behavioral research has sought to answer this question, and yet models predicting reading outcomes are rarely employed in practice.
The received view is that a Maxwell's demon must fail to reverse the second law of thermodynamics for reasons to do with information and computation. This received view has failed, I argue, and our continuing preoccupation with it has distracted us from a simpler and more secure exorcism that merely uses the Liouville theorem of statistical physics. I extend this exorcism to the quantum case.
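For reference, the Liouville theorem invoked here is the standard result of Hamiltonian statistical physics (stated from the textbook literature, not quoted from the paper): along any trajectory the phase-space density is conserved,

\[
\frac{d\rho}{dt} \;=\; \frac{\partial \rho}{\partial t} \;+\; \sum_i \left( \frac{\partial \rho}{\partial q_i}\,\dot q_i \;+\; \frac{\partial \rho}{\partial p_i}\,\dot p_i \right) \;=\; 0,
\]

so Hamiltonian time evolution preserves phase-space volume. Roughly, the exorcism then runs: any demon whose operation compresses the accessible phase volume of the combined gas-plus-demon system would require a dynamics that violates this conservation.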
Curie’s principle asserts that every symmetry of a cause manifests as a symmetry of the effect. It can be formulated as a tautology that is vacuous until it is instantiated. However, instantiation requires us to know the correct way to map causal terminology onto the terms of a science. Causal metaphysics has failed to provide a unique, correct way to carry out the mapping. Thus, successful or unsuccessful instantiation merely reflects our freedom of choice in the mapping.
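A schematic rendering of the principle (offered as a standard gloss, not a quotation from the paper): if a transformation \(T\) leaves the cause unchanged, \(T(C) = C\), and the laws \(L\) carry cause to effect, \(C \xrightarrow{\;L\;} E\), then the effect is likewise unchanged, \(T(E) = E\). The instantiation problem is that nothing in this schema fixes which terms of a given theory are to play the roles of \(C\), \(E\) and \(L\).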
The thermodynamics of computation assumes that computational processes at the molecular level can be brought arbitrarily close to thermodynamic reversibility and that thermodynamic entropy creation is unavoidable only in data erasure or the merging of computational paths, in accord with Landauer’s principle. The no-go result presented here shows that fluctuations preclude the completion of thermodynamically reversible processes. Completion can be achieved only by irreversible processes that create thermodynamic entropy in excess of the Landauer limit.
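For reference, the Landauer limit mentioned is the standard bound (from the thermodynamics-of-computation literature, not restated from the paper): erasing one bit in an environment at temperature \(T\) creates at least

\[
\Delta S \;\ge\; k \ln 2
\]

of thermodynamic entropy, equivalently dissipating at least \(kT \ln 2\) of heat, where \(k\) is Boltzmann's constant. The claim above is that completing any molecular-scale process against fluctuations forces entropy creation beyond this bound.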
In recent decades, there has been growing interest among farming and scientific communities in integrated crop–range–livestock farming, because of evidence of increased crop production, soil health, environmental services and resilience to increased climatic variability. This paper reviews studies of existing cropping systems and integrated crop–range–livestock systems across the USA in order to summarize the opportunities and challenges associated with implementing long-term crop–range–livestock systems research in the highly variable environment of the central High Plains. With precipitation ranging from 305 to 484 mm and an uncertain irrigation water supply, this region is especially vulnerable to changing moisture and temperature patterns. The results of our review indicate that diverse crop rotations, reduced soil disturbance and integrated crop–livestock systems could increase economic returns and agroecosystem resilience. Integrating agricultural system components to obtain these benefits on small- to medium-sized operations, however, is a challenging task: assessment and identification of suitable farming systems, selection of the most efficient integration scheme and identification of the best management practices are all crucial for successful integration of components. Effective integration requires the development of evaluation criteria that incorporate the efficiency of the approaches under consideration and their interactions. Therefore, establishing the basis for more sustainable farming systems in the central High Plains relies on both long-term agricultural systems research and evaluation of the short-term dynamics of individual components.
It is proposed that we use the term “approximation” for inexact description of a target system and “idealization” for another system whose properties also provide an inexact description of the target system. Since systems generated by a limiting process can often have quite unexpected—even inconsistent—properties, familiar limit processes used in statistical physics can fail to provide idealizations but merely provide approximations.
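A generic mathematical illustration of how limits can behave this way (an example chosen here, not the paper's): on \([0,1]\) each function \(f_n(x) = x^n\) is continuous, yet the limit

\[
f(x) \;=\; \lim_{n \to \infty} x^n \;=\; \begin{cases} 0, & 0 \le x < 1,\\ 1, & x = 1, \end{cases}
\]

is discontinuous. The limit system can have properties that no member of the sequence even approximately shares, and so it need not provide an idealization of them.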
In a material theory of induction, inductive inferences are warranted by facts that prevail locally. This approach, it is urged, is preferable to formal theories of induction in which the good inductive inferences are delineated as those conforming to universal schemas. An inductive inference problem concerning indeterministic, nonprobabilistic systems in physics is posed, and it is argued that Bayesians cannot responsibly analyze it, thereby demonstrating that the probability calculus is not the universal logic of induction.
Bayesian probabilistic explication of inductive inference conflates neutrality of supporting evidence for some hypothesis H (“not supporting H”) with disfavoring evidence (“supporting not-H”). This expressive inadequacy leads to spurious results that are artifacts of a poor choice of inductive logic. I illustrate how such artifacts have arisen in simple inductive inferences in cosmology. In the inductive disjunctive fallacy, neutral support for many possibilities is spuriously converted into strong support for their disjunction. The Bayesian “doomsday argument” is shown to rely entirely on a similar artifact.
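An illustrative calculation of the fallacy (with numbers chosen here, not the paper's): suppose the evidence is completely neutral over a thousand mutually exclusive alternatives \(h_1, \dots, h_{1000}\) and one further hypothesis \(h_0\). A Bayesian who encodes this neutrality as a uniform prior sets each \(P(h_i) = 1/1001\), so that

\[
P(h_1 \vee \dots \vee h_{1000}) \;=\; \frac{1000}{1001} \;\approx\; 0.999.
\]

Mere neutrality over the individual possibilities has been converted into near-certainty for their disjunction, and so into near-refutation of \(h_0\).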
Newton's equations of motion tell us that a mass at rest at the apex of a dome with the shape specified here can spontaneously move. It has been suggested that this indeterminism should be discounted since it draws on an incomplete rendering of Newtonian physics, or it is “unphysical,” or it employs illicit idealizations. I analyze and reject each of these reasons.
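As a hedged reconstruction of the construction (details as standardly reported, not quoted from the abstract): the dome's height below the apex, as a function of arc length \(r\) from the apex, is \(h = \tfrac{2}{3g}\,r^{3/2}\), which gives a unit mass on the frictionless surface the equation of motion

\[
\frac{d^2 r}{dt^2} \;=\; r^{1/2}.
\]

Besides the trivial solution \(r(t) = 0\) for all \(t\), this admits, for any time \(T\),

\[
r(t) \;=\; \begin{cases} 0, & t \le T,\\[2pt] \tfrac{1}{144}\,(t-T)^4, & t \ge T, \end{cases}
\]

so the mass may rest at the apex forever or begin to slide at an arbitrarily chosen time \(T\), with Newton's equation satisfied at every instant.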
The regional distribution of degeneration of the corpus callosum (CC) in dementia is not yet clear. This study compared regional CC size in participants (n = 179) from the Cache County Memory and Aging Study. Participants represented a range of cognitive function: Alzheimer's disease (AD), vascular dementia (VaD), mild ambiguous (MA; cognitive problems not severe enough for a diagnosis of dementia), and healthy older adults. CC outlines obtained from midsagittal magnetic resonance images were divided into 99 equally spaced widths. Factor analysis of these callosal widths identified 10 callosal regions. Multivariate analysis of variance revealed significant group differences for anterior and posterior callosal regions. Post-hoc pairwise comparisons of CC regions in the patient groups with the control group (controlling for age) revealed trends toward smaller anterior and posterior regions, though not all reached statistical significance. Compared with controls, the AD group showed significantly smaller anterior and posterior CC regions; the VaD group showed significantly smaller anterior CC regions; and the MA group showed no significant CC regional differences. The findings suggest that dementia-related CC atrophy occurs primarily in the anterior and posterior portions. (JINS, 2008, 14, 414–423.)
The epistemic state of complete ignorance is not a probability distribution. In it, we assign the same, unique, ignorance degree of belief to any contingent outcome and each of its contingent, disjunctive parts. That this is the appropriate way to represent complete ignorance is established by two instruments, each individually strong enough to identify this state. They are the principle of indifference (PI) and the notion that ignorance is invariant under certain redescriptions of the outcome space, here developed into the ‘principle of invariance of ignorance’ (PII). Both instruments are so innocuous as almost to be platitudes. Yet the literature in probabilistic epistemology has misdiagnosed them as paradoxical or defective since they generate inconsistencies when conjoined with the assumption that an epistemic state must be a probability distribution. To underscore the need to drop this assumption, I express PII in its most defensible form as relating symmetric descriptions and show that paradoxes still arise if we assume the ignorance state to be a probability distribution.
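A minimal illustration of the clash described (a schematic example constructed here): take an outcome space \(\{a_1, a_2, a_3\}\) about which we are completely ignorant. The ignorance state assigns the same degree of belief to \(a_2\) and to its disjunctive extension \(a_2 \vee a_3\). If that state were a probability measure \(P\), additivity would require

\[
P(a_2 \vee a_3) \;=\; P(a_2) + P(a_3) \;>\; P(a_2)
\]

whenever \(P(a_3) > 0\), contradicting the equality. The inconsistency lies not in PI or PII but in the added assumption that the ignorance state must be a probability distribution.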
Thought experiments in science are merely picturesque argumentation. I support this view in various ways, including the claim that it follows from the fact that thought experiments can err but can still be used reliably. The view is defended against alternatives proposed by my cosymposiasts.
When Einstein formulated his General Theory of Relativity, he presented it as the culmination of his search for a generally covariant theory. That this was the signal achievement of the theory rapidly became the orthodox conception. A dissident view, however, tracing back at least to objections raised by Erich Kretschmann in 1917, holds that there is no physical content in Einstein's demand for general covariance. That dissident view has grown into the mainstream. Many accounts of general relativity no longer even mention a principle or requirement of general covariance.
What is unsettling for this shift in opinion is the newer characterization of general relativity as a gauge theory of gravitation, with general covariance expressing a gauge freedom. The recognition of this gauge freedom has proved central to the physical interpretation of the theory. That freedom precludes certain otherwise natural sorts of background spacetimes; it complicates identification of the theory's observables, since they must be gauge invariant; and it is now recognized as presenting special problems for the project of quantizing gravitation.
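For reference, this gauge freedom is standardly identified with the theory's general coordinate (diffeomorphism) freedom, under which the metric components transform as

\[
g'_{\mu\nu}(x') \;=\; \frac{\partial x^{\alpha}}{\partial x'^{\mu}} \frac{\partial x^{\beta}}{\partial x'^{\nu}} \, g_{\alpha\beta}(x),
\]

and an observable must take the same value on gauge-related solutions. (This is the textbook statement, not a formulation taken from the paper.)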
It would seem unavoidable that we can choose at most one of these two views: the vacuity of a requirement of general covariance or the central importance of general covariance as a gauge freedom of general relativity. I will urge here that this is not so; we may choose both, once we recognize the differing contexts in which they arise.
Contrary to formal theories of induction, I argue that there are no universal inductive inference schemas. The inductive inferences of science are grounded in matters of fact that hold only in particular domains, so that all inductive inference is local. Some are so localized as to defy familiar characterization. Since inductive inference schemas are underwritten by facts, we can assess and control the inductive risk taken in an induction by investigating the warrant for its underwriting facts. In learning more facts, we extend our inductive reach by supplying more localized inductive inference schemas. Since a material theory no longer separates the factual and schematic parts of an induction, it proves not to be vulnerable to Hume's problem of the justification of induction.