Given how thoroughly the history of quantum physics has been excavated, it might be wondered what these two hefty volumes by a physicist (Duncan) and a historian (Janssen) bring to the table. Aside from their inclusion of a wide range of recent work in this area, including some notable publications by themselves, the answer is twofold: first, as they state explicitly in the preface to the first volume, derivations of the key results are presented ‘at a level that a reader with a command of physics and mathematics comparable to that of an undergraduate in physics should be able to follow without having to take out pencil and paper’ (vol. 1, p. vi). In response to those who might raise Whiggish eyebrows, I shall simply play the ‘you-try-reading-Pascual-Jordan's-groundbreaking-work-in-the-original’ card. As the authors suggest, by using modern notation and streamlining derivations whilst also, they maintain, remaining conceptually faithful to the original sources (ibid.), the book is rendered suitable for classroom use, albeit at the higher undergraduate or graduate levels.
The Modified Mini-Mental State Examination (3MSE) is often used to screen for dementia, but little is known about its psychometric validity in American Indians.
Methods:
We recruited 818 American Indians aged 65–95 for 3MSE examinations in 2010–2013; 403 returned for a repeat examination in 2017–2019. Analyses included standard psychometric inferences for interpretation, generalizability, and extrapolation: factor analysis; internal consistency-reliability; test-retest score stability; and multiple indicator multiple cause (MIMIC) structural equation models.
Results:
The cohort had a mean age of 73 years and a mean of 12 years of education, and was majority female and majority bilingual. The 4-factor and second-order models fit best, with subfactors for orientation and visuo-construction (OVC), language and executive functioning (LEF), psychomotor and working memory (PMWM), and verbal and episodic memory (VEM). The factor structure was supported for both research and clinical interpretation, and factor loadings were moderate to high. Scores were generally consistent over a mean of 7 years. Younger participants performed better on overall scores, but not on individual factors. Males performed better on OVC and LEF, and females better on PMWM. Those with more education performed better on LEF and worse on OVC; the converse was true for bilinguals. All differences were significant, but small.
Conclusion:
These findings support use of the 3MSE for individual interpretation in clinical and research settings among American Indians, with moderate consistency, stability, and reliability over time. Observed extrapolations across age, sex, education, and bilingual groups suggest that some important contextual differences may exist.
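As a rough illustration of the reliability analyses named under Methods above, the following minimal Python sketch computes Cronbach's alpha and a test-retest correlation from item-level data; the file names and column layout are assumptions for illustration only, not the study's actual data or pipeline.

import numpy as np
import pandas as pd

# Hypothetical item-level data: rows are participants, columns are 3MSE items
# at baseline (items_t1) and at the repeat examination (items_t2).
# File names and layout are placeholders, not taken from the study.
items_t1 = pd.read_csv("3mse_baseline_items.csv")
items_t2 = pd.read_csv("3mse_repeat_items.csv")

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("Cronbach's alpha (baseline):", round(cronbach_alpha(items_t1), 3))

# Test-retest stability: correlation of total scores for returning participants
totals = pd.DataFrame({
    "t1": items_t1.sum(axis=1),
    "t2": items_t2.sum(axis=1),
}).dropna()
print("Test-retest Pearson r:", round(totals["t1"].corr(totals["t2"]), 3))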
This paper discusses an exploratory investigation into vision and mission, the starting points of the classical strategic planning process (Ginter, Rucks and Duncan 1985). Constructs measuring vision and mission are identified, as well as three other items that are frequently cited in the literature as being critical for effective formulation and implementation of vision and mission statements (latent abilities, market focus and competitor focus). It is argued that clearer definitions of these constructs and others discussed in strategic literature are a necessary step to allow empirical investigation into the interactions inherent in the strategic process.
Increased use of marketing contracts by agricultural firms has stimulated a modest amount of literature in which the principles of decision theory are applied to the contracting problem. Much of this literature has focused on farmers' choices between cash and futures market positions. Others have modeled the influence of annual open market and fixed forward price options on farm growth objectives. Little or no attention has been paid to the expressed interest, especially among processors, in suitable long-term (multi-annual) contract price formulae and in a theoretical framework through which to evaluate them. This paper attempts to fill this gap, with particular regard to the tomato and tomato paste contracting problems of a U.S. fruit and vegetable processing cooperative.
A critical analysis of contemporary strategic management theory and practice suggests that modernist, linear thinking has facilitated the development of an abstracted reality which is misleading to managers and fundamentally flawed. It is argued that formulaic strategic tools such as those propounded by Porter fail to capture the reality of the complex environments that confront firms and falsely suggest that an answer can be derived from a predetermined toolbox.
As an alternative to this dominant paradigm, the complexity of markets is presented not as something to be feared and ignored, but rather as a truth to be embraced. As a basis for taking this step, current knowledge on how complex environments work is presented, along with perspectives on how they can be better understood and how people and organizations can engage with them. Ultimately it is recognised that both the theoretical and practical foundations need significant further development.
Decline in social functioning occurs in individuals who later develop psychosis.
Aims
To investigate whether baseline differences in disability are present in those who do and those who do not make a transition to psychosis in a group clinically at high risk and whether disability is a risk factor for transition.
Method
A prospective, multicentre, naturalistic field study of 245 help-seeking individuals clinically at high risk, with an 18-month follow-up period. Disability was assessed with the Disability Assessment Schedule of the World Health Organization (WHODAS–II).
Results
At baseline, the transition group displayed significantly greater difficulties in making new friends (z = −3.40, P = 0.001), maintaining a friendship (z = −3.00, P = 0.003), dealing with people they do not know (z = −2.28, P = 0.023) and joining community activities (z = −2.0, P = 0.05) compared with the non-transition group. In Cox regression, difficulties in getting along with people significantly contributed to the prediction of transition to psychosis in our sample (β = 0.569, s.e. = 0.184, Wald = 9.548, P = 0.002, hazard ratio (HR) = 1.767, 95% CI 1.238–2.550).
Conclusions
Certain domains of social disability might contribute to the prediction of psychosis in a sample clinically at high risk.
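For readers who wish to reproduce this kind of Cox regression, a minimal sketch using the Python lifelines package is given below; the file name and column names are hypothetical placeholders, not taken from the study.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data layout (column names are assumptions, not from the study):
# 'months_to_event'  - follow-up time in months (censored at 18)
# 'transition'       - 1 if the participant transitioned to psychosis, else 0
# 'getting_along'    - baseline WHODAS-II 'getting along with people' score
df = pd.read_csv("whodas_baseline.csv")  # placeholder file name

cph = CoxPHFitter()
cph.fit(df[["months_to_event", "transition", "getting_along"]],
        duration_col="months_to_event",
        event_col="transition")

cph.print_summary()          # coefficients, standard errors, p-values
print(cph.hazard_ratios_)    # exp(coef), i.e. HR per unit increase in the disability score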
The central concern of this article is whether the semantic approach has the resources to appropriately capture the core tenets of structural realism. Chakravartty (2001) has argued that a realist notion of correspondence cannot be accommodated without introducing a linguistic component, which undermines the approach itself. We suggest that this worry can be addressed by an appropriate understanding of the role of language in this context. The real challenge, however, is how to incorporate the core notion of ‘explanatory approximate truth’ in such a way that the emphasis on structure is retained.
If a system in atomic physics contains a number of particles of the same kind, e.g. a number of electrons, the particles are absolutely indistinguishable one from another. No observable change is made when two of them are interchanged … A satisfactory theory ought, of course, to count any two observationally indistinguishable states as the same state and to deny that any transition does occur when two similar particles exchange places.
(Dirac, 1958, p. 207)
Introduction
In our contribution to this volume we deal with discrete symmetries: these are symmetries based upon groups with a discrete set of elements (generally a set of elements that can be enumerated by the positive integers). In physics we find that discrete symmetries frequently arise as ‘internal’, non-spacetime symmetries. Permutation symmetry is such a discrete symmetry, arising as the mathematical basis underlying the statistical behaviour of ensembles of certain types of indistinguishable quantum particle (e.g. fermions and bosons). Roughly speaking, if such an ensemble is invariant under a permutation of its constituent particles (i.e. permutation symmetric) then one doesn't ‘count’ those permutations which merely ‘exchange’ indistinguishable particles; rather, the exchanged state is identified with the original state.
This principle of invariance is generally called the ‘indistinguishability postulate’ (IP), but we prefer to use the term ‘permutation invariance’ (PI).
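In standard quantum-mechanical notation (our gloss, not a quotation from the text), PI is the requirement that every observable commute with every particle permutation. For an N-particle system, let $U(\sigma)$ be the unitary operator representing a permutation $\sigma \in S_N$ on the N-particle Hilbert space; then for every observable $O$ and every $\sigma \in S_N$,
\[
[\,O,\, U(\sigma)\,] = 0,
\qquad\text{equivalently}\qquad
\langle U(\sigma)\psi \,|\, O \,|\, U(\sigma)\psi \rangle = \langle \psi \,|\, O \,|\, \psi \rangle
\quad\text{for all states } \psi,
\]
so that permuted and unpermuted states are observationally indistinguishable.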
Discussions of representation in science tend to draw on examples from art. However, such examples need to be handled with care given a) the differences between works of art and scientific theories and b) the accommodation of these examples within certain philosophies of art. I shall examine the claim that isomorphism is neither necessary nor sufficient for representation and I shall argue that there exist accounts of representation in both art and science involving isomorphism which accommodate the apparent counterexamples and, moreover, allow us to understand how “impossible” artistic objects and inconsistent scientific theories can be said to represent.
We present a novel process for cutting nanotubes and other nanostructures to well-controlled lengths and sizes. The continuing increase in the complexity of electronic devices, coupled with the decreasing size of individual elements, is placing more stringent demands on the resolution and accuracy of fabrication patterns. The ability to fabricate on a nanometer scale supports the continued miniaturization of functional devices. Particularly interesting is the application of nanotubes' chemical and electronic properties, which vary with their dimensions and structure. One realization of this process uses photolithography or electron beam lithography to place protective resist patterns over the nanostructures to be cut. The sections not covered by the resist pattern are removed by reactive ion etching. This is a scalable process that permits the simultaneous cutting of many nanostructures and ensembles of nanostructures. The lengths, shapes, or length distributions can be predicted from theory and thus specified for a given application requirement. Nanostructures that can be cut in this process include nanotubes, nanofibers, and nanoplanes. Large-scale production of nanostructures with uniform length or a specific size distribution can be used in electronic applications such as field-emission transistors, optoelectronic elements, single-electron devices, and sensors.
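As an illustration of how length distributions might be predicted for such a process, the following Python sketch simulates cutting long nanotubes under an assumed periodic resist pattern (line width w, period p), with the unprotected regions etched away. The geometry, dimensions, and uniform-placement model are our assumptions for illustration, not the authors' theory.

import numpy as np

rng = np.random.default_rng(0)
L, w, p = 5000.0, 200.0, 500.0   # tube length, resist line width, pattern period (nm); assumed values

def cut_lengths(x0, L, w, p):
    """Return the lengths of protected segments for a tube spanning [x0, x0 + L]."""
    segments = []
    k = int(np.floor(x0 / p))        # index of the first resist line that may overlap the tube
    while k * p < x0 + L:
        lo = max(x0, k * p)          # left edge of overlap with resist line k
        hi = min(x0 + L, k * p + w)  # right edge of overlap
        if hi > lo:
            segments.append(hi - lo)
        k += 1
    return segments

# Monte Carlo over random tube placements relative to the mask
lengths = []
for _ in range(10_000):
    x0 = rng.uniform(0.0, p)         # only the offset modulo the period matters
    lengths.extend(cut_lengths(x0, L, w, p))

lengths = np.array(lengths)
print(f"mean segment length: {lengths.mean():.1f} nm")
print(f"fraction within 1% of line width: {(np.abs(lengths - w) < 0.01 * w).mean():.2f}")

Under these assumptions, interior segments take the resist line width w, with shorter end segments contributing a tail to the distribution.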
We examine, from the partial structures perspective, two forms of applicability of mathematics: at the “bottom” level, the applicability of theoretical structures to the “appearances”, and at the “top” level, the applicability of mathematical to physical theories. We argue that, to accommodate these two forms of applicability, the partial structures approach needs to be extended to include a notion of “partial homomorphism”. As a case study, we present London's analysis of the superfluid behavior of liquid helium in terms of Bose-Einstein statistics. This involved both the introduction of group theory at the top level, and some modeling at the “phenomenological” level, and thus provides a nice example of the relationships we are interested in. We conclude with a discussion of the “autonomy” of London's model.
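For reference, the formal notions can be sketched as follows (our paraphrase of the standard definitions in the partial structures literature). A partial structure is a pair $A = \langle D, \{R_i\}_{i\in I} \rangle$, where each $R_i$ is a partial relation on $D$: a triple $\langle R_i^{1}, R_i^{2}, R_i^{3} \rangle$ of pairwise disjoint sets whose union is $D^{n}$, collecting the tuples that satisfy $R_i$, those that do not, and those for which this is left open. A partial homomorphism from $A$ to $B = \langle E, \{R'_i\}_{i\in I} \rangle$ is a map $f : D \to E$ such that, for every $i$ and every tuple $\bar{x}$,
\[
\bar{x} \in R_i^{1} \;\Rightarrow\; f(\bar{x}) \in R'^{\,1}_i
\qquad\text{and}\qquad
\bar{x} \in R_i^{2} \;\Rightarrow\; f(\bar{x}) \in R'^{\,2}_i .
\]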
Thirty years after the conference that gave rise to The Structure of Scientific Theories, there is renewed interest in the nature of theories and models. However, certain crucial issues from thirty years ago are reprised in current discussions; specifically, whether the diversity of models in the sciences can be captured by some unitary account, and whether the temporal dimension of scientific practice can be represented by such an account. After reviewing recent developments, we suggest that these issues can be accommodated within the partial structures formulation of the semantic or model-theoretic approach.
The relationship between mathematics and science is clearly of fundamental concern in both the philosophy of mathematics and the philosophy of science. How this relationship should be represented is a crucial issue in this area. One possibility is to employ a model-theoretic framework in which ‘physical’ structures are regarded as embedded in ‘mathematical’ ones. In section 2 I will briefly outline a form of this type of account which offers a function space analysis of theories (Redhead 1975). This function space analysis is then used to represent the relationship between theoretical and mathematical structures. In subsequent sections I will consider the role of group theory in physics from within this meta-theoretical framework and then draw some conclusions for realism in the philosophy of science.
Function spaces and the model-theoretic approach
According to Redhead, it is an ‘empirical-historical fact’ that theories in physics can be represented as mathematical structures (Redhead 1975). This then allows the possibility of representing the relation of mathematics to physics in terms of embedding a theory T in a mathematical structure M′, in the usual set-theoretic sense of there existing an isomorphism between T and a sub-structure M of M′. M′ is then taken to be a non-simple conservative extension of M. There is an immediate question regarding the nature of T. To be embedded in M′ it must already be ‘mathematized’ in some form or other. Thus, the issue here is not so much Wigner's inexplicable utility of mathematics in science, in the sense of its being the indispensable language in which theories are expressed, but rather the way in which new theoretical structure can be generated via this embedding of a theory, which is already mathematized, into a mathematical structure.
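In standard model-theoretic terms (not specific to Redhead's own notation), the embedding claim can be put as follows: T is embeddable in M′ just in case there is a substructure M of M′ and a bijection $f$ from the domain of T onto the domain of M such that, for every relation $R$ of the shared signature and every tuple $\bar{x}$ from the domain of T,
\[
R^{T}(\bar{x}) \;\Longleftrightarrow\; R^{M}\bigl(f(\bar{x})\bigr).
\]
The new theoretical structure then corresponds to the surplus of M′ over M: the elements and relations of M′ that lie outside the image of T.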
Are there quantum objects? What are they if they are neither given nor resemble anything familiar? To answer these questions we have to abstract from the substantive features of familiar things, delineate the pure logical forms by which we acknowledge objects and show how the forms are fulfilled in quantum theories. We have to explicate, in general terms and without resorting to the given, what we mean by objects. The clear criteria will enable us to affirm the objectivity of quantum theories.
(Auyang 1995, p. 5)
Introduction
Cantor, famously, defined a set as ‘… collections into a whole of definite, distinct objects of our intuition or of our thought’ (Cantor 1955, p. 85, our emphasis). On this basis the standard formulations of set theory, and consequently much of contemporary mathematics, are erected. Reflecting upon this definition, and the underlying ontological presuppositions, the question immediately arises, how are we to treat, mathematically, collections of objects which are not distinct individuals? This question becomes particularly acute in the quantum mechanical context, of course. As Yu. I. Manin noted at the 1974 American Mathematical Society Congress on the Hilbert Problems,
We should consider possibilities of developing a totally new language to speak about infinity [that is, axioms for set theory]. Classical critics of Cantor (Brouwer et al.) argue that, say, the general choice axiom is an illicit extrapolation of the finite case.
Several series of amorphous silicon nitride thin films have been grown by plasma-enhanced chemical vapour deposition, where the ratio of ammonia and silane feed gases was held constant for each series while the deposition temperature was varied from 160 °C to 550 °C, with all other deposition conditions held constant. Photothermal deflection spectroscopy (PDS) measurements were used to determine the Urbach slope E0 and the defect density ND. It is found that ND is determined by E0 for most of these samples, suggesting that defect equilibration occurs in a-SiNx:H for x up to at least 0.6. The growth temperature at which the disorder is minimised increases with increasing x, which is explained in terms of a hydrogen-mediated bond equilibration reaction. Fourier transform infrared (FTIR) spectroscopy measurements were performed to determine the changes in hydrogen bonding with growth temperature. The results suggest that a second bond equilibration reaction also occurs at the growing surface, but that equilibrium cannot be reached at higher temperatures because of hydrogen evolution from Si-H bonds.
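For reference (standard definitions, not results specific to this paper), the Urbach slope E0 is extracted from the exponential sub-gap region of the absorption spectrum measured by PDS:
\[
\alpha(E) = \alpha_0 \exp\!\left(\frac{E}{E_0}\right),
\]
where $\alpha$ is the absorption coefficient and $E$ the photon energy; the Urbach energy $E_0$ characterizes band-tail disorder, and the defect density $N_D$ is typically estimated from the excess sub-gap absorption lying below this exponential tail.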
An introduction to the model-theoretic approach in the philosophy of science is given and it is argued that this program is further enhanced by the introduction of partial structures. It is then shown that this leads to a natural and intuitive account of both “iconic” and mathematical models and of the role of the former in science itself.