Many find it plausible to posit a category of supererogatory actions. But the supererogatory resists easy analysis. Traditionally, supererogatory actions are characterized as actions that are morally good, but not morally required; actions that go ‘beyond’ the call of our moral obligations. As I shall argue in this article, however, the traditional analysis can be accepted only by a view with troubling consequences concerning the structure of the moral point of view. I propose a different analysis that is extensionally correct, avoids the problems of the traditional view, and, incidentally, also defuses any objection to act-consequentialism, or any other first-order moral theory, on grounds that it cannot accommodate the supererogatory.
Hunger continues to be one of humanity's greatest challenges despite the existence of a more-than-adequate global food supply equal to 2,800 kilocalories for every person every day. In measuring progress, policy-makers and concerned citizens across the globe rely on information supplied by the Food and Agriculture Organization (FAO), an agency of the United Nations. In 2010 the FAO reported that, in the wake of the 2007–2008 food-price spikes and the global economic crisis, the number of people experiencing hunger worldwide had increased by 150 million since 2005–2007, rising above 1 billion in 2009. However, in its State of Food Insecurity in the World 2012 (SOFI 12) the FAO presented new estimates, having revamped its methods and reinterpreted its hunger data back to 1990. The revised numbers for the period 1990–1992 to 2010–2012 reverse the trend to a steadily falling one. Based on the FAO's new calculations, extreme undernourishment peaked in 1990 at a record-breaking one billion, followed by a significant decline through 2006, when progress stalled but did not reverse (see chart below).
This article examines fire-safety innovations (safety matches, extinguishers and hydrants) developed by professional engineers and lay inventors in Mexico City between 1860 and 1910. With a drastic increase in the frequency and intensity of fires in the capital, engineers and entrepreneurs saw fire hazards as business opportunities. Global trends that promoted patenting new technologies spurred entrepreneurs and professionals alike to transform the capital into an epicentre of innovation aimed at making the city safer. Their inventions marked the city with visible signs of technological change and reminded residents that fire was a real threat to urban life.
In a recent article in Utilitas, Gerald Lang suggests a solution to the so-called ‘wrong kind of reason problem’ (WKR problem) for the buck-passing account of value. In two separate replies to Lang, Jonas Olson and John Brunero, respectively, point out serious problems with Lang's suggestion, and at least Olson concludes that the solution Lang opts for is of the wrong kind for solving the WKR problem. I argue that while both Olson and Brunero have indeed identified considerable flaws in Lang's suggestion for a solution to the WKR problem, they have not provided sufficient grounds for dismissing the kind of solution that Lang proposes. I show how a version of this kind of solution can be formulated so as to avoid both Olson's and Brunero's objections. I also raise some worries concerning an alternative solution to the WKR problem suggested by Sven Danielsson and Jonas Olson.
Adam and Eve have the same record yet receive different punishments. Adam receives the punishment that they both deserve, whereas Eve receives a more lenient punishment. In this article, we explore whether a deserved-but-unequal punishment, such as what Adam receives, can be just. We do this by explicating the conceptions of retributive justice that underlie both sides of the debate. We argue that inequality in punishment is disturbing mainly because of the disrespect it often expresses towards the offender receiving the harsher treatment, and also because it casts doubt on whether Adam got what he deserved. We suggest that when no disrespect is involved and when it is clear that the criminal got what he deserved, inequality is not worrisome.
In this paper, we propose a unified account of the semantics of the English present progressive in the form of a semantic network, basing ourselves on the theoretical principles and analytical tools offered by the theory of Cognitive Grammar, as laid out by Langacker (1987, 1991). The core meaning of the English present progressive, we claim, is to indicate epistemic contingency in the speaker's immediate reality. It thus contrasts with the simple present, which is associated with situations that are construed as structurally belonging to reality. On the basis of a study of the Santa Barbara Corpus of spoken American English, an inventory has been made of the more specific uses of the present progressive, temporal as well as modal. It is shown that each of these uses can be derived from this basic meaning of contingency in immediate reality via a set of conceptual branching principles, in interaction with elements in the context.
Stephen Kearns and Daniel Star have recently proposed the following theory of reasons:
Reasons as Evidence: Necessarily, a fact F is a reason for an agent A to Φ iff F is evidence that A ought to Φ (where Φ is either a belief or an action).
In this article I present an objection, inspired by Mill's proof of the principle of utility, to the right-to-left reading of the biconditional. My claim is that the fact that you can perform some action can be evidence that you ought to do it without, itself, being a reason to do it. If this is true then Reasons as Evidence is false.
I argue that the Doctrine of Double Effect is accepted because of unreliable processes of belief-formation, making it unacceptably likely to be mistaken. We accept the doctrine because we more vividly imagine intended consequences of our actions than merely foreseen ones, making our aversions to the intended harms more violent, and making us judge that producing the intended harms is morally worse. This explanation fits psychological evidence from Schnall and others, and recent neuroscientific research from Greene, Klein, Kahane and Schaich Borg. It explains Mikhail and Hauser's ‘universal moral grammar’ and an interesting phenomenon about Double Effect cases noted by Bennett. When unequally vivid representations determine our decisions, we typically misjudge the merits of our options and make mistakes. So if Double Effect is a product of unequal vividness, it is likely to be mistaken. This argument, I claim, fits Berker's specifications for good empirically grounded arguments in ethics.
The emergence of market mechanisms for the protection of biodiversity and ecosystem services in recent years has been portrayed by most conservation institutions and epistemological frameworks as an unprecedented opportunity for the conservation of nature. This article shifts the focus from the improved effectiveness arguments concerning such mechanisms to examine their institutional and political context and origins. It outlines the field of transnational biodiversity markets and uncovers the normative biases it displays. The regulatory vocabulary of ‘market mechanisms’ is juxtaposed to the more explicit ideological approach professed by the critics of neoliberalism. The argument is that, rather than an inevitable component in contemporary conservation governance, market mechanisms imply a set of contested choices for certain values, a particular economic development trajectory, a particular understanding of the biodiversity problem, and a weak role for legal obligations. Transnational environmental law needs to establish its interest beyond that of the regulation literature if it is to account for the proliferating movements around the world in favour of a reasserted normative guidance for the markets and against the process of marketing the market.
This cross-sectional study in the acquisition of future temporality by English-speaking L2 French learners presents a descriptive account of the major contrastive features of the expression of futurity in English and French before considering learnability implications. A personal narrative and a cloze task were administered to L2 French learners (n = 34) at three proficiency levels and French native speaker controls (n = 14). Analyses revealed task and proficiency effects, but all learners used a variety of morphological forms to express futurity in their personal narratives, and appear to be acquiring the temporal and modal values associated with the future.
Many writers have presented Joseph Elzéar Bernier (1852–1934) as a hero whose key role in establishing Canada's sovereignty over the Arctic islands was unjustly downplayed by the government he served. According to this view, the sector claim that Bernier made on 1 July 1909 is the true foundation of Canada's title to the archipelago. This article draws on government files to assess civil servants’ attitude to his sovereignty-related activities. It also describes the role played by James White, whose more sophisticated and effective sector concept predated Bernier's and served as the basis for the official sector claim made in June 1925. The evidence indicates that government officials in the 1920s were well justified in their doubts about Bernier's pretensions. However, even though they rejected his version of the sector theory and resented the campaign of self-glorification on which he embarked after his retirement, their personal relations with him were good, and they took considerable trouble to ensure what they considered to be an appropriate degree of recognition for him. The article therefore clarifies the differences between Bernier's rhetoric and reality, particularly with regard to the sector principle.
This paper examines the successful campaign in Britain to develop canine distemper vaccine between 1922 and 1933. The campaign mobilized disparate groups around the common cause of using modern science to save the nation's dogs from a deadly disease. Spearheaded by landed patricians associated with the country journal The Field, and funded by dog owners and associations, it relied on collaborations with veterinary professionals, government scientists, the Medical Research Council (MRC) and the commercial pharmaceutical house the Burroughs Wellcome Company (BWC). The social organization of the campaign reveals a number of important, yet previously unexplored, features of interwar science and medicine in Britain. It depended on a patronage system that drew upon a large base of influential benefactors and public subscriptions. Coordinated by the Field Distemper Fund, this system was characterized by close relationships between landed elites and their social networks with senior science administrators and researchers. Relations between experts and non-experts were crucial, with high levels of public engagement in all aspects of research and vaccine development. At the same time, experimental and commercial research supported under the campaign saw dynamic interactions between animal and human medicine, which shaped the organization of the MRC's research programme and demonstrated the value of close collaboration between veterinary and medical science, with the dog as a shared object and resource. Finally, the campaign made possible the translation of ‘laboratory’ findings into field conditions and commercial products. Rather than a unidirectional process, translation involved negotiations over the very boundaries of the ‘laboratory’ and the ‘field’, and what constituted a viable vaccine.
This paper suggests that historians reconsider standard historical accounts of the nature of patronage, the role of animals, and the interests of landed elites in interwar British science and medicine.
William Bateson vigorously objected to the assumptions within the chromosome theory of heredity proposed by T. H. Morgan because he perceived inadequate experimental data that could substantiate the theory. Those objections were largely resolved by 1921, and Bateson reluctantly accepted the basic assumption that chromosomes carried the genetic factors from one generation to the next. Bateson's own research at that time on developmental genetics seemed out of touch with the general tone of the genetics field, and the chromosome theory did not provide illuminating mechanisms that elucidated phenomena such as plant variegations or chimeras. Bateson imagined a general theory of heredity and development based on vortices and waves, concepts he borrowed from contemporary physics. For decades he sought to devise an intellectually and aesthetically satisfying theory to eventually explain evolution in genetic terms, but his aspirations remained unfulfilled when he died in 1926.
Babbage wrote two relatively detailed, yet significantly incongruous, autobiographical accounts of his pre-Cambridge and Cambridge days. He published one in 1864 and in it advertised the existence of the other, which he carefully retained in manuscript form. The aim of this paper is to chart in some detail for the first time the discrepancies between the two accounts, to compare and assess their relative credibility, and to explain their author's possible reasons for knowingly fabricating the less credible of the two.