This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning AI as a potential form of cognitive enhancement, the paper explores two possibilities: (1) AI as extending—and thereby enhancing—people’s minds, and (2) AI as enabling people to behave in artificially intelligent ways. That is, using AI technologies might enable people to behave as if they have been cognitively enhanced. The paper considers such enhancements both on the level of individuals and on the level of groups.
Luck egalitarianism is a responsibility-sensitive theory of distributive justice. Its application to health and healthcare is controversial. This article addresses a novel critique of luck egalitarianism, namely, that it wrongfully discriminates against those responsible for their health disadvantage when allocating scarce healthcare resources. The philosophical literature about discrimination offers two primary reasons for what makes discrimination wrong (when it is): harm and disrespect. These two approaches are employed to analyze whether luck egalitarian healthcare prioritization should be considered wrongful discrimination. Regarding harm, it is very plausible to consider the policies harmful but much less reasonable to consider those responsible for their health disadvantages a socially salient group. Drawing on the disrespect literature, where social salience is typically not required for something to be discrimination, the policies are a form of discrimination. They are, however, not disrespectful. The upshot of this first assessment of the discrimination objection to luck egalitarianism in health is, thus, that it fails.
Intelligence operations overwhelmingly focus on obtaining secrets (espionage) and the unauthorized disclosure of secrets by a public official in one political community to another (treason). It is generally understood that the principal responsibility of spies is to successfully procure secrets about the enemy. Yet, in this essay, I ask: Are spies and traitors ethically justified in using cyber operations not merely to acquire secrets (cyber espionage) but also to covertly manipulate or falsify information (cyber manipulation) to prevent atrocities? I suggest that using cyber manipulation operations to parry atrocities is pro tanto morally permissible and, on occasion, required.
In this article I show how David Hume's works provide the ingredients for a conception of religiosity understood as a feeling of wonder concerning nature or existence, accompanied by a playful attitude regarding the imaginative shapes that can be given to this emotion. Hume serves as an inspiration rather than an object of study: I respect the spirit and values of his work, while going beyond his own explicit points. My reading accounts for Hume's aversion to traditional religions (‘superstition’), for his acknowledgement of the universal attraction of the idea of invisible intelligent power, and for his own fascination with it. I argue first that superstition is a natural reaction to existential uncertainty. Second, I argue that uncertainty fuels activity, creativity and morality, and thus may be left untended. Though it always involves a measure of pain as well, human happiness is found in challenge and activity. Traditional monotheist religions respond to this need by generating experiences of wonder, yet in doing so they stimulate passive devotion and dogmatism. Against this, the suggestion of Hume's works is to respect the mystery of nature rather than shrouding it in unfounded convictions. The fictional character Philo illustrates how the longing for an answer can by itself already be a profoundly religious feeling. Hume's descriptions of ancient polytheism and Philo show how this can be accompanied by a playful, imaginative interaction with the world.
Will existing forms of artificial intelligence (AI) lead to genuine intelligence? How is AI changing our society and politics? This essay examines the answers to these questions in Brian Cantwell Smith's The Promise of Artificial Intelligence and Mark Coeckelbergh's The Political Philosophy of AI with a focus on their central concern with judgment—whether AI can possess judgment and how developments in AI are affecting human judgment. First, I argue that the existentialist conception of judgment that Smith defends is highly idealized. While it may be an appropriate standard for intelligence, its implications for when and how AI should be deployed are not as clear as Smith suggests. Second, I point out that the concern with the displacement of judgment in favor of “reckoning” (or calculation) predates the rise of AI. The meaning and implications of this trend will become clearer if we move beyond ontology and metaphysics and into political philosophy, situating technological changes in their social context. Finally, I suggest that Coeckelbergh's distinctly political conception of judgment might offer a solution to the important boundary-drawing problem between tasks requiring judgment and those requiring reckoning, thus filling a gap in Smith's argument and clarifying its political stakes.
This article examines the nature of phonological disorders through the lens of phonological complexity in French-speaking speakers with aphasia. While many studies have been devoted to phonetic errors affecting the realization of phonemes, fewer take into account the phonological dimension, i.e., the contextual environment. Nevertheless, some authors show that a word's syllabic structure, as well as the position of segments within syllables, influences the productions of speakers with aphasia (Wilshire & Nespoulous, 2003; Buchwald & Miozzo, 2012; Buchwald, 2017). These factors relate to the notion of phonological complexity.
This study presents an analysis of phonological errors in aphasia based on a corpus of empirical data collected from eight speakers. Several factors of phonological complexity are analyzed here in order to determine whether they play a role in the production of errors. The hypothesis is that the presence of consonant sequences, the position of these sequences within items (initial vs. medial), the nature of these sequences (heterosyllabic vs. tautosyllabic), and the length of the items (bisyllabic vs. trisyllabic) influence error production. Through this research, we hope to expand knowledge of the nature of phonological deficits.
The ethical value of intelligence lies in its crucial role in safeguarding individuals from harm by detecting, locating, and preventing threats. As part of this undertaking, intelligence can include protecting the economic well-being of the political community and its people. Intelligence, however, also entails causing people harm when its operations violate their vital interests. The challenge, therefore, is how to reconcile this tension, which Cécile Fabre's recent book Spying through a Glass Darkly does by arguing for the “ongoing and preemptive imposition of defensive harm.” Fabre applies this underlying argument to the specifics of economic espionage to argue that while states, businesses, and individuals do have a general right over their information that prevents others from accessing it, such protections can be forfeited or overridden when there is a potential threat to the fundamental rights of third parties. This essay argues, however, that Fabre's discussion of economic espionage overlooks important additional proportionality and discrimination concerns that need to be accounted for. In addition to the privacy violations it causes, economic espionage can harm people's other vital interests, including their physical and mental well-being and autonomy. Given the complex way in which the economy interlinks with people's lives and society, harms to one economic actor will have repercussions for the secondary economic entities dependent on it, such as workers, buyers, and investors. This, in turn, can produce further harms to other economic actors, causing damage to ripple outward across society.