A natural question is: why AI in design? Although the design applications written about in the journal vary widely, the common thread is that researchers use AI techniques to implement their ideas. The use of AI techniques for design applications, at least when AI EDAM was started, was partly a reaction against the predominant design methods based on some form of optimization. Knowledge-based techniques, particularly rule-based systems of various sorts, were very popular. One of the draws of these methods, I believe, was their ability to represent knowledge that is hard or awkward to capture in traditional optimization frameworks. This mirrors my experience: at the time, I was working on configuration problems involving components with a large number of compatibility and resource constraints. Although many of these constraints could be represented in mixed-integer linear programming systems, it was not easy to conceptualize, write, and, most importantly, maintain the constraints in those systems.
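As a purely illustrative sketch, not drawn from the editorial, the appeal of rule-based constraint representation can be seen in a small, hypothetical configuration check written as declarative rules; every component name, field, and limit below is invented for the example.

# Hypothetical example: compatibility and resource constraints expressed as
# simple declarative rules rather than as mixed-integer linear inequalities.
# All component names and limits are invented for illustration.

def rule_power_budget(config):
    # Resource constraint: total power draw must stay within the supply rating.
    return sum(part["power"] for part in config["parts"]) <= config["supply_rating"]

def rule_bus_compatibility(config):
    # Compatibility constraint: every card must use a bus the chosen backplane provides.
    buses = set(config["backplane"]["buses"])
    return all(part["bus"] in buses for part in config["parts"])

RULES = [rule_power_budget, rule_bus_compatibility]

def violated_rules(config):
    """Return the names of rules the configuration violates."""
    return [rule.__name__ for rule in RULES if not rule(config)]

if __name__ == "__main__":
    config = {
        "supply_rating": 300,
        "backplane": {"buses": ["pci", "serial"]},
        "parts": [
            {"name": "cpu_card", "power": 120, "bus": "pci"},
            {"name": "io_card", "power": 60, "bus": "serial"},
        ],
    }
    print(violated_rules(config))  # [] when the configuration is consistent

Each rule stays readable and independently maintainable, which is the property the editorial contrasts with encoding the same knowledge as optimization constraints.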
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development, and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other, and access resources.
This paper examines the evidence for the marginal feminine endings *-ay- and *-āy- in Proto-Semitic, and the feminine endings *-e and *-a in Proto-Berber. Their similar formation (*CV̆CC-ay/āy), semantics (verbal abstracts, underived concrete feminine nouns) and plural morphology (replacement of the feminine suffix by a plural suffix with -w-) suggest that this feminine formation should be reconstructed to a shared ancestor which may be called Proto-Berbero-Semitic.
The Automatic Identification System (AIS) has recently become a leading issue in maritime navigation and traffic management worldwide. The present AIS solution, based on a VHF data communications scheme, provides AIS functionality for SOLAS (AIS Class A) vessels only, within a limited environment defined by radio propagation properties. Here we present a novel approach to AIS development based on current mobile communication technologies. It utilises existing mobile communications equipment that the majority of targeted end-users own and are familiar with. The novel AIS concept aims to transition AIS data traffic to the mobile Internet. An innovative AIS architecture supports AIS data processing, storage, and transfer to authorised parties. This not only extends the operational area but also provides the global AIS with data-transfer security and an improved aids-to-navigation service, with all legally traceable vessels (both AIS Class A and AIS Class B) included in the system. In order to provide the development framework for Internet AIS, a set of four essential use-cases, a communication protocol and the first Internet AIS prototype have recently been developed and are briefly introduced in this article.
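The abstract does not describe the protocol itself, so the following is only a hypothetical sketch of what an AIS-like position report over the mobile Internet might look like; the endpoint URL, field names, and token handling are all invented assumptions, not the authors' design.

# Hypothetical illustration only: the article's actual protocol and endpoints are
# not described in the abstract, so the field names and URL below are invented.
import json
import urllib.request

def report_position(endpoint, token, mmsi, lat, lon, sog, cog):
    """Send one AIS-like position report as JSON over HTTPS."""
    payload = {
        "mmsi": mmsi,             # vessel identifier
        "lat": lat, "lon": lon,   # position in decimal degrees
        "sog": sog,               # speed over ground, knots
        "cog": cog,               # course over ground, degrees
    }
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},  # access for authorised parties only
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example call against a hypothetical server:
# report_position("https://ais.example.org/v1/positions", "demo-token",
#                 244123456, 52.37, 4.90, 11.2, 87.0)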
The star AY Vulpeculae was recognized as variable and labeled an Algol-type system by Hoffmeister (A.N. 242, 133, 1931). Koch et al. (I.B.V.S. 1709, 1979) included it in their list of eclipsing binaries for which photoelectric work was needed. The system is faint, but because its primary minimum is very deep it is of astrophysical interest. The observations of the present investigation represent the first definitive photometric study of AY Vul, which is one of several systems with periods greater than two days being observed with the 1.0 meter Ritchey-Chrétien reflector at the Flagstaff Station. A total of 1242 observations of AY Vul (406 in V, 417 in B, and 419 in U) were obtained on 12 nights in 1986.
Orbital elements for AY Vul were obtained using the Wood model, and the calculations were performed on the VAX 11/750 computer at the Flagstaff Station of the U.S. Naval Observatory. AY Vulpeculae can be regarded as a classical Algol-type semidetached system in which the secondary component fills its Roche lobe, while the primary lies well inside its own lobe. Both eclipses are partial. The orbital elements are listed below.
Governing AI is about getting AI right. Building upon AI scholarship in science and technology studies, technology law, business ethics, and computer science, it documents potential risks and actual harms associated with AI, lists proposed solutions to AI-related problems around the world, and assesses their impact. The book presents a vast range of theoretical debates and empirical evidence to document how and how well technical solutions, business self-regulation, and legal regulation work. It is a call to think inside and outside the box. Technical solutions, business self-regulation, and especially legal regulation can mitigate and even eliminate some of the potential risks and actual harms arising from the development and use of AI. However, the long-term health of the relationship between technology and society depends on whether ordinary people are empowered to participate in making informed decisions to govern the future of technology – AI included.
AI and Image illustrates the importance of critical perspectives in the study of AI and its application to image collections in the art and heritage sector. The authors' approach is that such entanglements of image and AI are neither dystopian nor utopian but may amplify, reduce or condense existing societal inequalities, depending on how they are implemented in relation to human expertise and sensibility in terms of diversity and inclusion. The Element further discusses regulations around the use of AI for such cultural datasets as they touch upon legalities, regulations and ethics. In the conclusion, the authors emphasise the importance of the professional expert factor in the entanglements of AI and images and advocate for a continuously renegotiated professional symbiosis between humans and machines. This title is also available as Open Access on Cambridge Core.
After its launch on 30 November 2022, ChatGPT (or Chat Generative Pre-Trained Transformer) quickly became the fastest-growing app in history, gaining one hundred million users in just two months. Developed by the US-based artificial-intelligence firm OpenAI, ChatGPT is a free, text-based AI system designed to interact with the user in a conversational way. Capable of answering complex questions with sophistication and of conversing in a breezy and impressively human style, ChatGPT can also generate outputs in a seemingly endless variety of formats, from professional memos to Bob Dylan lyrics, HTML code to screenplays and five-alarm chilli recipes to five-paragraph essays. Its remarkable capability relative to earlier chatbots gave rise to both astonishment and concern in the tech sector. On 22 March 2023, a group of more than one thousand scientists and entrepreneurs published an open letter calling for a six-month moratorium on further human-competitive AI development – a moratorium that was not observed.
This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity of acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.
It is only recently that the EPO’s Boards of Appeal have had to deal with appeals relating to the surge of AI-based inventions. In doing so, the Boards of Appeal have adopted a gradualist approach, adapting the extensive EPO case law relating to the patentability of computer programs ‘as such’ and applying it to AI inventions. The most recent change to the Guidelines indicates the EPO’s willingness to adapt to technological developments and to refine its approach to the patentability of inventions involving AI, while at the same time taking a firm line against patenting non-technical inventions.
AI is a complex, multifaceted concept and is therefore hard to define, because AI can refer to technological artifacts, certain methods, or a scientific field that is split into many subfields and that is continuously changing and evolving. AI systems can therefore be seen as digital artifacts that require hardware and software components and that contain at least one learning or learned component, i.e., a component that is able to change the system’s behavior based on presented data and the processing of these data.
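As a minimal, hypothetical sketch, not taken from the chapter, the defining property of a "learning or learned component", namely behavior that changes with the data presented to it, can be illustrated as follows; the threshold classifier and all names in it are invented for the example.

# Minimal illustration of a "learning component": the predictions it makes
# change as data are presented to it. Invented purely for this example.
class ThresholdClassifier:
    def __init__(self):
        self.threshold = 0.0  # learned parameter, updated from data
        self._count = 0

    def update(self, value, label):
        """Present one labeled example; shift the threshold toward positive examples."""
        if label == 1:
            self._count += 1
            # running mean of positive examples becomes the decision threshold
            self.threshold += (value - self.threshold) / self._count

    def predict(self, value):
        """Behavior that depends on what the component has seen so far."""
        return 1 if value >= self.threshold else 0

clf = ThresholdClassifier()
print(clf.predict(0.4))       # before learning: threshold 0.0, predicts 1
for v in (0.8, 0.9, 1.0):
    clf.update(v, label=1)
print(clf.threshold)          # roughly 0.9 after seeing the data
print(clf.predict(0.4))       # behavior has changed: now predicts 0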
This chapter turns to the possibility that the AI systems challenging the legal order may also offer at least part of the solution. Here China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.
Conceptually, the tradition of escape from Egypt requires a place to go, a happy ending. Numbers 21 and Deuteronomy 2–3 provide this in the east, in a tradition that does not of itself assume combination with a western campaign. The Bible as we now have it provides the book of Joshua, with its crossing of the Jordan River from the east (chapters 3–4), the assault on Jericho (chapter 6), followed by eventual victory at Ai (chapters 7–8), forced compromise with the Gibeonites (chapter 9), and at last a sweeping success against the assembled forces of first the south, then the north (chapters 10–11). The rest of the book is built around detailed territorial definitions for all the tribes, both east and west of the Jordan (chapters 13–19), with special consideration for the Levites (chapter 21). All this is framed by speeches from Joshua himself in chapters 1 and 23, which have long been understood as classic examples of deuteronomistic work, in making this collection part of a larger history.
Although the western conquest led by Joshua depends in its full expression on the old Israelite tradition of exodus from Egypt, the book gives it a limited and finally Judahite perspective. Like the later revisions of the Moses traditions, which integrate the tribal scheme of Genesis into the old tales of departure and invasion, the book of Joshua pictures a conquest by twelve tribes. Perhaps this tribal focus allows Judah to be given a special role; in any case, the establishment of all Israel, far beyond the borders of Judah, is essential. Moreover, the geographical ambitions of the book are considerable, reaching beyond the proven accomplishment of Israel in any period. Joshua mixes an Israelite ideal with an overwhelmingly Judahite realization. For all the narrative located in the territory of Israel, it is extremely difficult to isolate plausibly Israelite material. In the end, the one most convincing text is the account of Joshua's victory at Ai in chapter 8. This alone provides a starting point for understanding the construction of a western conquest from the tradition of Joshua as ancient warrior leader. Joshua 8 belongs to the family of biblical texts that presents Israel as a collective unit, especially for going to war, and as such, the text belongs with the various Moses materials.
This chapter explores AI’s potential consciousness, distinguishing it from human consciousness and addressing concerns about unintentionally creating conscious AI. The "Hard Problem of Consciousness" examines challenges in understanding how systems generate consciousness. "Strong AI" and "weak AI" concepts are introduced, envisioning AI replicating human functions, including consciousness. The chapter explores artificial consciousness’s significance in human–AI interactions, attachment, and ethical considerations, addressing potential risks and implications. Later sections cover consciousness aspects such as self-awareness, subjectivity, memory, anticipation, learning, perception, time awareness, cognition, reflection, intentionality, emotion, empathy, dreaming, and imagination. It navigates the intersection of AI, consciousness, and ethical and legal implications, discussing challenges and testing approaches like the Turing test, the Argonov test, the ConsScale test, the emotional response test, the ethical decision-making test, the mirror test, the global workspace test, and the know thyself test. The chapter suggests that AI consciousness may not be binary but could exist in varying degrees.
Artificial intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility, and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but often fail to suffice due to the context-sensitivity of ethical challenges. Second, this chapter discusses methods to tackle these challenges. Main ethical theories (such as virtue ethics, consequentialism, and deontology) are shown to provide a starting point, but often lack the details needed for actionable AI ethics. Instead, we argue that mid-level philosophical theories coupled to design-approaches such as “design for values”, together with interdisciplinary working methods, offer the best way forward. The chapter aims to show how these approaches can lead to an ethics of AI that is actionable and that can be proactively integrated in the design of AI systems.