In this chapter, the philosopher Christoph Durt elaborates a novel view of AI and its relation to humans. He contends that AI is neither merely a tool, nor an artificial subject, nor necessarily a simulation of human intelligence. These misconceptions of AI have led to grave misunderstandings of its opportunities and dangers, and a more comprehensive concept of AI is needed to better understand the possibilities of responsible AI. The chapter traces the roots of these misconceptions to the Turing Test. The author argues that the simplicity of the test’s setup is deceptive, and that Turing was aware that the text exchanges can develop in far more intricate ways than usually thought. The Turing Test only seemingly avoids difficult philosophical questions: it passes the burden on to an evaluator who is part of the setup and whose decisive contribution is hidden in plain sight. Durt shows that, unlike all previous technology, AI processes meaningful aspects of the world as experienced and understood by humans. He delineates a more comprehensive picture according to which AI integrates into the human lifeworld through its interrelations with humans and data.
When interactions between individual dynamical components are sufficiently strong, coordinated dynamics at the systemic level can emerge. This is called synchronisation.
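The claim above, that sufficiently strong coupling between dynamical components produces coordinated dynamics at the systemic level, can be illustrated with the classic Kuramoto model of coupled phase oscillators. The sketch below is a minimal plain-Python illustration (function name and parameter values are our own choices, not from the text): it computes the order parameter r, which stays near 0 when the oscillators drift incoherently and approaches 1 when they synchronise.

```python
import math
import random

def kuramoto_order(n=100, coupling=2.0, steps=2000, dt=0.05, seed=0):
    """Simulate n Kuramoto phase oscillators with mean-field coupling
    and return the final order parameter r in [0, 1]."""
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]        # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # initial phases
    for _ in range(steps):
        # Mean field: r measures coherence, psi is the mean phase.
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r = math.hypot(cx, sx)
        psi = math.atan2(sx, cx)
        # Euler step: each oscillator is pulled toward the mean phase
        # with strength proportional to the coupling and coherence.
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for w, t in zip(freqs, theta)]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)
```

Running this with weak coupling (e.g. 0.05) leaves the phases incoherent (small r), while strong coupling (e.g. 3.0) drives the population to a synchronised state (r close to 1), mirroring the transition described above.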
In this chapter, the law scholar Boris Paal identifies a conflict between two objectives pursued by data protection law: the comprehensive protection of privacy and personal rights, and the facilitation of an effective and competitive data economy. Focusing on the European Union’s General Data Protection Regulation (GDPR), the author recognises its failure to address the implications of AI, the development of which depends on access to large amounts of data. The regulation is observed to be not only immensely burdensome for controllers but also likely to significantly limit the output of AI-based applications. In general, the main principles of the GDPR seem to be in direct conflict with the functioning and underlying mechanisms of AI applications, which evidently were not sufficiently considered whilst the regulation was being drafted. Hence, Paal argues that establishing a separate legal basis governing the permissibility of processing operations using AI-based applications should be considered; the enhanced legal framework should seek to reconcile data protection with openness to the new opportunities of AI developments.
We present a few paradigmatic modelling strategies selected due to their generality in addressing the emergence of spatio-temporal structure, including criticality, synchronisation, intermittency, adaptation and forecasting.
In this chapter, the political philosopher Alex Leveringhaus asks whether lethal autonomous weapons systems (AWS) are morally repugnant and whether this entails that they should be prohibited by international law. To this end, Leveringhaus critically surveys three prominent ethical arguments against AWS: firstly, that AWS create ‘responsibility gaps’; secondly, that their use is incompatible with human dignity; and, thirdly, that AWS replace human agency with artificial agency. He argues that some of these arguments fail to show that AWS are morally different from more established weapons. However, the author concludes that AWS are currently problematic due to their lack of predictability.
In this chapter, the law scholar Ebrahim Afsah outlines different implications of AI for the area of national security. He argues that while AI overlaps with many challenges to national security arising from cyberspace, it also creates new risks, including the emergence of a superintelligence in the future, the development of autonomous weapons, the enhancement of existing military capabilities, and threats to foreign relations and economic stability. Most of these risks, however, Afsah concludes, can be subsumed under existing normative frameworks.
To answer the question of what responsible AI means, the authors, Jaan Tallinn and Richard Ngo, propose a framework for the deployment of AI which focuses on two concepts: delegation and supervision. The framework aims at building ‘delegate AIs’ which lack goals of their own but can perform any task delegated to them. However, AIs trained with hardcoded reward functions, or even human feedback, often learn to game their reward signal instead of accomplishing their intended tasks. Thus, Tallinn and Ngo argue that it will be important to develop more advanced techniques for continuous high-quality supervision – for example, by evaluating the reasons which AIs give for their choices of actions. These supervision techniques might be made scalable by training AIs to generate reward signals for more advanced AIs. Given the current limitations of such techniques, however, Tallinn and Ngo call for caution when developing new AI: we must be aware of the risks and overcome self-interest and dangerous competitive incentives in order to avoid them.
The philosopher Wilfried Hinsch focuses on statistical discrimination by means of computational profiling. He defines statistical profiling as an estimate of what individuals will do, made by considering the group of people to which they can be assigned. The author explores which criteria of fairness and justice are appropriate for the assessment of computational profiling. According to Hinsch, grounds of discrimination such as gender or ethnicity do not explain when or why it is wrong to discriminate. Thus, Hinsch argues that discrimination constitutes a rule-guided social practice that imposes unreasonable burdens on specific people. He argues that, on the one hand, statistical profiling is a part of human nature and not by itself wrongful discrimination. On the other hand, even statistically correct profiles can be unacceptable for reasons of procedural fairness or substantive justice. Because of this, Hinsch suggests a fairness index for profiles to determine procedural fairness, and argues that because AI systems do not rely on human stereotypes or comparably limited data, computational profiling may be a better safeguard of fairness than humans.
The chapter aims to serve as a conceptual sketch of the intricacies involved in autonomous algorithmic collusion, including the notion of concerted practices for cases that would otherwise elude the cartel prohibition. Stefan Thomas, a law scholar, starts by assessing how algorithms can influence competition in markets before dealing with the traditional criteria of distinction between explicit and tacit collusion, which might reveal a potential gap in the existing legal framework regarding algorithmic collusion. Finally, he analyses whether the existing cartel prohibition can be construed in a way that captures the phenomenon appropriately. This chapter shows how enforcement paradigms that hinge on descriptions of the inner sphere and conduct of human beings may collapse when applied to the effects precipitated by independent AI-based computer agents.
In this chapter, the law scholars Haksoo Ko, Sangchul Park, and Yong Lim analyse the way South Korea has been dealing with the COVID-19 pandemic and its legal consequences. Instead of enforcing strict lockdowns, South Korea imposed several other measures, such as a robust AI-based contact tracing scheme. The chapter provides an overview of the legal framework and the technology which allowed South Korea to employ its technology-based contact tracing scheme. Additionally, the authors showcase the information system South Korea implemented, as well as the actual use of the data. The authors argue that South Korea has a rather stringent data-protection regime, which proved to be the biggest hurdle to implementing the contact tracing scheme. However, after its bruising encounter with the Middle East Respiratory Syndrome (MERS) in 2015, the country introduced a separate legal framework for extensive contact tracing, which was reactivated during the pandemic and provided government agencies with extensive authority to process personal data for epidemiological purposes. The AI-based technology built in the process of creating smart cities also proved handy, as it was repurposed for contact tracing.
The chapters of Part I will discuss why complexity science is important, how it relates to other sciences, and, briefly, its philosophical status. The aim is to make clear what makes complexity science special and in what way it contributes to our understanding of the surrounding world.