There is a broad consensus that human supervision holds the key to sound automated decision-making: if a decision-making policy uses the predictive outputs of a statistical algorithm, but those outputs form only part of a decision that is made ultimately by a human actor, use of those outputs will not (per se) fall foul of the requirements for due process in public and private decision-making. Thus, the focus in academic and judicial spheres has been on making sure that humans are equipped and willing to wield this ultimate decision-making power. Yet, proprietary software obscures the reasons for any given prediction; this is true both for machine learning and deterministic algorithms. And without these reasons, the decision-maker cannot accord appropriate weight to that prediction in their reasoning process. Thus, a policy of using opaque statistical software to make decisions about how to treat others is unjustified, however involved humans are along the way.
This chapter closes Part 1 by analysing how the opacity surrounding the use of AI and ADM tools by financial corporations is enabled, and even encouraged, by the law. As other chapters in the book demonstrate, such opacity brings about significant risks to fundamental rights, consumer rights, and the rule of law. Analysing examples from jurisdictions including the US, UK, EU, and Australia, Bednarz and Przhedetsky unpack how financial entities often rely on rules and market practices protecting corporate secrecy, such as complex credit scoring systems, proprietary rights to AI models and data, and the carve-out of ‘non-personal’ information from data and privacy protection laws. The authors then focus on the rules incentivising the use of AI and ADM tools by financial entities, showing how they provide a shield behind which corporations can hide their consumer scoring and rating practices. The authors also explore potential regulatory solutions that could pierce this opacity and ensure transparency, introducing direct accountability and scrutiny of ADM and AI tools, and reducing the control of financial corporations over people’s data.
Chapter 7 proposes how the liability framework should be implemented. Harm liability can flow from a statutory standard or local tort law. This focus allows liability to complement, rather than replicate, public enforcement. The quantum of liability should depend on the harm incurred by the victim, rather than on the wrongfulness of the perpetrator’s conduct or the consequences that the perpetrator foresaw. Privacy liability is most effective as part of a mechanism of collective redress, such as class actions. A robust notion of loss and harm can address problems of insufficient compensation and uncertainties in class certification. Because privacy problems operate at scale, regulators and courts need a framework that recognizes mass privacy effects.
Compliant interaction between robots and the environment is crucial for completing contact-rich tasks. However, obtaining and implementing optimal interaction behavior in complex unknown environments remains a challenge. This article develops a hybrid impedance and admittance control (HIAC) scheme for robots interacting with a second-order unknown environment. To obtain the second-order target impedance model that represents the optimal interaction behavior without accurate environment dynamics or acceleration feedback, an impedance adaptation method with virtual inertia is proposed. Impedance control and admittance control have complementary structures, and their fixed causality yields unsatisfactory performance across a wide range of environmental stiffness. A hybrid system framework suitable for the second-order environment is therefore proposed: a switching controller generates a series of intermediate controllers that interpolate between the responses of impedance and admittance control as its switching duty cycle is adjusted. In addition, the optimal intermediate controller is selected using a mapping to the optimal duty cycle, providing optimal implementation performance for the target impedance model. The proposed HIAC scheme achieves the desired interaction and impedance-implementation performance while ensuring system stability. Simulation and experimental studies verify the effectiveness of the scheme on a 2-DOF manipulator and a 7-DOF Franka Emika Panda robot, respectively.
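The switching idea can be sketched in a few lines. In the following illustrative Python simulation (a minimal sketch, not the authors' HIAC implementation; the 1-DOF spring-damper environment and all gains and masses are assumed for illustration), the controller spends a fraction `duty` of each switching period in impedance mode (motion in, force out) and the remainder in admittance mode (force in, motion out), so `duty = 1` recovers pure impedance control and `duty = 0` pure admittance control.

```python
# Minimal 1-DOF sketch of hybrid impedance/admittance switching. All values,
# the spring-damper contact model, and the switching logic are illustrative
# assumptions, not the paper's HIAC scheme.

dt = 1e-3                          # control/simulation step (s)
T_sw, duty = 0.02, 0.5             # switching period (s), impedance-mode duty cycle

m_r = 2.0                          # robot link mass (kg)
M_d, B_d, K_d = 1.0, 30.0, 300.0   # target impedance: inertia, damping, stiffness
x_d = 0.02                         # desired position, set inside the surface (m)
K_e, B_e = 5000.0, 10.0            # stiff environment (spring-damper)
Kp, Kv = 4000.0, 120.0             # inner position loop used in admittance mode

x, xd = -0.005, 0.0                # robot state; contact surface is at x = 0
xr, xrd = x, xd                    # admittance reference trajectory
in_impedance_prev = True
f_ext = 0.0

for k in range(1500):              # 1.5 s of simulated time
    f_ext = max(0.0, K_e * x + B_e * xd) if x > 0.0 else 0.0
    in_impedance = ((k * dt) % T_sw) < duty * T_sw
    if not in_impedance and in_impedance_prev:
        xr, xrd = x, xd            # re-initialise the reference at the switch
    in_impedance_prev = in_impedance

    # Target impedance model: M_d*e_dd + B_d*e_d + K_d*e = -f_ext, e = x - x_d
    if in_impedance:
        # Impedance causality: measure motion, command force
        a_t = (-f_ext - B_d * xd - K_d * (x - x_d)) / M_d
        u = m_r * a_t + f_ext
    else:
        # Admittance causality: measure force, integrate the target model into
        # a reference motion, track it with a stiff inner position loop
        a_r = (-f_ext - B_d * xrd - K_d * (xr - x_d)) / M_d
        xrd += a_r * dt
        xr += xrd * dt
        u = Kp * (xr - x) + Kv * (xrd - xd)

    xd += (u - f_ext) / m_r * dt   # robot dynamics: m_r * xdd = u - f_ext
    x += xd * dt

print(f"final position {1000 * x:.2f} mm, final contact force {f_ext:.1f} N")
```

In the scheme described in the abstract, the duty cycle would be chosen via the mapping to its optimal value rather than fixed as here.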
Chapter 7 analyses the legal challenges that the incorporation of AI systems in the Automated State will bring. The starting point is that legal systems have so far coped relatively well with the use of computers by public authorities. The critical disruption of the Automated State predicted by Robert McBride in 1967 has not materialised, and we have therefore not been forced to substantively rethink how adequately administrative law deals with machines. However, the incorporation of AI into automation may be that disruption. In this chapter, Bello y Villarino offers a counterpoint to those who believe that existing principles and rules can be easily adapted to address the use of AI in the public sector. He discusses the distinct elements of AI through an exploration of the dual role of public authorities: a state that executes policy and a state that designs policy. The use of AI systems in these two contexts is of a different regulatory order. Until now, there has been an assumption that policy design should be allowed a broad margin of discretion, especially when compared with the state as an executor of policies and rules. Yet the automation of policy design will require public authorities to make explicit decisions about objectives, boundary conditions, and preferences. Discretion for humans can remain, but AI systems analysing policy choices may suggest that certain options are superior to others. This could justify employing different legal lenses to approach the regulation of automated decision-making and decision-support systems used by the State. The reasoning, to some extent, could also be extrapolated to Automated Banks. Each perspective is analysed in reference to the activity of modern states. The main argument is that the AI-driven Automated State is not suited to the one-size-fits-all approach often claimed to apply to administrative law. The final part of the chapter explores some heuristics that could facilitate the regulatory transition.
Artificial intelligence (AI) and automated decision-making (ADM) tools promise money and unmatched power to banks and governments alike. The promise is that they will know everything about their citizens and customers and will also be able to predict their behaviour, preferences, and opinions. Global consulting firm McKinsey estimates that AI technologies will unlock $1 trillion in additional value for the global banking industry every year.1 Governments around the world are getting on the AI bandwagon, expecting increased efficiency, reduced costs, and better insights into their populations.
The progressing digitalization of mechanical engineering creates a significant demand for information about characteristic process and state variables of technical systems. However, since it is often obvious neither what to measure nor how to measure it, the integration of measuring functions, in particular as a retrofit, represents a current challenge in mechanical engineering. To overcome this challenge, this contribution provides an approach for the systematic identification of potential measurands. For this purpose, the approach of physical effect catalogs is taken up and used for the systematic identification of potential measurands, starting from a physical variable to be determined. Existing catalog systems have two major limitations with respect to the intended identification of cause–effect relationships: they assume the effect is already realized, and they do not consider the design parameters of a technical system. These limitations are overcome by linking the fundamental idea of existing catalog systems with the basics of multipole-based modeling. In this way, a multipole-based effect catalog system is developed. It creates the foundation for systematically including the changes and transformations of a process or state variable to be determined in the identification of potential measurands.
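To make catalog-based measurand identification concrete, here is a minimal sketch of one way such a query could work, assuming cause–effect relations are stored as directed edges and searched from the variable to be determined to directly measurable variables. The entries, names, and graph representation are illustrative assumptions, not the multipole-based catalog system itself.

```python
# Illustrative effect-catalog query: starting from a physical variable to be
# determined, follow cause-effect relations until a directly measurable
# variable is reached. Entries below are simplified textbook examples.
from collections import deque

# (cause variable, effect variable, physical effect linking them)
CATALOG = [
    ("force",       "strain",     "Hooke's law (elastic deformation)"),
    ("strain",      "resistance", "piezoresistive effect (strain gauge)"),
    ("force",       "charge",     "piezoelectric effect"),
    ("temperature", "resistance", "thermoresistive effect"),
    ("resistance",  "voltage",    "Ohm's law (known excitation current)"),
    ("charge",      "voltage",    "charge amplifier"),
]
DIRECTLY_MEASURABLE = {"voltage"}

def find_measurement_chains(target):
    """Breadth-first search for chains from `target` to a measurable variable."""
    queue, chains = deque([(target, [])]), []
    while queue:
        var, path = queue.popleft()
        if var in DIRECTLY_MEASURABLE:
            chains.append(path)
            continue
        for cause, effect, name in CATALOG:
            # Follow the edge unless the effect variable was already visited
            if cause == var and effect not in {v for v, _ in path}:
                queue.append((effect, path + [(effect, name)]))
    return chains

for chain in find_measurement_chains("force"):
    steps = " -> ".join(f"{var} (via {effect})" for var, effect in chain)
    print(f"force -> {steps}")
```

Each printed chain is a candidate measurement principle for the target variable; the contribution's catalog additionally accounts for unrealized effects and design parameters, which this sketch omits.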
Tech companies bypass privacy laws daily, creating harm for profit. The information economy is plagued with hidden harms to people’s privacy, equality, finances, reputation, mental wellbeing, and even to democracy, produced by data breaches and data-fed business models. This book explores why this happens and proposes what to do about it. Legislators, policymakers, and judges are trapped in ineffective approaches to digital harms because they work with tools unfit for the unique challenges of data ecosystems that leverage AI. People are powerless against inferences about them that they can’t anticipate, interfaces that manipulate them, and digital harms they can’t escape. Adopting a cross-jurisdictional scope, this book describes how laws and regulators can and should respond to these pervasive and expanding harms. In a world where data is everywhere, one of society’s most pressing challenges is addressing the power discrepancies between the companies that profit from personal data and the people whose data produces that profit. Doing so requires creating accountability for the consequences of corporate data practices, not the practices themselves. Laws can achieve this by creating a new type of liability that recognizes the social value of privacy, uncovering the dynamics between individual and collective digital harms.
Learning finite automata (termed model learning) has become an important field in machine learning and has found useful realistic applications. Quantum finite automata (QFA) are simple models of quantum computers with finite memory. Due to their simplicity, QFA have good physical realizability, yet one-way QFA still have essential advantages over classical finite automata with regard to state complexity (two-way QFA are more powerful than classical finite automata in computational ability as well). As a distinct problem in quantum learning theory and quantum machine learning, in this paper we initiate the study of learning QFA with queries (which may naturally be termed quantum model learning). The main results concern learning two basic one-way QFA (1QFA): (1) we propose a learning algorithm for measure-once 1QFA (MO-1QFA) with polynomial query complexity, and (2) we propose a learning algorithm for measure-many 1QFA (MM-1QFA) with polynomial query complexity.
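For readers unfamiliar with the model, the following minimal sketch (an illustrative 2-state example, not one of the paper's constructions or its learning algorithm) shows how an MO-1QFA processes a word: one unitary per input symbol is applied to the quantum state, and a single projective measurement at the end yields the acceptance probability, which is exactly what a membership-style query would return to a learner.

```python
# Measure-once one-way QFA (MO-1QFA) sketch: apply one unitary per symbol,
# then measure once against the accepting subspace. The 2-state automaton
# (a rotation per 'a', identity per 'b') is an illustrative assumption.
import numpy as np

theta = np.pi / 8
U = {  # one unitary per alphabet symbol
    "a": np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]]),
    "b": np.eye(2),
}
q0 = np.array([1.0, 0.0])       # initial state |0>
P_acc = np.diag([0.0, 1.0])     # projector onto the accepting subspace

def acceptance_probability(word):
    """Run the MO-1QFA on `word`, measuring only once at the end."""
    state = q0.copy()
    for symbol in word:
        state = U[symbol] @ state
    return float(state.conj() @ P_acc @ state)

# Membership-style queries, the basic primitive a learning algorithm uses:
for w in ["", "a", "aa", "aaaa", "ab"]:
    print(f"P(accept '{w}') = {acceptance_probability(w):.4f}")
```

Here the acceptance probability after n occurrences of 'a' is sin²(nθ), so "aaaa" is accepted with probability 1 while the empty word is always rejected; a learner with query access must recover the unitaries and measurement up to equivalence from such statistics.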
The potential of AI solutions to enhance effective decision-making, reduce costs, personalise offers and products, and improve risk management has not gone unnoticed by the financial industry. On the contrary, the characteristics of AI systems seem to accommodate perfectly the features of financial services and to address masterfully their most distinctive and challenging needs. Thus, the financial industry provides a receptive and conducive environment for the growing application of AI solutions in a variety of tasks, activities, and decision-making processes. The aim of this paper is to examine the current state of the legal regime applicable in the European Union to the use of AI systems in the financial sector and to reflect on the need to formulate principles and rules that ensure responsible automation of decision-making and that serve as a guide for widely and extensively implementing AI solutions in banking activity.
This chapter offers a synthesis of the role the law has to play in Automated States. Arguing for a new research and regulatory agenda on AI and ADM beyond the artificial ‘public’ and ‘private’ divide, it seeks to identify the new approaches and safeguards necessary to make AI companies and the Automated States accountable to their customers, citizens, and communities. I argue that emphasis on procedural safeguards alone – or what I call procedural fetishism – is not enough to counter the unprecedented levels of AI power in the Automated States. Only by shifting our perspective from the procedural to the substantive can we search for new ways to regulate the future in the Automated States. The chapter concludes the collection with an elaboration of what more substantive regulation should look like: creating a global instrument on data privacy; redistributing wealth and power by breaking up and taxing AI companies, increasing public scrutiny, and adopting prohibitive laws; and democratizing AI companies by making them public utilities and giving people a say in how these companies should be governed. Crucially, we must also decolonize future AI regulation by recognizing colonial practices of extraction and exploitation and by paying attention to the voices of Indigenous peoples and communities of the so-called Global South. With all these mutually reinforcing efforts, the new AI regulation will debunk the corporate and state agenda of procedural fetishism and establish a new social contract in the age of AI.
Chapter 4 delves into two efforts to reinforce consent: opt-in and informed choice. It illustrates why, in the information economy, they also fail. Power asymmetries enable systemic manipulation in the design of digital products and services. Manipulation by design thwarts improved consent provisions, interfering with people’s decision-making. People’s choices regarding their privacy are determined by the designs of the systems with which they interact. European and American attempts to regulate manipulation by changing tracking from ‘opt-out’ to ‘opt-in’ and by reinforcing information requirements founder on the illusion of consent. Contract law doctrines that aim to reduce manipulation are unsuitable because they assume mutually beneficial agreements, and privacy policies are neither. Even the best efforts to strengthen meaningful consent and choice, including policies specifically intended to protect users, are ultimately insufficient because of the environment in which privacy “decisions” take place.