Georas analyzes the dilemmas that arise when we use robots to serve humans living in the digital age. She focuses in particular on the design and deployment of carebots to explore how they are embedded in broader, multifaceted material and discursive configurations, and how they are implicated in the construction of humanness in socio-technical spaces. In doing so, she delves into the "fog of technology," arguing that this fog is always also a fog of inequality, since the emerging architectures of our digitized lives will connect with pre-existing forms of domination. In this context, resistive struggles are premised upon our capacity to dissent, which is what ultimately enables us to express our humanity and at the same time makes us unpredictable. What it means to be human in the digital world is thus never fixed but, Georas argues, must always be strategically reinvented and reclaimed, since there will always be people living on the “wrong side of the digital train tracks” who will be unjustly treated.
Millar and Gray argue that mobility shaping is raising a set of unresolved ethical, political, and legal issues that have significant consequences for shaping human experience in the future. By way of analogy, they unpack how these emerging issues in mobility echo questions that have been asked in the more familiar context of net neutrality. They then apply some of the ethical and legal reasoning surrounding net neutrality to the newly relevant algorithmically controlled mobility space. They conclude that, by extending some of the legal and regulatory framework around net neutrality to mobility providers, we can establish and ensure a just set of principles and rules for shaping mobility in ways that promote human flourishing.
Lyon uses the COVID-19 pandemic to think about the instrumentalizing role of surveillance capitalism in digital society. He argues that the tech solutionism proffered by tech companies during the pandemic too often implied that democratic practices and social justice are at least temporarily dispensable for some greater good, with disastrous consequences for human flourishing. As a counterpoint, Lyon uses the notion of an ethics of care as a way to refocus on the importance of articulating the conditions that will enable the humans who live in datafied societies to live meaningful lives. He then offers Eric Stoddart’s notion of the “common gaze” to begin to imagine what those conditions might be. From this perspective, surveillance can be conceptualized as a gaze for the common good, with a “preferential optic” focused on the conditions that will alleviate the suffering of the marginalized.
Murakami Wood makes both an empirical and a theoretical contribution by analysing the discourses contained in smart city marketing materials to create a detailed description of the kind of human that smart city developers and promoters envision as smart city residents. The resulting portrait of the “platform human” – a being whose entrepreneurial and libertarian needs are seamlessly enabled by technology built into the lived environment – is informed by a technologically enabled notion of class, a specific political identity of smart citizens as property-owning, entrepreneurial, and libertarian, and a generic environmental ‘goodness’ associated with smart platforms. The combination of these three elements resonates strongly with transhumanist speciation, in which humans are imagined as data-driven, surveillant, and robotic.
The chapter discusses the evolution of justice and dispute resolution in the era of LawTech (LT). Traditional taxonomies of justice are mirrored in new forms of digital dispute settlement (DDS), where the idealized Justice Hercules is compared to the prospect of robo-judges. Currently, LT primarily supports traditional courts as they transition to e-courts. Alternative dispute resolution (ADR) is evolving into online dispute resolution (ODR), with blockchain-based crowdsourcing emerging as a potential alternative to traditional justice. Hybrid models of dispute resolution are also taking shape. The chapter outlines assessment criteria for adopting LT in digital systems, focusing on ensuring that dispute settlement in the digital economy remains independent, impartial, and enforceable. Human centricity is a core construct for the co-development of LT and dispute settlement. This overarching principle requires human oversight, transparency, data privacy, and fairness in both access and outcomes.
Technological disruption leads to discontent in the law regarding the limited remedies that are available under private law. The source of the problem is a ‘private law’ model that assumes that the function of law is to correct wrongs by compensating individuals who are harmed. The model is thus based on (i) individual claimants and (ii) financial redress. If we copy this private law model into our regulatory regimes for new technologies, our governance remedies will fall short. On the one hand, a single use of AI can affect a large number of people; on the other hand, not all wrongs can be cured by awarding money damages. It is therefore necessary to rethink private remedies in the face of AI wrongs in order to make law effective. To achieve this, the mantra of individual compensation has to be overcome in favor of a social perspective, including the use of non-pecuniary measures to provide effective remedies for AI wrongs.
Provided the law’s classifications are broadly drawn, technological innovation will not require the classifications to be redrawn or new categories to be introduced. This is not to say, however, that innovations will never require a rethinking of old categories or the invention of new ones. Difficult as that may be, the more difficult issue is detecting disruptions in the first place. Some truly disruptive innovations, such as computer programs, may be hidden from view for a variety of reasons. Others touted as disruptive, such as cryptoassets, may turn out not to be disruptive at all.
Failures of environmental law to preserve, protect and improve the environment are caused by law’s contingency and by constitutional presumptions of supremacy over the self-regulatory agency of nature. Contingency problems are intrinsic to law and therefore invite the deployment of technologies. Constitutional presumptions can be corrected through geo-constitutional reform. The latter requires the elaboration of geo-constitutional principles bestowing authority on nature’s self-regulatory agency. It is suggested that principles of autonomy, loyalty, pre-emption, supremacy and rights have the potential to serve that aim and imply proactive roles for technologies in environmental governance. Geo-constitutional reform is necessary to prevent the fatal collapse of the natural regulatory infrastructure enabling life and a future of environmental governance by design. Once environmental catastrophe has materialized, however, geo-constitutionalism loses its raison d’être.
This chapter argues that, as evidenced by EU digital law and EU border management, the EU legislature is complicit in the creation of complex socio-technical systems that undermine core features of the EU’s legal culture. In the case of digital law, while the EU continues to govern by publicly declared and debated legal rules, the legal frameworks – exemplified by the AI Act – are excessively complex and opaque. In the case of border management, the EU increasingly relies not on governance by law but on governance by various kinds of technological instruments. Such striking departures from the EU’s constitutive commitments to the rule of law, democracy and respect for human rights are more than a cause for concern; they raise profound questions about what it now means to be a European.
This chapter challenges the conventional wisdom of how users of social media platforms such as Instagram, X, or TikTok pay for service access. It argues that rather than merely exchanging data for services, users unknowingly barter their attention, emotions, and cognitive resources – mental goods that corporations exploit through technologically managed systems like targeted advertising and habit-forming design. The chapter explores how these transactions are facilitated not by legal contracts but by code, which allows social media companies to extract value in ways that traditional legal conceptual frameworks cannot capture. It further highlights the negative externalities of these exchanges, such as cognitive impairments and mental health issues, framing them as pollution byproducts of the attention economy. By examining both the visible and hidden dimensions of this technologically mediated exchange, the chapter calls for a deeper understanding of the mechanisms that govern our interactions with digital platforms rather than rushing to propose new legal solutions.
Advanced AI (generative AI) poses challenges to the practice of law and to society as a whole. The proper governance of AI is unresolved but will likely be multifaceted, combining soft law (such as standardisation, best practices and ethical guidelines) with hard law consisting of a blend of existing law and new regulations. This chapter argues that lawyers’ professional codes of conduct (ethical guidelines) provide a governance system that can be applied to the AI industry. The increase in professionalisation warrants treating AI creators, developers and operators as professionals subject to the obligations foisted on the legal profession and other learned professions. Legal ethics provides an overall conceptual structure that can guide AI development, serving the purposes of disclosing potential liabilities to AI developers and building trust for the users of AI. Additionally, AI creators, developers and operators should be subject to fiduciary duty law. Fiduciary duty law as applied to these professionals would require a duty of care in designing safe AI systems; a duty of loyalty to customers, users and society not to create systems that manipulate consumers and democratic governance; and a duty of good faith to create beneficial systems. This chapter advocates the use of ethical guidelines and fiduciary law not as soft law but as the basis for structuring private law in the governance of AI.
Law’s governance seemingly faces an uncertain future. In one direction, the alternative to law’s governance is a dangerous state of disorder and, potentially, existential threats to humanity. That is not the direction in which we should be going, and we do not want our escalating discontent with law’s governance to push us toward it. Law’s governance is already held in contempt by many. In the other direction, if we pursue technological solutions to the imperfections in law’s governance, there is a risk that we diminish the importance of humans and their agency. If any community is contemplating a transition to governance by technology, it needs to start its impact assessment with the question of whether the new tools are compatible with sustaining the foundational conditions themselves.
This chapter analyses the public and private governance structure of the EU AI Act (AIA) and its associated ecosystem of compliance and conformity. Firstly, it analyses the interaction of public and private governance in the making of the AI law meant to concretise the rules in the AIA. Secondly, the focus shifts to the interaction of public and private governance in the Act’s enforcement through compliance, conformity and public authorities. Thirdly, it is argued that the EU legislature has fully developed neither public-private governance nor the interaction between the two. As a result, there are many gaps in the involvement of civil society in the compliance, conformity and enforcement of private regulations, in particular harmonized technical standards, Codes of Practice and Codes of Conduct. Moreover, the extreme complexity of the AIA’s governance structure is likely to trigger litigation between AI providers and deployers and the competent surveillance authorities, or more generally in B2B and B2C relations.
This chapter examines three reasons for discontent with law’s governance of technology. Reservations concern the exercise of legal powers, the convenience of legal regulations, and prestige. The analysis is supplemented with the impact that the pace of technological innovation has on legal systems, and with the distinction between internal and external problems of legal governance. The internal problems concern the efficacy, efficiency, and overall soundness of normative acts; the external problems relate to the claims of further regulatory systems in society, such as the forces of the market or of social custom. Following the recommendation of Leibniz in the sixth paragraph of his Discourse on Metaphysics, the overall idea is to discuss the simplest possible hypothesis that attains the richest world of phenomena. Discontent with law’s governance of technology is indeed a complex topic with manifold polymorphous ramifications.