Deep fakes are a special kind of counterfeit image that is difficult to distinguish from an authentic image. They may be used to represent a person doing any act and are generated using advanced machine learning techniques. Currently, such an appropriation of personality is only actionable if the circumstances disclose one of a number of largely unrelated causes of action. As these causes of action are inadequate to protect claimants from the appropriation of their personalities, there should be a new independent tort or statutory action for the appropriation of personalities which is grounded in the protection of a person’s dignitary interests.
AI has the potential to overcome problems concerning the existing approaches to contract drafting, management and implementation, whilst also having the potential to exacerbate these problems. To deal with this risk and to create AI which is trustworthy in relation to contracting, such systems require channelling in a new direction, termed ‘transactional responsibility’. Legal regulation must be structured around the entirety of the socio-technical system which underpins AI.
AI will disrupt the existing tort settlement. Tort law should be tech-impartial – that is, it should not encourage or discourage the adoption of new technologies where they generate the same level of risk, and victim rights should not be eroded by the use of new technologies in place of existing systems of work. Existing tort law is poorly suited to address some AI challenges, and a liability gap will emerge as systems replace employees since AI does not have legal personality and cannot commit a tort. A form of AI statutory vicarious liability should apply in commercial settings to address the liability gap and as the tech-impartial solution.
AI will greatly assist in the administration of express and charitable trusts and will also be of significant benefit to trust law in acting as an adjudicator. AI should be able to act as an acceptable trustee of an express trust, and resulting trusts pose no insurmountable challenge for AI, either as trustee or adjudicator. The proposition that discretionary trusts are unsuited to AI administration can be rejected, along with the notion that the discretionary nature of remedies makes this area of law unsuited to AI adjudication. Although constructive trusts may pose some difficulties for AI, these may be resolved through legal reform. Further, the difficulties that AI trustees will create admit of practical solutions.
Law reform is needed to recognise the impact of automation and machine learning systems on the services provided by intermediaries while requiring intermediaries to minimise illicit or harmful content.
The ever-growing involvement of InsurTech in insurance operations requires us to consider how far the new technologies may have disrupted insurance services, and therefore society. To answer this question, the areas of insurance services in which InsurTech has been predominantly employed are explored. The social and economic impact of InsurTech is then discussed, together with the fundamental principles that guide the business and legal operation of insurance services.
This chapter introduces lawyers to basic concepts of AI, explains the capabilities and limitations of AI, and identifies technological challenges that might require legal responses.
AI will greatly challenge product liability since it is based on assumptions as to physical objects distributed through organised linear value chains which do not necessarily apply in the AI context. AI systems further challenge both liability compartmentalisation based on separate risk spheres and the notion of defectiveness. The European Product Liability Regime is based on a linear value chain, whereas with AI, systems may be distributed differently. The realities of new value chains call for a number of adjustments to central product liability concepts, which will widen the scope of product liability rules. Further, AI may in fact have the potential to ultimately dissolve the very notion of product liability itself.
Data is one of the most valuable resources in the twenty-first century. Property rights are a tried-and-tested legal response to regulating valuable assets. For non-personal, machine-generated data within an EU context, mainstream IP options are not available, although certain types of machine-generated data may be protected as trade secrets or under sui generis database protection. However, a new IP right is not needed. The formerly proposed EU data producer’s right is a cautionary tale for jurisdictions considering a similar model: a new property right would both strengthen the position of de facto data holders and drive up costs. Nevertheless, with data, there are valuable lessons to be learned from constructed commons models.
Whether AI should be given legal personhood should not be framed in binary terms. Instead, this issue should be analysed in terms of a sliding-scale spectrum. On one axis, there is the quantity and quality of the bundle of rights and obligations that legal personhood entails. The other axis is the level of the relevant characteristics that courts may include in conferring legal personhood.
The conferral of personhood is a choice made by legal systems, but just because it can be done does not mean that it should be. Analogies drawn between AI systems and corporations are superficial and flawed. For instance, the demand for asset partitioning does not apply to AI systems in the same way that it does to corporations and may lead to moral hazards. Conferring personhood on AI systems would also need to be accompanied by governance structures equivalent to those that accompany corporate legal personhood. Further, the metaphorical ghost of data as property needs to be exorcised.
AI appears to disrupt key private law doctrines and threatens to undermine some of the principal rights protected by private law. The social changes prompted by AI may also generate significant new challenges for private law. It is thus likely that AI will lead to new developments in private law. This Cambridge Handbook is the first dedicated treatment of the interface between AI and private law, and of the challenges that AI poses for private law. It brings together a global team of private law experts and computer scientists to examine that interface, addressing issues such as whether existing private law can meet the challenges of AI and whether, and how, private law needs to be reformed to reduce the risks of AI while retaining its benefits.
In the previous chapter, we introduced word embeddings, which are real-valued vectors that encode semantic representations of words. We discussed how to learn them and how they capture semantic information that makes them useful for downstream tasks. In this chapter, we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and demonstrate their application to a text classification task.
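The loading and exploration steps described above can be sketched as follows. This is a minimal illustration, not the chapter's own code: it assumes GloVe-style plain-text embedding files (one word per line, followed by its vector components), and the tiny inline vocabulary and vector values are invented for demonstration. Real pretrained files such as glove.6B.50d.txt follow the same format, only with far more words and dimensions.

```python
import numpy as np

# Invented GloVe-style text for illustration: "word dim1 dim2 ...".
RAW = """\
king 0.9 0.8 0.1
queen 0.85 0.82 0.12
apple 0.1 0.05 0.9
orange 0.12 0.07 0.88
"""

def load_embeddings(text):
    """Parse GloVe-format lines into a {word: vector} dictionary."""
    vectors = {}
    for line in text.strip().splitlines():
        parts = line.split()
        vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(word, vectors):
    """Return the other vocabulary word with the highest cosine similarity."""
    target = vectors[word]
    scores = {w: cosine(target, v) for w, v in vectors.items() if w != word}
    return max(scores, key=scores.get)

emb = load_embeddings(RAW)
print(most_similar("king", emb))   # queen: the vectors point in similar directions
print(most_similar("apple", emb))  # orange
```

Nearest-neighbor queries like `most_similar` are the standard way to explore what a pretrained embedding space has captured before plugging the vectors into a downstream classifier.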
In this chapter, we describe several common applications (including the ones we touched on before) and multiple possible neural approaches for each. We focus on simple neural approaches that work well and should be familiar to anybody beginning research in natural language processing or interested in deploying robust strategies in industry. In particular, we describe the implementation of the following applications: text classification, part-of-speech tagging, named entity recognition, syntactic dependency parsing, relation extraction, question answering, and machine translation.
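As a sketch of the simplest neural approach to the first of these applications, text classification, the following self-contained example trains a single linear layer followed by a sigmoid (i.e., logistic regression, the basic building block of larger neural classifiers) on bag-of-words features. The toy sentiment dataset is invented for illustration; the chapter's actual implementations use real corpora, but the forward pass, loss gradient, and update loop are structurally the same.

```python
import numpy as np

# Invented toy data for a binary sentiment classifier.
TRAIN = [
    ("good great fun", 1),
    ("great movie good", 1),
    ("bad boring awful", 0),
    ("awful bad dull", 0),
]

vocab = sorted({w for text, _ in TRAIN for w in text.split()})

def featurize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    x = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            x[vocab.index(w)] += 1.0
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One linear layer + sigmoid, trained with gradient descent
# on the binary cross-entropy loss.
X = np.stack([featurize(t) for t, _ in TRAIN])
y = np.array([label for _, label in TRAIN], dtype=float)
w = np.zeros(len(vocab))
b = 0.0
lr = 0.5
for _ in range(200):
    p = sigmoid(X @ w + b)           # forward pass: predicted probabilities
    grad = p - y                     # gradient of cross-entropy w.r.t. logits
    w -= lr * (X.T @ grad) / len(y)  # update weights
    b -= lr * grad.mean()            # update bias

def predict(text):
    return int(sigmoid(featurize(text) @ w + b) > 0.5)

print(predict("good fun"))    # 1 (positive)
print(predict("boring bad"))  # 0 (negative)
```

Swapping the linear layer for a deeper network, or the bag-of-words features for the pretrained embeddings of the previous chapter, changes only the forward pass; the training loop keeps this shape.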