The robo-advice industry is one of the fastest-growing ‘AI’-powered automated services and may be transforming access to investment advice. However, rather than aiming for intelligent and personalised advisory tailoring, the industry has settled on offering customers low-cost, standardised advice linked to relatively non-complex financial products. The regulatory regime for investment advice plays a significant part in delineating the boundaries of legal risk for the industry, thereby shaping the design of robo-advice as an innovation.
The AI agency problem is overstated, and many of the issues concerning AI contracting and liability can be solved by treating artificial agents as instrumentalities of persons or legal entities. AI systems should not be characterised as agents for liability purposes. This approach best accords with their functionality and places the correct duties and responsibilities on their human developers and operators.
Judicial rulings evidence a convergence between the protection of the traditional right to privacy and today’s right to data protection. However, distinct differences remain among jurisdictions, based on how personal data is conceived (as a personality or proprietary right) and on the aims of regulation. These differences have implications for how the use of AI will affect the laws of the US and the EU. Nevertheless, there are some regulatory convergences between US and EU law: the realignment of traditional rights through data-driven technologies, the convergence between data protection safeguards and consumer law, and the dynamics of legal transplantation and reception in data protection and consumer law.
Because the use of robo-advisers creates significant risks in addition to the rewards that their use offers, the question of how they ought to be regulated has been and continues to be a subject of substantial debate. The various models for regulating the robo-adviser industry are examined. The best model for regulating the robo-adviser industry is a mix of mandatory disclosure; fiduciary duties for those developing, marketing, and operating robo-advisers; investor education; and regulation by litigation. Further, robo-advisers ought to be regulated by the regular standardised surveying of the investors who are using them and the release of that data to the general public.
How AI has affected, and could affect, the content, application and processes of corporate law and corporate governance, and the interactions of corporate law actors – including boards, shareholders and regulators – is critically examined. The current and future impact of AI and related technologies on corporate law and corporate governance norms and practices is also analysed.
Some of the most important challenges flowing from the rise of algorithmic management for employment law are examined. Employment law is broadly conceived as encompassing both the individual and collective dimensions of the employment relation, as well as associated regulatory domains – including data protection and anti-discrimination law – insofar as they are relevant to the employment context.
The differences between AI software and ordinary software are important, as they have implications for how a transaction in AI software will be treated under sales law. Next, what it means to own an AI system – whether it is a chattel, merely software, or something more than software – is explored. If AI is merely software, it will be protected by copyright, but there will be problems with licensing. If, however, AI is encapsulated in a physical medium, the transaction may be treated as a sale of goods, or a sui generis position may be taken. A detailed analysis of the Court of Justice of the European Union’s decision in Computer Associates v The Software Incubator is provided. An AI transaction can be regarded as a sale of goods. Because the sale of goods regime is insufficient, a transaction regime for AI systems has to be developed, including ownership and fair use (assuming AI is regarded as merely software) and the right to repair (whether AI is treated as goods or software).
There are two core problems with private law’s causal rules in an AI context: (1) a problem of proof, owing to opacity; and (2) a problem of autonomy. Further, if AI is capable of being considered an intervening agent, using AI would have liability-avoiding effects. There may be particular problems with informational and decisional AI. Consideration is given to whether, in certain contexts, AI justifies a departure from the ordinary principles of causation.
Deep fakes are a special kind of counterfeit image that is difficult to distinguish from an authentic image. They may be used to represent a person doing any act and are generated using advanced machine learning techniques. Currently, such an appropriation of personality is only actionable if the circumstances disclose one of a number of largely unrelated causes of action. As these causes of action are inadequate to protect claimants from the appropriation of their personalities, there should be a new independent tort or statutory action for the appropriation of personalities which is grounded in the protection of a person’s dignitary interests.
AI has the potential to overcome problems concerning the existing approaches to contract drafting, management and implementation, whilst also having the potential to exacerbate these problems. To deal with this risk and to create AI which is trustworthy in relation to contracting, such systems require channelling in a new direction, termed ‘transactional responsibility’. Legal regulation must be structured around the entirety of the socio-technical system which underpins AI.
AI will disrupt the existing tort settlement. Tort law should be tech-impartial – that is, it should not encourage or discourage the adoption of new technologies where they generate the same level of risk, and victim rights should not be eroded by the use of new technologies in place of existing systems of work. Existing tort law is poorly suited to address some AI challenges, and a liability gap will emerge as systems replace employees since AI does not have legal personality and cannot commit a tort. A form of AI statutory vicarious liability should apply in commercial settings to address the liability gap and as the tech-impartial solution.
AI will greatly assist in the administration of express and charitable trusts and also be of significant benefit to trust law in acting as an adjudicator. AI should be able to act as an acceptable trustee of an express trust, and resulting trusts do not insurmountably challenge AI, either as trustees or adjudicators. The proposition that discretionary trusts are unsuited to AI administration can be rejected along with the notion that the discretionary nature of remedies makes this area of law unsuited to AI adjudication. Although constructive trusts may pose some difficulties for AI, this may be solved through legal reform. Further, the difficulties that AI trustees will create are not incapable of practical solutions.
Law reform is needed to recognise the impact of automation and machine learning systems on the services provided by intermediaries while requiring intermediaries to minimise illicit or harmful content.
The ever-growing involvement of InsurTech in insurance operations requires us to consider the level of disruption that the new technologies may have had on insurance services, and therefore on society. To answer this question, the areas of insurance services in which InsurTech has been predominantly employed are explored. The social and economic impact of InsurTech is then discussed, together with the fundamental principles that guide the business and legal operation of insurance services.
This chapter introduces lawyers to the basic concepts of AI, explains the capabilities and limitations of AI systems, and identifies technological challenges that might require legal responses.