
2 - Demystifying Consumer-Facing Fintech

Accountability for Automated Advice Tools

from Part I - Automated Banks

Published online by Cambridge University Press: 16 November 2023

Zofia Bednarz
Affiliation:
University of Sydney
Monika Zalnieriute
Affiliation:
University of New South Wales, Sydney

Summary

Chapter 2 looks at transparency and fintech tools. The premise behind many so-called fintech innovations in consumer markets is to make more personalised financial products available to an often underserved and largely inexperienced cohort. Many consumers are not good at managing their day-to-day finances, selecting optimal credit products, or investing for the future. Fintech products, and the applications associated with them, are commonly promoted on the basis that they will use consumer data, AI capacities, and a lower cost base to promote competition and better serve consumers, including financially excluded or vulnerable consumers. Paterson, Miller, and Lyons challenge these premises by demystifying the kinds of capacities that are possible through the fintech technologies being offered to consumers. The most common forms of fintech solutions offered to consumers are credit, budgeting, and investment tools. These typically do not disrupt existing service models through the use of deep learning AI. Rather, they are commonly enabled by encoding the rules of thumb used by mortgage brokers and financial advisers. They make a return through methods criticised when deployed by social media platforms, namely on-selling data, targeted advertising, and commission-based sales. There is, moreover, little incentive for fintech providers to make products that benefit marginalised cohorts for whom there is minimal relevant data and little likelihood of lucrative return. The authors argue that greater transparency is required about what is being offered to consumers through fintech tools and who benefits from them, along with greater accountability for ill-founded and even sensationalised claims.

Type: Chapter
In: Money, Power, and AI: Automated Banks and Automated States, pp. 29–50
Publisher: Cambridge University Press
Print publication year: 2023
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 <https://creativecommons.org/licenses/by-nc-nd/4.0>.

2.1 Introduction: Money, Power, and AI

As the authors of this book recognise, money and power are intimately linked. For most consumers, access to banking services, credit, and a savings plan for retirement are necessary – although not sufficient – requirements for a stable, meaningful, and autonomous life. Conversely, financial hardship may have a considerable impact on not only the financial but also the emotional well-being of consumers.Footnote 1 There are many causes of financial hardship, including high levels of personal debt, reliance on high-cost credit, lack of access to mainstream banking services, and unexpected circumstances such as unemployment or ill health.Footnote 2 Additionally, consumers are sometimes subject to fraudulent, deceptive, and dishonest practices, which can escalate their financial problems.Footnote 3 Moreover, many consumers find that they lack the time or skills to manage their day-to-day finances, select optimal credit products, or invest effectively for the future.Footnote 4

So where does AIFootnote 5 – the third theme of this book – sit in this schema? The growing capacity of AI and related digital technologies has contributed to a burgeoning interest in the potential for financial technology (‘fintech’) to transform the way in which traditional banking and financial services are provided.Footnote 6 Governments across the globe have promoted the capacity of AI-informed fintech to improve market competition and consumer welfare,Footnote 7 and have introduced initiatives to support the development of innovative fintech products within their jurisdictions.Footnote 8 Fintech products are increasingly being used by the financial services sector for internal processes, decision-making, and interactions with customers.Footnote 9

Inside financial institutions, fintech products are assisting in fraud detection, cybersecurity, marketing, and onboarding new clients.Footnote 10 Fintech products are being developed to automate financial services firms’ decisions about lending, creditworthiness, and pricing credit and insurance.Footnote 11 In a consumer-facing role, fintech products are being used for communicating with customers, such as through chatbots (generative or otherwise),Footnote 12 and in providing access to financial products, for example, online applications for loansFootnote 13 or credit cards.Footnote 14 Fintech products are being developed to provide credit product comparisons for consumers looking for the best deal.Footnote 15 However, the most common forms of consumer-facing fintech are, at the time of writing, financial advice toolsFootnote 16 primarily for investing and budgeting.Footnote 17

Consumer-facing fintech generally, and automated financial advice tools specifically, are often promoted as benefiting consumers by assisting them to make better decisions about credit, savings, and investment, and by providing these services in a manner that is more cost-effective, convenient, and consistent than could be provided by human advisers.Footnote 18 These features undoubtedly hold attractions for consumers. However, in our opinion, the allure of AI, and its financial market equivalent of fintech, should not be allowed to overshadow the limitations of, and the risks of harm inherent in, these technologies. As this book makes clear, whether used by governments or private sector firms, AI and automated decision-making tools raise risks of harm relating to privacy, efficacy, and bias, and of perpetuating existing power hierarchies. Albeit on a different scale, consumer-facing fintech, such as automated financial advice tools, carries many of the same kinds of risks, which equally demand regulatory attention and best practice for good governance. There has been little assessment of whether automated financial advice tools are effective in improving the financial well-being of consumers. It is also unclear whether and to what extent such tools are equitable and inclusive, or conversely amplify existing bias or patterns of exclusion in financial services and credit markets.

Some of the potential risks of harm to consumers from automated financial advice tools will be addressed by existing law. However, we argue that there is a need to move past the commercial, and indeed political, promotion of ‘AI’ and ‘fintech’ to understand their specific fields of operation and demystify their scope. This is because the use of AI in this equation is not neutral or without friction. Automated advice tools raise discrete and unique challenges for regulatory oversight, namely opacity, personalisation, and scale. We therefore suggest, drawing on the key principles propounded in AI ethics frameworks, that the effective regulation of automated financial advice tools should require greater transparency about what is being offered to consumers. There should also be a regulatory commitment to ensuring the outputs of such tools are contestable and accountable, having regard to the challenges raised by the technology they utilise.

This chapter explores these issues, beginning with an overview of automated financial advice, focusing on what are currently the most widely available tools, namely ‘robo’ investment advice and budgeting apps. We discuss the risks of harm raised by these uses of AI and related technologies, arising from uncertainty about the quality of the service provided, untrammelled data collection, and the potential for bias, as well as the need for a positive policy focus on the impact of such tools on goals of equity and inclusion. We review the guidance provided by regulators, as well as the gaps and uncertainties in the existing regulatory regimes. We then consider the role of principles of transparency and contestability as preconditions to greater accountability from the firms deploying such tools, and more effective oversight by regulators.

2.2 Aspiration and Application in Consumer-Facing Fintech

The term ‘fintech’ refers to the use of AI and related digital technologies to deliver financial products and services.Footnote 19 The AI used to deliver fintech products may include natural language processing in front-end interfaces to communicate effectively with clients and statistical machine learning models to make predictions that inform financial decision-making. ‘Consumer-facing’ fintech refers to the use of fintech to provide services to consumers, as opposed to use by professional investors, business lenders, or for back-room banking processes. As already noted, perhaps the most prominent form of fintech service offered to consumers, as opposed to informing the internal processes of financial institutions, is automated financial advice, primarily about investing and budgeting.

The aims of most fintech products are to allow services to be delivered at scale, reducing human handling of information, and, in the case of consumer-facing fintech, benefiting consumers. Automated financial advice tools typically purport to offer a low-cost option for financial advice derived from insights from consumer data and statistical analysis and provided through an accessible interface using state-of-the-art processing to identify and respond to consumers’ financial aims. The commonly stated aspiration of governments and regulators in supporting the development of these and other fintech products is to promote innovation and to provide low-cost, reliable, and effective financial services to consumers.Footnote 20 Some fintech providers express aspirations to be more inclusive and empower ordinary people to participate in the financial and banking sectors.Footnote 21

There are undoubted attractions in such aspirations.Footnote 22 The majority of consumers do not seek financial planning advice,Footnote 23 probably because it is perceived as being too expensive.Footnote 24 Yet many consumers find financial matters difficult or confusing. This is due to a combination of factors, including low financial literacy, limits on time, and the impact of behavioural biases on decision-making. In principle, automation should allow financial services providers to lower the cost and improve the consistency of advice,Footnote 25 as well as providing the convenience of an on-demand service.Footnote 26 Additionally, by using consumers’ own data, automated financial advice tools have the potential to be uniquely tailored to those consumers’ individual circumstances.Footnote 27 Indeed, this is one of the premises behind Australia’s consumer data right, which aims to give consumers control over their data to promote innovation and competition in the banking sector.Footnote 28

Currently, the two main kinds of automated financial advice tools are robo-advisers and budgeting apps.Footnote 29 Though these tools will no doubt evolve, they provide a simpler, less personalised service than might be envisaged by the ‘AI’ label commonly attached to them.

2.2.1 Robo-Advisers

Robo-advisersFootnote 30 provide ‘automated financial product advice using algorithms and technology and without the direct involvement of a human adviser’.Footnote 31 In principle, robo-advice might cover automated advice about any topic relevant to financial management, such as budgeting, borrowing, investing, superannuation, retirement planning, and insurance. Currently, most robo-advisers provide automated investment advice and portfolio management.Footnote 32

Typically, robo-advice services begin with consumers answering a questionnaire about their goals, expectations, and appetite for risk. An investment profile for consumers is derived from this information, based on their goals and capacity to bear risk. An algorithm matches consumers’ profiles with an investment portfolio available through the advisory firm to produce an investment recommendation.Footnote 33 Should a consumer choose to follow the advice and invest in that portfolio, many robo-advisers will also manage the portfolio on an ongoing basis, keeping it within the parameters recommended for the consumer. Consumers generally pay a fee for the service provided by the robo-adviser, often a percentage of the amount invested, with minimum investment amounts required to access the service.

Robo-advice is sometimes described as ‘trading with AI’.Footnote 34 This language might be thought to suggest specialised insights into the stock market, uniquely tailored to consumers’ needs and arrived at through sophisticated machine learning models. The practice is more straightforward. At the time of writing, robo-advisers do not rely on state-of-the-art AI technology, such as using neural networks to process data points and make predictions about stock market moves, or link individual profiles to unique investment strategies. As Baker and Dellaert explain, the matching process will be based on ‘a model of how to optimise the fit between the attributes of the financial products available to the consumer and the attributes of the consumers who are using the robo-advisor’.Footnote 35 The robo-adviser will typically build the consumer profile based on the entry questionnaire and match this with an investment strategy established using financial modelling techniques and based on the investment packages already offered by the firm. The process will usually have been automated through some form of expert system – a hand-coded application of binary rules identified by humans. Ongoing management of the consumer’s portfolio will be done on a similar basis, often using exchange-traded funds (ETFs) that ‘require no or less active portfolio management’.Footnote 36
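To make the contrast with machine learning concrete, the kind of expert-system matching described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration: the questionnaire items, scoring thresholds, risk bands, and portfolio weightings are all invented for the purposes of the example and do not reflect the rules of any actual provider.

```python
# Hypothetical sketch of rule-based robo-advice matching.
# All thresholds, bands, and portfolios are illustrative assumptions.

PORTFOLIOS = {
    "conservative": {"bond_etf": 0.70, "equity_etf": 0.20, "cash": 0.10},
    "balanced": {"bond_etf": 0.40, "equity_etf": 0.50, "cash": 0.10},
    "growth": {"bond_etf": 0.15, "equity_etf": 0.80, "cash": 0.05},
}

def risk_profile(answers: dict) -> str:
    """Derive a coarse risk band from questionnaire answers using fixed rules."""
    score = 0
    score += {"<5": 0, "5-15": 1, ">15": 2}[answers["horizon_years"]]
    score += {"sell": 0, "hold": 1, "buy_more": 2}[answers["reaction_to_20pct_loss"]]
    score += 1 if answers["has_emergency_fund"] else 0
    # Binary, human-authored cut-offs: no learning involved.
    if score <= 1:
        return "conservative"
    if score <= 3:
        return "balanced"
    return "growth"

def recommend(answers: dict) -> dict:
    """Match the consumer's profile to one of the firm's pre-built portfolios."""
    return PORTFOLIOS[risk_profile(answers)]

print(recommend({
    "horizon_years": ">15",
    "reaction_to_20pct_loss": "hold",
    "has_emergency_fund": True,
}))  # -> {'bond_etf': 0.15, 'equity_etf': 0.8, 'cash': 0.05}
```

The shape of the logic is the point: a handful of fixed cut-offs map every consumer into one of a small number of pre-built portfolios, rather than a model learning a unique strategy for each individual.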

Unlike human financial advisers, robo-advice tools typically do not provide budgeting or financial management advice to consumers.Footnote 37 Their recommendations are limited to the kinds of investment that will match consumers’ investment profiles. Robo investment advisers do not provide advice on matters of tax, superannuation, asset management, or savings, and they do not yet have the capacity to provide this more nuanced advice.Footnote 38 Sometimes robo-advice tools are used in conjunction with human financial advisers who will provide a broader suite of advice. Automated budgeting tools are also increasingly available on the market.

2.2.2 Budgeting Tools

Budgeting tools allow consumers to keep track of their spending by categorising expenses and providing dashboard-style visualisations of spending and saving.Footnote 39 Some banks offer budgeting tools to clients, and there are many independent service providers. Some neo-banks have, additionally, consolidated their brand around their in-built budgeting tools.Footnote 40

As with robo-advisers, automated budgeting tools collect information about consumers through an online questionnaire. Budgeting tools also typically require consumers to provide access to their bank accounts, in order to scrape transaction data from that account,Footnote 41 or alternatively rely on data-sharing arrangements.Footnote 42 Based on this information, the services provided by budgeting tools include categorising and keeping track of spending; providing recommendations about budgeting; and monitoring savings.Footnote 43 In some cases, the tools will transfer funds matching consumers’ savings goals to a specific account, provide bill reminders, make bill payments, monitor information about credit scores, suggest potential savings through various cost-cutting measures, or identify alternative service providers.Footnote 44 Additionally, automated budgeting tools may provide articles and opinion pieces about financial matters, such as crypto, non-fungible tokens (NFTs), or budgeting.Footnote 45 Some budgeting tools have a credit card option,Footnote 46 and at least one is linked to a ‘buy now-pay later’ provider.Footnote 47

Automated budgeting tools often describe their service as relying on AI.Footnote 48 Again, however, they do not typically provide, as might be expected from this terminology, a personalised plan for saving derived from insights drawn from multiple data points relating to consumers. They may use some form of natural language processing to identify spending items. Primarily, somewhat like robo-advisers, they rely on predetermined, human-coded rules for categorising spending and presenting potential savings. Most budgeting tools are free, although some charge for a premium service. This means that the tools are funded in other, more indirect ways, including through selling targeted advertisements on the app, fees for referrals, commissions for third-party products sold on the app, the sale of (usually aggregated) data, and, in some cases, a percentage of the savings where a lower-cost service or provider is identified for consumers.Footnote 49
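The rule-based categorisation described above can likewise be captured in a short sketch. The merchants, keywords, categories, and amounts below are illustrative assumptions only, not the rules of any actual budgeting app.

```python
# Hypothetical sketch of keyword-based transaction categorisation.
# Merchants, categories, and amounts are invented for illustration.

RULES = [
    ("RENT", "housing"),
    ("WOOLWORTHS", "groceries"),
    ("COLES", "groceries"),
    ("UBER", "transport"),
    ("NETFLIX", "entertainment"),
]

def categorise(description: str) -> str:
    """Assign a spending category by keyword match; no model is trained."""
    text = description.upper()
    for keyword, category in RULES:
        if keyword in text:
            return category
    return "uncategorised"  # typically surfaced for the user to label manually

transactions = [
    ("WOOLWORTHS METRO SYDNEY", -54.20),
    ("NETFLIX.COM", -16.99),
    ("UBER *TRIP", -23.40),
]
for description, amount in transactions:
    print(f"{categorise(description):>15}  {amount:>8.2f}  {description}")
```

A pipeline of this kind explains both the tools’ strengths (fast, consistent, and cheap to run at scale) and their characteristic failures, such as the miscategorisation of essential spending discussed later in this chapter.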

2.3 Regulation and Risk in Consumer-Facing Fintech

This brief survey of available automated financial advice tools aimed at consumers suggests that they are operating within a fairly narrowly defined scope and using relatively straightforward digital processes. The tools may evolve in the future to make greater use of state-of-the-art AI, such as generative AI for providing general advice to consumers. However, even in their current form, the tools pose risks of harm to consumers that are more than fanciful, and similar to those raised by AI generally. The risks arising from AI are becoming increasingly well recognised, including poor efficacy, eroding privacy, data profiling, and bias and discrimination.Footnote 50 These risks are also inherent in consumer-facing fintech and automated financial advice tools. Moreover, we suggest they are only partially addressed by existing law. While financial services law commonly imposes robust obligations on those providing financial advice, those obligations may not squarely address the issues arising from the automated character of the advice, particularly issues of bias. Additionally, some automated advice tools, such as budgeting apps, may fall outside of these regimes. It is therefore worth considering these issues in more detail.

2.3.1 Quality of Performance

One of the notable features of automated financial advice is that consumers are unlikely to be able to scrutinise the quality of the service provided. Consumers will typically turn to automated advice tools because they lack skills in the relevant area, be it investing or budgeting. This lack of expertise makes it difficult for them to assess the quality of the advice they receive.Footnote 51 Compared with standard consumer goods, there is little information available to assist consumers in selecting between different tools. While some rankings of automated financial advice tools have emerged, these often focus on ease of use – the interface, syncing with bank data, fees charged – rather than the quality of the advice provided,Footnote 52 and some ranking reviews include sponsored content.Footnote 53 Accordingly, at least at this point in time, automated financial advice tools may be very much a credence good – one for which assertions of quality are all that is available to consumers. Unless the advice provided by the tools is patently bad, it may not be apparent that the poor quality of the automated process is to blame, as opposed to other external factors. Indeed, without a point of comparison, which is effectively excluded by the personalised nature of the service, it may be difficult for consumers to identify poor quality advice at all.

There is currently little academic research on the extent to which consumers are well served by automated financial advice tools, particularly when weighed against possible costs in terms of data-sharing.Footnote 54 A number of concerns have been raised in the literature about how well the tools may function. Although robo-advisers may operate in a manner that is more objective and consistent than human financial advisers,Footnote 55 this does not mean they operate free from the influence of commissions, which may be coded into their advisory process. It is unclear to what extent the recommendations provided by automated financial advice tools are personalised to consumers, as opposed to being generic or based on broad target groupings. Additionally, concerns have been raised about the relatively small number of investment options actually held by robo-investment advisers.Footnote 56 While automated budgeting tools may assist consumers by providing an accessible, straightforward, and visual way of monitoring spending,Footnote 57 this does not necessarily translate into long-term savingsFootnote 58 or improved financial literacy.Footnote 59 It is further possible that one of the main functions of at least some budgeting apps is to obtain consumers’ attention in order to market other financial services, such as credit cards, as well as providing the opportunity for providers to profit from the use or sale of consumer data for marketing and data analytics.Footnote 60

In consumer transactions – particularly those that are complex, hard for consumers to monitor, or which carry the risk of high-impact harms – reliance is usually placed on regulators to take ‘ex ante’ measures for ensuring that the products supplied to consumers are acceptably safe and reliable. Financial services regulators in jurisdictions such as Australia, the United Kingdom, the European Union, and the United States of America have responded to the rise of robo-advisers by affirming that the existing regulatory regime applies to this form of advice.Footnote 61 Financial services providers are typically subject to an array of statutory conduct obligations, which overlap, albeit imperfectly, with their fiduciary duties arising under general law.Footnote 62 These statutory duties require firms to manage conflicts of interest,Footnote 63 act in their clients’ best interests,Footnote 64 ensure the suitability of the advice provided,Footnote 65 and take reasonable care in providing the advice.Footnote 66 These obligations should, in principle, assist in addressing concerns about the quality of the service provided by robo-advisers.Footnote 67 Nonetheless, some uncertainties remain, including, for example, whether the category-based approach deployed by robo-advisers fits with statutory requirements for personalised advice that is suitable for the individual consumer.Footnote 68

Regulators have additionally stated that they expect firms providing robo-advice to have a ‘human in the loop’, in the sense of a person with ‘an understanding of the technology and algorithms used to provide digital advice’ who is ‘able to review the digital advice generated by algorithms’.Footnote 69 Recommendations for a human overseeing the automated advice leave open the question of what that human should be monitoring – is it merely compliance with existing law applying to the giving of advice, or should other considerations, arising from the automated character of the advice, also be taken into account?

In terms of the issue of automation, regulators have focused on the informational aspects of the process. They have emphasised that firms providing automated advice should give consideration to the way in which the information on which the advice is based is collected from consumers so as to ensure it is accurate and relevant, especially because there is no human intermediary to pick up possible discrepancies or errors. Regulators have also advised firms to take care in the way the advice is framed and explained, given the potential for misunderstanding and error in an automated process.Footnote 70 Issues of information gathering and reporting are important but they are only part of the challenge presented by automation for consumer protection law and policy. Moreover, they tend to represent a very individualised response to the risks of harm to consumers relying on automated financial advice, focusing on what consumers need to provide and understand, as opposed to the substance of the process through which advice is provided.

Notably, there is typically no specific law or regulatory guidance that applies to automated budgeting tools, which do not involve financial services. These tools will be subject to general consumer protection regimes, which typically prohibit misleading conduct, and mandate reasonable care and skill in the provision of services.Footnote 71 Uncertainties about the application of existing law to automated advice give rise to the question of whether other kinds of regulatory mechanisms are required to complement sector-specific or general consumer protection law in order to address the risks of harms that are specific to the use of AI and related digital technologies. In answering this question, we suggest that, at minimum, the risks around data collection and bias need to be considered.

2.3.2 The Data/Service Trade-Off

Automated financial advice tools operate on the core premise that consumers necessarily hand over data to obtain the service. A firm may be using consumer data for the dual purposes of providing advice and making a return for itself, such as through promoting other products for a commission on sales, up-selling add-on products for a fee, or on-selling the data for profit.Footnote 72 This behaviour is particularly apparent in the case of budgeting apps, which are typically free. As already noted, these services earn income through in-app advertising, fees, and commissions for referrals and potentially through selling aggregated consumer data, as well as targeted advertising. Notably, the privacy terms of automated budgeting tools commonly allow the collection of a wide range of consumer data and the use of that data for a number of purposes, including improving the service and related company group services, marketing, and, in aggregated form, sharing with third parties.Footnote 73

Data protection and privacy law impose obligations on the collection and processing of data.Footnote 74 However, the key requirements of notice and consent typically found under these regimes may easily be met in automated advice contexts because the exchange is at the heart of the transaction. Consumers provide their data in order to obtain the advice they need. While consumers may be unaware of how much information they are handing over, there is some evidence that consumers, particularly younger consumers, are prepared to trade data for cheaper, more efficient financial services.Footnote 75 However, to the extent that consumers are ill-informed or under-informed about the quality of the service being provided by automated advice tools, the data-for-service bargain may look thinner than they might at first have thought.Footnote 76 Under the fintech service model, consumers provide personal data to obtain a personalised and cost-effective service but have few objective measures of the quality of what is actually being provided.

2.3.3 Bias and Exclusion

In discussing legal and regulatory responses to the growing influence of AI and related technologies, much attention has rightly been given to their role in amplifying surveillance, bias, and discrimination.Footnote 77 The technologies may use personal data to profile consumers, which in turn allows firms to differentiate between different consumers and groups with a high degree of precision, leading to risks of harmful manifestations of targeted advertising or differential pricing.Footnote 78 Bias and error are particular concerns in firms’ use of AI technologies for decision-making, including in decisions about lending,Footnote 79 credit,Footnote 80 or insurance.Footnote 81 Automated lending decisions and credit scoring might be more objective than human-made decisions and might benefit cohorts that have previously been disadvantaged by human prejudice.Footnote 82 But there is no guarantee this is the case, and indeed the outcomes may be worse for these groups. Differential treatment of already disadvantaged groups – such as minority or low-income cohorts – may already be embedded in the practices and processes of the institution. To the extent this data is used in credit-scoring models or to inform automated decisions, historical unequal treatment may be amplifiedFootnote 83 or distorted.Footnote 84 Unequal treatment may, moreover, be difficult to identify or address where it is based, not directly on protected attributes, but on proxies for those attributes found in the training data.Footnote 85

Bias may also be embedded in automated advice tools used by consumers. For example, a robo-advice tool might exhibit bias by treating a person who takes time off work for childrearing as going through a period of precarious employment or as being unable to hold down steady employment. An automated budgeting tool might exhibit bias by characterising products for menstruation as discretionary spending, instead of essentials. There are complex technical and policy decisions to be made in identifying and responding to the risks of unacceptable bias in automated financial advice tools.Footnote 86 Consumer protection and financial services law have not traditionally been central to this process, which is primarily the domain of human rights law. However, decisions based on historical prejudice may be unconscionable or unfair, contrary to consumer protection law. Certainly, in the United States, the Federal Trade Commission has indicated that discriminatory algorithms would fall foul of its jurisdiction to respond to unfair business practices.Footnote 87

A related issue concerns financial exclusion. Fintech innovators and government initiatives to encourage innovation often refer to an aspiration of promoting inclusion and overcoming exclusion.Footnote 88 There are few findings on the extent to which this aspiration is achievable. There are plausible reasons why automated advice tools may fail to assist, or assist adequately, consumers already excluded from mainstream financial or banking services, or consumers who have had less engagement with the mainstream banking system, such as where they are ‘not accessing or using financial services in a mainstream market in a way that is appropriate to their needs’.Footnote 89 Financially excluded consumers might not be offered meaningfully relevant advice tools because there is no relevant or useful data about them or because they are unlikely to be sufficiently profitable for financial services providers to develop products suited to them. These consumers may also find that the models on which the advisory tools are based are inaccurate when applied to their circumstances.

For example, investment tools may be of little value to consumers struggling to make ends meet and with no savings to invest. The models used by automated budgeting tools may be a poor fit for consumers living on very low incomes, for whom cutting back on discretionary spending is not an available option. In these circumstances, the tools will do little to improve equity, leaving unrepresented groups without advice, or without relevantly personalised advice. Moreover, there may be a real risk of harm. Inept recommendations may subject consumers to harms of financial over-commitment or lull inexperienced consumers into a false sense of financial security. At a more systemic level, the availability of automated advice tools for improving financial well-being may feed into longstanding liberal rhetoric about the value of individual responsibility, as opposed to government initiatives for improving overall financial well-being.

It is possible to envisage services that would be useful to financially excluded consumers or consumers experiencing financial hardship, such as advice on affordable loans and other services.Footnote 90 Emma Leong and Jodi Gardner point to proposed uses of Open Banking in the United Kingdom to provide tools that assist with better managing fluctuating incomes.Footnote 91 The United Kingdom Financial Conduct Authority notes there are some apps on the market providing legal aid and welfare support advice.Footnote 92 These kinds of initiatives are likely to require a deliberate policy decision to initiate, rather than arising ‘naturally’ in the market.Footnote 93 This is because, without government support, there would seem to be little commercial incentive for firms to invest in tools specifically tailored to low-income or otherwise marginalised consumers, from whom there is little likelihood of ongoing lucrative return to the firm.

2.4 New Regulatory Responses to the Risks of Automated Financial Advice

Automated financial advice tools illustrate the continuing uncertainties in regulating consumer-facing fintech and AI-informed consumer products. We have seen that regulators will need to adapt existing regimes to the new ways in which services are being provided to consumers, which requires attention not only to the risks in providing advice but also to those in automating it. We further suggest that regulators need to be cognisant of the ways in which the AI and digital technologies informing the tools raise unique challenges for regulation. Opacity is a key concern in any regulatory response to making AI systems more accountable.Footnote 94 Automated financial advice tools may not currently rely on sophisticated AI, in the sense of deep learning or neural networks. Nonetheless, they are for commercial (if not technical) reasons highly opaque as to the technology being utilised and how recommendations are reached. Their very purpose is to provide advice without significant human intervention and at scale, which may amplify harms of bias or error in the system.Footnote 95 The tools typically purport to provide output based on factors personal to the consumer, which may make it difficult to determine whether an adverse outcome is merely unfortunate, a systematic error, or a failure of a legal duty.Footnote 96

One response to navigating the challenges of regulating consumer-facing fintech is provided by the principles of ethical AI.Footnote 97 Principles of AI ethics are sometimes criticised as too general to be useful.Footnote 98 The principles operate as a form of soft law – they are not legally binding and must necessarily be supplemented by legal rules.Footnote 99 However, principles of AI ethics may be effective when operationalised to apply to specific contexts and when used in conjunction with other forms of regulation. The principles provide the preconditions for responsible use of AI and automated decision tools by firms. They also provide an indication of what regulators should demand from firms deploying such technology to reduce the risk of harm to consumers.Footnote 100 While there are various formulations of the principles of ethical AI,Footnote 101 key features typically include requirements for AI to be transparent and explainable,Footnote 102 along with mechanisms for ensuring accountabilityFootnote 103 and – at least in the Australian government’s principlesFootnote 104 – contesting adverse outcomes.Footnote 105

2.4.1 Transparency and Explanations

Principles of ethical AI typically require the use of such technologies to be transparent.Footnote 106 A starting place for transparency is to inform consumers when AI is being used in an interaction with them. Applied to automated financial advice tools, transparency must mean more than informing consumers that AI is being used to provide advice. Consumers choosing to turn to a robo-adviser or budgeting app will usually be aware of the automated character of the advice. Consumers also require transparency about the kind of technology being used to provide that advice: that is, whether it is based on machine learning or a hand-coded expert system. Additionally, a principle of transparency would require firms to inform consumers clearly about the scope of the service that is being provided, including the limitations of the technology in terms of personalised or expert advice.Footnote 107 If the advice provided is generalised to broadly defined categories of consumers, then this should be made clear, to counter consumers’ expectations of a unique and personal experience.

To the extent that consumers overestimate the capacities of fintech, transparency in the way the advice is produced is important to ground expectations and allow scrutiny of the veracity of claims made about it. For regulators, transparency is key to overseeing the performance of the tools: it allows bias or distortions in the scope of advice to be identified, scrutinised, and, in some instances, rectified. Regulation can support the imperative for firms to take these ethical demands seriously, including by treating them as necessary elements of statutory obligations of suitability or best interests, and as essential to ensuring that claims about the operation of the product are not misleading. For example, the process of automation, and its claims to objectivity and consistency, may make consumers overconfident about the advice and more likely to act on it.Footnote 108 This might suggest an obligation on firms to be scrupulously clear about the limits of what can be provided by automated advice tools, and of the insights that can be derived from the technology being utilised.Footnote 109

Transparency in ethical AI is closely associated with initiatives in AI ‘explanations’ or ‘explainability’.Footnote 110 Explanations in this sense do not lie in the details of the code. Rather, explainable AI considers the kind and degree of information that should be provided in assisting the various stakeholders in the decision or recommendation process to understand why decisions were taken or the factors that were significant in reaching a recommendation.Footnote 111 Explainable AI aims to provide greater transparency into the basis for automated decisions, predictions, and recommendations.Footnote 112 There are different ways in which explanations may be provided, and indeed the field of study in computer science is still developing.Footnote 113 Possibilities include the use of counterfactuals, feature disclosure scores, weightings of influential factors, or a preference for simpler models where high levels of accuracy are not as imperative.Footnote 114 Overall, however, a requirement for explanations would assist in scrutinising the basis of the recommendations produced through automated financial advice tools.
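As a concrete, if deliberately toy, illustration of the counterfactual style of explanation mentioned above, a rule-based advice tool can report which single change to an input would have produced a different recommendation. Everything in the sketch below – the advice rule, the inputs, and the probing grid – is a hypothetical assumption; explanation methods for genuinely learned models are considerably more involved.

```python
# Hypothetical sketch of counterfactual explanations for a rule-based tool:
# report single-input changes that would flip the recommendation.

def decide(income: float, savings: float, debts: float) -> str:
    """Toy advice rule: recommend investing only with a buffer and low debt."""
    return "invest" if savings >= 3 * debts and income > 50_000 else "build_savings"

def counterfactuals(income: float, savings: float, debts: float):
    """Probe each input over a coarse grid, holding the others fixed."""
    base = decide(income, savings, debts)
    reasons = []
    for new_savings in range(0, 100_001, 5_000):
        if decide(income, new_savings, debts) != base:
            reasons.append(f"savings of ${new_savings:,} would change the advice")
            break
    for new_debts in range(0, 50_001, 1_000):
        if decide(income, savings, new_debts) != base:
            reasons.append(f"debts of ${new_debts:,} would change the advice")
            break
    return base, reasons

advice, reasons = counterfactuals(income=60_000, savings=10_000, debts=8_000)
print(advice)  # build_savings (10,000 is less than 3 x 8,000)
for reason in reasons:
    print("-", reason)
```

Even this toy version shows why counterfactuals are attractive in a consumer setting: they translate an opaque threshold into an actionable statement about what would have to differ for the outcome to change.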

For lawyers, the suggestion that a core element in the regulation of automated financial advice tools should focus on requirements of transparency and explanation may seem surprising.Footnote 115 Disclosure as a consumer protection strategy has increasingly fallen out of favour, particularly in the regulation of financial services and credit. The insights into decision-making from behavioural psychology have shown that mere information disclosure does not lead to better decisions by consumers. Consumers are subject to bounded rationality, which means they rely on rules of thumb, heuristics, and behavioural biases rather than information.Footnote 116 In this light, it may be thought that any demand for greater transparency in automated financial advice tools would be of marginal utility. However, in a consumer protection context, consumers’ interests are substantially protected by regulators, and transparency and explanations are therefore relevant both to consumers seeking to protect their interests and to regulators charged with overseeing the market. Explanations should be provided in a form that is meaningful to the recipient.Footnote 117 This means that the detail and technicality of the information provided may need to differ between consumers and regulators.Footnote 118 In other words, the requirements should be scaled according to who is receiving the explanation.

2.4.2 Accountability

Principles of AI ethics typically require mechanisms for ensuring firms are accountable for the operation of the technologies.Footnote 119 To have impact, accountability will require more than allocating responsibility for supervising the AI to a person. There is little worth in having a ‘human in the loop’ in circumstances where the design of the AI or automated tool means it is difficult for that person genuinely to oversee, interrogate or control the tool.Footnote 120 Accountability for automated financial advice tools should therefore require a firm to implement systematic processes for reviewing the operations and performance of the tools.Footnote 121 A commitment to accountability may therefore require firms to have processes for scrutinising the data on which the AI is trained, its ongoing use, and its outputs.Footnote 122 A model for the kind of robust approach required might be found in the audits increasingly recommended for AI used in public sector decision-making.Footnote 123 Such processes should aim to ensure the veracity of the tools and are a critical element in addressing and redressing concerns about bias, equity, and inclusion.Footnote 124

2.4.3 Contestability

There is little utility in requiring transparency and accountability in AI systems if there is no mechanism available to those affected by an AI or automated decision for acting to challenge an outcome that is erroneous, discriminatory, or otherwise flawed. Some formulations of AI ethical principles respond to this issue by requiring processes for contesting adverse outcomes.Footnote 125 While accountability processes should aim to be proactive in preventing these kinds of problems, contestability is a mechanism for individuals, advocates, or regulators to respond to harms that do occur.

Lyons et al. make the point that little is currently known about ‘what contestability in relation to algorithmic decisions entails, and whether the same processes used to contest human decisions … are suitable for algorithmic decision-making’.Footnote 126 Contestability for automated decisions may not be able simply to follow existing mechanisms for dealing with individual complaints or concerns. The models informing AI may be complex and opaque, thus creating challenges for review by subject domain experts who may nonetheless be unfamiliar with the technology. Additionally, scale creates a challenge. This is because one of the benefits of automated decision-making is that it can operate on a scale that is not possible for human decision-makers or advisers, and yet this makes processes for individual review potentially unmanageable.

The inquiry into what contestability requires may be different in the context of automated financial advice tools, as opposed to public sector use of automated decision-making. Consumers using automated advice tools will not be challenging a decision made about their rights to access public resources or benefits. Rather, they will be challenging the advice given to them, the consistency of this advice with any representations about the tool, or compliance with any applicable regulatory regimes. Nonetheless, complexity and scale remain significant challenges. It is possible that the field of consumer protection law may have insights to offer, given its focus on both legal rights and structural mechanisms for protecting consumers’ interests in circumstances where there are considerable imbalances in power, resources, and information, which in some ways mirror the concerns around AI contestability. For example, in the context of automated financial advice tools, contestability for poor outcomes may come through the oversight provided by ombudsmen and regulators, rather than traditional litigation. These inquiries have the capacity to look at systemic errors, thus bringing expertise and capacity to review the processes through which advice or recommendations are provided, rather than necessarily reopening every decision.

2.5 Conclusion

The triad of money, power, and AI collides in fintech innovation, which sees public and private sector support for using AI, along with blockchain and big data, in the delivery of financial services. Currently, the most prominent forms of fintech available to consumers are automated advice tools for investing and budgeting. These tools offer the advantages of low-cost, convenient, and consistent advice on matters consumers often find difficult. Without discounting these attractions, we have argued that the oft-stated aspiration of automated financial advice tools to democratise personal finance should not distract attention from their potential to provide only a marginally useful service, while extracting consumer data and perpetuating the exclusion of some consumer cohorts from adequate access to credit, advice, and banking. From this perspective, consumer-facing fintech provides a clear example of the need for careful regulatory attention to the use of AI and related technologies, even in seemingly low-risk contexts. Fintech tools that hold out to consumers a promise of expertise and assistance should genuinely be fit for that purpose. Consumers are unlikely to be able to monitor this quality themselves. As such, robust standards of transparency, accountability, and contestability that facilitate good governance and allow adequate regulatory oversight are crucial, even for these modest applications of AI.

Footnotes

1 See also Jodi Gardner, Mia Gray, and Katharina Moser (eds), Debt and Austerity: Implications of the Financial Crisis (Edward Elgar, 2020).

2 See further Lucinda O’Brien et al, ‘More to Lose: The Attributes of Involuntary Bankruptcy’ (2019) 38 Economic Papers 15.

3 Jeannie Paterson, ‘Knowledge and Neglect in Asset-Based Lending: When Is It Unconscionable or Unjust to Lend to a Borrower Who Cannot Repay?’ (2009) 20 Journal of Banking and Finance Law and Practice 18.

4 See generally, Michael Trebilcock, Anthony Duggan, and Lorne Sossin (eds), Middle Income Access to Justice (University of Toronto Press, 2012).

5 AI is a disputed category – we are using the term to cover automated decision-making processes informed by predictive analytics, machine learning techniques, and natural language processing.

6 See, for example, Ross P Buckley et al, ‘Regulating Artificial Intelligence in Finance: Putting the Human in the Loop’ (2021) 43(1) Sydney Law Review 43.

7 See, for example, the UK Financial Conduct Authority’s innovation services, which aim to ‘create room for the brightest and most innovative companies to enter the sector, support positive innovation to come to market in a controlled and sustainable way, support innovation that has genuine potential to improve the lives of consumers across all areas of financial services [and] support innovation delivered by a diverse range of participants, both in terms of the type of firm, and the people behind the developments’: ‘Our Innovation Services’, Financial Conduct Authority (Web Page) <www.fca.org.uk/firms/innovation/our-innovation-services> accessed 11 July 2023. See also ‘Competition in the Technology Marketplace’, Federal Trade Commission (Web Page) <www.ftc.gov/advice-guidance/competition-guidance/industry-guidance/competition-technology-marketplace> accessed 11 July 2023; Bank of England and Financial Conduct Authority, ‘Machine Learning in UK Financial Services’ (Web Page, October 2019) 3 <www.bankofengland.co.uk/report/2022/machine-learning-in-uk-financial-services> accessed 11 July 2023; Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right (Final Report, October 2020) 19.

8 See, for example, ‘Enhanced Regulatory Sandbox’, Australian Securities & Investments Commission (ASIC) (Web Page, 1 September 2020) <https://asic.gov.au/for-business/innovation-hub/enhanced-regulatory-sandbox> accessed 22 May 2022. Also, Philip Maume, ‘Regulating Robo-Advisory’ (2019) 55(1) Texas International Law Journal 49, 56.

9 OECD, Personal Data Use in Financial Services and the Role of Financial Education: A Consumer Centric Analysis (Report, 2020) 20 <www.oecd.org/daf/fin/financial-education/Personal-Data-Use-in-Financial-Services-andthe-Role-of-Financial-Education.pdf> accessed 20 May 2022.

10 Bank of England and Financial Conduct Authority, ‘Machine Learning in UK Financial Services’, 6.

11 See, for example, Zest (Web Page) <www.zest.ai/> accessed 11 July 2023.

12 OECD, Personal Data Use in Financial Services.

13 See, for example, Better (Web Page, 2022) <https://better.com>; Cashngo (Web Page, 2022) <www.cashngo.com.au>; Nano (Web Page, 2022) <https://nano.com.au>; Rocket Mortgage (Web Page, 2022) <www.rocketmortgage.com>.

14 See, for example, Petal (Web Page, 2022) <www.petalcard.com>.

15 See, for example, LoanOptions.ai (Web Page, 2022) <www.loanoptions.ai>.

16 OECD, Personal Data Use in Financial Services, 20.

17 ‘What You Need to Know about How FinTech Apps Work’, Consumer Action (Web Page, 16 February 2021) <www.consumer-action.org/english/articles/fintech_apps> accessed 20 May 2022.

18 See, for example, Paul Smith and James Eyers, ‘CBA in $134m Play to Be “AI Superpower”’ (8 November 2021) Australian Financial Review <www.afr.com/technology/cba-aims-to-be-ai-superpower-with-us100m-tech-plunge-20211105-p596bx> accessed 20 May 2022. See also Daniel Belanche, Luis V Casaló, and Carlos Flavián, ‘Artificial Intelligence in FinTech: Understanding Robo-Advisors Adoption among Customers’ (2019) 119(7) Industrial Management & Data Systems 1411, 1411.

19 Dirk A Zetsche et al,From Fintech to Techfin: The Regulatory Challenges of Data-Driven Finance’ (2018) 14(2) NYU Journal of Law & Business 393, 400; Bonnie G Buchanan, Artificial Intelligence in Finance (Report, The Alan Turing Institute, 2019) 1 <www.turing.ac.uk/sites/default/files/2019-04/artificial_intelligence_in_finance_-_turing_report_0.pdf> accessed 11 July 2023.

20 The Australian Government, The Treasury, Consumer Data Right Overview (Report, September 2019) 2 <https://treasury.gov.au/sites/default/files/2019-09/190904_cdr_booklet.pdf> accessed 11 July 2023; OECD, Personal Data Use in Financial Services, 15.

21 See, for example, ‘Built to Make Investing Easier’, Betterment (Web Page) <www.betterment.com/investing> accessed 20 May 2022: ‘Automated technology is how we make investing easier, better, and more accessible’. See also ‘About Us’, Robinhood (Web Page) <https://robinhood.com/us/en/about-us> accessed 11 July 2023: ‘We’re on a mission to democratize finance for all’.

22 Australian Securities & Investments Commission (ASIC), Providing Digital Financial Product Advice to Retail Clients (Regulatory Guide 255, August 2016) para 255.3 <https://download.asic.gov.au/media/vbnlotqw/rg255-published-30-august-2016-20220328.pdf> accessed 11 July 2023: ‘digital advice has the potential to be a convenient and low-cost option for retail clients who may not otherwise seek advice’.

23 Ibid para 255.3, noting that only around 20 per cent of adult Australians seek personal financial advice. See also The Australian Government, The Treasury, Financial System Inquiry: Interim Report (Report, July 2014) paras 3.69–3.70 <https://treasury.gov.au/sites/default/files/2019-03/p2014-fsi-interim-report.pdf> accessed 15 May 2022. See also Deloitte Access Economics, ASX Australian Investor Study (Report, 2017) <www2.deloitte.com/content/dam/Deloitte/au/Documents/Economics/deloitte-au-economics-asx-australian-investor-study-190517.pdf> accessed 20 May 2022; Australian Securities & Investments Commission, Regulating Complex Products (Report 384, January 2014) 16–18 <https://download.asic.gov.au/media/lneb1sbb/rep384-published-31-january-2014-03122021.pdf> accessed 11 July 2023.

24 Consumers more commonly seek advice from mortgage brokers, who are paid by commission from banks, when seeking to buy a home. Doubts have been raised about the extent to which conflicts of interest undermine the value of the service to consumers, and indeed about the extent of the benefit provided, which is often of unreliable quality. See Australian Securities & Investments Commission, Review of Mortgage Broker Remuneration (Report 516, March 2017) 17 <https://download.asic.gov.au/media/4213629/rep516-published-16-3-2017-1.pdf> accessed 11 July 2023; Productivity Commission, Competition in the Australian Financial System (Inquiry Report No 89, 2018) 301 <www.pc.gov.au/inquiries/completed/financial-system/report> accessed 11 July 2023. See also generally Jeannie Marie Paterson and Elise Bant, ‘Mortgage Broking, Regulatory Failure and Statutory Design’ (2020) 31(1) Journal of Banking and Finance Law and Practice 7. Also, generally Maume, ‘Regulating Robo-Advisory’, 50: noting the FCA estimates that in the United Kingdom there are sixteen million people in this financial advice gap.

25 Bob Ferguson, ‘Robo Advice: An FCA Perspective’ (Annual Conference on Robo Advice and Investing: From Niche to Mainstream, London, 2 October 2017) <www.fca.org.uk/news/speeches/robo-advice-fca-perspective> accessed 20 May 2020; Maume, ‘Regulating Robo-Advisory’, 69.

26 Tom Baker and Benedict Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’ (2018) 103 Iowa Law Review 713, 714.

27 ‘10 Things Consumers Need to Know about FinTech’, Consumers International (Web Page) <www.consumersinternational.org/news-resources/blog/posts/10-things-consumers-need-to-know-about-fintech> accessed 20 May 2022.

28 Australian Government, The Treasury, Consumer Data Right Overview, 2; Edward Corcoran, Open Banking Regulation around the World (Report, BBVA, 11 May 2020) <www.bbva.com/en/open-banking-regulation-around-the-world> accessed 20 May 2022.

29 See also Jeannie Marie Paterson, ‘Making Robo Advisers Careful’ (2023) Law and Financial Markets Review 18.

30 See, for example, Betterment (Web Page) <www.betterment.com> accessed 11 July 2023; Robinhood (Web Page) <https://robinhood.com/us/en/about-us> accessed 11 July 2023; Wealthfront (Web Page) <www.wealthfront.com/> accessed 11 July 2023.

31 ASIC, Providing Digital Financial Product Advice to Retail Clients, para 255.1.

32 Financial Conduct Authority, Automated Investment Services: Our Expectations (Report, 21 May 2018) <www.fca.org.uk/publications/multi-firm-reviews/automated-investment-services-our-expectations> accessed 11 July 2023.

33 Belanche et al, ‘Artificial Intelligence in FinTech’, 1413; Dominik Jung et al, ‘Robo-Advisory: Digitalization and Automation of Financial Advisory’ (2018) 60(1) Business & Information Systems Engineering 81, 81.

34 See, for example, Jaaims (Web Page) <www.jaaimsapp.com> accessed 11 July 2023.

35 Baker and Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’, 734.

36 Jung et al, ‘Robo-Advisory: Digitalization and Automation of Financial Advisory’, 82.

37 Maume, ‘Regulating Robo-Advisory’, 53. But see, providing both investment and budgeting advice, Douugh (Web Page) <https://douugh.com/> accessed 11 July 2023.

38 Sophia Duffy and Steve Parrish, ‘You Say Fiduciary, I Say Binary: A Review and Recommendation of Robo-Advisors and the Fiduciary and Best Interest Standards’ (2021) 17 Hastings Business Law Journal 3, 5.

39 See, for example, Goodbudget (Web Page) <https://goodbudget.com/> accessed 11 July 2023; Mint (Web Page) <https://mint.intuit.com/> accessed 11 July 2023; MoneyBrilliant (Web Page) <https://moneybrilliant.com.au/> accessed 11 July 2023; Empower (Web Page) <www.personalcapital.com/> accessed 11 July 2023; Spendee (Web Page) <www.spendee.com/> accessed 11 July 2023; Toshl (Web Page) <https://toshl.com/> accessed 11 July 2023; Rocketmoney (Web Page) <www.rocketmoney.com/> accessed 11 July 2023; Wemoney (Web Page) <www.wemoney.com.au> accessed 11 July 2023.

40 See, for example, UpBank (Web Page) <https://up.com.au/> accessed 11 July 2023; Revolut (Web Page) <www.revolut.com/en-AU/> accessed 11 July 2023; Pluto Money (Web Page) <https://plutomoney.app/> accessed 11 July 2023.

41 See Han-Wei Liu, ‘Two Decades of Laws and Practice around Screen Scraping in the Common Law World and Its Open Banking Watershed Moment’ (2020) 30(2) Washington International Law Journal 28.

42 See e.g. Frollo using Australia’s open banking regime: <www.instagram.com/p/CHzG3winmBo/>.

44 Joris Lochy, ‘Budgeting Apps – A Red Ocean Looking for a Market’ (Blog Post, 8 March 2020) <https://bankloch.blogspot.com/2020/03/budgeting-apps-red-ocean-looking-for.html> accessed 20 May 2020.

45 See e.g. Spendee (Web Page) <www.spendee.com/> accessed 11 July 2023.

46 See e.g. Mint (Web Page) <https://mint.intuit.com/> accessed 11 July 2023; Rocketmoney (Web Page) <www.rocketmoney.com/> accessed 11 July 2023.

47 E.g., Zippay, which provides a budgeting function: Zippay (Web Page) <https://zip.co/au> accessed 11 July 2023.

48 See e.g. ‘We Combine Best-in-Breed AI Driven Categorization and Analytics with a Deep Set of Features That Are Proven to Work’, Budget Bakers (Web Page) <https://budgetbakers.com/> accessed 11 July 2023.

49 Lochy, ‘Budgeting Apps – A Red Ocean Looking for a Market’.

50 See Zofia Bednarz, ‘There and Back Again: How Target Market Determination Obligations for Financial Products May Incentivise Consumer Data Profiling’ [2022] International Review of Law, Computers & Technology <www.tandfonline.com/doi/10.1080/13600869.2022.2060469> accessed 20 May 2022.

51 Baker and Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’, 723.

52 See, e.g., Tamika Seeto, ‘6 Budgeting and Savings Apps Worth Checking Out in 2022’, Canstar (Blog Post, 15 March 2022) <www.canstar.com.au/budgeting/budgeting-apps/> accessed 20 May 2022; Choice (Web Page) <www.choice.com.au/money/financial-planning-and-investing/creating-a-budget/articles/how-we-test-budgeting-apps> accessed 20 May 2022.

53 See also Christy Rakoczy, ‘Best Budgeting Software: Find the Right Software for Any Budgeting Goal’, Investopedia (Web Page) <www.investopedia.com/personal-finance/best-budgeting-software/> accessed 22 May 2022: ‘We recommend the best products through an independent review process, and advertisers do not influence our picks. We may receive compensation if you visit partners we recommend. Read our advertiser disclosure for more info’.

54 Jung et al, ‘Robo-Advisory: Digitalization and Automation of Financial Advisory’, 84.

55 Lukas Brenner and Tobias Meyll, ‘Robo-Advisors: A Substitute for Human Financial Advice?’ (2020) 25 Journal of Behavioral and Experimental Finance 100275.

56 Duffy and Parrish, ‘You Say Fiduciary, I Say Binary’, 23.

57 Yaron Levi and Shlomo Benartzi, ‘Mind the App: Mobile Access to Financial Information and Consumer Behavior’ (17 March 2020) 9 <http://dx.doi.org/10.2139/ssrn.3557689> accessed 22 May 2022: ‘The interpretation of our results is that the mobile apps have a causal impact on the attention and spending behavior among consumers that decided to adopt it.’

58 Evan Kuh, ‘Budgeting Apps Have Major Flaws When It Comes to Helping Users Actually Save’, CNBC (Halftime Report, 13 June 2019) <www.cnbc.com/2019/06/13/budgeting-apps-dont-help-users-save-money.html>; Rhiana Whitson, ‘Would You Use a Budgeting App? There Are Some Big Pros and Cons to Consider’, ABC Online (News Report, 4 August 2021) <www.abc.net.au/news/2021-08-04/how-do-you-keep-track-of-your-budget-we-look-at-your-options/100342676>.

59 Stefan Angel, ‘Smart Tools? A Randomized Controlled Trial on the Impact of Three Different Media Tools on Personal Finance’ (2018) 74 Journal of Behavioral and Experimental Economics 104–11 (finding that adolescent users of a smartphone budgeting app checked their current account balance more often than a control group, but that the app had no significant effect on subjective or objective financial knowledge indicators).

60 See discussion of the data use below.

61 ASIC, Providing Digital Financial Product Advice to Retail Clients; United States Securities and Exchange Commission, Commission Interpretation Regarding Standard of Conduct for Investment Advisers (Release No IA-5248, 2019) 12–18.

62 See generally Simone Degeling and Jessica Hudson, ‘Financial Robots as Instruments of Fiduciary Loyalty’ (2018) 40 Sydney Law Review 63.

63 See e.g., Corporations Act 2001 (Cth) s 912A(1)(aa), requiring financial services licensees to have ‘adequate arrangements’ for ‘managing’ conflicts of interest.

64 Corporations Act 2001 (Cth) s 961B(1); Securities and Exchange Commission, ‘Commission Interpretation Regarding Standard of Conduct for Investment Advisers’; Securities and Exchange Commission, Regulation Best Interest: The Broker Dealer Standard of Conduct (Release No 34-86031, 5 June 2019). Also, Duffy and Parrish, ‘You Say Fiduciary, I Say Binary’; Han-Wei Liu et al, ‘In Whose Best Interests? Regulating Financial Advisers, the Royal Commission and the Dilemma of Reform’ (2020) 42 Sydney Law Review 37.

65 ‘COBS 9.2 Assessing suitability’, Financial Conduct Authority (United Kingdom) Handbook (Web Page) <www.handbook.fca.org.uk/handbook/COBS/9/2.html>; European Parliament and Council Directive 2014/65/EU of 15 May 2014 Markets in Financial Instruments Directive II [2014] OJ L 173/349, art 25(2). Also, Corporations Act 2001 (Cth) pt 7.8A (design and distribution obligations).

66 Australian Securities and Investments Commission Act 2001 (Cth) s 12ED; Investment Advisers Act Release No. 3060 (28 July 2010) (United States).

67 See Paterson, ‘Making Robo Advisers Careful’; ASIC, Providing Digital Financial Product Advice to Retail Clients, para 255.55.

68 Melanie L Fein, ‘Regulation of Robo-Advisers in the United States’ in Peter Scholz (ed), Robo-Advisory (Palgrave Macmillan, 2021), 112.

69 ASIC, Providing Digital Financial Product Advice to Retail Clients, paras 255.60, 255.73; Division of Investment Management, Robo Advisers (IM Guidance Update No 2017-02, February 2017) 8 <www.sec.gov/investment/im-guidance-2017-02.pdf> accessed 20 May 2022.

70 Financial Conduct Authority, Automated Investment Services: Our Expectations; European Securities and Markets Authority, Guidelines on Certain Aspects of the MiFID II Suitability Requirements (Guidelines, 28 May 2018); Division of Investment Management, Robo Advisers (IM Guidance Update No 2017-02, February 2017) 3–6 <www.sec.gov/investment/im-guidance-2017-02.pdf> accessed 22 May 2020.

71 Jeannie Paterson and Yvette Maker, ‘AI in the Home: Artificial Intelligence and Consumer Protection’ in Ernest Lim and Phillip Morgan (eds), The Cambridge Handbook of Private Law and Artificial Intelligence (Cambridge University Press, forthcoming, 2024).

72 On this trade-off, see also Matthew Adam Bruckner, ‘The Promise and Perils of Algorithmic Lenders’ Use of Big Data’ (2018) 93 Chicago-Kent Law Review 3. Also, Zetsche et al, ‘From Fintech to Techfin’, 427.

73 See especially ‘Intuit Privacy Policy’, Mint (Web Page) <www.intuit.com/privacy/statement/> accessed 11 July 2023; ‘Privacy Policy’, Frollo (Web Page) <https://frollo.com.au/privacy-policy/> accessed 11 July 2023; ‘Privacy Policy’, Pocketguard (Web Page) <https://pocketguard.com/privacy/> accessed 11 July 2023.

74 See, eg, Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1; Data Protection Act 2018 (UK); Privacy Act 1988 (Cth); California Consumer Privacy Act, 1.81.5 Cal Civ Code § 1798.100–1798.199.100 (2018).

75 OECD, Personal Data Use in Financial Services, 20.

76 Ibid; Bednarz, ‘There and Back Again’.

77 Centre for Data Ethics and Innovation, Review into Bias in Algorithmic Decision-Making (Report, November 2020) 21.

78 Ryan Calo, ‘Digital Market Manipulation’ (2014) 82 George Washington Law Review 995; Bednarz, ‘There and Back Again’.

79 See generally Ari Ezra Waldman, ‘Power, Process, and Automated Decision-Making’ (2019) 88(2) Fordham Law Review 613; Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021).

80 Emmanuel Martinez and Lauren Kirchner, ‘The Secret Bias Hidden in Mortgage-Approval Algorithms’ (25 August 2021) The Markup <https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms> accessed 22 May 2022.

81 See, e.g., Ramnath Balasubramanian, Ari Libarikian, and Doug McElhaney, McKinsey & Co, Insurance 2030: The Impact of AI on the Future of Insurance (Report, 12 March 2021) <www.mckinsey.com/industries/financial-services/our-insights/insurance2030-the-impact-of-ai-on-the-future-of-insurance> accessed 22 May 2022; Zofia Bednarz and Kayleen Manwaring, ‘Keeping the (Good) Faith: Implications of Emerging Technologies for Consumer Insurance Contracts’ (2021) 43 Sydney Law Review 455, 470–75.

82 Jennifer Miller, ‘A Bid to End Loan Bias’ (20 September 2020) The New York Times <https://link.gale.com/apps/doc/A635945144/AONE?u=unimelb&sid=bookmark-AONE&xid=164a6017> accessed 22 May 2022.

83 Andreas Fuster et al, ‘Predictably Unequal? The Effects of Machine Learning on Credit Markets’ (2022) 77(1) Journal of Finance 1.

84 Will Douglas Heaven, ‘Bias Isn’t the Only Problem with Credit Scores – and No, AI Can’t Help’ MIT Technology Review (Blog Post, 17 June 2021) <www.technologyreview.com/2021/06/17/1026519/racial-bias-noisy-data-credit-scores-mortgage-loans-fairness-machine-learning/> accessed 20 May 2022; Laura Blattner and Scott Nelson, ‘How Costly Is Noise? Data and Disparities in Consumer Credit’ (2021) arXiv 2105.07554 <https://arxiv.org/abs/2105.07554> accessed 20 May 2022.

85 See also Zetsche et al, ‘From Fintech to Techfin’, 424.

86 See, e.g., Sian Townson, ‘AI Can Make Bank Loans More Fair’ Harvard Business Review (Article, 6 November 2020) <https://hbr.org/2020/11/ai-can-make-bank-loans-more-fair>.

87 Elisa Jillson, ‘Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI’ Federal Trade Commission Business Blog (Blog Post, 19 April 2021) <www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai> accessed 22 May 2022.

88 Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right, 66, 172. See also Zetsche et al, ‘From Fintech to Techfin’, 418–22.

89 Emma Leong and Jodi Gardner, ‘Open Banking in the UK and Singapore: Open Possibilities for Enhancing Financial Inclusion’ (2021) 5 Journal of Business Law 424, 426.

90 See, e.g., Tully (Web Page) <https://tullyapp.com>; Touco (Web Page) <https://usetouco.com>.

91 Leong and Gardner, ‘Open Banking in the UK and Singapore’, 429.

92 Financial Conduct Authority, Call for Input: Open Finance (Publication, 2019) 8 [2.11], discussed in Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right, 66.

93 Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right, 171.

94 See Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1; Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’ (ACM Conference on Fairness, Accountability, and Transparency, 1–10 March 2021) <https://ssrn.com/abstract=3772964> accessed 22 May 2022.

95 William Magnuson, ‘Artificial Financial Intelligence’ (2020) 10 Harvard Business Law Review 337, 340.

96 Bednarz, ‘There and Back Again’; Martinez and Kirchner, ‘The Secret Bias Hidden in Mortgage-Approval Algorithms’.

97 See Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389, 389: ‘Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy)’.

98 Australian Human Rights Commission, Human Rights and Technology, 54; Brent Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (2019) 1 Nature Machine Intelligence 501.

99 Lorne Sossin and Charles W Smith, ‘Hard Choices and Soft Law: Ethical Codes, Policy Guidelines and the Role of the Courts in Regulating Government’ (2003) 40 Alberta Law Review 867.

100 Jake Goldenfein, ‘Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms’ in Cliff Bertram, Asher Gibson, and Adriana Nugent (eds), Closer to the Machine: Technical, Social, and Legal Aspects of AI (Office of the Victorian Information Commissioner, 2019) 43: ‘[T]he time and place for instilling public values like accountability and transparency is in the design and development of technological systems, rather than after-the-fact regulation and review’.

101 See Jobin et al, ‘The Global Landscape of AI Ethics Guidelines’, 389.

102 Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework (Report, 2019) <www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework> accessed 22 May 2022; Australian Council of Learned Academics, The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing (Report, July 2019) 132; Australian Human Rights Commission, Human Rights and Technology, 49; European Commission, Artificial Intelligence: A European Approach to Excellence and Trust (White Paper, 2020) 20; Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (Report, HL 2017–2019) 38.

103 Jobin et al, ‘The Global Landscape of AI Ethics Guidelines’; Institute of Electrical and Electronics Engineers, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (Report, 2019) 21; Australian Council of Learned Academics, The Effective and Ethical Development of Artificial Intelligence, 105; Australian Human Rights Commission, Human Rights and Technology, 50.

104 Henrietta Lyons, Eduardo Velloso, and Tim Miller, ‘Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions’ (2021) 5 Proceedings of the ACM on Human-Computer Interaction <https://arxiv.org/abs/2103.01774> accessed 22 May 2022.

105 See Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework; European Commission, High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Guidelines, 8 April 2019) <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai> accessed 20 May 2022.

106 See Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework.

107 Financial Conduct Authority, Automated Investment Services: Our Expectations; ASIC, Providing Digital Financial Product Advice to Retail Clients, para 255.98.

108 See also Brenner and Meyll, ‘Robo-Advisors: A Substitute for Human Financial Advice?’ (substitution effect of robo-advisers is especially driven by investors concerned about investment fraud from human advisers).

109 See Jeannie Paterson, ‘Misleading AI’ (2023) 34 (Symposium) Loyola University Chicago School of Law Consumer Law Review 558.

110 See Select Committee on Artificial Intelligence, ‘AI in the UK’, 40; Australian Human Rights Commission, Human Rights and Technology, 75.

111 On explanations, see Tim Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’ (2019) 267(1) Artificial Intelligence 1; Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology 841; Jonathan Dodge et al, ‘Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment’ (International Conference on Intelligent User Interfaces, Marina del Ray, 17–20 March 2019).

112 Tim Miller, ‘Explainable Artificial Intelligence: What Were You Thinking?’ in N Wouters, G Blashki, and H Sykes (eds), Artificial Intelligence: For Better or Worse (Future Leaders, 2019) 19, 21; Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’, 844.

113 Umang Bhatt et al, ‘Explainable Machine Learning in Deployment’ (Conference on Fairness, Accountability, and Transparency, Barcelona, January 2020) 648.

114 See Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’; Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’.

115 See also Karen Yeung and Adrian Weller, ‘How Is “Transparency” Understood by Legal Scholars and the Machine Learning Community’ in Mireille Hildebrandt et al (eds), Being Profiled: Cogitas Ergo Sum (Amsterdam University Press, 2018); John Zerilli et al, ‘Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?’ (2019) 32 Philosophy and Technology 661.

116 See generally Robert A Hillman and Jeffrey J Rachlinski, ‘Standard-Form Contracting in the Electronic Age’ (2002) 77 New York University Law Review 429; Russell Korobkin, ‘Bounded Rationality, Standard Form Contracts, and Unconscionability’ (2003) 70 University of Chicago Law Review 1203.

117 Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’, 843. See also Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’.

118 See Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’, 843.

119 Lyons et al, ‘Conceptualising Contestability’.

120 Madeleine Clare Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (pre-print, 1 March 2019) Engaging Science, Technology, and Society <http://dx.doi.org/10.2139/ssrn.2757236>.

121 See also Cobbe et al, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’ (discussing the principle of reviewability as a core element of accountability for automated decision-making systems).

122 Baker and Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’, 724. Cf Proposal for a Regulation (EU) 2021/0106 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts [2021] (EU AI Draft Regulations).

123 Compare Cobbe et al, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’.

124 Brent Mittelstadt, ‘Auditing for Transparency in Content Personalization Systems’ (2016) 10 International Journal of Communication 4991.

125 See, e.g., Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework.

126 Lyons et al, ‘Conceptualising Contestability’, 1–2.
