The problem we address in this chapter is easy enough to state: Relatively simple algorithms, when duplicated many-fold and arrayed in parallel, produce systems capable of generating highly creative and nuanced solutions to real-world challenges. The catch is that the autonomy and architecture that make these systems so powerful also make them difficult to control or even understand.
The First Amendment’s freedom of speech, the Supreme Court said in 1943, protects our capacity to use words or non-verbal symbols to create a “short-cut from mind to mind.”1 But does it continue to do so when one of the “minds” on either end of such a short-cut is an artificial one? Does it protect my right to receive words or symbols not from another person, but from artificial intelligence (AI) – that is, a computer program that can write, compose music, or perform other tasks that used to be the sole province of human intelligence? If so, what kind of First Amendment protection does computer speech receive – and how, if at all, does it differ from that which protects the speech of human persons?
A specific software architecture, the neural network, not only takes advantage of the virtually perfect recall and much faster processing speed common to all software, but also teaches itself, attaining skills no human could directly program. We rely on these neural networks for medical diagnoses, financial decisions, weather forecasting, and many other crucial real-world tasks. In 2016, a program named AlphaGo beat the top-rated human player of the game of Go.3 Only a few years ago, this had been considered impossible.4 High-level Go requires remarkable skills, not just of calculation, at which computers obviously excel, but, more critically, of judgment, intuition, pattern recognition, and the weighing of ineffable considerations such as positional balance.5 These skills cannot be directly programmed. Instead, AlphaGo’s neural network6 trained itself with many thousands and, later, millions of games – far more than any individual human could ever play7 – and now routinely beats all human challengers.8 Because it learns and concomitantly modifies itself in response to experience, such a network is termed adaptive.9
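To make the notion of an adaptive system concrete, the sketch below shows, in miniature, a learner that is never told the rule it must master but instead adjusts its own parameters from repeated “games.” The single-neuron learner, synthetic data, and learning rate are purely illustrative assumptions; this is not AlphaGo’s actual architecture or training pipeline.

```python
# Minimal sketch of an "adaptive" learner: it is never given the rule, it only
# adjusts its own parameters in response to experience (synthetic, hypothetical data).
import math
import random

random.seed(0)

# Hidden "rule" the learner is never shown: a position is a win when the
# weighted sum of its features is positive.
TRUE_WEIGHTS = [0.8, -0.5, 0.3]

def play_out(position):
    """Stand-in for self-play: report 1 (win) or 0 (loss) for a position."""
    return 1 if sum(w * x for w, x in zip(TRUE_WEIGHTS, position)) > 0 else 0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

weights = [0.0, 0.0, 0.0]        # the learner's parameters, initially ignorant
learning_rate = 0.1

for game in range(10_000):                        # "millions of games" in miniature
    position = [random.uniform(-1, 1) for _ in range(3)]
    outcome = play_out(position)                  # experience, not instruction
    prediction = sigmoid(sum(w * x for w, x in zip(weights, position)))
    # The adaptive step: the system modifies itself in response to experience.
    error = prediction - outcome
    weights = [w - learning_rate * error * x for w, x in zip(weights, position)]

print("learned weights:", [round(w, 2) for w in weights])
```

After enough simulated games, the learned weights track the hidden rule even though no one ever programmed that rule into the learner, which is the sense in which such a network is “adaptive.”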
Digital information and communications technologies (ICT) have been enthusiastically adopted by individuals, businesses, and government, altering the texture of commercial, social, and legal relationships in profound ways. In this decade, with the rapid development of “big data,” machine-learning tools, and the “Internet of Things,” it is clear that algorithms have become central elements of modern society and a significant factor to consider when crafting political or business strategies, developing new markets, or trying to solve problems.
In this chapter, we look at the global development of “people-scoring” and its implications. Unlike traditional credit scoring, which is used to evaluate individuals’ financial trustworthiness, social scoring seeks to comprehensively rank individuals based on social, reputational, and behavioral attributes. The implications of widespread social scoring are far-reaching and troubling. Bias and error, discrimination, manipulation, privacy violations, excessive market power, and social segregation are only some of the concerns we have discussed and elaborated on in previous works.1 In this chapter, we describe the global shift from financial scores to social credit, and show how, notwithstanding constitutional, statutory, and regulatory safeguards, the United States and other Western democracies are not as far from social credit as we seem to believe.
Automated systems that process vast amounts of data about individuals and communities have become a transformative force within contemporary societies and institutions. Governments and businesses, which adopt and develop new techniques of collecting and analyzing information, rely on algorithms for decision-making in various sectors, such as banking, political marketing, health, and criminal justice. Among the early adopters of automated systems are welfare agencies responsible for distributing welfare benefits and managing social policies. These new uses of technology are promoted for their efficiency, standardization, and resource optimization. However, the debate about artificial intelligence (AI) and algorithms should not be limited to questions about their technical capabilities and functionalities. The creation and implementation of technological innovations also poses a significant normative and ethical challenge for our society. The decision to process data and use certain algorithms is structured and motivated by specific political and economic factors. Therefore, as Winner argued, technical artifacts possess political qualities and are far from neutral.
The debate over algorithmic decision-making has focused primarily on two things: legal accountability and bias. Legal accountability seeks to leverage the institutions of law and compliance to put guard rails around the use of artificial intelligence (AI). This literature insists that if a state is going to use an algorithm to evaluate teachers or if a bank is going to use AI to make loan application decisions, both should do so transparently, in accordance with fair procedure, and be subject to interrogation. Algorithmic fairness seeks to highlight the ways in which AI discriminates on the basis of race, gender, and ethnicity, among other protected characteristics. This literature calls for making technologies that use AI, whether search engines or digital cameras, more inclusive by better training AI on diverse inputs and improving automated systems that have been shown to have a “disparate impact” on marginalized populations.
Our society in the twenty-first century is being shaped ever more by sets of instructions running at data centers spread around the world, commonly known as “algorithms.” Although algorithms are not a recent invention, they have become widely used to support decision systems, arguably triggering the emergence of an algorithmic society.1 These algorithmic decision systems (ADS) are deployed for purposes as disparate as pricing in online marketplaces,2 flying planes,3 generating credit scores,4 and predicting demand for electricity.5 Advanced ADS are characterized by two key features. First, they rely on the analysis of large amounts of data to make predictive inferences, such as the likelihood of default for a potential borrower or an increase in demand for electricity. Second, they automate in whole or in part the execution of decisions, such as refusing a loan to a high-risk borrower or increasing energy prices during peak hours, respectively. ADS may also refer to less advanced systems implementing only one of these features. Although ADS generally have proven beneficial in improving the efficiency of decision-making, the underlying algorithms remain controversial, among other issues, because they are susceptible to discrimination, bias, and a loss of privacy – with the potential even to be used to manipulate the democratic processes and structures underpinning our society6 – alongside lacking effective means of control and accountability.
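As a concrete illustration of those two features, the sketch below pairs a predictive inference (an estimated probability of default) with an automated decision rule (refuse above a threshold). The applicant fields, coefficients, and threshold are invented for illustration and are not drawn from any real scoring model.

```python
# Illustrative sketch of the two ADS features described above; all weights and
# thresholds are hypothetical, not taken from any actual credit model.
import math
from dataclasses import dataclass

@dataclass
class Applicant:
    debt_to_income: float      # e.g. 0.45 means 45% of income services debt
    late_payments: int         # late payments in the past year
    years_employed: float

# Feature 1: predictive inference -- estimate the likelihood of default.
def default_probability(a: Applicant) -> float:
    # Coefficients are invented for illustration only.
    z = -2.0 + 3.0 * a.debt_to_income + 0.6 * a.late_payments - 0.2 * a.years_employed
    return 1.0 / (1.0 + math.exp(-z))

# Feature 2: automated execution -- the decision follows mechanically from the prediction.
def decide(a: Applicant, threshold: float = 0.3) -> str:
    return "refuse" if default_probability(a) > threshold else "approve"

applicant = Applicant(debt_to_income=0.55, late_payments=3, years_employed=1.0)
print(round(default_probability(applicant), 2), decide(applicant))
```

A system implementing only the first function (prediction without automatic execution), or only the second (rule-based automation without predictive inference), would be a less advanced ADS in the sense used above.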
In July 2014, Facebook ran a social experiment on emotional contagion2 by monitoring the emotional responses of 689,003 users to the omission of certain content containing positive and negative words. The project was severely criticized3 for manipulating emotions without the informed consent of the subjects, and raised concerns about users’ privacy. Most importantly, it raised the question of respect for users’ autonomy in the era of automation.
Imagine a FinTech lender (that is, a firm using computer programs to enable banking and financial services) that introduces a new product based on algorithmic artificial intelligence (AI) underwriting. The lender combs through the entirety of an applicant’s financial records to review where the applicant shopped, what purchases she made, purchase volumes and frequency, how much credit and debt she had, and whether she made utility and rent payments on time. The lender also reviews her mobile phone usage to understand how much time she spent on her phone and what she was engaged in, whether it was at work or at home, her typical geographic areas of travel, the frequency of her text messages, and how many spelling errors she made. (We’ll leave her social media usage out of this for now.) Through this mix of financial and behavioral data, the FinTech lender underwrites her application. It does the same for millions of other customers with little to no credit history, but who have long lived within their means, shopped responsibly, paid rent and utilities on time, and spent many hours at work.
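To make the underwriting described above concrete, the sketch below shows one way such financial and behavioral signals might be folded into a single score for a thin-file applicant. Every field, weight, and cut-off here is a hypothetical assumption, not a description of any actual FinTech lender’s model.

```python
# Sketch of alternative-data underwriting; all fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class AltData:
    on_time_rent_share: float        # fraction of rent payments made on time
    on_time_utility_share: float     # fraction of utility bills paid on time
    avg_daily_phone_hours: float
    spelling_error_rate: float       # text-message errors per 100 words
    hours_at_work_per_week: float

def behavioral_score(d: AltData) -> float:
    """Fold financial and behavioral signals into a 0-100 score (illustrative weights)."""
    score = 50.0
    score += 25.0 * d.on_time_rent_share
    score += 15.0 * d.on_time_utility_share
    score += 0.2 * d.hours_at_work_per_week
    score -= 2.0 * d.spelling_error_rate
    score -= 1.0 * max(0.0, d.avg_daily_phone_hours - 4.0)   # penalize only heavy usage
    return max(0.0, min(100.0, score))

# A thin-file applicant who has long paid rent and utilities on time scores well
# despite having little or no traditional credit history.
applicant = AltData(0.98, 1.0, 3.5, 0.5, 40.0)
print("score:", round(behavioral_score(applicant), 1))
```

The point of the sketch is simply that such a model rewards or penalizes behaviors, like phone usage or spelling, that traditional credit files never capture, which is precisely what makes this form of underwriting both attractive and contentious.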
If someone relies on algorithms1 to communicate to others, does that reliance change anything for First Amendment purposes?2 In this chapter I argue that, under the Supreme Court’s prevailing jurisprudence, the answer is no. Any words or pictures that would be speech under the First Amendment if produced entirely by a human are equally speech if produced via human-created algorithm. So long as humans are making the decisions that underlie the outputs, a human is sending whatever message is sent. Treatment as speech requires substantive editing by a human being, whether the speech is produced via algorithm or not. If such substantive editing exists, the resulting communication is speech under the current jurisprudence. Simply stated, if we accept Supreme Court jurisprudence, the First Amendment encompasses a great swath of algorithm-based decisions – specifically, algorithm-based outputs that entail a substantive communication.
In this chapter, I ask whether, and under what circumstances, the First Amendment should protect algorithms from regulation by government. This is a broad frame for discussion, and it is important to understand that constitutional “protection for algorithms” could take at least four forms that have little if anything to do with one another.
For more than sixty years, “obviousness” has set the bar for patentability. Under this standard, if a hypothetical “person having ordinary skill in the art” would find an invention obvious in light of existing relevant information, then the invention cannot be patented. This skilled person is defined as a non-innovative worker with a limited knowledge-base. The more creative and informed the skilled person, the more likely an invention will be considered obvious. The standard has evolved since its introduction, and it is now on the verge of an evolutionary leap: inventive algorithms are increasingly being used in research, and once the use of such algorithms becomes standard, the person skilled in the art should be a person augmented by algorithm, or just an inventive algorithm. Unlike the skilled person, the inventive algorithm is capable of innovation and considering the entire universe of prior art. As inventive algorithms continue to improve, this will increasingly raise the bar to patentability, eventually rendering innovative activities obvious. The end of obviousness means the end of patents, at least as they are now.
To many people, there is a boundary between artificial intelligence (AI), sometimes referred to as an intelligent software agent, and the system that is controlled by that AI, primarily through the use of algorithms. One example of this dichotomy is robots, which have a physical form but whose behavior depends heavily on the “AI algorithms” that direct their actions. More specifically, we can think of a software agent as an entity directed by algorithms that perform many intellectual activities currently done by humans. The software agent can exist in a virtual world (for example, a bot) or can be embedded in the software controlling a machine (for example, a robot). Many current robots controlled by algorithms are semi-intelligent hardware that repetitively performs tasks in physical environments. This observation is based on the fact that most robotic applications for industrial use since the middle of the last century have been driven by algorithms that support repetitive machine motions. In many cases, industrial robots, which typically work in closed environments such as factory floors, do not need “advanced” techniques of AI to function because they perform daily routines with algorithms directing the repetitive motions of their end effectors. Lately, however, an emerging technological trend has resulted from the combination of AI and robots: by using sophisticated algorithms, robots can adopt complex work styles and function socially in open environments. We may call these merged technological products “embodied AI” or, in a more general sense, “embodied algorithms.”
The (un)limited potential of algorithmic decision-making is increasingly embraced by numerous private sector actors, ranging from the pharmaceutical and banking sectors to transport industries and powerful Internet platforms. Celebratory narratives about the use of big data and machine-learning algorithms by private companies to simulate intelligence, improve society, and even save humanity are widespread. The deployment of algorithms to automate decision-making also promises to make governments not only more efficient, but also more accurate and fair. From welfare and criminal justice to healthcare, national security, and beyond, governments are increasingly relying on algorithms to automate decision-making – a development which has been met with concern by many activists, academics, and members of the general public.1 Yet, it remains incredibly difficult to evaluate and measure the nature and impact of automated systems, even as empirical research has demonstrated their potential for bias and individual harm.2 These opaque and elusive systems often are not subject to the same accountability or oversight mechanisms as other public actors in our legal systems, which raises questions about their compatibility with fundamental principles of public law. It is thus not surprising that numerous scholars are increasingly calling for more attention to be paid to the use of algorithms in government decision-making.3