If someone relies on algorithms1 to communicate with others, does that reliance change anything for First Amendment purposes?2 In this chapter I argue that, under the Supreme Court’s prevailing jurisprudence, the answer is no. Any words or pictures that would be speech under the First Amendment if produced entirely by a human are equally speech if produced via a human-created algorithm. So long as humans are making the decisions that underlie the outputs, a human is sending whatever message is sent. Treatment as speech requires substantive editing by a human being, whether the speech is produced via algorithm or not. If such substantive editing exists, the resulting communication is speech under the current jurisprudence. Simply stated, if we accept Supreme Court jurisprudence, the First Amendment encompasses a great swath of algorithm-based decisions – specifically, algorithm-based outputs that entail a substantive communication.
In this chapter, I ask whether, and under what circumstances, the First Amendment should protect algorithms from regulation by government. This is a broad frame for discussion, and it is important to understand that constitutional “protection for algorithms” could take at least four forms that have little if anything to do with one another.
For more than sixty years, “obviousness” has set the bar for patentability. Under this standard, if a hypothetical “person having ordinary skill in the art” would find an invention obvious in light of existing relevant information, then the invention cannot be patented. This skilled person is defined as a non-innovative worker with a limited knowledge base. The more creative and informed the skilled person, the more likely an invention is to be considered obvious. The standard has evolved since its introduction, and it is now on the verge of an evolutionary leap: inventive algorithms are increasingly being used in research, and once the use of such algorithms becomes standard, the person skilled in the art should be a person augmented by algorithm, or simply an inventive algorithm. Unlike the skilled person, the inventive algorithm is capable of innovating and of considering the entire universe of prior art. As inventive algorithms continue to improve, their use will increasingly raise the bar to patentability, eventually rendering innovative activities obvious. The end of obviousness means the end of patents, at least as they are now.
To many people, a boundary exists between artificial intelligence (AI), sometimes referred to as an intelligent software agent, and the system that is controlled through AI, primarily by the use of algorithms. One example of this dichotomy is robots, which have a physical form but whose behavior is highly dependent on the “AI algorithms” that direct their actions. More specifically, we can think of a software agent as an entity directed by algorithms that perform many intellectual activities currently done by humans. The software agent can exist in a virtual world (for example, a bot) or can be embedded in the software controlling a machine (for example, a robot). Many current robots controlled by algorithms represent semi-intelligent hardware that repetitively performs tasks in physical environments. This observation reflects the fact that most robotic applications for industrial use since the middle of the last century have been driven by algorithms that support repetitive machine motions. In many cases, industrial robots, which typically work in closed environments such as factory floors, do not need “advanced” AI techniques to function because they perform daily routines with algorithms directing the repetitive motions of their end effectors. Lately, however, an emerging technological trend combining AI and robotics, using sophisticated algorithms, allows robots to adopt complex work styles and to function socially in open environments. We may call these merged technological products “embodied AI,” or, in a more general sense, “embodied algorithms.”
The (un)limited potential of algorithmic decision-making is increasingly embraced by numerous private sector actors, ranging from the pharmaceutical, banking, and transport industries to powerful Internet platforms. The celebratory narratives about the use of big data and machine-learning algorithms by private companies to simulate intelligence, improve society, and even save humanity are common and widespread. The deployment of algorithms to automate decision-making also promises to make governments not only more efficient, but also more accurate and fair. From welfare and criminal justice to healthcare, national security, and beyond, governments are increasingly relying on algorithms to automate decision-making – a development that has been met with concern by many activists, academics, and members of the general public.1 Yet it remains incredibly difficult to evaluate and measure the nature and impact of automated systems, even as empirical research has demonstrated their potential for bias and individual harm.2 These opaque and elusive systems are often not subject to the same accountability or oversight mechanisms as other public actors in our legal systems, which raises questions about their compatibility with fundamental principles of public law. It is thus not surprising that numerous scholars are increasingly calling for more attention to be paid to the use of algorithms in government decision-making.3
This chapter explores the legal protection awarded to algorithms and argues that, in the coming decade, as coding methods change, the practice of awarding IP protection to algorithms might not prevail. Even today, machines controlled by algorithms are outsmarting humans in many areas. For example, advanced algorithms influence markets and affect finance, commerce, human resources, health, and transportation.
Risk assessment – measuring an individual’s potential for offending – has long been an important aspect of most legal systems, in a wide variety of contexts. In most countries, sentences are often heavily influenced by concerns about preventing reoffending. Correctional officials and parole boards routinely rely on risk assessments. Post-sentence commitment of “dangerous” offenders (particularly common in connection with sex offenders) is based almost entirely on determinations of risk, as is involuntary hospital commitment of people found not guilty by reason of insanity and of people who are not prosecuted but require treatment. Detention prior to trial is frequently authorized not only upon a finding that a suspect will otherwise flee the jurisdiction, but also when the individual is thought to pose a risk to society if left at large. And police on the streets have always been on the look-out for suspicious individuals who might be up to no good.