Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
Facial recognition technology (FRT) has achieved remarkable progress in the last decade owing to improvements in deep convolutional neural networks. The massive deployment of FRT in the United Kingdom has unsurprisingly tested the limits of democracy: where should the line be drawn between acceptable uses of this technology for collective or private purposes and the protection of individual entitlements that are compressed by the employment of FRT? The Bridges v South Wales Police case offered guidance on this issue. After lengthy litigation, the Court of Appeal of England and Wales ruled in favour of the applicant, a civil rights campaigner who claimed that the live FRT deployed by the police at public gatherings infringed his rights. Although the Bridges case offered crucial directives on the balancing between individual rights and the lawful use of FRT for law enforcement purposes under the current UK rules, several ethical and legal questions remain unresolved. This chapter provides an overview of sociological and regulatory attitudes towards this technology in the United Kingdom; discusses the Bridges saga and its implications; and offers reflections on the future of FRT regulation in the United Kingdom.
This chapter examines facial recognition technology (FRT) and its potential for bias and discrimination against racial minorities in the criminal justice system. The chapter argues that by defining the technology as an automated process, there is an implied objectivity, suggesting that such technologies are free from errors and prejudices. However, facial recognition depends on the data used to train its algorithms, and operators make judgements shaped by the wider social system and structures within which it is deployed. The algorithms that underpin FRT will continue to advance the status quo with respect to power relations in the criminal justice system, unless both data-based and societal-based issues of inequality and discrimination are remedied. FRT is imbued with biases that can negatively impact outcomes for minority groups. The chapter argues that there is a need to focus on systemic discrimination and inequality (rather than calling for a ban of the technology). While the data-based issues are more straightforward to address, this alone will not be sufficient: addressing broader and more complex social factors must be a key focus in working towards a more equal society.
Scholarly treatment of facial recognition technology (FRT) has focussed on human rights impacts, with frequent calls for the prohibition of the technology. While acknowledging the potentially detrimental and discriminatory uses to which the state can put FRT, this chapter seeks to advance discussion of what principled regulation of FRT might look like. It should be possible to prohibit or regulate unacceptable uses while retaining less hazardous ones. In this chapter, we reflect on the principled use and regulation of FRT in the public sector, with a focus on Australia and Aotearoa New Zealand. The authors draw on their experiences as researchers in this area and on their professional involvement in oversight and regulatory mechanisms in these jurisdictions and elsewhere. Both countries have seen significant growth in the use of FRT, but regulation remains patchwork. In comparison with other jurisdictions, human rights protections and avenues for individual citizens to complain and seek redress remain insufficient in Australia and New Zealand.
Protest movements are gaining momentum across the world, with Extinction Rebellion, Black Lives Matter, and strong pro-democracy protests in Chile and Hong Kong taking centre stage. At the same time, many governments are increasing their surveillance capacities in the name of protecting the public and addressing emergencies. Irrespective of whether these events and political strategies relate to the war on terror, or to pro-democracy or anti-racism protests, state resort to technology and increased surveillance as tools to control the population has been similar. This chapter focusses on the chilling effect of facial recognition technology (FRT) use in public spaces on the right to peaceful assembly and political protest. Pointing to the absence of oversight and accountability mechanisms on government use of FRT, the chapter demonstrates that FRT has significantly strengthened state power. Attention is drawn to the crucial role of tech companies in assisting governments in public space surveillance and curtailing protests, and it is argued that hard human rights obligations should bind these companies and governments, to ensure that political movements and protests can flourish in the post-COVID-19 world.
Although law enforcement institutions in all EU member states have to adhere to European standards on facial recognition technology (FRT) usage, each country has national rules that transpose these requirements into its own framework for FRT in practice. However, because society plays an important role in controlling the implementation of legal acts, especially those that relate to human rights, society and related interest groups must insist on the proper implementation of FRT regulation; otherwise it remains declarative and void. If public awareness and pressure to have a law implemented properly are high, the implementing institutions are forced to take action.
This chapter analyses the regulation of FRT usage by Lithuanian law enforcement institutions. Public discussion relating to FRT usage in the media, the involvement of non-governmental organisations, and other types of social control are also discussed. Finally, the chapter considers the changes that may be brought to national regulation of FRT by the EU Artificial Intelligence Act.
This chapter, authored by a computer scientist and an industry expert in computer vision, briefly explains the fundamentals of artificial intelligence and facial recognition technologies. The discussion encompasses the typical development life cycle of these technologies and unravels the essential building blocks integral to understanding the complexities of facial recognition systems. The authors further explore key challenges confronting computer and data scientists in their pursuit of ensuring the accuracy, effectiveness, and trustworthiness of these technologies, challenges which also drive many of the common concerns regarding facial recognition technologies.
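One of the essential building blocks referred to above can be illustrated in miniature: modern facial recognition systems typically reduce a face image to a numerical embedding vector and decide whether two faces match by comparing the embeddings' similarity against a threshold. The sketch below is purely illustrative, not from the chapter: the 128-dimensional vectors, the Gaussian noise standing in for two photos of the same person, and the 0.6 threshold are all assumptions chosen for the example.

```python
import math
import random

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(probe, template, threshold=0.6):
    # One-to-one verification: accept if the probe embedding is
    # sufficiently close to the enrolled template. The threshold
    # controls the trade-off between false accepts and false rejects.
    return cosine_similarity(probe, template) >= threshold

random.seed(0)
# Stand-in for an enrolled face embedding (a real system would
# produce this with a trained neural network).
template = [random.gauss(0, 1) for _ in range(128)]
# A slightly perturbed copy, mimicking a second photo of the same person.
same = [x + random.gauss(0, 0.1) for x in template]
# An unrelated embedding, mimicking a different person.
other = [random.gauss(0, 1) for _ in range(128)]

print(is_match(same, template))   # close embeddings clear the threshold
print(is_match(other, template))  # unrelated embeddings fall well below it
```

The accuracy and fairness concerns the chapter raises map directly onto this structure: the quality of the embedding network's training data determines how reliably "same person" pairs land above the threshold, and the threshold choice determines how often strangers are wrongly matched.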
As facial recognition technology (FRT) becomes more widespread, and its productive processes, supply chains, and circulations more visible and better understood, privacy concepts become more difficult to consistently apply. This chapter argues that privacy and data protection law’s clunkiness in regulating facial recognition is a product of how privacy and data protection conceptualise the nature of online images. Whereas privacy and data protection embed a ‘representational’ understanding of images, the dynamic facial recognition ecosystem of image scraping, dataset production, and searchable image databases used to build FRT suggest that online images are better understood as ‘operational’. Online images do not simply present their referent for easy human consumption, but rather enable and participate in a sequence of automated operations and machine–machine communications that are foundational for the proliferation of biometric techniques. This chapter demonstrates how privacy law’s failure to accommodate this theorisation of images leads to confusion and diversity in the juridical treatment of facial recognition and the declining coherence of legal concepts.
This chapter discusses the current state of laws regulating facial recognition technology (FRT) in the United States. The stage is set for the discussion with a presentation of some of the unique aspects of regulation in the United States and of the relevant technology. The current status of FRT regulation in the United States is then discussed, including general laws (such as those that regulate the use of biometrics) and those that more specifically target FRT (such as those that prohibit the use of such technologies by law enforcement and state governments). Particular attention is given to the different regulatory institutions in the United States, including the federal and state governments and federal regulatory agencies, as well as the different treatment of governmental and private users of FRT. The chapter concludes by considering likely future developments, including potential limits of or challenges to the regulation of FRT.
The key ethical requirements for all AI technologies, including facial recognition technology (FRT), are transparency and explainability. This chapter first identifies the extent to which transparency and explainability are needed in relation to FRT among different stakeholders. Second, after briefly examining which types of information about AI could potentially be protected as trade secrets, it identifies situations where trade secret protection may inhibit transparent and explainable FRT. It then analyses whether current trade secret law, in particular the ‘public interest’ exception, is capable of addressing the conflict between the proprietary interests of trade secret owners and the AI transparency needs of certain stakeholders. This chapter focusses on FRT in law enforcement, with a greater emphasis on real-time biometric identification technologies, which are considered the highest risk.
Russia’s invasion of Ukraine is the first major military conflict in which facial recognition technology (FRT) is being used openly, with Ukraine’s Ministry of Defence publicly acknowledging its use of FRT to assist in the identification of Russian soldiers killed in combat. The technology has also likely been used to investigate people at checkpoints and during interrogations. We can expect FRT to be used for tracing individuals responsible for war crimes in the near future. For the Russian Federation, FRT has become a powerful tool to suppress anti-war protests and to identify those taking part in them. In territories occupied by Russia, FRT has been used to identify political opponents and people opposing Russian rule. This chapter focusses on the potential and risks of the use of FRT in a war situation. It discusses the advantages that FRT brings to both sides of the conflict and underlines the associated concerns. It is argued that despite human rights concerns, FRT is becoming a tool of military technology that is likely to spread and develop further for military purposes.
Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focusses on the role played by AI-enhanced technology in urban surveillance and the control of migration at the Moroccan–Spanish borders. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms in their implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
This chapter provides an introductory overview of the recent emergence of facial recognition technologies (FRTs) into everyday societal contexts and settings. It provides valuable social, political, and economic context to the legal, ethical, and regulatory issues that surround this fast-growing area of technology development. In particular, the chapter considers a range of emerging ‘pro-social’ applications of FRT that have begun to be introduced across various societal domains, from the application of FRTs in retail and entertainment through to the growing prevalence of one-to-one ID matching for intimate practices such as unlocking personal devices. In contrast to this seemingly steady acceptance of FRT in everyday life, the chapter makes a case for paying renewed attention to the everyday harms of these technologies in situ. The chapter argues that FRT remains a technology that should not be considered a benign addition to the current digital landscape. It is a technology that requires continued critical attention from scholars working in the social, cultural, and legal domains.
State actors in Europe, in particular security authorities, are increasingly deploying biometric methods such as facial recognition for different purposes, especially in law enforcement, despite a lack of independent validation of the promised benefits to public safety and security. Although some rules such as the General Data Protection Regulation and the Law Enforcement Directive are in force, a concrete legal framework addressing the use of facial recognition technology (FRT) in Europe does not yet exist. Given that FRT processes extremely sensitive personal data, does not always work reliably, and is associated with risks of unfair discrimination, a general ban on any use of artificial intelligence for automated recognition of human features, at least in publicly accessible spaces, has been demanded. Against this background, the chapter adopts a fundamental rights perspective, and examines whether and to what extent government use of FRT can be accepted under European law.