
General Recommendation No. 36 (2020) on Preventing and Combating Racial Profiling by Law Enforcement Officials (C.E.R.D.)

Published online by Cambridge University Press:  04 February 2022

Daniel Moeckli*
Affiliation: Professor of Public International Law and Constitutional Law, University of Zurich, Switzerland.


Type: International Legal Documents
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of The American Society of International Law

Introduction

The Committee on the Elimination of Racial Discrimination (CERD) adopted General Recommendation No. 36 on Preventing and Combating Racial Profiling by Law Enforcement Officials at its 102nd session, which took place from November to December 2020.Footnote 1

The CERD is a body of eighteen independent experts that monitors the implementation of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) by the 182 states that have ratified this treaty. Its General Recommendations provide authoritative guidance that assists states parties to fulfil their obligations under the ICERD. Although not legally binding, they are often invoked by states and complainants in reporting and complaints procedures, and occasionally by courts.Footnote 2

Background

Numerous human rights bodies, including the Office of the High Commissioner for Human Rights,Footnote 3 the UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance,Footnote 4 the UN Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms While Countering Terrorism,Footnote 5 the European Commission against Racism and Intolerance,Footnote 6 and the Inter-American Commission on Human Rights,Footnote 7 have provided guidance on preventing law-enforcement practices that are based on racial profiling. Victims of racial profiling have successfully challenged such practices before the UN Human Rights Committee,Footnote 8 the European Court of Human Rights,Footnote 9 and the Inter-American Court of Human Rights.Footnote 10

The CERD had already addressed the issue of racial profiling in some of its earlier General Recommendations, urging states to ensure that “non-citizens are not subjected to racial or ethnic profiling or stereotyping,”Footnote 11 to “take the necessary steps to prevent questioning, arrests and searches which are in reality based solely on the physical appearance of a person, that person's colour or features or membership of a racial or ethnic group, or any profiling which exposes him or her to greater suspicion,”Footnote 12 and to “take resolute action to counter any tendency to target, stigmatize, stereotype or profile people of African descent on the basis of race, by law enforcement officials.”Footnote 13 However, General Recommendation No. 36 is the CERD's first comprehensive analysis of racial profiling, culminating in an extensive list of recommendations.

Definition of Racial Profiling

As pointed out in General Recommendation No. 36, definitions of racial profiling adopted by various international and regional human rights bodies have four common elements: racial profiling is

(a) committed by law enforcement authorities; (b) is not motivated by objective criteria or reasonable justification; (c) is based on grounds of race, colour, descent, national or ethnic origin or their intersection with other relevant grounds (…); (d) is used in specific contexts, such as controlling immigration and combating criminal activity.Footnote 14

However, the CERD itself then settles for a definition that omits the second of these elements—that is, the lack of objective criteria or reasonable justification. It understands racial profiling as

the practice of police and other law enforcement relying, to any degree, on race, colour, descent or national or ethnic origin as the basis for subjecting persons to investigatory activities or for determining whether an individual is engaged in criminal activity.Footnote 15

It is doubtful whether law enforcement practices can meaningfully be assessed and condemned without considering whether they are based on objective and reasonable grounds. The problem of racial profiling mainly concerns the use of predictive profiles—that is, profiles designed to identify those who may be involved in some future, or as-yet-undiscovered, crime. Far less problematic is the use of descriptive profiles—profiles that are designed to identify those likely to have committed a particular criminal act, and that reflect the evidence the investigators have gathered concerning this act.Footnote 16 Thus, in specific cases, it may be perfectly permissible for the police to rely on a person's skin color or ethnic origin—for example, if, based on witness statements, they search specifically for a suspect with, among other characteristics, a certain skin color. Yet, according to the CERD's definition, this would constitute racial profiling.

Obligations under the ICERD

The term “racial profiling” does not appear in the ICERD. Nevertheless, the CERD makes it clear that the ICERD requires states parties to prevent and remedy this practice. Thus, states are obliged to ensure that law enforcement officials and authorities do not engage in racial profiling; to provide effective remedies for individuals who have been subject to racial profiling; to guarantee the right to seek just and adequate reparation or satisfaction for damage suffered as a result of racial profiling; and to inform law enforcement officials of their obligation not to engage in racial profiling and to ensure they are sufficiently aware of how to avoid racial profiling practices.Footnote 17

Consequences of Racial Profiling

General Recommendation No. 36 highlights the devastating impact the use of racial profiling by law enforcement officials may have. First, racial profiling can have serious effects on people's lives. It not only violates their right to non-discrimination but may also affect other rights, such as the right to life, the right to privacy, freedom of movement, or the right to health.Footnote 18 This may result in a sense of injustice and humiliation. People subject to racial profiling tend to have less trust in law enforcement officials and, as a result, to be less willing to cooperate with them.Footnote 19 Second, racial profiling has far-reaching consequences for the criminal justice system as it can lead to, inter alia, the over-criminalization and disproportionate incarceration of certain categories of persons, the reinforcement of misleading stereotypical associations between crime and ethnicity, and the cultivation of abusive operational practices.Footnote 20

Algorithmic Profiling

A particularly pressing challenge is posed by the use of artificial intelligence and algorithmic profiling, which is increasingly influencing the decisions of law enforcement officials. General Recommendation No. 36 is the first document by an international human rights body to thoroughly address this challenge. While the UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, in a report published in June 2020, had already dealt with different forms of racial discrimination in the design and use of emerging digital technologies, that report did not focus specifically on law enforcement.Footnote 21

The CERD acknowledges that advances in technological development have the potential to increase the accuracy and effectiveness of law enforcement action, but rightly warns that they may also aggravate discriminatory practices. If the data used for algorithmic profiling systems are biased, such systems may reproduce, and thus reinforce, existing biases.Footnote 22 This is all the more problematic given that “discriminatory outcomes of algorithmic profiling can often be less obvious and more difficult to detect than those of human decisions and thus more difficult to contest.”Footnote 23 The CERD points to evidence of discriminatory outcomes of algorithmic profiling in the contexts of predictive policing,Footnote 24 sentencing by courts,Footnote 25 and facial recognition.Footnote 26 Finally, it also condemns the use of algorithms in DNA testing to determine the ethnicity or nationality of individuals.Footnote 27
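The mechanism the CERD warns about can be made concrete with a minimal, purely hypothetical sketch (all data, group labels, and figures below are invented for illustration): where a profiling system derives its "risk scores" from historically biased stop data, it simply returns that bias as an apparently objective prediction.

    # Hypothetical illustration: a "risk score" derived only from historically
    # biased stop data reproduces the bias, whatever the true offence rates are.
    from collections import Counter

    # Synthetic history: group_B was stopped three times as often, although the
    # (unobserved) offence rates of the two groups are assumed to be identical.
    historical_stops = Counter({"group_A": 100, "group_B": 300})
    population = {"group_A": 10_000, "group_B": 10_000}

    def naive_risk_score(group: str) -> float:
        """Score built from past stop frequency alone (a biased proxy)."""
        return historical_stops[group] / population[group]

    for group in population:
        print(group, round(naive_risk_score(group), 3))
    # Prints 0.01 for group_A and 0.03 for group_B: the historical over-policing
    # of group_B is returned as apparently "objective" risk.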

Recommendations

General Recommendation No. 36 concludes with a long list of detailed recommendations. The CERD calls on states to, for example, adopt laws and policies that define and prohibit racial profiling by law enforcement officials;Footnote 28 collect and monitor data on law enforcement practices, such as identity checks, traffic stops, and border controls, that are disaggregated by prohibited grounds of discrimination;Footnote 29 and create an independent mechanism for receiving complaints of racial discrimination and racial profiling.Footnote 30

The most substantial section of the CERD's recommendations concerns the use of artificial intelligence for law enforcement purposes. The CERD urges states to adopt legislative and other measures to prevent human rights violations where algorithmic profiling systems are deployed;Footnote 31 to ensure transparency in the design of algorithmic profiling systems and allow researchers and civil society to scrutinize their code;Footnote 32 to ensure that independent oversight bodies have a mandate to monitor the use of artificial intelligence tools by the public sector for compliance with the ICERD;Footnote 33 and to encourage companies that develop, sell, or operate algorithmic profiling systems for law enforcement purposes to carry out human rights due diligence processes.Footnote 34

Conclusion

As has been highlighted recently by the Black Lives Matter movement, racial profiling by law enforcement officials is a persistent and pervasive problem facing contemporary societies all over the world. It is to be welcomed that, like other human rights bodies before it, the CERD has now addressed this problem in a comprehensive way. While the CERD defines the term “racial profiling” in an imprecise and overly broad manner, its General Recommendation No. 36 still provides very helpful guidance, as it spells out in much detail how states should prevent and respond to racial profiling by law enforcement officials.

Many of the CERD's recommendations, such as those relating to training and recruitment of law enforcement officials,Footnote 35 reflect earlier recommendations by other human rights bodies.Footnote 36 In contrast, with its thorough analysis of the role played by artificial intelligence in racial profiling practices, the CERD breaks new ground. Its long list of very specific recommendations on how to ensure that algorithmic profiling systems used for law enforcement purposes comply with human rights law points the way for similar guidelines to come.

GENERAL RECOMMENDATION NO. 36 (2020) ON PREVENTING AND COMBATING RACIAL PROFILING BY LAW ENFORCEMENT OFFICIALS (C.E.R.D.)

This text was reproduced and reformatted from the text available at the UN Treaty Body Database website (visited January 7, 2022), https://tbinternet.ohchr.org/_layouts/15/treatybodyexternal/Download.aspx?symbolno=INT%2fCERD%2fGEC%2f7503&Lang=en.

Committee on the Elimination of Racial Discrimination

General recommendation No. 36 (2020) on preventing and combating racial profiling by law enforcement officialsFootnote **


I. Introduction

1. At its ninety-second session, the Committee on the Elimination of Racial Discrimination decided to hold a discussion on the theme “Racial discrimination in today's world: racial profiling, ethnic cleansing and current global issues and challenges”. The thematic discussion took place in Geneva on 29 November 2017 and was focused on analysing the experiences, challenges and lessons learned in working to combat racial profiling and ethnic cleansing to date and on how the Committee could strengthen its work against racial profiling and ethnic cleansing, for greater impact on the ground.

2. Following the discussion, the Committee expressed its intention to work on drafting a general recommendation to provide guidance on preventing and combating racial profiling in order to assist States parties in discharging their obligations, including reporting obligations. The present general recommendation is of relevance to all stakeholders in the fight against racial discrimination, and through its publication the Committee seeks to contribute to the strengthening of democracy, the rule of law, and peace and security among communities, peoples and States.

3. At its ninety-eighth session, the Committee began deliberations with a view to drafting a general recommendation on preventing and combating racial profiling, in consultation with all interested parties.Footnote 1 The Committee also held debates with academics from various fields, with an emphasis on the implications of artificial intelligence on racial profiling.

II. Established principles and practice

4. In drafting the present general recommendation, the Committee has taken account of its extensive practice in addressing racial profiling by law enforcement officials, primarily in the context of the review of State party reports and in key general recommendations. The Committee explicitly addressed the issue of racial profiling in its general recommendation No. 30 (2004) on discrimination against non-citizens, in which it recommended that States ensure that any measures taken in the fight against terrorism did not discriminate, in purpose or effect, on the grounds of race, colour, descent, or national or ethnic origin and that non-citizens were not subjected to racial or ethnic profiling or stereotyping (para. 10); in its general recommendation No. 31 (2005) on the prevention of racial discrimination in the administration and functioning of the criminal justice system, in which the Committee recommended that States parties take the necessary steps to prevent questioning, arrests and searches which are in reality based solely on the physical appearance of a person, that person's colour or features or membership of a racial or ethnic group, or any profiling which exposes him or her to greater suspicion (para. 20); and in general recommendation No. 34 (2011) on racial discrimination against people of African descent, in which the Committee recommended that States take resolute action to counter any tendency to target, stigmatize, stereotype or profile people of African descent on the basis of race, by law enforcement officials, politicians and educators (para. 31). Other recommendations are also relevant to racial profiling, such as general recommendation No. 13 (1993) on the training of law enforcement officials in the protection of human rights, in which the Committee stressed that law enforcement officials should receive training to ensure they uphold the human rights of all persons without distinction as to race, colour or national or ethnic origin (para. 2); general recommendation No. 23 (1997) on the rights of indigenous peoples, in which the Committee stressed that indigenous peoples should be free from any discrimination, in particular that based on indigenous origin or identity (para. 4 (b)); general recommendation No. 27 (2000) on discrimination against Roma, in which the Committee recommended that States, taking into account their specific situations, take measures to prevent illegal use of force by police against Roma, particularly in connection with arrest and detention (para. 13), and to build trust between Roma communities and the police; general recommendation No. 32 (2009) on the meaning and scope of special measures in the Convention, in which the Committee mentioned the concept of “intersectionality”, whereby the Committee addressed situations of double or multiple discrimination – such as discrimination on grounds of gender or religion – when discrimination on such a ground appeared to exist in combination with a ground or grounds listed in article 1 of the Convention (para. 7); and general recommendation No. 35 (2013) on combating racist hate speech.

5. The Committee, in its concluding observations, has repeatedly expressed concern about the use of racial profiling by law enforcement officials and has recommended that States parties take measures to put an end to the practice.Footnote 2

6. In addition, several other international human rights mechanisms have explicitly highlighted racial profiling as a violation of international human rights law. In 2009, through its decision in the case Williams Lecraft v. Spain,Footnote 3 the Human Rights Committee became the first treaty body to directly acknowledge racial profiling as unlawful discrimination. In more recent concluding observations, the Human Rights Committee has regularly expressed concern at the continuous practice of racial profiling by law enforcement officials, targeting in particular specific groups such as migrants, asylum seekers, people of African descent, indigenous peoples, and members of religious and ethnic minorities, including Roma;Footnote 4 the concern has been echoed by the Committee against Torture.Footnote 5

7. In the Durban Programme of Action, adopted by Member States at the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance, held in Durban, South Africa, in 2001, States were urged to design, implement and enforce effective measures to eliminate racial profiling, comprising the practice of police and other law enforcement officials relying, to any degree, on race, colour, descent or national or ethnic origin as the basis for subjecting persons to investigatory activities or for determining whether an individual was engaged in criminal activity (para. 72).

8. In a report submitted to the Human Rights Council in 2007, the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism noted that, since 11 September 2001, law enforcement authorities in various States had adopted measures on the basis of terrorist profiles, which included characteristics such as a person's presumed race, ethnicity, national origin or religion. The Special Rapporteur stressed that terrorist-profiling practices based on “race” were incompatible with human rights principles, were unsuitable and ineffective means of identifying potential terrorists, and entailed considerable negative consequences that might render such measures counterproductive in the fight against terrorism.Footnote 6

9. The present general recommendation is also drafted within the framework of, and as a contribution to, the implementation of the 2030 Agenda for Sustainable Development, with its overarching commitments to leave no one behind and reach the furthest behind first, which provide critical entry points to and opportunities for the Committee's work, particularly with regard to Sustainable Development Goal 10, on reducing inequality within and among countries, and Goal 16, on promoting peaceful and inclusive societies for sustainable development, providing access to justice for all and building effective, accountable and inclusive institutions at all levels.

III. Scope

10. The Committee has often expressed its concern about the use of racial profiling by law enforcement officials targeting various minority groups based on specific characteristics, such as a person's presumed race, skin colour, descent or national or ethnic origin. The Committee has expressed concern at reports of law enforcement officials, such as police officers and border control officials, while carrying out their duties, engaging in arbitrary police stops, arbitrary identity checks, random inspections of objects in the possession of any person in railway stations, trains and airports, and arbitrary arrests.Footnote 7 The Committee has noted with concern that racial profiling has increased owing to contemporary concerns about terrorism and migration that exacerbate prejudice and intolerance towards members of certain ethnic groups.

11. The Committee has recognized that specific groups, such as migrants, refugees and asylum seekers, people of African descent, indigenous peoples, and national and ethnic minorities, including Roma, are the most vulnerable to racial profiling.Footnote 8

12. In addition, the Committee observes that the increasing use of new technological tools, including artificial intelligence, in areas such as security, border control and access to social services, has the potential to deepen racism, racial discrimination, xenophobia and other forms of exclusion. However, in the present general recommendation, the Committee focuses on algorithmic decision-making and artificial intelligence in relation to racial profiling by law enforcement officials; therefore, many other topics related to potentially harmful artificial intelligence are outside of its scope. While aware that, in some areas, artificial intelligence can contribute to greater effectiveness in a number of decision-making processes, the Committee also realizes that there is a real risk of algorithmic bias when artificial intelligence is used in decision-making in the context of law enforcement. Algorithmic profiling raises serious concerns, and the consequences with regard to the rights of the victims could be very serious.

IV. Defining and understanding racial profiling

13. There is no universal definition of racial profiling in international human rights law. However, as a persistent phenomenon in all regions of the world, various international and regional human rights bodies and institutions have adopted definitions of racial profiling, which have a number of common elements. Racial profiling: (a) is committed by law enforcement authorities; (b) is not motivated by objective criteria or reasonable justification; (c) is based on grounds of race, colour, descent, national or ethnic origin or their intersection with other relevant grounds, such as religion, sex or gender, sexual orientation and gender identity, disability and age, migration status, or work or other status; (d) is used in specific contexts, such as controlling immigration and combating criminal activity, terrorism or other activities that allegedly violate or may result in the violation of the law.

14. Racial profiling is committed through behaviour or through acts such as arbitrary stops, searches, identity checks, investigations and arrests.

15. The Inter-American Commission on Human Rights has defined racial profiling as a tactic adopted for supposed reasons of public safety and protection motivated by stereotypes based on race, colour, ethnicity, language, descent, religion, nationality or place of birth, or a combination of these factors, rather than on objective suspicions, which tends to single out individuals or groups in a discriminatory way based on the erroneous assumption that people with such characteristics are prone to engage in specific types of crimes.Footnote 9 The Arab Human Rights Committee has submitted that racial profiling can be defined as the use by law enforcement agents of generalizations or stereotypes related to presumed race, colour, descent, nationality, place of birth, or national or ethnic origin – rather than objective evidence or individual behaviour – as a basis for identifying a particular individual as being, or having been, engaged in a criminal activity, resulting in discriminatory decision-making.Footnote 10 In its general policy recommendation No. 11 on combating racism and racial discrimination in policing, adopted in 2007, the European Commission against Racism and Intolerance defined racial profiling as the use by the police, with no objective and reasonable justification, of grounds such as race, colour, language, religion, nationality or national or ethnic origin in control, surveillance or investigative activities.

16. In a report submitted to the Human Rights Council in 2015, the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance indicated that racial and ethnic profiling by law enforcement officials was commonly understood to mean a reliance by law enforcement, security and border control personnel on race, colour, descent or national or ethnic origin as a basis for subjecting persons to detailed searches, identity checks and investigations, or for determining whether an individual was engaged in criminal activity.Footnote 11

17. The United Nations High Commissioner for Human Rights has stated that racial profiling refers to the process by which law enforcement relies on generalizations based on race, colour, descent or national or ethnic origin, rather than objective evidence or individual behaviour, to subject people to stops, detailed searches, identity checks and investigations, or for deciding that an individual was engaged in criminal activity. Racial profiling, then, results in discriminatory decision-making. The High Commissioner has further pointed out that, whether arising from the attitudes and practices of individual officers or the discriminatory culture or policies of law enforcement agencies, racial profiling is a long-standing practice in many agencies.Footnote 12

18. For the purposes of the present general recommendation, racial profiling is understood as it is described in paragraph 72 of the Durban Programme of Action, that is, the practice of police and other law enforcement relying, to any degree, on race, colour, descent or national or ethnic origin as the basis for subjecting persons to investigatory activities or for determining whether an individual is engaged in criminal activity. In this context, racial discrimination often intersects with other grounds, such as religion, sex and gender, sexual orientation and gender identity, disability, age, migration status, and work or other status.

19. Racial profiling by law enforcement officials may also include raids, border and custom checks, home searches, targeting for surveillance, operations to maintain or re-establish law and order or immigration decisions. These actions may variously take place in the context of street policing and antiterrorism operations.Footnote 13

20. Racial profiling is linked to stereotypes and biases, which can be conscious or unconscious, and individual or institutional and structural. Stereotyping becomes a violation of international human rights law when stereotypical assumptions are put into practice to undermine the enjoyment of human rights.Footnote 14

V. Principles and general obligations under the Convention

21. The identification, prevention and combating of the practice of racial profiling by law enforcement officials is integral to the achievement of the objectives of the International Convention on the Elimination of All Forms of Racial Discrimination. The practice of racial profiling by law enforcement officials violates fundamental principles of human rights, which rest on: (a) non-discrimination based on grounds of race, colour, descent, or national or ethnic origin, or other intersecting grounds; and (b) equality before the law. It may also violate due process and fair trial rights. These principles and rights are the anchors of the Universal Declaration of Human Rights (arts. 2 and 7) and the Convention (arts. 2 and 5 (a)).

22. In the preamble to the Convention it is emphasized that all human beings are equal before the law and are entitled to equal protection of the law against any discrimination and against any incitement to discrimination. While the term racial profiling is not explicitly referred to in the Convention, this has not impeded the Committee from identifying racial profiling practices and exploring the relationship between racial profiling and the standards set out in the Convention.

23. Under article 2 of the Convention, each State undertakes to engage in no act or practice of racial discrimination against persons, groups of persons or institutions and to ensure that all public authorities and institutions, national and local, act in conformity with this obligation. As racial profiling is a practice that has the potential to promote and perpetuate racist incidents and racial prejudice and stereotypes,Footnote 15 it runs counter to the very idea of the Convention. Accordingly, States parties are obliged to review their policies, laws and regulations with a view to ensuring that racial profiling does not take place and is not facilitated. States parties are obliged to actively take steps to eliminate discrimination through laws, policies and institutions. The prohibition on engaging in acts of racial profiling and the obligation to ensure that public authorities and institutions do not apply practices of racial profiling are furthermore derived from article 5 of the Convention. The practice of racial profiling is incompatible with the right of everyone, without distinction as to race, colour, or national or ethnic origin, to equality before the law and to equal treatment. It is, furthermore, incompatible with the non-discriminatory guarantee of other civil rights, such as the right to freedom of movement.

24. Under article 6 of the Convention, States parties have an obligation to assure to everyone within their jurisdiction effective protection against any acts of racial discrimination. Accordingly, States parties must take preventive measures in order to ensure that public authorities and public institutions do not engage in practices of racial profiling. Article 6 also requires States parties to ensure to everyone within their jurisdiction effective remedies against any act of racial discrimination. States parties are obliged to ensure that their domestic legal order contains adequate and effective mechanisms through which to assert that racial profiling has taken place and to bring such a practice to an end. States parties must furthermore guarantee the right to seek just and adequate reparation or satisfaction for damage suffered as a result of racial discrimination in the form of racial profiling. They must ensure that this right can be enforced in an effective manner. In light of the fact that the practice of racial profiling regularly affects members of a particular group or groups, States parties are encouraged to consider establishing mechanisms for the collective enforcement of rights in the context of racial profiling.

25. Article 7 of the Convention highlights the role of teaching, education, culture and information in combating racial discrimination. With regard to racial profiling, the fulfilment of the obligation of States parties not to engage in acts of racial discrimination depends upon the conduct of public authorities and public institutions. It is therefore of paramount importance that national law enforcement officials in particular are properly informed of their obligations.Footnote 16 Since racial profiling is often the result of well-established and unchallenged practices of public authorities and public institutions, States parties must ensure that national law enforcement officials are sufficiently aware of how to avoid engaging in practices of racial profiling. Raising such awareness can help to prevent the implementation of racial profiling practices and to overcome them where they are entrenched. Accordingly, States parties should ensure that the personnel of public authorities and institutions who engage in law enforcement are properly trained to ensure that they do not engage in practices of racial profiling.

VI. Consequences of racial profiling

26. Racial profiling has negative and cumulative effects on the attitudes and well-being of individuals and communities,Footnote 17 given that a person may be regularly subjected to racial profiling in his or her daily life. Victims of racial profiling often understate and interiorize its impact in the face of a lack of effective remedies and restorative tools. In addition to being unlawful, racial profiling may also be ineffective and counterproductive as a general law enforcement tool. People who perceive that they have been subjected to discriminatory law enforcement actions tend to have less trust in law enforcement and, as a result, tend to be less willing to cooperate, thereby potentially limiting the effectiveness of law enforcement. Racial profiling practices influence daily routines of law enforcement and undermine, whether through conscious or unconscious actions, the capacity to support victims of crimes belonging to the affected communities. A sense of injustice and humiliation, the loss of trust in law enforcement, secondary victimization, fear of reprisals and limited access to information about legal rights or assistance may result in reduced reporting of crimes and reduced information for intelligence purposes.

27. Racial profiling and hate speech are closely interrelated, and the Committee has often addressed those two forms of discrimination simultaneously.Footnote 18 The dissemination of ideas based on racial or ethnic hatred, the persistent use of hate speech in the media and the use of racist political discourse by public officials exacerbate discrimination and stereotyping by law enforcement officers. Ethnic groups that are subjected to hate speech will also become targets of racial profiling. Moreover, racial profiling by law enforcement portrays groups that face racial discrimination as more prone to commit crimes, which will influence the public discourse and increase the dissemination of racist hatred.

28. Racial profiling may also have a negative impact on people's enjoyment of civil and political rights, including the rights to life (article 6 of the International Covenant on Civil and Political Rights), liberty and security of person (art. 9), privacy (art. 17) and liberty of movement (art. 12), freedom of association (art. 22) and to an effective remedy (art. 2 (3)).

29. The full enjoyment of people's economic, social and cultural rights, such as the right to adequate housing (article 11 of the International Covenant on Economic, Social and Cultural Rights), health (art. 12), education (arts. 13–14) and work (art. 6), could also be affected by racial profiling.Footnote 19

30. Racial profiling by law enforcement officials has far-reaching consequences at all levels of administration of the justice system, particularly in the criminal justice system. Racial profiling can lead to, among other things: (a) the overcriminalization of certain categories of persons protected under the Convention; (b) the reinforcement of misleading stereotypical associations between crime and ethnicity and the cultivation of abusive operational practices; (c) disproportionate incarceration rates for groups protected under the Convention; (d) the higher vulnerability of persons belonging to groups protected under the Convention to abuse of force or authority by law enforcement officials; (e) the underreporting of acts of racial discrimination and hate crimes; and (f) the handing down by courts of harsher sentences against members of targeted communities.

VII. Algorithmic profiling and racial bias and discrimination

31. Owing to rapid advances in technological development, the actions of law enforcement officials are increasingly determined or informed by algorithmic profiling,Footnote 20 which may include big data, automated decision-making and artificial intelligence tools and methods.Footnote 21 While such advances have the potential to increase the accuracy, effectiveness and efficiency of the decisions and actions of law enforcement officials, there is a great risk that they may also reproduce and reinforce biases and aggravate or lead to discriminatory practices.Footnote 22 Given the opacity of algorithmic analytics and decision-making, in particular when artificial intelligence methods are employed, discriminatory outcomes of algorithmic profiling can often be less obvious and more difficult to detect than those of human decisions and thus more difficult to contest.Footnote 23 In addition, human rights defenders generally are not adequately equipped technologically to identify such discriminatory methods.

32. There are various entry points through which bias could be ingrained into algorithmic profiling systems, including the way in which the systems are designed, decisions as to the origin and scope of the datasets on which the systems are trained, societal and cultural biases that developers may build into those datasets, the artificial intelligence models themselves and the way in which the outputs of the artificial intelligence model are implemented in practice.Footnote 24 In particular, the following data-related factors may contribute to negative outcomes: (a) the data used include information concerning protected characteristics; (b) so-called proxy information is included in the data, for example, postal codes linked to segregated areas in cities often indirectly indicate race or ethnic origin; (c) the data used are biased against a group;Footnote 25 and (d) the data used are of poor quality, including because they are poorly selected, incomplete, incorrect or outdated.Footnote 26
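By way of illustration only, the proxy problem referred to in subparagraph (b) can be sketched with synthetic data (the postal codes, group shares, and figures are invented): deleting the protected attribute from a dataset does not remove it where residential segregation means that a postal code encodes much the same information.

    # Hypothetical illustration of "proxy information" (subpara. (b) above):
    # in a residentially segregated city, the postal code alone reveals the
    # protected attribute with high probability.
    import random

    random.seed(0)
    records = []
    for _ in range(1000):
        ethnicity = random.choice(["majority", "minority"])
        # Synthetic segregation: 90% of each group lives in "its own" district.
        if ethnicity == "minority":
            postcode = "1000" if random.random() < 0.9 else "2000"
        else:
            postcode = "2000" if random.random() < 0.9 else "1000"
        records.append((postcode, ethnicity))

    def guess_from_postcode(postcode: str) -> str:
        return "minority" if postcode == "1000" else "majority"

    accuracy = sum(guess_from_postcode(p) == e for p, e in records) / len(records)
    print(f"protected attribute recoverable from postcode alone: {accuracy:.0%}")
    # Roughly 90%: a system trained without the ethnicity column can still
    # profile on it indirectly through the postcode.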

33. Particular risks emerge when algorithmic profiling is used for determining the likelihood of criminal activity either in certain localities, or by certain groups or even individuals. Predictive policing that relies on historical data for predicting possible future events can easily produce discriminatory outcomes, in particular when the datasets used suffer from one or more of the flaws described above.Footnote 27 For example, historical arrest data about a neighbourhood may reflect racially biased policing practices. If fed into a predictive policing model, use of these data poses a risk of steering future predictions in the same, biased direction, leading to overpolicing of the same neighbourhood, which in turn may lead to more arrests in that neighbourhood, creating a dangerous feedback loop.
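The feedback loop described in this paragraph can be illustrated with a deliberately simplified, hypothetical simulation (both districts are given the same true offence rate; only the starting records differ, and all figures are invented).

    # Hypothetical simulation of the feedback loop described above: each day a
    # patrol is sent to the district with the most recorded arrests, and
    # offences are only detected where the patrol actually goes.
    import random

    random.seed(1)
    true_offence_rate = {"north": 0.3, "south": 0.3}  # identical by design
    recorded_arrests = {"north": 6, "south": 4}       # slightly biased history

    for _day in range(365):
        predicted_hotspot = max(recorded_arrests, key=recorded_arrests.get)
        if random.random() < true_offence_rate[predicted_hotspot]:
            recorded_arrests[predicted_hotspot] += 1

    print(recorded_arrests)
    # Typically something like {'north': 115, 'south': 4}: the initial bias
    # decides where the police look, and what they record there appears to
    # "confirm" the prediction, while the south's offences go unrecorded.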

34. Similar mechanisms have been reported to be present in judicial systems.Footnote 28 When applying a sanction, or deciding whether someone should be sent to prison, be released on bail or receive another punishment, States are increasingly resorting to the use of algorithmic profiling, in order to foresee the possibilities that an individual may commit one or several crimes in the future. Authorities gather information regarding the criminal history of the individual, their family and their friends and their social conditions, including their work and academic history, in order to assess the degree of “danger” posed by the person from a score provided by the algorithm, which usually remains secret. This use of algorithmic profiling raises concerns similar to those described in paragraph 33 above.

35. The increasing use of facial recognition and surveillance technologies to track and control specific demographic groups raises concerns with respect to many human rights, including the right to privacy, freedom of peaceful assembly and association, freedom of expression and freedom of movement. It is designed to automatically identify individuals based on their facial geometry,Footnote 29 potentially profiling people based on grounds of discrimination such as race, colour, national or ethnic origin or gender.Footnote 30 Cameras equipped with real-time facial recognition technology are widely applied for the purpose of flagging and tracking of individuals,Footnote 31 which may enable Governments and others to keep records of the movements of large numbers of individuals, possibly based on protected characteristics.Footnote 32 Moreover, it has been demonstrated that the accuracy of facial recognition technology may differ depending on the colour, ethnicity or gender of the persons assessed, which may lead to discrimination.Footnote 33
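The accuracy disparity referred to at the end of this paragraph only becomes visible when error rates are disaggregated by group; a minimal, hypothetical sketch of such a disaggregated check, using invented evaluation records, is set out below.

    # Hypothetical illustration: an aggregate accuracy figure can hide a much
    # higher false-match rate for one demographic group.
    from collections import defaultdict

    # Synthetic evaluation records: (group, was_a_true_match, system_said_match)
    evaluations = (
        [("group_A", False, False)] * 960 + [("group_A", False, True)] * 10
        + [("group_A", True, True)] * 30
        + [("group_B", False, False)] * 900 + [("group_B", False, True)] * 70
        + [("group_B", True, True)] * 30
    )

    false_matches = defaultdict(int)
    non_matches = defaultdict(int)
    for group, truth, declared in evaluations:
        if not truth:
            non_matches[group] += 1
            if declared:
                false_matches[group] += 1

    for group in sorted(non_matches):
        rate = false_matches[group] / non_matches[group]
        print(f"{group}: false match rate = {rate:.1%}")
    # group_A: 1.0%, group_B: 7.2% -- only the disaggregated figures reveal
    # that one group is far more likely to be wrongly flagged.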

36. In some instances, algorithms are being employed in DNA testing to determine the ethnicity or nationality of individuals. The results of such DNA testing can lead to profiling. The Committee notes, in line with consensus among the scientific community, that there are no direct linkages between an individual's DNA composition and their ethnicity or nationality. Therefore, the Committee condemns the use of DNA profiling by States and law enforcement authorities, especially border security. Additionally, results of DNA profiling have been used by law enforcement authorities to make false claims that certain ethnic minorities are more prone to violence and, in turn, those groups have been subjected to discriminatory police practices.Footnote 34

VIII. Recommendations

37. A variety of strategies have been adopted by Governments, law enforcement agencies and civil society organizations to counter the problem of racial profiling. The Committee is of the view that those strategies provide the basis for its recommendations to States and other actors.

A. Legislative and policy-related measures

38. As a prerequisite, and without prejudice to further measures, comprehensive legislation against racial discrimination, including civil and administrative law as well as criminal law, is indispensable to combating racial profiling effectively. States should develop and effectively implement laws and policies that define and prohibit racial profiling by law enforcement officials. Such measures should be accompanied by clear guidance for law enforcement agencies, ensuring that internal policies, including standard operating procedures and codes of conduct, are in line with human rights standards and principles. States should also be aware of laws and regulations that potentially enable or facilitate racial profiling. They should conduct studies to identify such laws and amend or repeal them accordingly.

39. States should ensure that law enforcement agencies develop, in consultation with relevant groups, detailed guidelines for stop-and-search practices with precise standards, in order to prevent racial profiling. They should establish effective, independent monitoring mechanisms, both internal and external, and envisage disciplinary measures for application in cases of misconduct. They should also carry out periodic audits, with the help of independent experts, to identify gaps in internal policies and practices. Transparency around the outcomes of such procedures is strongly recommended, as it may strengthen law enforcement accountability and trust among targeted individuals and communities.

40. In accordance with article 6 of the Convention, States must assure to everyone within their jurisdiction effective protection and remedies against any acts of racial discrimination which violate his or her human rights and fundamental freedoms contrary to the Convention, as well as the right to seek just and adequate reparation or satisfaction for any damage suffered as a result of such discrimination.

41. States are encouraged to adopt victim-centred approaches and to coordinate their support services effectively by promoting models of cooperation among the authorities, communities, civil society organizations, including those representing groups experiencing intersecting forms of discrimination, and national human rights institutions. The Committee stresses the interconnection between articles 5 (a) and 6 of the Convention and notes that judicial authorities and other organs administering justice should be effectively consulted and involved in such processes to prevent the perpetuation of a racial profiling effect in criminal proceedings.

B. Human rights education and training

42. States should develop specialized, mandatory training programmes for law enforcement agencies that raise awareness among law enforcement officials about the impact of biases on their work and that demonstrate how to ensure non-discriminatory conduct. Stigmatized groups, including those whose members experience intersecting forms of discrimination, should be engaged in the development and delivery of such training, where possible. Law enforcement agencies should ensure that in-service training to counter discrimination and bias-based policing is complemented by institutional interventions aimed at limiting discretion and increased oversight in areas vulnerable to stereotyping and biases. In addition, given concerns about the limitations of training on changing attitudes and behaviour, non-discrimination and bias training should be regularly evaluated and updated to ensure that it has the desired impact.

43. Both artificial intelligence experts and officials who interpret data must have a clear understanding of fundamental rights in order to avoid the entry of data that may contain or result in racial bias. States should provide training on racism and racial discrimination for experts and officials who interpret data, judicial officers and law enforcement officers, among others. States should develop procurement policies based on mandatory terms prohibiting racial discrimination.

44. States, in cooperation with national human rights institutions and specialized entities, should promote the training of civil society organizations on algorithmic bias and emerging technologies.

45. Human rights education and training are vital to ensuring that police officers do not discriminate. National human rights institutions, in cooperation with civil society organizations, can play a central role in training law enforcement officials, in auditing new technological tools that could lead to discrimination and in identifying other risks in practice.Footnote 35

C. Recruitment measures

46. States should ensure that law enforcement agencies develop recruitment, retention and advancement strategies that promote a diverse workforce that reflects the composition of the populations they serve. Such strategies could include setting internal quotas and developing a recruitment programme for ethnic minorities. This has the potential to influence the culture of agencies and the attitudes of staff with a view to producing less biased decision-making.

47. States should ensure that law enforcement agencies regularly evaluate recruitment and promotion policies and, if necessary, undertake temporary special measures to effectively address the underrepresentation of various national or ethnic minority groups and of groups experiencing intersecting forms of discrimination based on, inter alia, religion, sex and gender, sexual orientation, disability and age.

D. Community policing

48. States should ensure that law enforcement agencies develop strategies for effective engagement with individuals and groups facing racial discrimination that take into account the unique context, dynamics and needs of different communities. This should help to improve communication and reduce levels of distrust and of racial profiling. Police-community dialogue should be expanded beyond community leaders, as many groups, including women, are underrepresented at the community leadership level and may need dedicated and sensitive outreach efforts. Young people who are most commonly targeted by police would be a key example.

49. States should adopt measures to ensure that public information from the police and other law enforcement agencies is based on reliable and objective statistics and does not perpetuate stereotypes and bias against ethnic groups that are subjected to discrimination. In addition, States should refrain from releasing personal data about alleged perpetrators that are linked to presumed race, colour, descent or national or ethnic origin, unless such disclosure is strictly necessary and serves a legitimate purpose, such as in the case of a wanted notice.

E. Disaggregated data

50. States should regularly collect and monitor disaggregated quantitative and qualitative data on relevant law enforcement practices, such as identity checks, traffic stops and border searches, which include information on the prohibited grounds for racial discrimination, including its intersecting forms, as well as the reason for the law enforcement action and the outcome of the encounter. The anonymized statistics generated by such practices should be made available to the public and discussed with local communities. The data should be collected in accordance with human rights standards and principles, data protection regulations and privacy guarantees. This information must not be misused.
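A purely illustrative sketch of what such disaggregated monitoring might look like in practice is set out below; the record fields, group labels, and figures are invented and are not drawn from the Committee's text.

    # Hypothetical sketch of disaggregated monitoring: anonymized stop records
    # (invented fields and values) summarized by group, reason and outcome.
    from collections import Counter

    stops = [
        ("group_A", "traffic stop", "no further action"),
        ("group_A", "search", "item found"),
        ("group_B", "identity check", "no further action"),
        ("group_B", "identity check", "no further action"),
        ("group_B", "search", "no further action"),
        ("group_B", "identity check", "arrest"),
    ]

    stops_per_group = Counter(group for group, _, _ in stops)
    outcomes = Counter((group, outcome) for group, _, outcome in stops)

    for group, total in sorted(stops_per_group.items()):
        no_action = outcomes[(group, "no further action")]
        print(f"{group}: {total} stops, {no_action / total:.0%} with no further action")
    # Publishing such anonymized breakdowns lets communities and oversight
    # bodies see whether one group is stopped far more often to no effect.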

51. States should also guard against forms of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.Footnote 36

F. Accountability

52. States should create a reporting mechanism, independent of law enforcement and other related agencies, for receiving complaints of racial discrimination, racism and racial and ethnic profiling from citizens.Footnote 37 Such a mechanism must have the power to promptly and effectively investigate allegations and work in concert with civil society and human rights monitoring bodies. It must also report publicly on its findings in accordance with data protection regulations and human rights standards. Such mechanisms should take into account the special needs of persons with disabilities in cases of intersectional discrimination.

53. States should establish oversight mechanisms, both within and external to law enforcement bodies, in order to prevent discriminatory behaviour; such mechanisms should develop internal guidelines, policies and regulations to combat and prevent racial profiling and ensure internal accountability by taking disciplinary action against officials who violate them.

54. Incidents of racial profiling by law enforcement agencies should be investigated effectively, in accordance with international human rights standards. Those responsible should be prosecuted and, if convicted, they should be sanctioned with appropriate penalties and compensation should be granted to victims.

55. States should ensure that senior officials within law enforcement agencies promote non-discriminatory policies and practices within their agencies, rigorously monitor the conduct of staff and hold staff accountable for misconduct through the internal, independent oversight mechanism.Footnote 38 These actions can be supported through the availability of data on and analysis of the decision-making and practices of staff. Senior officials should also review the impact of the application of legislation and operations, including those for countering terrorism, which may have a disproportionate impact on marginalized groups and communities.

56. National human rights institutions and civil society organizations are encouraged to monitor incidents of racial profiling and assist victims of such profiling. They should increase public awareness, publicize findings, lobby for reforms and engage constructively with law enforcement agencies and other national and local institutions.

57. International and regional human rights mechanisms, national human rights institutions and equality bodies, civil society groups and members of the public should have the possibility to make complaints regarding discriminatory practices of law enforcement agencies. Members of the public should be able to file complaints through independent mechanisms.

G. Artificial intelligence

58. States should ensure that algorithmic profiling systems used for the purposes of law enforcement are in full compliance with international human rights law. To that effect, before procuring or deploying such systems States should adopt appropriate legislative, administrative and other measures to determine the purpose of their use and to regulate as accurately as possible the parameters and guarantees that prevent breaches of human rights. Such measures should, in particular, be aimed at ensuring that the deployment of algorithmic profiling systems does not undermine the right not to be discriminated against, the right to equality before the law, the right to liberty and security of person, the right to the presumption of innocence, the right to life, the right to privacy, freedom of movement, freedom of peaceful assembly and association, protections against arbitrary arrest and other interventions, and the right to an effective remedy.

59. States should carefully assess the potential human rights impact prior to employing facial recognition technology, which can lead to misidentification owing to a lack of representation in data collection. Before national deployment, States should consider a pilot period under the supervision of an independent oversight body that is inclusive of individuals who reflect the diverse composition of the population, to mitigate against any potential instances of misidentification and profiling based on skin colour.

60. States should ensure that algorithmic profiling systems deployed for law enforcement purposes are designed for transparency, and should allow researchers and civil society to access the code and subject it to scrutiny. There should be continual assessment and monitoring of the human rights impact of those systems throughout their life cycle, and States should take appropriate mitigation measures if risks or harms to human rights are identified. Those processes should examine potential and actual discriminatory effects of algorithmic profiling based on grounds of race, colour, descent, or national or ethnic origin and their intersection with other grounds, including religion, sex and gender, sexual orientation and gender identity, disability, age, migration status and work or other status. They should be conducted prior to the development or acquisition of such systems, wherever possible, and at the very least prior to and during the full period of the use of the systems. Such processes should include community impact assessments. Groups that are potentially or actually affected and relevant experts should be included in the assessment and mitigation processes.
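One possible way, among many, in which such a continual assessment could quantify potential discriminatory effects is a simple comparison of flag rates across groups; the following hypothetical sketch uses invented figures and an illustrative review threshold, not a legal standard.

    # Hypothetical monitoring check: compare how often an algorithmic profiling
    # system flags people in each group, relative to the least-flagged group.
    flagged = {"group_A": 40, "group_B": 90}
    screened = {"group_A": 1000, "group_B": 1000}
    REVIEW_RATIO = 1.25  # illustrative review trigger, not a legal standard

    rates = {group: flagged[group] / screened[group] for group in screened}
    lowest = min(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / lowest
        status = "disparity - review" if ratio > REVIEW_RATIO else "comparable"
        print(f"{group}: flag rate {rate:.1%}, {ratio:.2f}x the lowest -> {status}")
    # group_B is flagged 2.25x as often as group_A, which the ongoing
    # assessment would need to investigate and, if discriminatory, mitigate.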

61. States should take all appropriate measures to ensure transparency in the use of algorithmic profiling systems. This includes public disclosure of the use of such systems and meaningful explanations of the ways in which the systems work, the data sets that are being used, and the measures in place to prevent or mitigate human rights harms.

62. States should adopt measures to ensure that independent oversight bodies have a mandate to monitor the use of artificial intelligence tools by the public sector, and to assess them against criteria developed in conformity with the Convention to ensure they are not entrenching inequalities or producing discriminatory results. States should also ensure that the functioning of such systems is regularly monitored and evaluated in order to assess deficiencies and to take the necessary corrective measures. When the results of an assessment of a technology indicate a high risk of discrimination or other human rights violations, States should take measures to avoid the use of such a technology.

63. States should adopt measures to ensure that private sector design, deployment and implementation of artificial intelligence systems in the area of law enforcement comply with human rights standards. States should also ensure the adoption and periodic revision of guidelines and codes of conduct that companies must observe in the programming, use and commercialization of algorithms that may lead to racial discrimination or, in general, any other form of discrimination likely to be in violation of the Convention.

64. States should adopt regulations ensuring that public sector bodies, private business enterprises and other relevant organizations, in the process of developing, learning, marketing and using algorithms: (a) comply with the principle of equality and non-discrimination, and respect human rights in general, in line with the Guiding Principles on Business and Human Rights (in particular guiding principles 1–3, 11 and 24); (b) respect the precautionary principle and any administrative or legislative measure enacted to ensure transparency; (c) disclose publicly whether law enforcement has access to private data on individuals; and (d) avoid causing disparate or disproportionate impact on the social groups protected by the Convention.

65. States should ensure that all instances of algorithmic bias are duly investigated and that sanctions are imposed.

66. States should ensure that companies that are developing, selling or operating algorithmic profiling systems for law enforcement purposes have a responsibility to involve individuals from multiple disciplines, such as sociology, political science, computer science and law, to define the risks to, and ensure respect for, human rights. To that end, States should encourage companies to carry out human rights due diligence processes, which entail: (a) conducting assessments to identify and assess any actual or potentially adverse human rights impacts; (b) integrating those assessments and taking appropriate action to prevent and mitigate adverse human rights impacts that have been identified; (c) tracking the effectiveness of their efforts; and (d) reporting formally on how they have addressed their human rights impacts.Footnote 39

67. In the process of identifying, assessing, preventing and mitigating adverse human rights impacts, companies should pay particular attention to the data-related factors outlined in paragraph 27 above. Training data should be selected, and models designed, so as to prevent discriminatory outcomes and other adverse impacts on human rights. Moreover, companies should pursue diversity, equity and other means of inclusion in the teams developing algorithmic profiling systems. Companies should also be open to independent third-party audits of their algorithmic profiling systems.Footnote 40 Where the risk of discrimination or other human rights violations has been assessed to be too high or impossible to mitigate, including because of the nature of a planned or foreseeable use by a State, private sector actors should not sell or deploy an algorithmic profiling system.
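A third-party audit of the kind contemplated in paragraph 67 could include a simple representation check on the training data. The sketch below is a hypothetical illustration; the group labels, reference population shares and tolerance are assumptions:

```python
# Illustrative sketch of one check a third-party auditor might run on the
# training data of an algorithmic profiling system (see paragraph 67):
# compare each group's share of the training records with its share of a
# reference population. Labels, shares and tolerance below are hypothetical.
from collections import Counter

def representation_gaps(training_groups, reference_shares, tolerance=0.05):
    """Return groups under-represented in the training data.

    training_groups  -- iterable of group labels, one per training record
    reference_shares -- dict mapping group label to its population share (0-1)
    tolerance        -- maximum acceptable shortfall in share
    """
    counts = Counter(training_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected_share": expected,
                           "observed_share": round(observed, 3)}
    return gaps

if __name__ == "__main__":
    training = ["group_a"] * 900 + ["group_b"] * 100
    reference = {"group_a": 0.7, "group_b": 0.3}
    print(representation_gaps(training, reference))
    # {'group_b': {'expected_share': 0.3, 'observed_share': 0.1}}
```

A shortfall flagged by such a check would point to the data-related factors mentioned in paragraph 27 and could warrant the mitigation or non-deployment measures described above.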

68. States should document cases of racial discrimination associated with artificial intelligence, as well as prevention measures, sanctions and remedies, and include such information in their reports to the Committee.

69. Human rights bodies, States, national human rights institutions and civil society organizations should carry out, and disseminate the results of, studies, identify good practices on effective measures addressing racial biases derived from artificial intelligence, including those related to human rights compliance and ethical aspects of machine learning, and identify relevant criteria in terms of interpretation or transparency in the processes of the programming and training of algorithms, and should do so through the lens of the International Convention on the Elimination of All Forms of Racial Discrimination.

Footnotes

** Adopted by the Committee at its 102nd session (16–24 November 2020).

1 The contributions for the draft general recommendation are available at: www.ohchr.org/EN/HRBodies/CERD/Pages/GC36.aspx.

2 CERD/C/RUS/CO/23-24, paras. 15–16; CERD/C/CAN/CO/21-23, paras. 15–16; CERD/C/ITA/CO/19-20, paras. 27–28; CERD/C/ESP/CO/21-23, para. 27; CERD/C/SVN/CO/8-11, para. 8 (d); CERD/C/POL/CO/20-21, para. 11; CERD/C/NLD/CO/19-21, paras. 13–15; CERD/C/CHE/CO/7-9, para. 14; and CERD/C/USA/CO/7-9, paras. 8 and 18.

3 CCPR/C/96/D/1493/2006.

4 CCPR/C/NZL/CO/6, paras. 23–24; CCPR/C/AUT/CO/5, paras. 19–20; CCPR/C/FRA/CO/5, para. 15; CCPR/C/ESP/CO/6, para. 8; CCPR/C/RUS/CO/7, para. 9; and CCPR/C/USA/CO/4, para. 7.

5 CAT/C/USA/CO/3-5, para. 26; CAT/C/CPV/CO/1, para. 20; CAT/C/ARG/CO/5-6, para. 35; and CAT/C/NLD/CO/7, paras. 44–45.

6 A/HRC/4/26, paras. 34 and 83.

7 CERD/C/MUS/CO/20-23 and Corr.1, paras. 20–21; CERD/C/BLR/CO/20-23, paras. 23–24; CERD/C/ESP/CO/21-23, para. 27; and CERD/C/DEU/CO/19-22, para. 11.

8 CERD/C/MUS/CO/20-23 and Corr.1, para. 20; and CERD/C/RUS/CO/23-24, paras. 15 (b) and 16 (c); CERD/C/CAN/CO/21-23, paras. 15 and 16 (a)–(d); and CERD/C/ITA/CO/19-20, paras. 27–28.

9 Inter-American Commission on Human Rights, “The situation of people of African descent in the Americas” (2011), para. 143.

10 See the contribution of the Arab Human Rights Committee.

11 A/HRC/29/46, para. 2.

12 United Nations, Preventing and Countering Racial Profiling of People of African Descent: Good Practices and Challenges (2019), p. v.

13 See A/73/354.

14 See, for example, Simone Cusack, “Gender stereotyping as a human rights violation”, research report commissioned by the Office of the United Nations High Commissioner for Human Rights (2013).

15 See, for example, CERD/C/IRL/CO/3-4, para. 18.

16 See general recommendation No. 13 (1993).

17 See, for example, A/HRC/24/52/Add.2, para. 57.

18 CERD/C/RUS/CO/23-24, paras. 15–16; CERD/C/SVN/CO/8-11, paras. 8–9; and CERD/C/AUS/CO/18-20, para. 14.

19 See also article 5 of the International Convention on the Elimination of All Forms of Racial Discrimination.

20 Algorithmic profiling includes any step-by-step computerized technique used for analysing data to identify trends, patterns or correlations. European Union Agency for Fundamental Rights, Preventing Unlawful Profiling Today and in the Future: A Guide (2018), p. 97.

21 Although widely used, the term “artificial intelligence” is not clearly defined. The Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has noted that artificial intelligence is often used as shorthand for the increasing independence, speed and scale connected to automated, computational decision-making. It is not one thing only, but rather refers to a “constellation” of processes and technologies enabling computers to complement or replace specific tasks otherwise performed by humans, such as making decisions and solving problems (A/73/348, para. 2).

22 See A/HRC/44/57.

23 AI Now, “The AI Now report: the social and economic implications of artificial intelligence technologies in the near-term”, summary of the AI Now public symposium hosted by the White House and the Information Law Institute of New York University on 7 July 2016, p. 7.

24 A/73/348, para. 38.

25 For example, when past discriminatory practices, such as arrests disproportionately affecting members of one group, are reflected in the data used for profiling, it will affect the outcomes of algorithmic profiling.

26 European Union Agency for Fundamental Rights, “#BigData: discrimination in data-supported decision making”, FRA Focus paper (2018), pp. 4–5.

27 See Rashida Richardson, Jason M. Schultz and Kate Crawford, “Dirty data, bad predictions: how civil rights violations impact police data, predictive policing systems, and justice”, New York University Law Review, vol. 94 (May 2019).

28 See, for example, Julia Angwin and others, “Machine bias”, ProPublica, 23 May 2016.

29 A/HRC/44/57, para. 14.

30 A/HRC/41/35, para. 12.

31 A/HRC/39/29, para. 14.

32 A/HRC/41/35, para. 12.

33 See Joy Buolamwini and Timnit Gebru, “Gender shades: intersectional accuracy disparities in commercial gender classification”, Proceedings of Machine Learning Research, vol. 81 (2018), proceedings of the Conference on Fairness, Accountability, and Transparency; and Inioluwa Deborah Raji and Joy Buolamwini, “Actionable auditing: investigating the impact of publicly naming biased performance results of commercial AI products”, Conference on Artificial Intelligence, Ethics, and Society (2019).

34 Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Polity, 2019).

35 Contribution from Nicaragua.

36 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, art. 3 (4).

37 European Commission against Racism and Intolerance, general policy recommendation No. 11, para. 10 and explanations.

38 See paragraph 53 above.

39 A/HRC/39/29, para. 45; and Guiding Principles on Business and Human Rights, guiding principles 17–21. See also Amnesty International and Access Now, Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems.

40 See the Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems.

ENDNOTES

1 CERD, General Recommendation No. 36 (2020) on Preventing and Combating Racial Profiling by Law Enforcement Officials, U.N. Doc. CERD/C/GC/36 (Nov. 24, 2020) [hereinafter General Recommendation No. 36].

2 See, e.g., HCJ 11437/05 ‘Kav LaOved’—Worker's Hotline v. Ministry of Interior (2011) (Isr.); Misca No. 52 of 2002 Sesana and Setlhobogwa v. Attorney General (2006) (High Ct. Bots.).

3 Office of the High Commissioner for Human Rights (OHCHR), Preventing and Countering Racial Profiling of People of African Descent: Good Practices and Challenges (2019), https://www.un.org/sites/un2.un.org/files/preventracialprofiling-en.pdf.

4 Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, Mutuma Ruteere, U.N. Doc. A/HRC/29/46 (Apr. 20, 2015).

5 Report of the Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms while Countering Terrorism, Martin Scheinin, U.N. Doc. A/HRC/4/26 (Jan. 29, 2007).

6 European Commission against Racism and Intolerance, Policy Recommendation No. 11 on Combating Racism and Racial Discrimination in Policing, CRI(2007)39 (June 29, 2007).

7 Inter-American Commission on Human Rights, The Situation of People of African Descent in the Americas, OEA/Ser.L/V/II (Dec. 5, 2011).

8 Williams Lecraft v. Spain, U.N. Doc. CCPR/C/96/D/1493/2006 (July 27, 2009).

9 Lingurar v. Romania, App. No. 48474/14 (Apr. 16, 2019), https://hudoc.echr.coe.int/eng?i=001-192466.

10 Acosta Martínez v. Argentina, Judgment, Inter-Am. Ct. H.R. (ser. C) No. 410 (Spanish) (Aug. 31, 2020).

11 CERD, General Recommendation No. 30 (2004) on Discrimination against Non-Citizens, U.N. Doc. CERD/C/64/Misc.11/rev.3, ¶ 10 (Mar. 12, 2004).

12 CERD, General Recommendation No. 31 (2005) on the Prevention of Racial Discrimination in the Administration and Functioning of the Criminal Justice System, U.N. Doc. A/60/18, ¶ 20 (2005).

13 CERD, General Recommendation No. 34 (2011) on Racial Discrimination against People of African Descent, U.N. Doc. CERD/C/GC/34, ¶ 31 (Sep. 2, 2011).

14 General Recommendation No. 36, ¶ 13.

15 Id. ¶ 18.

16 Daniel Moeckli, Racial and Ethnic Profiling, in Elgar Encyclopedia of Human Rights, ¶ 3 (Christina Binder et al. eds., 2021), https://www.elgaronline.com/view/nlm-book/9781789903614/b-9781789903621.racial.and.ethnic.profiling.xml?rskey=2IohU0&result=3.

17 General Recommendation No. 36, ¶¶ 23–25.

18 Id. ¶¶ 28–29.

19 Id. ¶ 26.

20 Id. ¶ 30.

21 Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, E. Tendayi Achiume, U.N. Doc. A/HRC/44/57 (June 18, 2020). See, however, ¶¶ 35–37.

22 General Recommendation No. 36, ¶ 31.

23 Id. ¶ 31.

24 Id. ¶ 33.

25 Id. ¶ 34.

26 Id. ¶ 35.

27 Id. ¶ 36.

28 Id. ¶ 38.

29 Id. ¶ 50.

30 Id. ¶ 52.

31 Id. ¶ 58.

32 Id. ¶ 60.

33 Id. ¶ 62.

34 Id. ¶ 66.

35 Id. ¶¶ 42–47.

36 See, e.g., OHCHR, supra note 3, ¶¶ 30–32.