
Race and Rights in the Digital Age

Published online by Cambridge University Press:  10 December 2018

Catherine Powell*
Affiliation:
Professor of Law, Fordham University.


Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © 2018 by The American Society of International Law and Catherine Powell

This essay discusses how, despite the liberatory potential of technology, racial bias pervades the digital space. This bias creates tension with both the formal, de jure equality notion of “colorblindness”Footnote 1 (in U.S. constitutional law) as well as the broader, substantive, de facto equality idea (in international human rights law). The essay draws on the work of Osagie ObasogieFootnote 2 to show how blind people perceive race in the same way as sighted people, despite not being able to see race. It then uses blindness as a metaphor to explore how race is seen and not seen online, and analyzes the implications of this for human rights.

In contributing to this Symposium marking the seventieth anniversary of the Universal Declaration of Human Rights (UDHR),Footnote 3 this essay argues that while artificial intelligence (AI)—among other new technologies—seemed poised to offer a brave new way to realize the race equality guarantees of human rights law, it has in fact failed to do so. AI is being used to make decisions about individuals’ lives without humans in the loop,Footnote 4 yet it imports many of the biases that humans have.Footnote 5 Optimistic observers had speculated that new technologies could help usher in a postracial future, given that automated decision-making seemed to be free of human bias.Footnote 6 But instead of stripping away racial identity, these technologies have in fact imported attention to race into the digital space through a variety of algorithms, markers, and proxies that predict or reflect our identities. So rather than helping us embrace the humanity of others, regardless of their race, AI has replicated the dehumanizing features of racism.Footnote 7 That machines can now make such decisions with adverse racial impacts poses a challenge for human rights law, which was developed to protect humans (as rights-holders) and to hold humans accountable, as opposed to machines and AI.Footnote 8
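
To make the proxy mechanism concrete, consider the following illustrative sketch (written in Python with the scikit-learn library; the data, variable names, and rates are entirely hypothetical and not drawn from any actual system). It shows how a model that is never given a race variable can nonetheless reproduce racial disparities when it is trained on a correlated proxy, such as residential geography.

# Illustrative only: synthetic data showing how a proxy (zip code)
# can carry racial information into an automated decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)  # 0/1 stand-in for a protected class (hypothetical)
# Hypothetical segregated geography: zip code tracks group membership 90% of the time.
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)
# Historical outcomes already skewed against group 1 (e.g., past lending decisions).
approved = (rng.random(n) < np.where(group == 0, 0.7, 0.4)).astype(int)

# Train on zip code alone; race itself is never an input to the model.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), approved)
pred = model.predict(zip_code.reshape(-1, 1))
print("predicted approval rate, group 0:", round(pred[group == 0].mean(), 2))
print("predicted approval rate, group 1:", round(pred[group == 1].mean(), 2))
# The disparity survives because the proxy (zip code) encodes the protected trait.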

Drawing on insights about how we see and do not see race, this essay argues that even as we move into a digital age of posthumanism and transhumanism, race endures as a construct. Even when race cannot be literally “seen” online, it can still be perceived. While other scholars have explored how algorithms (which power AI) can be used in discriminatory ways,Footnote 9 this essay makes a different, novel point about how we can skewer the mythology of colorblindness not only offline but also online, building on important new research on literal blindness.

Expanding on my earlier work on gender and big data,Footnote 10 which utilizes feminist theory to critique the use of big data in decision-making, this essay turns to critical race theory to explore data privacy and equality in the context of AI. This perspective is particularly useful in exploring the implications of online data collection for the UDHR's promise that “[e]veryone is entitled to all the rights and freedoms set forth in th[e] Declaration, without distinction of any kind, such as race, [etc.].”Footnote 11 In reconsidering what it means to be “human” within the context of human rights, this essay's focus on automated decision-making and race equality is significant because AI may be used to determine which school a child may attend, how a credit score is calculated, whether an individual is offered credit from a bank, whether someone will receive an interview for a job, whether to allow an individual out of jail on bail, and where to dedicate police resources, based on predictions about where crimes will occur and who is likely to commit a crime.Footnote 12

The Liberatory Potential and Challenges of Technology

The adage “on the internet, nobody knows you're a dog”Footnote 13 reflects a now naïve belief in the emancipatory potential of cyberspace. You could “cross-dress” online and cloak identifying features such as race, gender, accent, and age. While residential segregation both produces and reinforces racial hierarchy,Footnote 14 the digital universe promised to dissolve such geographic and identity boundaries. Alas, the cyber utopian vision quickly proved illusory, as cyber mobs beset women with online abuse, hate groups crept from the hidden corners of bulletin boards to mainstream sites, and thieves and hackers targeted the vulnerable with spam of every stripe.Footnote 15

The rise of AI—paired with a data collection imperativeFootnote 16—is another major development that, despite all of its benefits, can undermine our privacy online and lead to discriminatory practices. Since the advent of the commercial web in the mid-1990s, advertisers, web analytics companies, and other third parties have tracked and stored data on our online activity. Based on this collected data, various forms of AI are used to learn from that activity and make predictions about us. AI also enables third parties to secure (and monetize) a treasure trove of personal information, based on thin notions of consent.Footnote 17 As we surf the web, third parties collect information on our buying habits, political preferences, and other behavior, enabling them to predict not only our consumer and political desires, but also our racial identities. The collection of information about individuals’ race and nationality has particularly dire consequences for minorities at risk of police profiling, other adverse governmental determinations, or predatory private decision-making.Footnote 18
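
As a rough illustration of how such inferences might work, the short sketch below (again synthetic and purely hypothetical; the sites, visit rates, and resulting accuracy are assumptions rather than empirical findings) trains a standard classifier to recover a demographic label from nothing more than which sites a user has visited.

# Illustrative only: inferring a demographic label from browsing traces.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users, n_sites = 5_000, 50
group = rng.integers(0, 2, n_users)  # hypothetical demographic label
base_rates = rng.random(n_sites) * 0.3  # most sites are visited at similar rates
skew = np.zeros(n_sites)
skew[:5] = 0.4  # a handful of sites are visited far more often by one group
visit_prob = base_rates + np.outer(group, skew)
visits = (rng.random((n_users, n_sites)) < visit_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(visits, group, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy inferring the label from browsing alone:",
      round(clf.score(X_test, y_test), 2))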

The fact that race can be perceived online is part and parcel of broader concerns regarding online privacy and “surveillance capitalism,”Footnote 19 based on the commodification of information, as it is transformed into behavioral data for analysis and sales.Footnote 20 This form of “information capitalism”Footnote 21 depends on knowing who we are and our consumer habits. We are no longer merely citizens (or subjects) of particular nations, but also “algorithmic citizens”Footnote 22 (or “data subjects”),Footnote 23 where consumer preferences—and racial identities—can be gleaned through each person's digital footprint.

Thus, just as the nonvirtual world is not “colorblind,” neither is the online world. While there are ways to anonymize one's geography and even identity,Footnote 24 for the most part, online behavioral advertisers and online sites know who we are in excruciating detail.

“Blinded by Sight”: How Race is Seen and Not Seen

Using Osagie Obasogie's powerful critical race analysis of blindness and race,Footnote 25 this section examines the visibility (and invisibility) of race. As Obasogie notes in Blinded by Sight, sighted people often assume that because blind people cannot see, they are unable to perceive race, are incapable of racial bias or race consciousness, and are thus truly “colorblind.” However, even those who have been blind since birth are not colorblind. Blind people think about race visually and refer to visual cues, such as skin color, though they cannot literally see them. Because blind people understand race visually, they often organize their lives around these understandings in their friendships, romantic affiliations, and community ties.Footnote 26 Rather than being visually oblivious, blind people are—like sighted individuals—socialized to view race in specific ways, such that they “see” race.

Through extensive interviews, Obasogie documents how blind people depend on visual cues, such as skin color, through processes of socialization. Family members and friends teach blind children (as with sighted children) about racial difference based on a variety of associations, including differences in speaking patterns, residency, food preparation and preferences, and even smell.Footnote 27 Once blind people learn about racial difference through these cues, cognitively, race develops a visual significance for them as well.Footnote 28 “[S]ocializing the visual significance of race is an ongoing process that requires maintenance and reinforcement in order to elicit a continued ‘buy in’ from blind people that race is visually significant.”Footnote 29 In fact, “[t]hese are the same social forces that give visual understandings of race their coherency to the sighted, yet they remain hidden due to sighted individuals’ overemphasis on visual fields,” and “[i]t is in this sense that sighted people are blinded by their sight.”Footnote 30

While race is biologically irrelevant, it has social relevance grounded in our social associations and lived experience. On the one hand, racial bias often seems invisible—particularly when measured against the de jure formal equality standard and the mythology of colorblindness. On the other hand, Obasogie's research demonstrates how deeply racial bias is embedded in the structure of our social relationships and institutions (i.e., where and how people live, talk, eat, work, socialize, and network).Footnote 31

Our Digital Lives Reveal Critical Race Insights Anew

Turning to the digital space reveals Obasogie's insights anew. Given its arm's-length nature, technology seemed to offer a solution. Big data and AI appear “scientific,” “objective,” and “nonbiased.”Footnote 32 However, just as the myth of colorblindness masks the structural and institutional nature of racial injustice, so too the internet folklore that technology represents a postracial utopia cloaks reality. Even though we often cannot literally see a person's race online (short of using FaceTime or related technology), we can perceive race based on the products an individual buys, the websites she visits, and the digital dossiers sold by data brokers.Footnote 33

Far from moving us to a postracial future, our digital lives reveal race for what it is: a deeply entrenched social construct—both online and offline—even when we cannot always literally “see” it. A couple of examples illustrate the point.

Until recently, Facebook used AI to categorize its users by “ethnic affinities,” based on the posts they liked or engaged with on the platform.Footnote 34 Housing advertisers used these “ethnic affinities” to exclude particular groups as part of niche advertising strategies. While Facebook no longer allows this,Footnote 35 the fact that such companies have free rein to provide the tools of discrimination (and to take them away and redesign them at their whim) is itself a problem. To challenge related practices concerning disability and family status, the U.S. Department of Housing and Urban Development (HUD) recently filed a complaint against Facebook. In a statement, a HUD spokeswoman said, “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it's the same as slamming the door in someone's face.”Footnote 36

Airbnb's online rental platform has also experienced problems with discrimination by hosts who have refused to rent to African Americans, giving rise to the hashtag #AirbnbWhileBlack.Footnote 37 While Airbnb denounced one discriminatory host and removed him from the platform, “the incident exposes the gray zones in the rules that guide the gig economy.”Footnote 38 Airbnb has a policy requiring that users “comply with local laws and regulations,” including federal antidiscrimination laws, but many jurisdictions carve out smaller dwellings with fewer rooms.Footnote 39 Users no longer necessarily need to create profiles with photos of themselves (which would indicate race). Even so, research suggests that individuals with black-sounding names may still face online discrimination.Footnote 40

Conclusion

Shedding light on these blind spots—in the way we view technology—may ultimately yield lessons for how to dismantle discrimination. Given the persistence of race and racism online and offline, we must shed the earlier optimism that technology will move us to a postracial, colorblind society and instead pursue more robust equality approaches in both the public and private sectors. One possible solution is to adopt the disparate impact approach to equality that international human rights law embraces—if not for purposes of liability ex post, then at least to incentivize technology companies in the design of AI ex ante.Footnote 41 Identifying creative, effective approaches would be an appropriate way to celebrate the UDHR's spirit in today's digital economy.
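
As one concrete, if simplified, illustration of what such an ex ante check might look like, the sketch below computes a disparate impact ratio of the kind used under the U.S. “four-fifths rule” for a hypothetical automated approval system (all figures are invented for illustration, and the 0.8 benchmark is the U.S. convention rather than a human rights law standard):

# Illustrative only: a "four-fifths rule" screen for disparate impact,
# applied to hypothetical selection rates from an automated decision system.
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Return the ratio of the lower group's selection rate to the higher one's."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit: 300 of 1,000 group-A applicants approved vs. 180 of 1,000 group-B applicants.
ratio = disparate_impact_ratio(300, 1000, 180, 1000)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60, below the 0.8 benchmark
if ratio < 0.8:
    print("flag for ex ante review: adverse impact under the four-fifths benchmark")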

Footnotes

*

I would like to thank the participants in the International Law and Race Workshop at University of Colorado Law School, August 3-4, 2018, as well as Danielle Citron, Clare Huntington, Margot Kaminski, Craig Konnoth, Olivier Sylvain, and Ari Ezra Waldman for valuable input. I'm also grateful to my superb research assistants, Mary Katherine Cunningham, Christine Gabrellian, Aryian Kohandel-Shirazi, and Mindy Nam.

References

1 While laudable in theory, in practice, the U.S. colorblindness doctrine has been used as a cover for law to ignore the ongoing effects of America's long legacy of slavery and Jim Crow. Fortunately, some U.S. civil rights statutes go beyond this narrow formal approach to adopt the broader substantive approach found in international human rights law.

3 See Universal Declaration of Human Rights art. 1, G.A. Res. 217A (III) (Dec. 10, 1948) [hereinafter UDHR].

5 This essay focuses on the “narrow” AI that currently exists, which utilizes machine learning algorithms that harvest personal data to detect patterns and make predictions about individuals based on those patterns. Past data may reflect underlying discriminatory patterns (such as in past policing practices), which imports preexisting bias into predictions about the future. See, e.g., Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016).

6 See, e.g., Tim Jordan, Cyberpower: The Culture and Politics of Cyberspace and the Internet 66 (1999) (discussing the belief “that cyberspace will be liberatory because ‘race’ will be absent there”); Lisa Nakamura, Cybertypes: Race, Ethnicity and Identity on the Internet (2002) (critiquing the notion of cyberspace as a utopian web of fluid identities and unlimited possibilities).

7 On the positive side, new technologies can help build racial solidarity (and transracial solidarity) and offer the freedom to construct and perform online identities across various intersections as well as build online communities.

8 The scholarship concerning questions of accountability and fairness in automated decision-making is vast. For a helpful review, see, e.g., Margot Kaminski, Binary Governance: A Two-Part Approach to Accountable Algorithms 2, n.4 (draft on file with author).

9 See, e.g., Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (2018); Pasquale, supra note 4; Solon Barocas & Andrew Selbst, Big Data's Disparate Impact, 104 Calif. L. Rev. 671 (2016); Anupam Chander, The Racist Algorithm, 115 Mich. L. Rev. 1023 (2017).

10 Catherine Powell, Gender Indicators as Global Governance: Not Your Father's World Bank, 17 Geo. J. Gender & L. 777 (2016).

11 UDHR, supra note 3, art. 2 (emphasis added). See also id. art. 1 (providing “[a]ll human beings are born free and equal in dignity and rights”).

12 See, e.g., Saranya Vijayakumar, Algorithmic Decision-Making, Harv. Pol. Rev. (June 28, 2017).

13 This phrase began as a caption to a New Yorker cartoon in 1993. Peter Steiner, On the Internet, Nobody Knows You're a Dog, New Yorker (July 5, 1993). See also Glenn Fleishman, Cartoon Captures Spirit of the Internet, N.Y. Times (Dec. 14, 2000) (explaining the creation and the cultural relevance of the cartoon after its publication).

14 See, e.g., Elise Boddie, Racial Territoriality, 58 UCLA L. Rev. 401 (2010).

15 See, e.g., Danielle Keats Citron, Hate Crimes in Cyberspace (2014); Mary Anne Franks, Unwilling Avatars: Idealism and Discrimination in Cyberspace, 20 Colum. J. Gender & L. 224 (2011).

16 Khiara Bridges, The Poverty of Privacy Rights (2018); Danielle Keats Citron, A Poor Mother's Right to Privacy: A Review, 98 B.U. L. Rev. 1, 4 (2018) (noting that “the constitutional right to information privacy should be understood to limit the state's collection of personal data from poor mothers”).

18 Cf. Anita Allen, Unpopular Privacy: What Must We Hide? (2011) (discussing a range of new privacy concerns, including data and racial privacy).

19 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2018).

20 Nicholas Confessore, The Unlikely Activists Who Took On Silicon Valley — and Won, N.Y. Times (Aug. 14, 2018) (discussing activism before and after the Cambridge Analytica revelations); Bruce Sterling, Shoshanna Zuboff Condemning Google “Surveillance Capitalism”, Wired (Mar. 8, 2016).

21 Shoshana Zuboff, Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, 30 J. Info. Tech. 75, 81 (2015) (describing “new market form that is a radically disembedded and extractive variant of information capitalism, one that can be identified as surveillance capitalism”).

22 Citizen Ex, Algorithmic Citizenship (proposing the notion of algorithmic citizenship, based on how one appears on the internet—as a collection of data, extending across several countries—as “a new form of citizenship, one where your citizenship, and therefore both your allegiances and your rights, are constantly being questioned, calculated and rewritten”).

23 Atossa Araxia Abrahamian, Data Subjects of the World, Unite!, N.Y. Times (May 28, 2018) (noting that the EU General Data Protection Regulation imbues us as data subjects with new rights).

24 A Virtual Private Network can be used “to send your data via another country, from where you can watch … videos or download all the files via another country.” Citizen Ex, supra note 22.

25 Obasogie, supra note 2.

26 Id. at 87. See also Solangel Maldonado, Racial Hierarchy and Desire: How Law’s Influence on Interracial Intimacies Perpetuates Inequality (manuscript on file with author) (discussing segregation in dating markets on and offline).

27 Obasogie, supra note 2, at 81–83.

28 Id. at 83.

29 Id. at 84.

30 Id. at 81 (emphasis in original).

31 See also Harvard Implicit Bias Project. Consider also the spate of stories on driving, napping, visiting Starbucks, and living while Black. See, e.g., Desire Thompson, Oblivious Person Calls 911 on “Suspicious” Black Cop & Other #LivingWhileBlack Stories, Vibe (Apr. 24, 2018).

32 See, e.g., Kevin Anthony Hoff & Masooda Bashir, Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust, 57 Hum. Factors 407 (2015).

34 Julia Angwin & Terry Parris Jr., Facebook Lets Advertisers Exclude Users by Race, ProPublica (Oct. 28, 2016). In addition to the data collected on the Facebook platform itself, the tech giant also collects information from other websites (with Facebook sharing buttons) as well as from Instagram and WhatsApp accounts (both of which Facebook owns). Julia Angwin et al., What Facebook Knows About You, ProPublica (Sept. 18, 2016).

36 Brakkton Booker, HUD Hits Facebook for Allowing Housing Discrimination, NPR (Aug. 19, 2018) (describing how “Facebook permitted advertisers to discriminate based on disability by blocking ads to users the company categorized as having interests in ‘mobility scooter’ or ‘deaf culture’”). See also Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 1 (2018) (discussing liabilities for intermediary platforms, such as Facebook, under the Communications Decency Act).

37 Jeremy Quittner, Airbnb and Discrimination: Why It's All So Confusing, Fortune (June 23, 2016).

38 Id.

39 Id.

40 Latanya Sweeney, Racism is Poisoning Online Ad Delivery, Says Harvard Professor, MIT Tech. Rev. (Feb. 4, 2013). See also Nancy Leong & Aaron Belzer, The New Public Accommodations: Race Discrimination in the Platform Economy, 105 Geo. L.J. 1271, 1293–95 (2017) (discussing how the guest-rating systems on platforms such as Airbnb and Uber entrench discrimination).

41 In a separate project, I am exploring linkages between privacy-based and equality-based responses, applying Kenji Yoshino's insights on this to the online context. Kenji Yoshino, The New Equal Protection, 124 Harv. L. Rev. 747, 748-50 (2011) (noting that “constitutional equality and liberty claims are often intertwined” yet liberty themes have found broader acceptance in the Supreme Court).