This essay discusses how, despite the liberatory potential of technology, racial bias pervades the digital space. This bias creates tension with both the formal, de jure equality notion of “colorblindness”1 (in U.S. constitutional law) as well as the broader, substantive, de facto equality idea (in international human rights law). The essay draws on the work of Osagie Obasogie2 to show how blind people perceive race in the same way as sighted people, despite not being able to see race. It then uses blindness as a metaphor to explore how race is seen and not seen online, and analyzes the implications of this for human rights.
In contributing to this Symposium on the seventy-year anniversary of the Universal Declaration of Human Rights (UDHR),3 this essay argues that while artificial intelligence (AI)—among other new technologies—seemed poised to offer a brave new way to realize race equality guarantees in human rights law, it has, in fact, failed to do so. AI is being used to make decisions about individuals’ lives without humans in the loop,4 yet it imports many of the biases that humans have.5 Optimistic observers had speculated that new technologies could help usher in a postracial future, given that automated decision-making seemed to be free of human bias.6 But instead of stripping away racial identity, these technologies have in fact reproduced our attention to race through a variety of algorithms, markers, and proxies that predict or reflect our identities. So rather than helping us embrace the humanity of others, regardless of their race, AI has replicated the dehumanizing features of racism.7 That machines can now make such decisions with adverse racial impacts poses a challenge for human rights law, which was developed to protect humans (as rights-holders) and to hold humans accountable, rather than machines and AI.8
Drawing on insights about how we see and do not see race, this essay argues that even as we move into a digital age of posthumanism and transhumanism, race endures as a construct. Even when race cannot be literally “seen” online, it can still be perceived. While other scholars have explored how algorithms (which power AI) can be used in discriminatory ways,9 this essay makes a different, novel point about how we can skewer the mythology of colorblindness not only offline, but also online, by building on important new research on literal blindness.
Expanding on my earlier work on gender and big data,10 which utilizes feminist theory to critique the use of big data in decision-making, this essay turns to critical race theory to explore data privacy and equality in the context of AI. This perspective is particularly useful in exploring the implications of online data collection for the UDHR's promise that “[e]veryone is entitled to all the rights and freedoms set forth in th[e] Declaration, without distinction of any kind, such as race, [etc.].”11 In reconsidering what it means to be “human” within the context of human rights, this essay's focus on automated decision-making and race equality is significant because AI may be used to determine which school a child may attend, how a credit score is calculated, whether an individual is offered credit from a bank, whether someone will receive an interview for a job, whether to allow an individual out of jail on bail, and where to dedicate police resources, based on predictions about where crimes will occur and who is likely to commit a crime.12
The Liberatory Potential and Challenges of Technology
The adage “on the internet, nobody knows you're a dog”13 reflects a now naïve belief in the emancipatory potential of cyberspace. You could “cross-dress” online and cloak identifying features such as race, gender, accent, and age. While residential segregation both produces and reinforces racial hierarchy,14 the digital universe promised to dissolve such geographic and identity boundaries. Alas, the cyber utopian vision quickly proved illusory, as cyber mobs beset women with online abuse, hate groups crept from the hidden corners of bulletin boards to mainstream sites, and thieves and hackers targeted the vulnerable with spam of every stripe.15
The rise of AI—paired with a data collection imperative16—is another major development that, despite all of its benefits, can undermine privacy when we go online and lead to discriminatory practices. Since the advent of the commercial web in the mid-1990s, advertisers, web analytics companies, and other third parties have tracked and stored data on our online activity. Based on this collected data, various forms of AI are used to learn from and make predictions about our online activity. AI also enables third parties to secure a treasure trove of personal information (and monetize this information), based on thin notions of consent.17 As we surf the web, third parties collect information on our buying habits, political preferences, and other data, enabling them to predict not only our consumer and political desires, but also our racial identities. The collection of information about individuals’ race and nationality has particularly dire consequences for minorities at risk of police profiling, other adverse governmental determinations, or predatory private decision-making.18
The fact that race can be perceived online is part and parcel of broader concerns regarding online privacy and “surveillance capitalism,”19 based on the commodification of information, as it is transformed into behavioral data for analysis and sales.20 This form of “information capitalism”21 depends on knowing who we are and our consumer habits. We are no longer merely citizens (or subjects) of particular nations, but also “algorithmic citizens”22 (or “data subjects”),23 where consumer preferences—and racial identities—can be gleaned through each person's digital footprint.
Thus, just as the nonvirtual world is not “colorblind,” neither is the online world. While there are ways to anonymize one's geography and even identity,24 for the most part, online behavioral advertisers and online sites know who we are in excruciating detail.
“Blinded by Sight”: How Race is Seen and Not Seen
Using Osagie Obasogie's powerful critical race analysis on blindness and race,25 this section examines the visibility (and invisibility) of race. As Obasogie notes in Blinded by Sight, sighted people often assume that because blind people cannot see, they are unable to perceive race, are incapable of racial bias or race consciousness, and are thus truly “colorblind.” However, even those who have been blind since birth are not colorblind, despite not being able to see race. Blind people think about race visually and refer to visual cues, such as skin color, though they cannot literally see them. Since blind people understand race visually, they often organize their lives around these understandings in terms of their friendships, romantic affiliations, and community ties.26 Rather than being visually oblivious, blind people are—like sighted individuals—socialized to view race in specific ways, such that they “see” race.
Through extensive interviews, Obasogie documents how blind people depend on visual cues, such as skin color, through processes of socialization. Family members and friends teach blind children (as with sighted children) about racial difference based on a variety of associations, including differences in speaking patterns, residency, food preparation and preferences, and even smell.27 Once blind people learn about racial difference through these cues, cognitively, race develops a visual significance for them as well.28 “[S]ocializing the visual significance of race is an ongoing process that requires maintenance and reinforcement in order to elicit a continued ‘buy in’ from blind people that race is visually significant.”29 In fact, “[t]hese are the same social forces that give visual understandings of race their coherency to the sighted, yet they remain hidden due to sighted individuals’ overemphasis on visual fields,” and “[i]t is in this sense that sighted people are blinded by their sight.”30
While race is biologically irrelevant, it has a social relevance based on our social associations and lived experience. On the one hand, racial bias often seems invisible—particularly when measured against the de jure formal equality standard and mythology of colorblindness. On the other hand, Obasogie's research demonstrates how embedded racial bias is in the structure of our social relationships and institutions (i.e., where and how people live, talk, eat, work, socialize, network, etc.).31
Our Digital Lives Reveal Critical Race Insights Anew
Turning to the digital space reveals Obasogie's insights anew. Given its arm's-length nature, technology seemed to offer a solution. Big data and AI appear “scientific,” “objective,” and “nonbiased.”32 However, just as the myth of colorblindness masks the structural and institutional nature of racial injustice, so too the internet folklore that technology represents a postracial utopia cloaks the reality. Even though we often cannot literally see a person's race online (short of using FaceTime or related technology), we can perceive race, based on the products that an individual buys, the websites she visits, and the digital dossiers sold by data brokers.33
Far from moving us to a postracial future, our digital lives reveal race for what it is: a deeply entrenched social construct—both online and offline—even when we cannot always literally “see” it. A couple of examples illustrate this point.
Until recently, Facebook used AI to categorize its users by “ethnic affinities,” based on posts liked or engaged with on Facebook.34 Housing advertisers used these “ethnic affinities” to exclude particular groups as part of niche advertising strategies. While Facebook no longer allows this,35 the fact that such companies have free rein to provide the tools of discrimination (and to take them away and redesign them at their whim) is itself a problem. To challenge related practices concerning disability and family status, the U.S. Department of Housing and Urban Development (HUD) recently sued Facebook. In a statement, a HUD spokeswoman said, “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it's the same as slamming the door in someone's face.”36
Airbnb's online rental platform has also experienced problems with discrimination by hosts who have refused to rent to African Americans, leading to the hashtag #AirbnbWhileBlack.37 While Airbnb denounced one discriminatory host and kicked him off the platform, “the incident exposes the gray zones in the rules that guide the gig economy.”38 Airbnb has a policy requiring that users “comply with local laws and regulations” including federal antidiscrimination laws, but there are carve-outs for smaller dwellings with fewer rooms in many jurisdictions.39 Users no longer necessarily need to create profiles with photos of themselves (which would indicate race). However, research suggests that even individuals with black-sounding names may face online discrimination.40
Shedding light on these blind spots—in the way we view technology—may ultimately yield lessons for how to dismantle discrimination. Given the persistence of race and racism online and offline, we must shed the earlier optimism that technology will move us to a postracial, colorblind society and instead embrace more robust equality approaches by both the public and private sectors. Ultimately, a possible solution is to embrace disparate impact approaches to equality law, which international human rights law embraces—if not for the purposes of liability ex post, then at least for the purposes of incentivizing technology companies in the design of AI ex ante.41 Identifying creative, effective approaches would be an appropriate way to celebrate the UDHR's spirit in today's digital economy.