
Introduction: The Transparency Formula

Published online by Cambridge University Press:  27 September 2019

Mikkel Flyverbom
Copenhagen Business School


The Digital Prism: Transparency and Managed Visibilities in a Datafied World, pp. 1–24
Publisher: Cambridge University Press
Print publication year: 2019


Created in a dorm room at Harvard University, Facebook was a simple website set up to compare two pictures of female students at a time, inviting fellow students to mark them as hot or not. Since then, the scope and ambitions of Facebook have expanded considerably. Here is what Mark Zuckerberg said about the role of Facebook at a meeting on the company’s financial results ten years later: “Next, let’s talk about understanding the world. What I mean by this is that every day, people post billions of pieces of content and connections into the graph and in doing this, they’re helping to build the clearest model of everything there is to know in the world” (Facebook, 2013, italics added).

This book is about the promise that digital technologies and data can help us understand everything in the world. The hope that digital transformations will create transparency and clarity has spread beyond Silicon Valley and shapes all sorts of discussions about technology, politics and society. Contemporary moves toward openness and transparency in corporate and political affairs, we are told, are a direct result of developments in the realm of digital technologies (Finel and Lord, 2002; Sifry, 2011). This hope for societal and political re-engineering through technology is driven by a belief in transparency as a panacea – a form of sunlight that will work as a disinfectant (Brandeis, 1913) on all societal illnesses – and has given rise to an increasingly institutionalized transparency movement consisting of organizations, corporate actors and activists peddling this ideal. This “triumph of transparency” (Braithwaite and Drahos, 2000) revolves around a belief in increased information and communication as a direct path to accountability, trust and legitimacy (Power, 1997; Garsten and de Montoya, 2008). That is, if information is shared, we can see everything, we can understand everything and we can take command of everything so that no bad behavior takes place. Such hopes about transparency rely on a simple and deceptive formula that equates information with clarity and treats more information as a direct path to accountability. However, as this book argues, they overlook a much more intricate and paradoxical relationship between digital transformations, transparency projects and organizational and regulatory effects (Hansen and Flyverbom, 2015).

The companies driving contemporary digital transformations are a good starting point for this book’s attempt to problematize the transparency formula. Google prides itself on being a driver of transparency. Its search engine makes the world’s information available, the company has developed a work culture focused on openness, and it pushes for more transparency in politics and society more broadly. However, the first thing you do when entering the Google headquarters in Silicon Valley is to sign a non-disclosure agreement stating that you cannot disclose any information afterwards, or even tell anyone about the agreement. Also, it is impossible to get information about the earnings of the company in specific countries, and no one at Google will talk about how user data is commercialized across the different services and projects that it develops. Digital transformations make it possible to see everything, but some things are kept in the dark, and many of the tech companies pushing for transparency prefer to remain out of sight.

Similarly, transparency is one of Facebook’s central promises. This goes for the internal workings of the company and for the way it tells users that they can see and control the data they share via the platform. Questioned by the US Congress over the Cambridge Analytica scandal, Zuckerberg repeatedly stressed that Facebook users have “complete control” over their data. As he explained in response to the many Congressmen posing vague but critical questions, all you need to do is go to the privacy settings and tools of your account, and you can decide what to share with whom. However, these user settings are only part of the story, and the very reason he was being questioned was that a wealth of user data travels in ways that users have not asked for and have no way of seeing or controlling. Third parties can track and extract data. Facebook compiles data about users and their friends, even people not using Facebook, as well as data from data brokers who can fill in the final gaps. So rather than transparency and control, users get an opportunity to manage the data they make visible on the front page. In contrast, Facebook’s business model is based on extracting all data, and the company refuses to explain how it uses these data. Digital platforms like Facebook allow for extensive transparency in some areas, but also for ways of limiting who can see and control what.

Smaller tech companies, too, are increasingly committed to transparency as a mission and core value. For instance, at the social media company Buffer, all emails can be seen by everyone, and employees can take as much vacation time as they want, as long as they tell everyone about it. As a result, people use the shared email service less and less, and switch to other ways of communicating. And hardly anyone goes on holiday, especially because employees can see that their bosses never take time off. Digital possibilities for increased transparency create not just more clarity and insight, but also the need for strategies for concealing and staying out of sight.

I start with these peeks into corporations engaging with transparency efforts to highlight the argument that this book pursues, namely that transparency is no simple matter of opening up and sharing information, but rather a matter of managing visibilities in careful and strategic ways. Tech companies and the digital transformations they pursue are, at one and the same time, very visible, secretive, transparent, hidden and open, and the curiosity about this paradox is central to the book. The pages to follow explore, conceptually and critically, these intersections of digital transformations and transparency, secrecy and other visibility practices and their consequences for individual, organizational and societal affairs.

Digital Transformations

Digital transformations shape our lives in myriad ways. The reliance on digital technologies and the internet, and the emergence of big data and artificial intelligence have widespread consequences for economic, cultural, political and social activities. At present, these discussions take a number of shapes and are marked by disagreements as to whether we should think of digital transformations as blessings or curses.

Early discussions of the invention and spread of the internet focused on the potentials for dialogue, expression and democratization offered by this open and inclusive communications platform. Suddenly, it seemed like established institutions and elites, such as governments, editors and corporations, could be challenged, we would all have possibilities for expression, and information would be set free. Celebrations of this new space – often referred to as cyberspace – and its potentials often focused on the importance of keeping it separate from governmental and corporate interests. Today, both the promises and this separation seem like wishful thinking, particularly when we consider how governments rely on the internet for surveillance schemes and how large companies like Amazon, Google and Facebook seek to dominate large chunks of this space. But the depictions of the internet as a liberating force continue to shape many discussions. The emergence of social media and the spread of user-generated content reignited hopes about democratization and possibilities for free expression; however, at present, the picture seems murkier, and we have a growing focus on surveillance, corporate dominance and the negative effects of digital transformations. The scope of the US surveillance schemes exposed by Edward Snowden in 2013, especially, propelled these discussions into the public domain. Surveillance, it turned out, was not a targeted or abnormal effort, but most governments’ default approach to dealing with citizens’ activities in digital spaces. These developments, including the largely hidden role of internet companies as suppliers of data for government surveillance schemes, such as that of the US National Security Agency, have also propelled discussions about the relationship between digital transformations and citizen rights. The key questions, it seems, are increasingly about the roles and responsibilities of tech giants and the way they both come to govern many spheres of social life, and are subjected to new forms of governance (Flyverbom, 2016a; Gillespie, 2017b).

Another key issue is that digital technology makes it easier than ever to collect, store and distribute information. In a largely digital world, searches for information, comments and messages, and walks through the city produce vast amounts of digital traces that can be picked up and used again and again. Internet companies have access to mind-blowing amounts of data showing everything we do, care about and search for in digital spaces. In many ways, we have gone from a situation where information was scarce and expensive to store to a situation where information is abundant and easily stored (Mayer-Schönberger, 2009; Andrejevic, 2013). While processes of digitalization have been under way for a long time, a new development that we can think of as datafication (Mayer-Schönberger and Cukier, 2014) is increasingly important. Datafication means that many parts of social life take the shape of digital traces. Friendships become “likes” on Facebook, movements through the city produce extensive digital footprints in GPS-enabled devices, and our searches for information show what we value or wish for as individuals and societies. In combination with automated sorting mechanisms, such as algorithms and artificial intelligence, these massive streams of digital traces can be used to identify important patterns and inform decisions about anything from consumers to health conditions to criminal activities. The excitement surrounding these developments has been massive and, despite its fuzziness, the term big data has been taken up by the public, by companies and by politicians.

Increasingly, much of what we know about people, organizations and societies comes from digital sources. We are told that these developments will solve many of the problems that have always marked science, statistics and other ways of producing knowledge. Soon, we will have access to all or most of the data about a given phenomenon, and have consistent and neutral forms of intelligence that can give us exact answers. Finally, we will know everything and have unbiased answers to all our questions. Some have even suggested that we will no longer need theories and other types of structured explanations, because the digital evidence will speak for itself (Anderson, 2008), and some expect that we will have forms of superintelligence that do not require human interference (Good, 1965; Bostrom, 2014). At the same time, we are also increasingly concerned about the fate of the masses of data that result from our reliance on digital technologies. As automated forms of analysis and artificial intelligence are woven into the governance of social affairs, such as crime, risk assessments and other kinds of decision-making, we need to consider what happens to our ways of running societies and securing fundamental rights. Datafication not only gives us more insights, but also makes it possible to keep track of people and regulate behavior in new and problematic ways.

As a result of these developments, another concern is how digital transformations lead to the disruption of established industries. Newspapers and media companies are losing their ways of creating revenue as content becomes digital and largely free, and advertising becomes a primary competence and the core business model of internet companies such as Google and Facebook. Traditional taxi companies are losing the fight against the influx of cheaper Uber drivers working under less rigid forms of regulation. Similarly, the hotel industry and existing models of urban development are challenged by the possibilities for short-term rentals and new sources of income offered by Airbnb. These developments point to the gap between the conditions and approaches of established industries and the more unruly possibilities that digital transformations afford when it comes to new business models and ways of organizing our societies.

At present, the disruptive and possibly negative consequences of digital transformations are on the minds of scholars, policymakers and the wider public (Foer, 2017; Taplin, 2017). To grapple with these, we need to consider what it means that digital technologies – often developed and controlled by a few giant companies – are becoming the backbones of commercial, political, cultural and other social affairs. We used to think of digital technologies as simple tools, or as spaces that we could enter and leave again. Earlier accounts focused on how individuals, organizations and societies made use of digital technology and sought to capture what the implications of these uses were (Zuboff, 1988; Castells, 1996). How will work routines be altered as a result of new, automated production techniques? Will personal computers change the way we learn and think? And what happens to power relations when we start to rely on information and communication technologies that may challenge existing strategies, hierarchies and authorities? Such questions were important and relevant to ask when hardware, software and computer networks emerged as new tools to be taken up or rejected. However, digital technologies are no longer simply tools that we pick up to do a particular task or use to find a quicker way of working. They can no longer be put down, because they have merged with social life and become societal backbones, rather than tools at the periphery of what we do and are. We no longer go online – i.e. enter a new space – because we are already and almost always connected. These infrastructural developments have consequences for how we think about digital technologies and their relation to social life. It no longer makes sense to talk about cyberspace as an independent, separate sphere or to distinguish between online and offline activities. We are, as DeNardis and Musiani (2016: 19) put it, entering “an era of global governance by Internet infrastructure,” and we need to consider what this entails and where it takes us, as individuals, organizations and societies.

The Powers of Tech Companies

At present, digital spaces are dominated by a small handful of tech companies, and we need to understand these commercial and technical forces. We can think of tech companies as powerful in a number of ways – as financially, technologically, politically and culturally potent. Certainly, tech giants exercise these forms of power. They are now among the most profitable companies in the world. Measured by market value, Apple, Google, Microsoft, Amazon and Facebook have surpassed the giants of the past, such as banks and energy companies (Economist, 2016). By leading technological innovation, these companies control large chunks of the internet and seek to build monopolies by crushing or acquiring competitors. As one of the key figures in Silicon Valley puts it, “competition is for losers” (Thiel, 2014), and tech companies pursue market dominance in aggressive ways. Tech companies also take on roles in politics and regulation, either by invitation as experts or innovators, or through their extensive lobbying efforts (Flyverbom, 2018). Furthermore, we are increasingly aware that tech companies shape cultural production through the tools and services they offer. For instance, digital platforms, and in particular Facebook, have become the primary gateways to news and other kinds of cultural products. This creates increased pressure on all sorts of content production, and the foundations of, for instance, newspapers seem to be eroding: people no longer need a subscription because a lot of content is circulated via digital platforms. Advertising revenue also ends up elsewhere, because the same digital platforms offer more agile and targeted ways of reaching customers and more elaborate ways of documenting the effectiveness and reach of their services. Without these financial pillars, quality journalism and news production are under pressure, and a lot of newspapers and similar companies are searching for new business models and ways of producing, distributing and extracting value from their work. Digital platforms also increasingly take on the role of archives and editors of social life. When Google publishes its yearly list of the words we have searched for – aptly named Google Zeitgeist and later renamed Google Trends – it reflects what we focus on, value or want to know, as individuals and societies. It may come as no surprise that in 2016, Pokémon Go, iPhone 7 and Donald Trump topped the list of search terms. Or that in 2017, people were highly interested in iPhone X, Hurricane Irma and Harvey Weinstein.

These many and entangled forms of power are important, but not exhaustive of how tech companies shape social affairs. What media critics, such as Walter Lippmann, said about newspapers in the 1920s also goes for digital platforms: they play an important part in controlling how we view the world. Because they focus on what people like and share most, popular phenomena like sensational news stories come to the fore, while more complex ones move into the background. The content policies and values of digital platforms set limits on what we see in the first place. Facebook’s automatic deletion of Nick Ut’s iconic photo of a naked Vietnamese girl fleeing the US napalm bombings is one example. The photo was uploaded first as part of a series of images that changed our perception of wars, and later by a Norwegian newspaper reporting on the story. In both cases, it was automatically identified as nudity, which clashes with the guidelines and policies of the platform, and taken down. The ensuing critique of Facebook’s role as the “world’s most powerful editor” censoring an important historical image highlighted how such companies increasingly shape public domains.

As they become our entry points for an increasing number of daily activities, digital platforms become intimately involved in the editing and ordering of social life. Without being very explicit about it, internet companies seek to become our gateways into whatever slice of social life they focus on, whether it is search, social relations, images, books or movies. From a business perspective, the ambition is obviously to become dominant in areas where they can gain access to more data and insights about people using their platform, and thus make it more difficult for competitors to gain a foothold. But the consequences of these attempts at carving out chunks of social life by offering infrastructures and services cut deeper. Facebook, for instance, is not interested in news as such, but in news as content that people are eager to click on and share. While the company has insisted that it is a technological utility, and not a media company, developments such as the deletion of the picture of the Vietnamese girl and the widespread circulation of “fake news” have increased the pressure for more reflections on the roles and responsibilities of digital platforms.

The power of tech companies extends beyond finances, technologies and politics. To grasp their significance, we need to include a focus on how they have access to information that was previously invisible or inaccessible, because it was controlled by corporations or individuals, and how they use such information to shape what we see and give attention to. There is a growing awareness that internet companies and technological innovation are mainly US phenomena. Also, questions about job creation, economic growth and taxes come up in many discussions about digital transformations. Internet companies and social media platforms may not be very labor-intensive industries, but the concerns about the effects of Europe’s poor performance in the digital domain are widespread. When internet giants operate outside the USA, Europe gets very little tax revenue and a small number of data centers with a minimum of local job openings. Related discussions of digital transformations focus on US cultural dominance. Because of the popularity and size of these companies, they can turn US values and standards into global ones. This often happens simply by requiring users to accept an overwhelming list of terms of service or through the adherence to community standards defining what can and cannot be shared or done on their platform (Gillespie, 2018b). On most social media sites, some forms of violence are acceptable, but no naked breasts, and free speech is balanced against concerns about discrimination and hatred. Content and users that violate these standards simply disappear from the site, either through manual or automated forms of content moderation. In itself, this is not at all surprising or controversial: internet companies, like all other companies, are free to make these decisions about what they want and do not want to accommodate. However, these digital platforms increasingly function as backbones for a wide range of human activities, such as building social relations, finding information or shaping cultural formations, and we need to consider their roles and responsibilities.

Digital technologies and tech companies do not simply transmit information, but also organize, order and transform societies. To appreciate the ordering capacities (Flyverbom, 2011) of digital transformations, we need to consider how technologies and data afford and constrain possibilities for action, but also produce particular ways of seeing, knowing and governing. This is why we need to focus on the management of visibilities. This extends and differs from the better-known arguments that technologies have societal effects (Introna, 2007), like the creation of new forms of community, or that they can be used strategically to disrupt established approaches to politics. In their role as ubiquitous infrastructures, digital technologies are foundational mechanisms or operating systems (Peters, 2015) that filter and shape what we see and do not see, what we consider important and what we seek to control. The word prism in the title of this book alludes to this idea. Digital technologies are our eyes and gateways to the world.

Gillespie’s (2014) work on algorithms addresses one aspect of this by pointing to the many ways in which automated decisions about categorization and sorting have relevance for social life. Like editors of newspapers, algorithms decide what is placed front and center and what is left out of sight. Only the kinds of information that are algorithm-ready get picked up, and we get the most popular points of view served up first. What comes out on top – in social relations, in culture and many other sorts of valuations – is increasingly a result of datafied processes and algorithmic operations. The starting point for this book is that digital technologies are fundamental to the production and circulation of data, information and knowledge, and that they guide our attention in particular directions and facilitate governance in significant ways. What I set out to show is that digitalization and datafication afford visibility management in new ways and to new degrees, and that these developments should be at the center when we grapple with the societal consequences of digital technologies and big data. But we need to start elsewhere and consider how different kinds of technological developments have been understood and conceptualized.

I summarize these broad discussions about digital transformations both to highlight their importance and to hint at their limitations. Big data, algorithms and artificial intelligence are important topics. It is likewise important that we discuss the size, monopoly ambitions and market shares of big internet companies. But this focus also makes us overlook what I consider to be the most fundamental importance of digital transformations, namely that they make us see and know things in new ways. As digital platforms move closer and closer to the core of social and cultural life, we should also be asking questions about how digital transformations shape how we see, know and govern social life. These questions are about the relationship between digital transformations and what I term the management of visibilities. Compared to internet giants’ size, financial advantages and number of users, these concerns are much more fuzzy and subtle. But they are central if we want to grasp the shape and consequences of contemporary digital transformations.

The Transparency Formula

One of the most dominant dreams about digital transformations is that they will give us access to perfect and total information – full transparency. Such hopes about digital technologies as facilitators and drivers of transparency shape contemporary life in all sorts of ways. States tell us they need extensive surveillance programs to prevent terrorist attacks, organizations promise to open their books and show us their insides, and fitness trackers offer us new ways of understanding our bodies and health conditions.

Accounts of this relationship between digital transformations and transparency take a number of distinct shapes: to some, digital technologies foster hopes about the positive effects of transparency for societal and organizational conduct. Consider, for instance, Sifry’s (2011: 189) diagnosis: “More information, plus the Internet’s power to spread it beyond centralized control, is our best defense against opacity and the bad behavior it can enable.” These developments promise to put an end to secrecy and centralized forms of power, because “we are living in an age of unprecedented transparency. Thanks to the revolution in information technology, the spread of democratic institutions, and the rise of global media, keeping secrets has become harder than ever before. These trends have distributed power away from centralized governments and placed it in the hands of organizations, multinational corporations, and international regimes, among others” (Finel and Lord, 2002: 2). Technological developments mean that everything can be known, seen, tracked, profiled and used against us. This means that what “happens in Vegas, stays on YouTube” (Qualman, 2014), and it is tempting to think that we are entering the age of total information and perfect clarity.

The promises of digitally driven transparency go beyond calls for more information or democratization. The desire for transparency is a hallmark of our times, harking back to Enlightenment ideals. Corporate scandals, such as Enron’s collapse, the Volkswagen emissions scam, Edward Snowden’s leaks about government surveillance and the extensive tax avoidance systems revealed in the Panama Papers all point to the value of shining a light on organizational affairs. As citizens, consumers and publics, we have a “right to know,” and this demand is increasingly institutionalized in cultural and political life (Hood and Heald, 2006; Schudson, 2015). The focus on insight and oversight takes the shape of Freedom of Information Acts, extensive reporting and labeling demands in industries and a wide range of other attempts to make information accessible and useful. Digital transformations are intimately tied to these hopes about more and better information as a source of human progress.

As a recipe for progress, the transparency formula goes something like this: If more information is shared, we can see things as they really are, and as a result we will make smarter decisions and behave better. With digital technologies, big data and algorithmic intelligence, we will be able to see, know and govern in better ways. No more hiding, no more bias and no more uncertainty. And as a result of this clarity and insight, people and organizations will behave better.

This equation between transparency and control was expressed most famously by the US judge Louis Brandeis (1913) when he stated that “sunlight is the best disinfectant, and electric light the most efficient policeman.” Along these lines, transparency takes the shape of a formula for the governance of individuals, organizations and societies (Hood and Heald, 2006; Fenster, 2015), and a solution to all sorts of societal problems and challenges. However, to others, these same processes raise fears about surveillance and disciplinary control. In particular, spectacular revelations of government-led surveillance schemes have propelled these discussions into the news, political debates and the public domain. Surveillance has become such a central part of social life that terms like “surveillance societies” (Lyon, 2006; Marx, 2016) and “surveillance capitalism” (Zuboff, 2019) have entered contemporary academic and popular vocabularies. Edward Snowden’s revelation of surveillance programs set up by the US National Security Agency (NSA) – including one with the code name PRISM, set up to collect communication through major internet companies – also played a significant role in these developments. The growing awareness that internet companies and data brokers collect and reuse all sorts of digital traces has highlighted the importance of questions about surveillance, anonymity and privacy. These concerns involve a different understanding of digitally driven transparency as an extension of long-standing hopes about total information and possibilities for unhindered insights into the lives of citizens.

Possibilities for observation are not limited to the state. In particular, digital transformations have enabled citizens and stakeholders to keep an eye on governments and companies, and these developments have given rise to new conceptualizations: it no longer suffices to speak of surveillance along the lines of panoptic metaphors, where one centrally located actor observes a population of prisoners or others. Increasingly, surveillance also takes the shape of what Mathiesen (1997) calls synoptic observation, where many observe the few, such as when the public oversees those in charge. As a result, the demarcations between those governing and those being governed are less clear-cut (Hansen and Flyverbom, 2015). Despite these developments, surveillance remains a dominant issue in the broader debates about digital transformations and transparency.

In some accounts, the key concern is how increased clarity creates unintended opportunities for strategic disclosure of certain forms of information, secrecy and opacity in some areas. Such perspectives challenge prevalent assumptions about transparency as a direct path to surveillance or accountability and stress “how late modernity creates blind spots, invisibility and therefore to some extent less accountability” (Zyglidopoulos and Fleming, 2011: 703). Transparency, then, becomes a new kind of guise for manipulation or distraction. As Stohl, Stohl and Leonardi (2016) suggest, the contemporary obsession with transparency allows organizations to both disclose masses of information and hide in plain sight. For instance, dumping lots of information or disclosing selected materials can be an effective way to create peace and space for a given organization to pursue activities that it may not wish to make public. Information, paradoxically, can be a useful tool if you want to distract people’s attention.

While drawing on these different insights, this book sets out to show that there is more to digitally driven transparency than the end of secrecy, the growth of surveillance or opportunities for selective and strategic disclosure. Or, put more bluntly: it sets out to show that the promises of the digital transparency formula do not hold up. We will not have access to perfect or total information. We will not be able to see things as they really are. We will not have technologies that give us all the answers. And people, organizations and societies will continue to have secrets and show what they want, despite technological promises about full disclosure.

From Windows to Prisms

As a repertoire for understanding transparency, glass metaphors are useful. When organizations decide or are forced to be transparent, we tend to think of it as a simple matter of opening a window on reality, allowing us to see what happens inside and make our own judgment about what is revealed. Glass metaphors, however, also remind us that transparency efforts, like windows, not only let light in – they are also forms of decoration that can be used to showcase something, or to shield us from the world outside. One of the important tricks used by department stores is to showcase products behind glass at night, so that customers have to return during opening hours to touch and try the product on. Also, windows can be shut or opened as we please, and they offer protection from intruders. Finally, glass does not simply or only let light pass through unchanged, but may refract and reconfigure whatever passes through or is framed. Glass objects with polished surfaces, such as prisms, are particularly prone to refract what enters them and change it into something else. With digital transformations, such refractions become more widespread and worth exploring, hence the title of this book.

Along the same lines, Gabriel (2005: 22) reminds us of the variety of effects that glass may have:

Glass is a hard and fragile medium, providing an invisible barrier that allows the insider to see outside and the outsider to see inside. … [I]t is also a distorting medium that reflects and refracts light, creating illusions and false images. Looking into glass, it is sometimes easy to mistake your own reflection for an image facing from behind. Finally, glass is a framing medium – its mere presence, as in the case of Damien Hirst’s famous artistic displays, defines what lies behind it as something worthy of attention, protection and admiration.

This discussion of the complex effects of glass is a useful analogy when it comes to problematizing laudatory accounts of transparency. Those who trust in transparency measures tend to consider information as a direct form of access to social phenomena and processes. But what if the relationship between information and reality is more complex? And what if more information also allows for more opacity or leads to new forms of misconduct?

At its core, the conceptual shift that this book makes is to challenge the metaphorical understanding of transparency as windows being opened on reality, and to exchange it for an understanding of transparency projects as prisms that create extensive and manifold reconfigurations. The prism metaphor offers a more dynamic and nuanced conception of transparency as a matter of creating refractions and a growing need for the management of visibilities.

To develop this conceptualization of transparency, we also need to probe the underlying views of communication and representation at work in the transparency formula. The link between disclosing information and creating insight is not direct or clear-cut. Windows, open offices and corporate reports only give access to selected or filtered parts of organizations. Leaked files by the millions require extensive sorting and editing before they become useful, and videos, like all other kinds of documentation, require interpretation and contexts to be meaningful. The problem is that most transparency projects are seen as unmediated. We expect them to provide direct, immediate access, and show us things as they really are, but it is not that simple. As Frissen (2017: 15) reminds us, “it is often suggested that transparency requires no form of mediation or representation whatsoever. When directness is total, facts are assumed not only to speak for themselves but actually exist in an objective sense.” Most theories of transparency, particularly in political science and business and public administration, rely on surprisingly simplistic and old-fashioned views of transparency as a matter of transmitting information from a sender to a receiver. Such conduit models of communication overlook a range of complications, including power differences between senders and receivers, how mediating technologies afford and shape communication, and a host of complexities related to meaning-making and interpretation (Christensen and Cheney, 2015; Fenster, 2015). Along similar lines, most work on transparency considers reality and representations to be the same. But representations, such as transparency efforts, are never a simple mirror of reality. Despite social constructivists’ enduring efforts to establish that objects and representations are not the same – ceci n’est pas une pipe, but a picture of a pipe, as Magritte’s famous painting reminds us – transparency initiatives still preserve the ideal that we can see things as they really are. But transparency projects provide representations, rather than presentations (Frissen, 2017: 16), and they constitute people, processes and objects in particular ways, and must be understood as performative (Albu and Flyverbom, 2016). This argument points to the generative capacities (Rubio and Baert, 2012) of transparency as a force in the reconfiguration of social realities and relations. Furthermore, it paves the way for a dynamic conception of transparency as a sociopolitical phenomenon intimately tied to power (Flyverbom, Christensen and Hansen, 2015), and reminds us that we cannot make sense of transparency without giving attention to the devices, mediations and processes of knowledge production involved (Hansen and Flyverbom, 2015). Such questions have recently become a component of what we can think of as critical transparency studies (Birchall, 2015), which this book also seeks to contribute to. Such approaches pave the way for more inquisitive accounts of transparency and a focus on dynamics of visibility management. Rather than simply associate control with surveillance and transparency with empowerment, they help us recognize the complex relations between processes of seeing, knowing and governing as they enter and are refracted in digital prisms and spaces.

The key concept of this book, managed visibilities, suggests that when we disclose something, it is always a managed and mediated process. The resulting insights are refractions and manifold visibilities rather than direct observation, pure insight or full clarity. Reconceptualizing transparency along these lines – as a matter of refracting and managing visibilities – allows us to move beyond transmission views and Enlightenment ideals of full disclosure to understand the complications and mediations at play when individuals, organizations and societies become entangled with transparency efforts. Also, this approach stresses that digital technologies are not transparency machines – they do not simply transmit information or create clarity. Rather, they are social and material forces that contribute to refractions and needs for the management of visibilities. This alternative to predominant conceptualizations of digital transparency offers a novel vocabulary that may help us make sense of what happens when transparency becomes an individual, organizational and societal concern. The conceptualization of visibility management brings into play a range of fundamental human activities having to do with seeing, knowing and governing, and brings together issues normally pursued under disconnected headings such as surveillance, secrecy, openness, transparency and leaks. At the same time, this approach speaks to a more extensive challenge, namely to explore and understand what Walters (2012: 52) terms the “new territories of power” associated with “the entanglement of the digital, the informational and the governmental.” By focusing on the management of visibilities, this book seeks to rethink and explore the workings of digital transformations and the effects of transparency efforts in the lives of individuals, organizations and societies.

Visibilities and Power

There is an intimate relationship between what you see, what you know and what you control. Just think of how the invention of the microscope paved the way for modern medicine and the treatment of diseases. It was not until we were able to observe viruses and bacteria that we could understand these phenomena and start to develop ways of acting on them. Or consider how the emergence of maps made it possible to see, know and conquer new parts of the world. Like earlier inventions, digital transformations fundamentally alter how we make things visible, knowable and possible to control. We can think of many such examples of intersections between material objects, possibilities for insight and a sense of control. Buildings exude invitation and candidness through glass facades and open layouts that signal the end of hierarchy and new possibilities for engagement and oversight. Companies craft extensive reports and disclose information to show that they are accessible and accountable. When WikiLeaks published a flood of classified documents on its website, the goal was to give us direct access to the secrets and hidden workings of governments. And when cities are plastered with surveillance cameras, it is because they promise to show us what happens in the street and make crime prevention and control more effective. In such accounts, it seems increasingly obvious that digital transformations make it possible to see more and see more clearly. The excitement about these possibilities for transparency mainly revolves around questions about quality and quantity – that we have more information, better data, and thus perfect clarity. Similarly, when critical voices speak out against secrecy, it is often with a call for more and better information. Dominant approaches also have a primary focus on observation as a one-way, asymmetric process. That is, who watches whom, and what is available to the observer, but not the observed? Such understandings of transparency shape both public and academic concerns about managers watching their employees, principals observing agents or a state carrying out surveillance of its citizens. The issue is mainly to show the asymmetrical and one-directional relations at work in such situations. This focus is an important starting point, but ignores a number of complications at work when information gets disclosed. The situation is not simply that we are under constant surveillance or controlled by an invisible watchdog as depicted in Foucault’s famous account of the prison layout, the Panopticon, where inmates never saw their guards, but still acted as if they were monitored all the time. What we have are much more dynamic and complex flows of digital information that increasingly make up our reality.

Many discussions about digital transformations and transparency ignore a wealth of social and technological dynamics, and the concept of managed visibilities seeks to highlight these dynamics. It suggests that the intricate processes involved in producing, circulating, selecting and making sense of information should be our main concern.

To sum up: some tell us that we live in an age of unprecedented transparency and openness where the power of the internet and the wealth of digital data will eliminate secrecy and decentralize power (Finel and Lord, 2002). Others warn us that these “new weapons of mass detection” (Zuboff, 2014) or “weapons of math destruction” (O’Neil, 2016) not only allow for blanket forms of surveillance, but also for new forms of inequality and injustice. However, we must think of such sweeping diagnoses of the relation between digital technologies and societal affairs in terms of a much more fundamental concern that this book seeks to unfold. That is, how do digital technologies make us see and know in particular ways, and how do these shape the way we govern and order social affairs?

Managing Visibilities

The focus on these relations between technologies, visibilities and ordering frames the investigation of a range of dynamics related to observation, knowledge production, organizational processes and regulatory efforts in the digital domain. The primary ambition is to conceptualize digital transformations in terms of visibility management, and use this conceptualization to articulate how dynamics of seeing and knowing made possible by digital technologies shape a variety of human, material, organizational and regulatory arrangements. To understand these developments, we need analytical vocabularies that help us articulate the workings and implications of digital transformations. Dynamics of visibility are at the center of this investigation and conceptualization. If we want to grasp the role of digital technologies in organizational and political developments, we first need to understand how they produce particular visibilities and permit certain kinds of knowledge production and governance. Digital transformations involve not only technologies that allow for new forms of observation, so that we can see into work processes, organizational activities and so on; the emergence of big data analytics also means that the world can be visualized and accounted for in new ways. Such forms of observation do not simply produce transparency, but much more complex and paradoxical visibility practices and forms of social ordering.

The choice of the term managed visibilities – rather than transparency, clarity, disclosure, openness or related alternatives – is motivated by a wish to foreground mediated, strategic and dynamic attempts to govern through vision and observation. In many accounts, openness is perceived as a characteristic or feature of an entity, such as an organization. In contrast, transparency is a more relational phenomenon, which, for instance, requires an interpreter, somebody who can process information and make sense of what is disclosed (Etzioni, 2010; Hood and Heald, 2006). Conceptualizations of transparency may point to more relational intersections between information and governance. However, as noted above, the focus is often on the quality and quantity of information and how effectively information is transmitted. The conceptualization of visibility management that I propose is more sociological and focused on the paradoxes and dynamics at play. Whereas more narrow understandings consider transparency to be a matter of ensuring accountability through the timely and public disclosure of information (Schneiberg and Bartley, 2008), this book seeks to conceptualize how all visibility practices reconfigure, rather than represent, objects and subjects. Also, it offers a more nuanced view of digital transformations than the one underpinning transparency studies, and problematizes the assumption that more information creates clarity and better conduct.

Dynamics of visibility and invisibility deserve more scholarly attention. These include the dynamics and tensions involved in the production of transparency, surveillance, opacity and secrecy in the digital age, but also complex relations between objects and their representation in data crunches and other digital formats. Engaging insights from organizational communication, political science and communication theory, the book explores the entanglement of digital transformations and the forms of seeing, knowing and governing at play in the management of visibilities. Unlike most accounts that either celebrate transparency and openness or condemn opacity and surveillance, I seek to disentangle what may be understood as an intricate conglomerate or family of forms, norms and functions of managing visibilities in a datafied world.

The Urgency of Visibility Management

Digital transformations fundamentally alter the way we produce, circulate and make sense of information, how our attention is guided, and how we go about the steering of social affairs. As a result, managing visibilities is one of the key challenges of our times. By exploring the workings and implications of the management of visibilities in different realms – the lives of individuals, organizational processes and societal developments – we get a more nuanced and comprehensive grasp of how digital transformations reconfigure the way we see, know and govern.

The internet promised to give us all a voice, to make information easily available and to create a level playing field for processes of democratization and participation. While hopes about digital transformations are still alive and digital spaces continue to deliver such opportunities, we are also increasingly aware of other consequences of digital transformations. The most obvious of these have to do with surveillance, threats to privacy and the multiple ways in which digital technologies facilitate crimes and illicit activities, but there are also less spectacular dynamics worth exploring. Increasingly, a small number of powerful, private companies are installing platforms and infrastructures that perform important social functions. We access information via Google, we build and maintain friendships via Facebook, we assess the value and quality of services and products through online reviews, we buy and sell stuff via Amazon and eBay, and we gain access to cultural products via Netflix and Spotify. As these platforms become the fabric of our lives, organizations and societies, they have significant consequences for how we act, think and order social relations. As more and more parts of social life rely on digital technologies and data, their architecture, the values they embody and the roles they play in social and cultural transformations become more important to consider. What kinds of societies, publics and politics are they part and parcel of? And what kind of power do they exert as backbones for the production of knowledge, our cultural memory and the way we craft social relations? These are questions about the roles and responsibilities of digital platforms, and about the role they play in societal affairs. Also, they require multiple kinds of answers and reflections. What this book suggests is that digital transformations can be understood as forces that shape how we manage visibilities and guide attention. To set up this argument, it challenges the contemporary excitement about transparency as a solution to all sorts of problems. But transparency projects have lots of complications, unintended consequences and paradoxical effects, and to grasp these we need to think and talk differently about transparency. At the same time, digital transformations, including digitalization and datafication, condition the pursuit of transparency in significant ways and fundamentally reconfigure how we see, know and govern the world. In response to these issues, the suggestion is that we need to think differently about transparency: as a matter of managing visibilities. What this concept means and implies, and why it is relevant for studies of transparency across personal, organizational and societal contexts is what this book sets out to articulate.
