The internet has altered how people engage with each other in myriad ways, including offering opportunities for people to act distrustfully. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise in a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers and designers confronting the problem of trust.
As the contributions to the first and last sections of this volume indicate, trust is a problem for those who build Internet services and those who are tasked with policing them. If only they had good models and even better specifications of users, use, and usage, or so they seem to say, they could build systems that would ensure and enhance the privacy, security, and safety of online services. Understandably (but perhaps not wisely), they tend to be impatient with what appears to be overly precious concept mongering and theoretical hairsplitting by those disciplines to which they look to provide these models and specifications. But perhaps an understanding of the provenance and distinctiveness of the range of models being offered might give those who wish to deploy them deeper insight into their domains of application as well as their limitations. Each is shaped by the presuppositions on which it is based and the conceptual and other choices made in its development. No one model, no individual summary of requirements can serve for all uses.
Awareness of this “conceptual archaeology” is especially important when the model's presuppositions are orthogonal to those that are conventional in the field. In such cases, it is critical to understand both why different starting points are taken and the benefits that are felt to be derived thereby. Difference is rarely an expression of simple contrariness but usually reflects deliberate choice made in the hope that things might be brought to light which otherwise are left obscure.
Any glance at the contemporary intellectual landscape would make it clear that trust, society, and computing are often discussed together. It would also make clear that when this happens, the questions that are produced often seem, at first glance, straightforward. Yet, on closer examination, these questions unravel into a quagmire of concerns. What starts out as, say, a question of whether computers can be relied on to do a particular job often turns into something more than doubts about a division of labor. As Douglas Rushkoff argues in his brief and provocative book, Program or Be Programmed (2010), when people rely on computers to do some job, it is not like Miss Daisy trusting her chauffeur to take her car to the right destination. But it is not what computers are told to do that is the issue. At issue is what computers tell us, the humans, as they get on with whatever task is at hand. And this in turn implies things about who and what we are because of these dialogues we have with computers. I use the word dialogues purposefully here because it is suggestive of how interaction between person and machine somehow alters the sense a person has of themselves and of the machine they are interacting with, and how this in turn alters the relationship the two have – that is, the machine and the “user.” According to Rushkoff, it is not possible to know what the purpose of an interaction between a person and a machine might be; it is certainly not as simple as a question of a command and its response. In his metaphor about driving, what come into doubt are rarely questions about whether the computer has correctly heard and identified the destination the human wants – the place to which they have instructed the machine to navigate them. The interaction we have with computers leads us to doubt why a particular destination is chosen. This in turn leads to doubts about whether such choices should be in the hands of the human or the computer.
I approach the topic of trust from two converging directions. The first derives from work primarily in the domains of Information and Computing Ethics (ICE) – work that also includes perspectives from phenomenology and a range of applied ethical theories. The second draws from media and communication studies most broadly, beginning with Medium Theory or Media Ecology traditions affiliated with the likes of Marshall McLuhan, Harold Innis, Elizabeth Eisenstein, and Walter Ong. In these domains, attention to communication in online environments, including distinctively virtual environments, began within what was first demarcated as studies of Computer-Mediated Communication (CMC). The rise of the Internet and then the World Wide Web in the early 1990s inspired new kinds of research within CMC; by 2000 or so, it became possible to speak of Internet Studies (IS) as a distinctive field in its own right, as indexed, for example, by the founding of the Oxford Internet Institute.
Drawing on both of these sources to explore a range of issues at their intersections – most certainly including trust – is useful, first of all, because the more empirically oriented research constituting CMC and IS grounds the often more theoretical approaches of ICE in the fine-grained details of praxis. At the same time, the more theoretical approaches of ICE, as we will see, help us complement the primarily social scientific theories and methodologies that predominate in CMC and IS. By taking both together, I hope to provide an account of trust in online environments that is at once strongly rooted in empirical findings and illuminated by a very wide range of theoretical perspectives. This approach requires at least one important caveat, to which I return shortly.
The topics covered in this collection have been wide and varied. Some have been investigated in depth, others merely identified. As we move now to summarize what has been covered, it is important to remember that the goal has been to provide the reader with a sensibility for the various perspectives and points of view that can be brought to bear on the combined subject of trust, computing, and society. The book commenced with a call to arms: Chapter 2 by David Clark. Part of the sensibility in question demands one be alert, he argues, alert to the way issues of trust in society come in by the back door provided by technology and the Internet in particular. Other chapters made it clear that other capacities are required, too. A further sensibility is to be open to the diverse treatments that different perspectives (or disciplines) offer and to have the acuity not to allow those treatments to muddle each other. One has to be sensitive, too, to how the concept of “trust” is essentially vernacular, used by ordinary people in everyday ways. Analysis of it must focus on that use and not be distracted by hypothesized uses, ones constructed through, say, theory or experiment – although these treatments might afford more nuanced understandings of the vernacular. Part of these vernacular practices entails inducing fear and worry. Such fear and worry can undermine some of the other aspects of the sensibility already mentioned, such as awareness of differences in points of view and, beyond this, simply the clarity and calmness of thought that might lead one to resist the “crowding out” of other explanations that use of the word trust sometimes produces.
I came to the consideration of trust not because it is currently a public issue, nor because it seems to be in vogue in sociology. Instead, I was dismayed at some of the recent sociological treatments of the subject and, in particular, at these studies’ cursory treatment of what many would consider to be the leading foundational modern study of trust in social interaction: that of the late Harold Garfinkel, who, in 1963, published a paper titled “A Conception of, and Experiments with, ‘Trust’ as a Condition of Stable, Concerted Actions” (Garfinkel 1963b).
During the time that he was Professor of Sociology at the University of California at Los Angeles, Garfinkel devised a radically innovative approach that he termed “ethnomethodology,” meaning “members’ methods.” By this, then, Garfinkel intended not a technical or professional research method per se but, instead, a topic for study: namely, the study of society members’ interactionally deployed cultural methods of making sense of the everyday contexts in which they find themselves, methods also of sharing this sense and incorporating it into their joint projects of action – in a phrase, sense-making-in-action.