The Computer-based Corpsman Training System (CBCTS) was developed by ECS, Inc. for the U.S. Army Research, Development and Engineering Command. Game design elements complement the instructional design elements to produce an award-winning learning game. Notable design features include a well-designed tutorial, opportunities for decision making, time to reflect on and replay a scenario, and implicit and explicit feedback. While game and instructional elements work very well together in CBCTS, this chapter suggests ways to increase instructional guidance and gain learning efficiencies without jeopardizing gameplay; these suggestions will benefit all learning game designers striving to improve their own games. Game designers are cautioned that additional elements may increase design and development resource requirements, and that instructional and gameplay trade-offs have to be considered. Some of these trade-offs are briefly addressed.
Introduction
The Computer-based Corpsman Training System (CBCTS) is a learning game that provides combat corpsmen with realistic training to prepare them to apply their skills in a combat situation. CBCTS was developed by ECS, Inc. for the U.S. Army Research, Development and Engineering Command (RDECOM). The game supports training for Navy combat medics who are assigned to the U.S. Marine Corps. CBCTS is used at the Army Medical Department (AMEDD) Center and School as part of the curriculum that prepares combat medics.
The Internet has altered how people engage with each other in myriad ways, including by offering opportunities for people to act in ways that undermine trust. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the Internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise at a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers, and designers confronting the problem of trust.
As the contributions to the first and last sections of this volume indicate, trust is a problem for those who build Internet services and those who are tasked with policing them. If only they had good models and even better specifications of users, use, and usage, or so they seem to say, they could build systems that would ensure and enhance the privacy, security, and safety of online services. Understandably (but perhaps not wisely), they tend to be impatient with what appears to be overly precious concept-mongering and theoretical hair-splitting by the disciplines to which they look to provide these models and specifications. But an understanding of the provenance and distinctiveness of the range of models on offer might give those who wish to deploy them deeper insight into their domains of application as well as their limitations. Each is shaped by the presuppositions on which it is based and by the conceptual and other choices made in its development. No one model, no single summary of requirements, can serve for all uses.
Awareness of this “conceptual archaeology” is especially important when a model's presuppositions are orthogonal to those that are conventional in the field. In such cases, it is critical to understand both why different starting points are taken and what benefits are felt to follow from them. Difference is rarely an expression of simple contrariness; it usually reflects a deliberate choice made in the hope of bringing to light things that would otherwise be left obscure.
Any glance at the contemporary intellectual landscape makes it clear that trust, society, and computing are often discussed together. It is equally clear that, when this happens, the questions produced often seem, at first glance, straightforward. Yet on closer examination these questions unravel into a quagmire of concerns. What starts out as, say, a question of whether computers can be relied on to do a particular job often turns into something more than doubts about a division of labor. As Douglas Rushkoff argues in his brief and provocative book, Program or Be Programmed (2010), when people rely on computers to do some job, it is not like Miss Daisy trusting her chauffeur to take her car to the right destination. It is not what computers are told to do that is the issue; at issue is what computers tell us, the humans, as they get on with whatever task is at hand. And this in turn implies things about who and what we are because of these dialogues we have with computers. I use the word dialogues purposefully here because it is suggestive of how interaction between person and machine somehow alters the sense a person has of themselves and of the machine they are interacting with, and how this in turn alters the relationship the two have – that is, the machine and the “user.” According to Rushkoff, it is not possible to know what the purpose of an interaction between a person and a machine might be; it is certainly not as simple as a question of a command and its response. In his driving metaphor, what comes into doubt is rarely whether the computer has correctly heard and identified the destination the human wants – the place to which they have instructed the machine to navigate. The interactions we have with computers lead us to doubt why a particular destination is chosen, and this in turn leads to doubts about whether such choices should be in the hands of the human or the computer.
I approach the topic of trust from two converging directions. The first derives from work primarily in the domains of Information and Computing Ethics (ICE) – work that also includes perspectives from phenomenology and a range of applied ethical theories. The second draws from media and communication studies more broadly, beginning with the Medium Theory or Media Ecology traditions affiliated with the likes of Marshall McLuhan, Harold Innis, Elizabeth Eisenstein, and Walter Ong. In these domains, attention to communication in online environments, including distinctively virtual environments, began within what was first demarcated as the study of Computer-Mediated Communication (CMC). The rise of the Internet and then the World Wide Web in the early 1990s inspired new kinds of research within CMC; by 2000 or so, it became possible to speak of Internet Studies (IS) as a distinctive field in its own right, as indexed, for example, by the founding of the Oxford Internet Institute.
Drawing on both of these sources to explore a range of issues at their intersections – most certainly including trust – is useful, first of all, because the more empirically oriented research constituting CMC and IS grounds the often more theoretical approaches of ICE in the fine-grained details of praxis. At the same time, the more theoretical approaches of ICE, as we will see, complement the primarily social-scientific theories and methodologies that predominate in CMC and IS. By taking both together, I hope to provide an account of trust in online environments that is at once strongly rooted in empirical findings and illuminated by a very wide range of theoretical perspectives. This approach requires at least one important caveat, to which I return shortly.
The topics covered in this collection have been wide and varied. Some have been investigated in depth, others merely identified. As we move now to summarize what has been covered, it is important to remember that the goal has been to give the reader a sensibility for the various perspectives and points of view that can be brought to bear on the combined subject of trust, computing, and society. The book commenced with a call to arms: Chapter 2, by David Clark. Part of the sensibility in question demands that one be alert, he argues, alert to the way issues of trust in society come in by the back door provided by technology, and by the Internet in particular. Other chapters made it clear that other capacities are required, too. A further sensibility is to be open to the diverse treatments that different perspectives (or disciplines) offer, and to have the acuity not to allow those treatments to muddle each other. One has to be sensitive, too, to how the concept of “trust” is essentially a vernacular term, used by ordinary people in everyday ways. Analysis of it must focus on that use and not be distracted by hypothesized uses, ones constructed through, say, theory or experiment – although these treatments might afford more nuanced understandings of the vernacular. Part of these vernacular practices entails inducing fear and worry. Such fear and worry can undermine some of the other aspects of the sensibility already mentioned, such as awareness of differences in points of view and, beyond this, simply the clarity and calmness of thought that might lead one to resist the “crowding out” of other explanations that use of the word trust sometimes produces.
I came to the consideration of trust not because it is currently a public issue, nor because it seems to be in vogue in sociology. Instead, I was dismayed at some of the recent sociological treatments of the subject and, in particular, at these studies’ cursory treatment of what many would consider the foundational modern study of trust in social interaction: that of the late Harold Garfinkel, who in 1963 published a paper titled “A Conception of, and Experiments with, ‘Trust’ as a Condition of Stable, Concerted Actions” (Garfinkel 1963b).
During the time that he was Professor of Sociology at the University of California at Los Angeles, Garfinkel devised a radically innovative approach that he termed “ethnomethodology,” meaning “members’ methods.” By this, Garfinkel intended not a technical or professional research method per se but, instead, a topic for study: namely, the study of society members’ interactionally deployed cultural methods of making sense of the everyday contexts in which they find themselves, and of sharing this sense and incorporating it into their joint projects of action – in a phrase, sense-making-in-action.
Ken Thompson was the 1984 recipient of the Turing Award, the equivalent of the Nobel Prize in computer science. His recognized contributions include the design, while at Bell Labs, of the UNIX operating system, which later inspired the free-software flagship Linux and, today, Android, which has the largest share of the smartphone market. Yet in his acceptance address, Thompson chose to talk not about operating system design but about computer security, and specifically about “trust.” His thoughts were later collected in an essay entitled “Reflections on Trusting Trust” (Thompson, 1984), which has become a classic in the computer security literature. In this work, Thompson examines to what extent one can trust – in an intuitive sense but, as we will see, also a technical sense – a computer system. He recounts how computer systems are built not only on layers of hardware but also on layers of software: computer code that provides instructions for how they should perform high-level operations, such as writing a document or loading a Web page. How could anyone foresee what these devices could do under different circumstances? First, they would have to examine what the hardware would do under all conditions. Although this is expensive, it is possible, as hardware is composed of fixed arrangements of wires, silicon, and plastic. Naively, one could assume that software could be examined in a similar fashion: by reading its source code to understand how it would behave.
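Thompson's essay shows why that assumption fails, and the point can be made concrete with a minimal sketch. The toy Python fragment below is purely illustrative: the names (compile_source, check_password) and the master password are invented for this example, and Thompson's original works in C on the real compiler and login program, whereas this "compiler" merely copies source text. It does, however, have the same two-stage shape as the attack he describes: a compiler that recognizes the login program and silently plants a backdoor, and that recognizes its own source so the insertion survives recompilation from clean code.

    # Toy, runnable sketch of the two-stage "trusting trust" attack.
    # Illustrative only: a real compiler emits machine code; this one
    # just copies source, which is enough to show the shape of the hack.

    LOGIN_HACK = 'if password == "master-key": return True  # planted, absent from source\n'

    def compile_source(source: str) -> str:
        """'Compile' (here: copy) source, silently inserting two hacks."""
        out = []
        for line in source.splitlines(keepends=True):
            out.append(line)
            # Stage 1: recognize the login program, plant a master password.
            if line.startswith("def check_password"):
                out.append("    " + LOGIN_HACK)
            # Stage 2: recognize the compiler itself and re-plant this very
            # insertion logic, so the hack survives recompiling the compiler
            # from perfectly clean source.
            if line.startswith("def compile_source"):
                out.append("    # (self-reproducing copy of stages 1 and 2 goes here)\n")
        return "".join(out)

    login_source = (
        "def check_password(user, password):\n"
        "    return stored_hash(user) == hash_of(password)\n"
    )
    print(compile_source(login_source))  # backdoor appears only in the output

Reading login's source reveals nothing unusual, and once the second stage is in place, reading the compiler's source reveals nothing either; hence Thompson's famous moral that you cannot trust code you did not totally create yourself.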
The main thesis of this chapter is that trust in the context of the Internet, and elsewhere too, is usually best understood as a continuation of the normal run of life, not as an exception to it. We need to look at those usually unchallenged background activities, contacts, and commitments that, at some point, lead up to situations in which questions about trust are asked. This is not to say that we constantly trust each other; it means, rather, that the question only has an application in particular situations, and that the meaning it has must be understood in the context of the situation. As a further methodological remark, continuous with the previous point, I suggest that what trust “is” is best seen in situations in which “trust” is raised as an issue. To understand trust, we should not be looking for a mental state, attitude, or behavioral pattern “out there” for which the word stands. We should focus on the various kinds of worry that invite talk about trust; on what prompts us to apply the vocabulary of trust in certain problematic situations; and on how applications of that vocabulary contribute to solving, creating, or transforming those situations.
This also invites the question of the extent to which particular worries about trust are specific to the use of the Internet, as opposed to being continuous with what happens in other walks of social life. There exists a misleading picture that represents the Internet as a world unto itself, an incorporeal realm confronting us with a specific set of philosophical and ethical conundrums. This looks to me like a romanticization of the Internet. It is more fruitful to think of our various uses of the Internet as so many extensions of our off-line practices.
If you’ve used a PC, a mobile phone, or some other digital device, you’ve experienced the output of my discipline of interaction design, the field in which I’ve worked for the past seventeen years. The goal of an interaction designer is to design digital tools that help people achieve a task in their life, be it sending an email to a colleague, making a phone call to a friend, or creating a Web page for everyone to see. Interaction designers make choices about what a person sees on screen, when they see it, and how it reacts to their mouse clicks or finger presses. We design experiences that are intended to lead a person successfully through the stages of their task, hopefully in a way that feels effortless and even delightful or fun.
Design is a processual discipline. Designers start with an often vague set of needs and technologies; their goal is a gradual prioritization and synthesis of these, narrowing down to something specific and buildable. This process, which is iterative and based on constantly testing ideas, is primarily visual. We use tools like sketching, modeling, and prototyping to test ideas and to choose those we think are most successful.