The chapter defines the concept of approach, along with its derivation and exemplification. A general definition of approach is proposed and then applied to HCI research in particular. An approach to HCI comprises addressing the topic or problem of human–computer interaction research; performing actions that progress the addressing of that topic or problem; evaluating the success of the actions performed; and cumulating the successes, that is, whether the topic or problem has been addressed or not. The definition is both explicit and sufficiently well specified for the later application of frameworks to the approaches retained here: innovation, art, craft, applied, science and engineering.
The chapter assesses the General Framework for HCI research for completeness against other frameworks for HCI. The General Framework comprises concepts of discipline, general as common, general problem, particular scope, general research, general knowledge and general practices. These concepts are assessed for completeness against ten other individual frameworks, which are in turn assessed for completeness against the General Framework. Overall, the General Framework is considered to be complete, although this depends much on the coherence of individual concepts. This assessment, however, does not constitute a validation of the General Framework. It is nevertheless indicative of its comparability with other HCI frameworks with respect to its completeness. The frameworks are further considered as concerns their dissemination in the HCI research literature. Dissemination is understood to include the framework’s disseminators, as well as the dissemination’s content, media and means. The General Framework is then assessed for its accommodation of the factors that appear to influence the success of framework dissemination.
The chapter presents the science approach to HCI research, including an illustration from the literature, which presents the case for developing new forms of psychology deep theory based on generic systems of interactors. The chapter then presents the specific science framework for HCI research, comprising science as discipline, general problem, particular scope, research, knowledge and practices. The specific science framework is followed by the science design research exemplar, expressed as the science design cycle, the applied design research cycle and the science design research cycle. The lower-level science framework comprises the science application, the science interactive system and the science interactive system performance. Both the exemplar and the lower-level framework are applied to the same illustration of the science approach, taken from the literature.
This research textbook, designed for young Human-Computer Interaction (HCI) researchers beginning their careers, surveys the research models and methods in use today and offers a general framework to bring together the disparate concepts. HCI spans many disciplines and professions, including information science, applied psychology, computer science, informatics, software engineering and social science, making it difficult for newcomers to get a good overview of the field and the available approaches. The book's rigorous 'approach-and-framework' response to the challenge of retaining growth and diversification in HCI research is to build up a general framework from approaches for Innovation, Art, Craft, Applied, Science and Engineering. This general framework is compared with other HCI frameworks and theories for completeness and coherence, all within a historical perspective of dissemination success. Readers can use this as a model to design and assess their own research frameworks and theories against those reported in the literature.
It has been ten years since video game giant Electronic Arts first released The Sims, the best-selling game that allows its players to create a household and then manage every aspect of daily life within it. And since its debut, gamers young and old have found ways to 'mod' The Sims, a practice in which gamers manipulate the computer code of a game and thereby alter it to add new content and scenarios. In Players Unleashed!, the first study of its kind, Tanja Sihvonen provides a fascinating examination of modding, tracing its evolution and detailing its impact on The Sims and the game industry as a whole. Along the way, Sihvonen shares insights into specific modifications and the cultural contexts from which they emerge.
The successful transmediation of books and documents through digitization requires the synergetic partnership of many professional figures, who may sometimes appear to have contrasting goals at heart. On one side are those who look after the physical objects and strive to preserve them for future generations, and on the other those involved in the digitization of the objects, the information that they contain, and the management of the digital data. These complementary activities are generally considered separate, and when the current literature addresses both fields, it does so strictly within technical reports and guidelines, concentrating on procedures and optimal workflow, standards, and technical metadata. In particular, more often than not, conservation is presented as ancillary to digitization, with the role of the conservator restricted to the preparation of items for scanning, with no input into the digital product, leading to misunderstanding and clashes of interests. Surveying a variety of projects and approaches to the challenging conservation-digitization balance and fostering a dialogue amongst practitioners, this book aims to demonstrate that a dialogue between apparently contrasting fields is not only possible, but in fact desirable and fruitful. Only through the synergetic collaboration of all the people involved in the digitization process, conservators included, can cultural digital objects be generated that more fully represent the original objects and their materiality, encouraging and enabling new research and widening the horizons of scholarship.
We live in an algorithmic world. There is currently no area of our lives that has not been touched by computation and its language and tools. Since the early 1940s, when a small group of people led by John von Neumann gathered to turn the vision of a universal computing machine into reality, humankind has been experiencing a sort of permanent revolution in which our understanding of the world and our ways of acting on it are steadily transformed by the steps forward we make in processing information. Such a condition is vividly depicted by Alan Turing in one of the founding documents of the quest for artificial intelligence (AI): “in attempting to construct machines … we are providing mansions for the souls.”1 Computers and algorithms can be seen as the building blocks of a new, ever-expanding building – a cathedral, to use George Dyson’s metaphor2 – in which every human activity is going to be shaped by the digital architecture hosting it.
Algorithms in society are both innocuous and ubiquitous. They seamlessly permeate both our on- and offline lives, quietly distilling the volumes of data each of us now creates. Today, algorithms determine the optimal way to produce and ship goods, the prices we pay for those goods, the money we can borrow, the people who teach our children, and the books and articles we read – reducing each activity to an actuarial risk or score. “If every algorithm suddenly stopped working,” Pedro Domingos hypothesized, “it would be the end of the world as we know it.”1
Public administration in Norway and in many other countries has used computers for more than fifty-five years. This is normal and necessary. Of course, it is possible to imagine many more office buildings where thousands of men and women would do all the detailed processing of individual cases that computers process today, but this alternative is not very realistic: modern taxation systems, national social insurance schemes and the management of many other welfare programs would not be feasible without the use of computers and the algorithmic law that is integrated in the software. Thus, the question is not whether public administration should apply computer technology, but how this should be done. This chapter deals with important how-to questions.
Transparency has been in the crosshairs of recent writing about accountable algorithms. Its critics argue that releasing data can be harmful, and releasing source code won’t be useful.1 They claim individualized explanations of artificial intelligence (AI) decisions don’t empower people, and instead distract from more effective ways of governing.2 While criticizing transparency’s efficacy with one breath, with the next they defang it, claiming corporate secrecy exceptions will prevent useful information from getting out.3
This chapter’s thesis is simple: as a general matter, agreements are a functional and conceptually straightforward way for the law to recognize algorithms. In particular, using agreements to recognize algorithms in the law is better than trying to use the law of agency to do so.1 Casual speech and conceptualism have led to the commonplace notion of “electronic agents,” but the law of agreements is a more functional entry point for algorithms to interact with the law than the concept of vicarious action. Algorithms need not involve any vicarious action, and most of the law of agency translates very poorly to algorithms that lack intent, reasonable understanding, and legal personality in their own right; instead, algorithms cause activity that may have contractual or other agreement-based legal significance. Recognizing the power (and perhaps the necessity) of addressing algorithms by means of the law governing agreements and other legal instruments can free us from formalistic attempts to shoehorn algorithms into a limited set of existing legal categories.
The United States’ transition from an economy built on form contracts to an economy built on algorithmic contracts has been as subtle as it has been thorough. In the form contract economy, many firms used standard order forms to make and receive orders. Firms purchased products and services with lengthy terms of service. Even negotiated agreements between fairly sophisticated businesses involved heavy incorporation of standard form terms selected by lawyers.
Online reputational injury can occur in a number of ways; one way is through the use of algorithms that pervade the Internet. The Internet comprises complex technologies that enable the rapid dissemination of information, providing global reach in a matter of seconds through the click of a button. The Internet provides robust public discourse over a gamut of topics in real time and allows individuals from different parts of the world to interact with one another while preserving some sense of anonymity (that is, if they so choose). Many online communications stem from one piece of content that is regurgitated and redistributed on multiple platforms. For example, consider the social application Twitter. Twitter allows individuals to transmit bite-sized pieces of data among millions of users. Twitter has surprisingly become an outlet for a recent US President, and the fact that the public has direct access to a sitting President in this manner is undoubtedly incredible. Further, the Internet emboldens individuals to act behind the shield of a screen. There is little cost to spreading information, ideas, and gossip online, with seemingly few ramifications. Despite what may be perceived as a few keystrokes with an ephemeral impact, content on the Internet has a tendency toward permanence.
The use of “algorithms” in criminal investigation, adjudication, and punishment is not a new phenomenon. That is, to the extent that “algorithms” are simply sets of rules capable of being executed by a machine, the criminal justice system has long incorporated their use. For example, the US sentencing guidelines are so mechanistic that they were, at least for a time, literally calculated by software. Likewise, the New York Police Department’s erstwhile “stop and frisk” program was a mechanistic means of deciding whom to search and when. And so-called “per se” impaired driving laws have for nearly half a century mechanistically imposed criminal liability based on a machine’s determination that a person’s blood-alcohol level is over a certain threshold, without a jury determination of dangerous impairment.
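To make the notion of "sets of rules capable of being executed by a machine" concrete, here is a minimal sketch (not drawn from the chapter) of the kind of mechanistic "per se" threshold rule described above; the 0.08 limit and function names are illustrative assumptions, not a statement of any jurisdiction's law.

```python
# Minimal sketch of a mechanistic "per se" rule: liability follows
# directly from a measured value crossing a fixed threshold.
# The 0.08 limit is hypothetical and purely illustrative.
PER_SE_BAC_LIMIT = 0.08

def per_se_impaired(measured_bac: float, limit: float = PER_SE_BAC_LIMIT) -> bool:
    """Return True if the measured blood-alcohol level meets or exceeds the limit."""
    return measured_bac >= limit

if __name__ == "__main__":
    print(per_se_impaired(0.05))  # False: below the threshold
    print(per_se_impaired(0.09))  # True: at or above the threshold
```

The point of the sketch is only that the rule requires no judgment of dangerous impairment: once the threshold and the measurement are given, the outcome is fully determined by the comparison.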