In this paper, a consensus-based control strategy is presented for driving a group of differential-wheeled robots into a formation. The formation shape and the avoidance of collisions between robots are obtained by exploiting the properties of weighted graphs. Since mobile robots are expected to move in unknown environments, the presented approach to multi-robot coordination has been extended to include obstacle avoidance. The effectiveness of the proposed control strategy is demonstrated by means of analytical proofs. Moreover, results of simulations and experiments on real robots are provided for validation purposes.
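As an illustration of the consensus idea behind such a strategy, the sketch below runs one weighted-consensus update with a simple inter-robot repulsion term. It assumes single-integrator agents rather than the differential-drive kinematics of the paper, and the adjacency weights, gains, and safety radius are illustrative choices, not the paper's.

```python
import numpy as np

# Minimal weighted-consensus formation step (single-integrator agents).
# desired_offsets[i, j] is the desired displacement of robot j relative to robot i.
def formation_step(positions, adjacency, desired_offsets, dt=0.05,
                   safety_radius=0.3, repulsion_gain=0.5):
    """One consensus update toward the formation, plus collision avoidance."""
    n = len(positions)
    new_positions = positions.copy()
    for i in range(n):
        u = np.zeros(2)
        for j in range(n):
            if i == j or adjacency[i, j] == 0:
                continue
            # Consensus term: reduce the formation error with respect to neighbor j.
            u += adjacency[i, j] * ((positions[j] - positions[i]) - desired_offsets[i, j])
            # Repulsion term: push away from neighbors closer than the safety radius.
            dist = np.linalg.norm(positions[j] - positions[i])
            if dist < safety_radius:
                u -= repulsion_gain * (safety_radius - dist) * (positions[j] - positions[i]) / (dist + 1e-9)
        new_positions[i] = positions[i] + dt * u
    return new_positions
```

In the paper the graph weights themselves are designed so that a single consensus update yields both the formation shape and collision avoidance; here a fixed adjacency matrix and a separate repulsion term stand in for that construction.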
Ken Thompson was the 1984 recipient of the Turing Award, the equivalent of the Nobel Prize in Computer Science. His recognized contributions include the design, while at Bell Labs, of the UNIX operating system, which later led to the free-software flagship Linux and, today, Android, which has the largest share of the smartphone market. Yet, in his acceptance address, Thompson did not choose to talk about operating system design, but instead about computer security, and specifically “trust.” His thoughts were later collected in an essay entitled “Reflections on Trusting Trust” (Thompson, 1984), which has become a classic in the computer security literature. In this work, Thompson examines to what extent one can trust – in an intuitive sense, but as we will see, also a technical sense – a computer system. He recounts how computer systems are built not only of layers of hardware, but also of layers of software – computer code that provides instructions for how they should perform high-level operations, such as writing a document or loading a Web page. How could anyone foresee what these devices could do under different circumstances? First, they would have to examine what the hardware would do under all conditions. Although this is expensive, it is possible, as hardware is composed of fixed arrangements of wires, silicon, and plastic. Naively, one could assume that software could also be examined in a similar fashion: by reading the source code of the software to understand how it would behave.
The main thesis of this chapter is: trust in the context of the Internet, and elsewhere too, is usually best understood as a continuation of the normal run of life, not as an exception to it. We need to look at those usually unchallenged background activities, contacts, and commitments that, at some point, lead up to situations in which questions about trust are asked. This is not to say that we constantly trust each other, but it means that the question only has an application in particular situations, and that the meaning it has must be understood in the context of the situation. As a further, methodological remark, continuous with the previous point, I suggest that what trust “is” is best seen in situations in which “trust” is raised as an issue. To understand trust, we should not be looking for a mental state, attitude, or behavioral pattern “out there” for which the word stands. We should focus on the various kinds of worry that invite talk about trust; on what prompts us to apply the vocabulary of trust in certain problematic situations; and on how applications of that vocabulary contribute to solving, creating, or transforming those situations.
This also invites the question of the extent to which particular worries about trust are specific to the use of the Internet, as opposed to being continuous with what happens in other walks of social life. There exists a misleading picture that represents the Internet as a world unto itself, an incorporeal realm confronting us with a specific set of philosophical and ethical conundrums. This looks to me like a romanticization of the Internet. It is more fruitful to think of our various uses of the Internet as so many extensions of our off-line practices.
This paper presents a novel approach to physical human-robot interaction (pHRI) tasks that involve two-arm coordination, in which the task is specified by the relative pose between the human hand and the robot hand. We develop a unified kinematic model that considers the human-robot system from a holistic point of view, and we also propose a kinematic control strategy for pHRI that comprises different levels of shared autonomy. Since the kinematic model covers the complete human-robot interaction system and the kinematic control law is closed loop at the interaction level, the kinematic constraints of the task are enforced during its execution. Experiments are performed in order to validate the proposed approach, including a particular case where the robot controls the human arm by means of functional electrical stimulation (FES), which may provide useful solutions for the interaction between assistant robots and impaired individuals (e.g., quadriplegics and hemiplegics).
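The sketch below illustrates the general idea of closing a kinematic loop on a relative-pose task, here reduced to positions only and to the robot side of the interaction. The names `fk_robot` and `jac_robot` and the gains are assumed placeholders, not the paper's unified human-robot model or its shared-autonomy levels.

```python
import numpy as np

# Closed-loop differential kinematics for a relative-position task:
# keep the robot hand at a desired offset from the (measured) human hand.
def clik_step(q, human_hand_pos, desired_rel_pos, fk_robot, jac_robot,
              gain=2.0, damping=0.05, dt=0.01):
    robot_hand_pos = fk_robot(q)                  # assumed forward kinematics
    error = (human_hand_pos + desired_rel_pos) - robot_hand_pos
    J = jac_robot(q)                              # assumed 3 x n positional Jacobian
    # Damped least-squares inverse keeps joint velocities bounded near singularities.
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(3), gain * error)
    return q + dt * dq
```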
If you've used a PC, a mobile phone or some other digital device, you've experienced the output of my discipline of interaction design, the field in which I’ve worked for the past seventeen years. The goal of an interaction designer is to design digital tools that help people achieve a task in their life, be it sending an email to a colleague, making a phone call to a friend, or creating a Web page for everyone to see. Interaction designers make choices about what a person sees on screen, when they see it, and how it reacts to their mouse clicks or finger presses. We design experiences that are intended to lead a person successfully through the stages of their task, hopefully in a way that feels effortless and even delightful or fun.
Design is a process-driven discipline. Designers start with an often vague set of needs and technologies; their goal is a gradual prioritization and synthesis of these and a narrowing down to something specific and buildable. This iterative process, based on constantly testing ideas, is primarily visual. We use tools like sketching, modeling, and prototyping to test these ideas and to choose those we think are most successful.
When the first e-commerce services emerged in the late 1990s, consumer trust in online transactions was identified as a potential major hurdle. Researchers in human-computer interaction (HCI) started to investigate how interface and interaction medium design might make these services appear more trustworthy to users. Jens Riegelsberger (then a doctoral student) and the first author were part of that first cohort (Riegelsberger and Sasse 2001). We soon realized that much of the HCI research was focused on increasing user trust in Web sites through design elements, but did not consider (1) existing substantive knowledge from other disciplines on the role and mechanics of trust, and (2) existing methodological knowledge on how to conduct valid studies on trust formation and its impact on behavior. To address this, we reviewed and integrated existing knowledge to prepare a foundation for HCI research, which was published in two research papers: to address point 2, a prescription for valid HCI methods for studying trust, The Researcher's Dilemma (Riegelsberger et al. 2003a); and to address point 1, a framework for HCI research, The Mechanics of Trust (Riegelsberger et al. 2005).
The key message of the latter paper was the need for HCI researchers to engage with technology developers to create trustworthy systems, rather than focus on influencing trust perceptions at the user interface level. In the worst case, the latter could manipulate users into placing trust in systems that are not trustworthy, which would be socially and ethically irresponsible. The way forward, we argued, was to design systems that encourage trustworthy behavior from all participants, by creating the right economic incentives and reliable trust signaling. In the current chapter, the authors summarize this prescription and reiterate the argument for it, because it is still valid today. We then review progress over the past eight years to consider to what extent the prescription has been implemented. Our conclusion is a sobering one: the security signals offered by service providers are not accurate enough and require too much user effort.
Trust – and its lack – is a hot issue. This is especially true of public discussion of one of the defining features of contemporary life – namely, computers and the varied technologies that are built on them. We want trust but doubt whether it is well-grounded. Nor is it clear how it could be so grounded. Where is rational trust to be found? Call this the search for trust. Meanwhile, computers become a more pervasive part of our lives. Uncertainty and risk increase. The search is urgent.
There is an obvious way to resolve the search for trust. To build trust in a technological world, we need to know what trust is. Philosophers answer questions of the form “what is f?” They do so paradigmatically through conceptual analysis. Therefore, philosophers should analyze trust, thereby answering the question “what is trust?” Such an analysis will explain when trust is grounded and when it is not. It will then be possible to identify how trust can be grounded in the specific context of the new modes of living that computing technologies have created. The response concludes: let's get started.
Several of the prior chapters in this book allude to the work of Harold Garfinkel and his seminal Studies in Ethnomethodology (1967). One of the great lessons that one can take from that book is the idea that society is made up of people who “do” sociological theory or, rather, people who construct and deploy “lay-sociological theorizing” to both interpret and organize the world around them. Their everyday reasoning is a form of sociology, Garfinkel would have us believe. Today, of course, the idea that people theorize in this sense, that they reason sociologically, has suffused itself throughout the discipline of sociology and its cognates. Take Michel De Certeau (1984), for example, or another sociologist of the quotidian, Henri Lefebvre (2004). Both argue that the social world is constructed, “enacted” through the deployment of interpretative skills and agency – through people's capacity to reason in particular ways. And consider other social sciences, such as anthropology. Here Tim Ingold (2011) argues that people construct their places of dwelling through conscious acts of “dialogic engagement”: they attend to, work with, and reflect on the things and persons around them in ways that direct them in new trajectories, lines of action. All of this is a form of reasoning, Ingold claims.
The subtle differences between these various views notwithstanding, that people reason in a way that can be characterized as sociological, and that, as a result, the thing called society has the shape it has, is virtually commonplace in contemporary thinking. The word “theorizing,” however, has been replaced with alternative formulas by these (and other) authors. We have just listed some of the alternative words and phrases used: people enact their reasoning, and they dialogically engage their reasoning as part of how they produce dwellings. These and other formulas stand as proxies for theorizing. One of the motivations for using alternatives is that many commentators, including those just mentioned, would appear to prefer keeping the term “theory” as a label for their own thinking rather than as one applicable to the non-professional arena. To put it directly, this move allows them to valorize what they do while giving lay persons’ actions a more prosaic, less consequential air.
One view of cyberspace is that it is made up of technology: personal computers, the routers that support the Internet, huge data centers, and the like. Another view is that cyberspace is made up of people: people who interact over the Internet; people who run the Internet and the data centers; people who regulate, invest, set standards, and do all the other actions that make up the experience of cyberspace. The latter view is probably the more relevant; technology is only the foundation.
If cyberspace were only technology, we might properly ignore issues of trust. We might ask whether we have confidence that the technology will function as intended, and our everyday experience tells us when that confidence might be misplaced. But to the extent that cyberspace is made up of people, we should ask whether issues of trust are important in the proper functioning of cyberspace. I argue that trust is central in many ways.
Trust, as I use the term, is a relationship between trustor and trustee in which the trustor is willing to assume that the trustee will act in the best interest of the trustor. This does not mean that the trustor can predict exactly what the behavior of the trustee will be, but that the trustee will use judgment and intelligence to restrict the range of actions undertaken. One who is not trustworthy may be malicious or simply inattentive, incompetent, or in an unsuited role: trust is usually accorded with respect to a particular role.
In reality, the Internet, as a networking person would define it, has not changed much since it was commercialized in the 1990s. The main Internet concept is still there, and so are its core technologies and applications; for example, the protocols that are responsible for transferring bits between two computers have been virtually unchanged since the inception of the Internet. However, many things have evolved and have tremendously impacted the way we communicate, perform computation, and conduct business online.
This chapter highlights recent trends and technology evolutions that appear to be shaping perhaps not the Internet itself (in the strict definition of a networking person), but everything around traditional approaches to computing and communication. In my opinion, there are three main such transforming trends: the Cloud and the promise it brings for computing; the new Web with its intertwined services and applications; and Big Data computing, which opens up new horizons and opportunities with fast processing of diverse, dynamic, and massive-scale datasets. None of these trends is disconnected from the others; they are interlinked, which – as I discuss – is the case with every aspect of the Internet today. This maze of interconnected services, applications, users, and devices is one of the two main themes that are omnipresent in the Internet today. The other is an implicit notion of shared trust: a trust that appears to transfer – irrespective of user intentions – through the links of this maze, reshaping our online experiences and posing tough challenges for user privacy.
A disturbance rejection controller is proposed based on the general dynamic model of 3D biped robots. For the first time, with this proposed approach, not only does the Zero Moment Point (ZMP) location remain unchanged in the presence of disturbances, but the longitudinal and lateral ground reaction forces and the vertical twist moment also remain unchanged. In this way, both slipping and tipping are prevented by the controller. The swing phase of the robot's walking gait is considered. An integral sliding mode architecture is chosen for the disturbance rejection. The support forces and moments of the stance foot are the control outputs, and the accelerations of the arm/body joints are chosen as the inputs. During the disturbance rejection, the leg joints remain on their desired trajectories. Since the leg joint trajectories are unaffected, the robot is still able to complete its step as planned, even when bounded disturbances are experienced. For the simulations, the general method is applied to an 18-degree-of-freedom biped humanoid robot. Simulations show that the controller successfully mitigates bounded disturbances and maintains all of the support reactions extremely close to their desired values. Consequently, the shift in the position of the ZMP is negligible, and the robot foot does not slip.
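For illustration, the sketch below shows the structure of an integral sliding-mode loop on a single support-reaction channel. It is a scalar stand-in for the paper's multivariable design (which allocates arm/body joint accelerations through the full dynamic model), and the gain names and values are assumptions.

```python
import numpy as np

# Scalar integral sliding-mode loop for one support-reaction channel.
class IntegralSlidingMode:
    def __init__(self, lam=20.0, eta=50.0, boundary=0.01):
        self.lam = lam            # weight on the integral of the tracking error
        self.eta = eta            # switching gain; must dominate the disturbance bound
        self.boundary = boundary  # boundary-layer width used to soften the sign term
        self.int_error = 0.0

    def control(self, measured, desired, u_nominal, dt):
        error = measured - desired
        self.int_error += error * dt
        s = error + self.lam * self.int_error               # integral sliding surface
        switching = np.clip(s / self.boundary, -1.0, 1.0)   # saturated sign(s)
        return u_nominal - self.eta * switching
```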
Employing the passive dynamics of the simplest point-foot walker, we have shown that the walking surface can play a significant role in promoting gait stability. In this regard, stabilization of the simplest walking model [3] has been achieved over the range of slopes greater than 0.0151 rad and less than 0.26 rad. The walker, like other passive dynamic walking models, has no open- or closed-loop control system; it is actuated only by the gravity field. Moreover, no damper or spring is used. Owing to the model's unstable behavior, it is unable to walk on an evenly inclined ramp within the mentioned range of slopes [3]. Here, instead of restraining the model, we let it explore other smooth surfaces whose traversal ends on an equally inclined surface. To reach this objective, we employ a parallel series of fixed straight lines (local slopes) passing through the contact points of an unstable cyclic gait generated by an ordinary ramp. We have nicknamed those local slopes that guide the biped to stable cyclic walking “Ground Attractors,” and those that lead it to a fall “Repulsive Directions.” Our results reveal that, for slopes below 0.26 rad, a closed interval of ground attractors can be found. The stabilization of these unstable limit cycles by this technique makes the key role of the walking surface in bipedal gait evident. Furthermore, following our previous work [13], the results confirm that two otherwise similar walking trajectories can have different stability. All of these results strongly indicate that, without considering the effects of the walking surface, we cannot establish any explicit relationship between the walker's speed and its stability.
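The stability criterion implicit here is the standard one for cyclic gaits: a limit cycle is stable when all eigenvalues of the Jacobian of the stride-to-stride (Poincaré) map lie inside the unit circle. The sketch below checks this numerically; the `stride` function (mapping the post-heel-strike state to the state after the next heel strike, for a given surface) and the fixed point are assumed to be supplied by a simulator of the walker, and this is not the paper's ground-attractor construction itself.

```python
import numpy as np

# Numerical stability test of a cyclic gait via the Poincare return map.
# `stride` maps a post-heel-strike state to the next post-heel-strike state.
def poincare_stability(stride, fixed_point, eps=1e-6):
    x_star = np.asarray(fixed_point, dtype=float)
    n = x_star.size
    J = np.zeros((n, n))
    for k in range(n):                      # central-difference Jacobian
        dx = np.zeros(n)
        dx[k] = eps
        J[:, k] = (stride(x_star + dx) - stride(x_star - dx)) / (2 * eps)
    eigvals = np.linalg.eigvals(J)
    return eigvals, bool(np.all(np.abs(eigvals) < 1.0))
```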
This paper discusses a planar 2-DOF (degrees of freedom) parallel kinematic machine with actuation redundancy. Its inverse dynamic model is constructed by utilizing the Newton–Euler method, based on the kinematic analysis. However, the dynamic model cannot be solved directly because the number of equations is less than the number of unknowns, owing to the redundant force. In order to solve this problem, the relationship between the deformations of the links and the position errors of the moving platform is further explored. A novel method, which aims at minimizing the position errors of the machine, is then proposed to optimize the redundant force; it also makes the dynamic model solvable. Finally, the dynamic performance of this machine and of its non-redundant counterpart is analyzed through numerical examples. In addition, another optimization method, which minimizes the constraint forces, is applied for comparison. The results show the effectiveness of the proposed method in improving the position precision of the machine.
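To make the underdetermined structure concrete, the sketch below resolves an actuation-redundant inverse-dynamics system A f = b (fewer equations than actuator forces) by minimizing a weighted quadratic cost on f. The weighting matrix W is an assumed stand-in for the paper's mapping from link deformations to platform position error.

```python
import numpy as np

# Resolve actuation redundancy: A @ f = b with more forces than equations.
# Choose f minimizing 0.5 * f^T W f subject to A f = b (Lagrange-multiplier solution).
def redundant_force(A, b, W):
    W_inv = np.linalg.inv(W)
    return W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, b)
```

With W set to the identity this reduces to the familiar minimum-norm (pseudoinverse) solution; a different W shifts the optimum toward whatever error measure it encodes.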
The paper describes a teacher-training course on the use of corpora in language education offered to graduate students at the Institute of Applied Linguistics, University of Warsaw. It also presents the results of two questionnaires distributed to the students prior to and after the second edition of the course. The main aims of the course are: to introduce students to the concept of a corpus and its analysis; to familiarize them with a range of available corpora, corpus-based resources and tools; and to demonstrate to them various applications of corpora in language education, with special emphasis placed on the in-house preparation of courses, teaching materials and class activities. In the first part of the paper, the design, the syllabus, the progression and the outcomes of the course are outlined. In the second part, the responses of thirteen students who participated in the second edition of the course are analysed. The analysis indicates that on the whole the students reacted positively to the course and they saw the benefits of corpus-based materials and tools in language teaching. Yet the students also reported that they needed more time to gain full command of the resources and software presented and more guidance on the pedagogical issues related to corpus use. The paper concludes that fourteen sessions, designed as an overview of the whole range of corpus-based resources and applications, is not sufficient to encourage teacher trainees to use corpora in their future work if they have no contact with these resources and tools in other classes. Only extensive exposure to corpora by future teachers coupled with suitable teacher training in the applications of corpora in language education may bring a substantial change in the scope of corpus use in language classrooms in the wide educational context.
Controlling a walking biped robot is a challenging problem due to the robot's complex and uncertain dynamics. In order to tackle this problem, we propose a sliding mode controller based on a dynamic model that we obtained using the conformal geometric algebra (CGA) approach. An important contribution of this paper is the development of algorithms using the CGA framework. The CGA framework permits us to use lines, points, and other geometric entities to obtain the Lagrange equations of the system. The references for the joints of the robot were obtained in a bio-inspired way following the kinematics of a walking human body. The first and second derivatives of the reference signal were obtained via an exact robust differentiator based on a high-order sliding mode. We analyzed the performance of the proposed control schemes by using bio-inspired walking patterns and simulations.
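As an illustration of the last step, the sketch below gives a Levant-style first-order robust exact differentiator, a common second-order sliding-mode scheme for recovering the derivative of a reference signal. The gains 1.5·sqrt(L) and 1.1·L follow a commonly cited tuning, with L an assumed bound on the signal's second derivative; the paper's specific differentiator parameters are not reproduced here.

```python
import numpy as np

# Levant-style robust exact differentiator (first order, Euler-discretized).
class RobustDifferentiator:
    def __init__(self, L=10.0, z0=0.0, z1=0.0):
        self.L = L        # assumed bound on the second derivative of the signal
        self.z0 = z0      # estimate of the signal
        self.z1 = z1      # estimate of its derivative

    def update(self, f, dt):
        e = self.z0 - f
        v0 = -1.5 * np.sqrt(self.L) * np.sqrt(abs(e)) * np.sign(e) + self.z1
        self.z1 += dt * (-1.1 * self.L * np.sign(self.z1 - v0))
        self.z0 += dt * v0
        return self.z1
```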