Created in a dorm room at Harvard University, Facebook began as a simple website that displayed two pictures of female students at a time and invited fellow students to rate them as hot or not. Since then, the scope and ambitions of Facebook have expanded considerably. Here is what Mark Zuckerberg said about Facebook's role at a meeting on the company's financial results ten years later: “Next, let’s talk about understanding the world. What I mean by this is that every day, people post billions of pieces of content and connections into the graph and in doing this, they’re helping to build the clearest model of everything there is to know in the world” (Facebook, 2013, italics added).
This book started from a puzzle. If transparency is not just a simple recipe for the improvement of social life through the sharing of timely and accurate information, then what is it? And how do we make sense of what happens when ideals about transparency, clarity and openness – facilitated by digital transformations – spread through business, politics and societies at large? Our times are marked by a widespread trust in transparency as a panacea and an infatuation with the idea that humans, organizations and societies can be optimized if we can see what they are and how they behave. Through explorations of theories and illustrations, the different chapters have suggested that something more intricate may be at play: that transparency works not like a window being opened on reality, but more like a prism that refracts and produces selective and surprising visibilities. The illustrations also suggest that transparency ideals travel widely, and shape the lives of individuals, organizations and societies in extensive ways. Transparency, as I have put it, is a form of social ordering and a force in the reconfiguration of human realities, organizational processes, politics and society.
The excitement about transparency means that many organizations think and talk about their operations and relations in new ways. The public and the press increasingly demand access and insight into the inner lives of organizations, and digital technologies are often seen as disruptive forces that will end secrecy and make it impossible to stay out of sight (Sifry, 2011). The combination of institutionalized transparency ideals and processes of digitalization and datafication is often considered to make organizations more accountable and unable to hide. Such accounts trust in the ability of transparency initiatives to provide direct access to organizational phenomena and processes, often invoking metaphorical images such as “opening a window on reality” and the existence of “naked organizations” (Tapscott and Ticoll, 2003). Also, they stress that those with secrets will need to exercise extreme caution because the mounting pressure to share and open up makes organizations more fluid and accessible.
Our personal lives are shaped by digital transformations in very obvious and tangible ways. Billions of people spend hours a day sharing, commenting on and liking photos and stories. We have easier and better access to all kinds of information, and more and more daily activities take place in digital spaces. Most people coordinate their work and social relations in new ways, and many use digital technologies to raise awareness of what they do and who they are. Digital technologies make communicating, sharing and engaging with others easier than ever before. These developments blur the lines between what is public and what is private, and require that we reflect on what openness and privacy imply in a datafied world. But digital transformations also shape our lives in more fundamental and subtle ways. How we sense and experience, how we depict ourselves and how we understand the world are inseparable from digital technologies and the environments they give rise to. Digital spaces and data exchanges are increasingly the foundations of our existence, whether we like it or not. At a rapidly growing pace, they offer new possibilities for action and guide our conduct in important ways.
As suggested throughout this book, digital transformations do not simply make individuals, organizations or societies more transparent. What happens is rather an intensification of the need to manage visibilities – to consider what to make (in)visible and how to guide attention. So far, my main concern has been to suggest what these developments mean for individuals and organizations, but now the focus shifts to broader societal issues. That is, how do hopes about digital transparency and processes of visibility management shape the way we think about societies and political affairs? Looking at such different phenomena as state control, corporate reporting, social editing and attempts to govern future affairs, the chapter offers an account of visibility management as a form of social ordering.
There is an important historical backdrop to this. Transparency is intimately tied to the project of modernity – the hope for technological progress, human perfectibility, and scientific scrutiny and rationality. But transparency is also an Enlightenment ideal that largely remains unchallenged, despite its limitations and its obvious ties to phenomena such as totalitarianism and surveillance.