Increased interest in adaptation studies in the early 21st century has generated countless discussions about rethinking adaptations as well as the field of adaptation studies as a whole. The impression has often been voiced, for instance by Thomas Leitch in his essay for the inaugural issue of the journal Adaptation, that adaptation studies is “at a crossroads,” with its methodology and material in transition from the discipline’s humble novel-to-film-studies beginnings to a broader, if somewhat unclear, future. As part of a moment in the field's history in which scholars repeatedly state ambitious research agendas, Linda Hutcheon has likewise described adaptation studies as moving “well beyond [its] familiar film/performance focus” and on to readings that highlight the politics of our time, the “indigenization” of adaptations, and approaches that question notions of priority and anteriority in unprecedented ways.
There is, however, another major change – the elephant in the room of adaptation studies, so to speak: since adaptation, at least in its most common understanding, describes the transposition of a story or its elements from one medium to another, it is necessarily bound to questions of mediality and remediation. One of the most important new developments in adaptation studies is therefore the shift in the global mediascape brought about by the rise of digital media since the 1980s and the spread of the internet since the 1990s. This transformation amounts to nothing less than a shift from a largely analog, localized, image- and text-based “Gutenberg Galaxy” to a more rapidly disseminating mixed analog-digital environment. It is a moment that forces us, once again, to re-examine notions of authorship, control, audiences, sources and adaptations, as well as interactions between medium and consumer, or between consumers and producers. This volume sets out to explore how these shifts relate to adaptation studies and what they mean for the field. It does so by examining new forms of adaptations and their cultural embeddedness both theoretically and analytically, drawing on a range of texts that represent some of the major new forms of adaptations and adaptation environments to have emerged in the wake of digital media.
Applying broad notions of adaptation, this chapter seeks to bring “recombinant adaptation” – mashups and remixes on digital platforms – into dialogue with Gérard Genette's idea of the paratext as a text's “relations with the public.” It takes four steps towards investigating how literary publishing houses such as Quirk Books respond to recombinant adaptation. Firstly, it delineates the paratexts of mashup novels as performative zones of transaction. Secondly, it examines how paratexts regulate the quasi-religious textuality of fandom participation. Thirdly, it looks at the role of paratextual canonization within this textuality. Finally, it argues that printed products within the field attempt to perform a nostalgic authorization and re-materialization of literature, highlighting the haptic and material qualities of the book. Adapting the term “polytext,” the chapter calls these multifarious paratextual transactions the “polyprocess.”
This essay seeks to bring the field of “recombinant adaptation” – mashups and remixes on digital platforms – into dialogue with the Genettian idea of the paratext. Genette held that paratexts shape a given text's “relations with the public.” More recently, Jonathan Gray has applied the notion of the paratext to media franchises, highlighting the active role of paratexts in creating and continuing franchise texts. Dorothee Birke and Birte Christ have elaborated Genette's ideas for the context of convergence culture and transmedia storytelling, examining how paratexts fulfill interpretive, commercial, or navigational functions in determining contemporary readers’ transmedia experience of narratives.
This chapter takes four steps towards investigating how literary publishing houses respond to the ubiquitous remixes and mashups found on low-threshold digital platforms of participation. It will, first, delineate paratexts as zones of transaction, shifting research emphases from textual towards performative concerns and highlighting the way cultures negotiate textual distribution and circulation. Secondly, it will examine how paratexts regulate the quasi-religious textuality of fandom participation; thirdly, the role of paratextual canonization will be a special focus within this textuality. Finally, the chapter argues that printed products within the field attempt to perform a nostalgic authorization and re-materialization of literature, highlighting the haptic and material qualities of “bookishness.”
The novel series Fifty Shades of Grey and The Mortal Instruments originated as fanfiction adaptations of the Twilight and Harry Potter series, respectively. E.L. James and Cassandra Clare published in fanfiction archives first, before they deleted their online writing, edited and rewrote their work, and removed traces of fandom so that the narratives could be adapted to the print market. Fans call this process “filing off the serial numbers” or “pulled to publish.” Beyond the adapted texts and the writing strategies that transitioned from the fan community to the commercial book market, established practices of fan authorship have been adapted as well. The article investigates these consecutive and simultaneous processes of transposition and appropriation as “layered forms of adaptation.”
Key words: Fanfiction; Harry Potter; Twilight; pulled to publish; adaptation
Introduction
After the Fifty Shades of Grey book series had sold more than 125 million copies worldwide, fans eagerly awaited the release of the movie adaptation in February of 2015. Building on the books’ success, the movie grossed $248 million in its opening weekend alone. The film's pre-production and production were accompanied by media reports and PR announcements about how the narrative, and specifically the BDSM scenes in the book, was adapted to the screen, as well as which actors were cast as the central characters Christian and Ana. Throughout this renewed interest in Fifty Shades of Grey, the history of layered adaptation that the text had undergone before it was turned into a movie receded into the background.
An earlier version of Fifty Shades of Grey, the Twilight fanfiction Master of the Universe, had been widely read by fans online before the text was stripped of its direct references to Twilight and became a commercial success in its own right. Master of the Universe is not the only prominent text that evolved from the realms of fanfiction writing; with Sylvain Reynard’s Gabriel's Inferno and the writing duo Christina Lauren's Beautiful Bastard, at least three other authors of erotic Twilight fanfiction made the New York Times bestseller list. Beyond easily adaptable “all human” fanfiction from the Twilight fandom, fanfictions from other fandoms, including texts that revel in fantastic supernatural worlds, have successfully transitioned from the communal, free online writing context to the book market.
Cultural production is increasingly understood along the lines of self-organizing network dynamics instead of as linear and more or less stable (translation) processes with clear-cut creator-recipient dualisms. Participation and remediation, however, have always been constitutive factors in cultural production. I propose to treat adaptations as embedded in and shaping the complex, non-linear, and decentralized networks of culture that operate along the lines of shifting and contingent connections between human and non-human actors. In doing so, I will show why it can be helpful for critical adaptation studies to take seriously the notion of cultural “function.” Making insights from biological adaptation productive for cultural adaptation studies, I aim to shed light on the connotation of adaptation as temporary and contingent “knowledge.”
Key words: Network theory; cultural functions of adaptation; biological vs. cultural adaptation; adaptation as a form of contingent knowledge
Adaptation as processual knowledge
Contrary to some public discourse, adaptation studies is far from thinking of adaptations as “poor” derivatives of original source texts. The so-called “fidelity discourse” has been successfully deconstructed or, as Kamilla Elliott and Simone Murray suggest, has never played as big a role in adaptation studies as scholars have repeatedly claimed. While the tendency to make value judgments based on how “truthful” an adaptation is to its original persists in fan communities, adaptation studies has long deconstructed unidirectional and hierarchical models of adaptation practices. Furthermore, the field has opened up to a variety of media and to multidirectional adaptation processes beyond novel-to-film adaptation. The theoretical reconceptualization of adaptation studies has been spurred on, among other things, by an ongoing cultural development towards what Henry Jenkins calls “convergence culture,” coinciding with a scholarly interest in the participatory nature of popular culture and in a supposedly democratic “grassroots” creativity. In the Information Age, adaptation is no longer regarded as the exception but instead represents the rule for how media products and stories emerge, proliferate, and interact with each other. Cultural production is increasingly understood along the lines of self-organizing network dynamics instead of as linear and more or less stable (translation) processes with clear-cut creator-recipient dualisms.
This paper addresses how network structure can be related to asymptotic network behavior. Where such a relation has been studied before, it usually concerns only strongly connected networks and only linear functions describing the dynamics. In this paper, both conditions are generalized. A number of general theorems are presented that relate the asymptotic behavior of a network to characteristics of the network's structure. On the one hand, these characteristics concern the network's strongly connected components and their mutual connections; this generalizes the condition of being strongly connected to a much more general condition. On the other hand, the theorems generalize from linear functions to functions that are normalized, monotonic, and scalar-free, so that many nonlinear functions are also covered. The contributed theorems thus generalize existing theorems on the relation between network structure and asymptotic network behavior that address only specific cases, such as acyclic, fully connected, or strongly connected networks, or only linear functions. This paper was invited as an extended version (by more than 45%) of a Complex Networks’18 conference paper; the differences are explained in more detail in the discussion section.
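For orientation, the three function properties named above admit a standard formalization in the temporal-causal network literature; the following is a sketch of these definitions for a combination function c(V_1, ..., V_k), offered as an assumption about the intended usage (the paper's own definitions may differ in detail):

\[
\begin{aligned}
\text{normalized:}\quad & c(1,\dots,1) = 1,\\
\text{monotonic:}\quad & V_i \le W_i \ \text{for all } i \;\Longrightarrow\; c(V_1,\dots,V_k) \le c(W_1,\dots,W_k),\\
\text{scalar-free:}\quad & c(\alpha V_1,\dots,\alpha V_k) = \alpha\, c(V_1,\dots,V_k) \quad \text{for all } \alpha > 0.
\end{aligned}
\]

Linear combination functions with nonnegative weights satisfy all three properties, which is why this class strictly generalizes the linear case mentioned above.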
Learn about the most recent theoretical and practical advances in radar signal processing using tools and techniques from compressive sensing. Providing a broad perspective that fully demonstrates the impact of these tools, the accessible and tutorial-like chapters cover topics such as clutter rejection, CFAR detection, adaptive beamforming, random arrays for radar, space-time adaptive processing, and MIMO radar. Each chapter includes coverage of theoretical principles, a detailed review of current knowledge, and discussion of key applications, and also highlights the potential benefits of using compressed sensing algorithms. A unified notation and numerous cross-references between chapters make it easy to explore different topics side by side. Written by leading experts from both academia and industry, this is the ideal text for researchers, graduate students and industry professionals working in signal processing and radar.
The induced removal lemma of Alon, Fischer, Krivelevich and Szegedy states that if an n-vertex graph G is ε-far from being induced H-free, then G contains δ_H(ε) · n^h induced copies of H. Improving upon the original proof, Conlon and Fox proved that 1/δ_H(ε) is at most a tower of height poly(1/ε), and asked if this bound can be further improved to a tower of height log(1/ε). In this paper we obtain such a bound for graphs G of density O(ε). We actually prove a more general result, which, as a special case, also gives a new proof of Fox’s bound for the (non-induced) removal lemma.
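Restated formally, as a direct transcription of the statement above (with H a fixed graph on h vertices and G an n-vertex graph):

\[
\forall\, \varepsilon > 0 \ \ \exists\, \delta_H(\varepsilon) > 0: \quad
G \text{ is } \varepsilon\text{-far from being induced } H\text{-free}
\;\Longrightarrow\;
\#\{\text{induced copies of } H \text{ in } G\} \;\ge\; \delta_H(\varepsilon)\, n^h.
\]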
Human-like motion of robots can improve human–robot interaction and increase efficiency. In this paper, a novel human-like motion planning strategy is proposed to help anthropomorphic arms generate human-like movements accurately. The strategy consists of three parts: movement primitives, a Bayesian network (BN), and a novel coupling neural network (CPNN). The movement primitives are used to decouple human arm movements, and the resulting classification of arm movements improves the accuracy of human-like motion. The motion-decision algorithm based on the BN predicts occurrence probabilities of the motions and chooses an appropriate mode of motion. A novel CPNN is then proposed to solve the inverse kinematics problem of anthropomorphic arms; it integrates different models into a single network and reflects the features of these models by changing the network structure. Through this strategy, anthropomorphic arms can generate various human-like movements with satisfactory accuracy. Finally, the effectiveness of the proposed strategy is verified by simulations of the general motion of the humanoid robot NAO.
In multi-domain product development organizations, there is a continuous need to transfer captured knowledge between engineers to enable better design decisions in the future. The objective of this paper is to evaluate how engineering knowledge can be captured, disseminated and (re)used by applying a knowledge reuse tool called the Engineering Checksheet (ECS). The tool was introduced in 2012, and this evaluation was performed over the 2017–2018 period. The case study focused on codified knowledge in incremental product development with a high reuse potential, both at a given point in time and over time. The evaluation draws conclusions from the perspectives of the knowledge workers (the engineers), knowledge owners, and knowledge managers. The study concludes that the ECS is valuable in enabling a timely understanding of technological concepts related to low-level engineering tasks in the product development process. It thereby enables knowledge flow and, in particular, reuse among inexperienced engineers, while providing quick and accurate quality control for experienced engineers. The findings regarding knowledge ownership and management point to the need for a clearly defined knowledge-owner structure in which communities of practice take responsibility for empowering engineers to use the ECS and, as knowledge evolves, for managing updates to it.
With this groundbreaking text, discover how wireless artificial intelligence (AI) can be used to determine position at centimeter level, sense motion and vital signs, and identify events and people. Using a highly innovative approach that employs existing wireless equipment and signal processing techniques to turn multipaths into virtual antennas, combined with the physical principle of time reversal and machine learning, it covers fundamental theory, extensive experimental results, and real practical use cases developed for products and applications. Topics explored include indoor positioning and tracking, wireless sensing and analytics, wireless power transfer and energy efficiency, 5G and next-generation communications, and the connection of large numbers of heterogeneous IoT devices of various bandwidths and capabilities. Demo videos accompanying the book online enhance understanding of these topics. Providing a unified framework for wireless AI, this is an excellent text for graduate students, researchers, and professionals working in wireless sensing, positioning, IoT, machine learning, signal processing and wireless communications.
We live in times of transparency. Digital technologies expose everything we do, like, and search for, and it is difficult to remain private and out of sight. Meanwhile, many people are concerned about the unchecked powers of tech giants and the hidden operations of big data, artificial intelligence and algorithms, and call for more openness and insight. How do we – as individuals, companies and societies – deal with these technological and social transformations? Seen through the prism of digital technologies and data, our lives take new shapes, and we are forced to manage our visibilities carefully. This book challenges common ways of thinking about transparency and argues that the management of visibilities is a crucial but overlooked force that influences how people live, how organizations work, and how societies and politics operate in a digital, datafied world.
In the manufacturing of sophisticated, individualized large components, classical solutions based on building large machine tools cannot meet the demand. A hybrid robot, made up of a 3-degree-of-freedom (3-DOF) parallel manipulator and a 2-DOF serial manipulator, has been developed as a plug-and-play robotized module that can be rapidly located at multiple stations where machining operations can be performed in situ. However, achieving high absolute accuracy has become a major challenge due to the movement of the robot platform. In this paper, a human-guided vision system is proposed and integrated into the robot system to improve the accuracy of the robot's end-effector. A handheld manipulator is utilized as a tool for human–robot interaction in large-scale, unstructured circumstances without built-in intelligence. With six degrees of freedom, a human operator can manipulate the robot (end-effector) so as to guide the camera to see target markers mounted on the machining datum. Simulations run on the virtual control platform V-Rep show robust, real-time performance in mapping human manipulation to the robot's end-effector. A vision-based pose estimation method using a target marker is then proposed to determine the position and orientation of the machining datum, and a compensation method is applied to reduce pose errors along the entire machining trajectory. The algorithms are tested in V-Rep, and the results show that the absolute pose error is greatly reduced with the proposed methods and that the system is immune to the motion deviation of the robot platform.
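The abstract does not specify the pose estimation algorithm, but marker-based pose estimation of the kind described is commonly posed as a Perspective-n-Point (PnP) problem. The following is a minimal generic sketch using OpenCV's solvePnP; the marker size, corner coordinates, and camera intrinsics are hypothetical placeholder values, not taken from the paper.

# Minimal sketch of marker-based pose estimation (generic illustration only;
# the paper's own method is not specified in this abstract).
# Assumes a calibrated camera and a square target marker of known size.
import numpy as np
import cv2

MARKER_SIZE = 0.10  # marker edge length in meters (hypothetical value)

# 3D corner coordinates of the marker in its own frame (the machining datum).
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

# 2D pixel coordinates of the detected marker corners (placeholder values;
# in practice these come from a marker detector).
image_points = np.array([
    [320.0, 200.0],
    [420.0, 205.0],
    [415.0, 300.0],
    [318.0, 295.0],
], dtype=np.float64)

# Intrinsics from camera calibration (hypothetical values).
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve the PnP problem: pose of the marker in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation
    print("Marker position (camera frame):", tvec.ravel())
    print("Marker orientation:\n", rotation_matrix)

Chaining this camera-frame pose with the known camera-to-end-effector and end-effector-to-base transforms would then yield the machining datum's pose in the robot base frame, which is the quantity a compensation method of the kind described would correct against.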
We consider a two-echelon production-inventory system with a manufacturer having limited production capacity and a distribution center (DC). There is a positive transportation time between the manufacturer and the DC. Customers gain value from receiving the product and incur a waiting cost when facing a delay. We assume that customers' waiting cost depends on their degree of impatience with respect to delay, which is captured by a convex waiting cost function. Customers are strategic with respect to joining the system: upon arrival, they either place an order or balk, depending on their expected waiting time. We study the Stackelberg equilibrium, assuming that the DC acts as the Stackelberg leader and customers are the followers. We first obtain the total expected revenue and then provide a heuristic to derive the optimal base-stock levels at the warehouse and the DC, as well as the optimal price of the product.
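As a sketch of the joining decision described above (the symbols R, p, g, and W are introduced here for illustration and are not taken from the paper), an arriving customer places an order exactly when the expected utility of joining is nonnegative:

\[
U \;=\; R \;-\; p \;-\; \mathbb{E}\!\left[\,g(W)\,\right] \;\ge\; 0,
\]

where R is the value gained from receiving the product, p the price set by the DC, W the (random) delay, and g(·) the convex waiting cost function; otherwise the customer balks. In a Stackelberg setting of this kind, the DC anticipates this threshold behavior when choosing its price and base-stock levels.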
It’s now remarkably easy to release to the world a cloud-based application programming interface (API) that provides some software function as a service. As a consequence, the cloud API space has become very densely populated, so that even if a particular API offers a service whose potential value is considerable, there are many other factors that play a role in determining whether or not that API will be commercially successful. If you’re thinking about entering the API marketplace with your latest and greatest idea, this post offers some entirely subjective advice on how you might increase the chances of your offering not being lost in all the noise.