This chapter examines the social construction of technology (SCOT) perspective, which considers how people shape technology use toward their own ends and why deterministic models of media use fail to account for how people actually use technology. The chapter contrasts a relational approach with competing, especially technology-centered, perspectives; technological determinism and media domestication are examined. The SCOT perspective is brought into dialogue with constructivist theories of personal relationships and with dialectical and ironic perspectives on media’s influence on relationships. Three social factors influencing the use of technology are explored: norms of technology use, using technology to access important others, and make-do or seemfulness.
This chapter consists of two parts. The first part is an account of the influence that military technological advancements have had on the development of the law governing armed conflict. Beginning in the 1860s, it recalls the points at which new weapon technologies have prompted legal responses in the form of treaties, declarations and other instruments. It concludes with the establishment of the International Criminal Court, after discussing the two developments of most relevance to the book: the drafting of the Additional Protocols to the 1949 Geneva Conventions and the Convention on Certain Conventional Weapons, the latter being the convention under which regulation of autonomous weapons is being debated. The second part discusses the public debate about autonomous weapons, beginning with the growth of broad public interest in the early 2000s and the contributions of roboticists, ethicists and other academics. It then covers the involvement of the United Nations and ends by summarising the process by which the regulatory debate in connection with the Convention on Certain Conventional Weapons began.
This chapter consists of two parts. The first part covers basic aspects of machine autonomy as a technical concept, explaining how it constitutes a form of control over a machine and how degrees of autonomous capability manifest in complex machines. This part is not about weapon systems in particular, but about electromechanical systems in general. It briefly outlines adaptive and intelligent control methodologies and explains that autonomy does not sever the connection between a machine and its operator but only alters the relationship between them. The second part discusses some aspects of how autonomous systems are used in military applications. Specifically, autonomous behaviour will extend to systems ancillary to combat operations and autonomous systems will be employed in roles wherein they effectively ‘collaborate’ with human soldiers and with each other. Assessing the legal consequences of using autonomous systems in military operations is therefore not simply a matter of studying the properties of a new type of weapon; it is about understanding a new relationship between soldiers and weapons.
This final chapter briefly sets out two lists of recommendations with the intention of helping to progress the debate about regulation of autonomous weapon systems currently being conducted by the States Parties to the Convention on Certain Conventional Weapons. The first set relates to the definition of autonomous weapons. This has been a point of contention since the first informal Meeting of Experts was held in 2014. The recommendations given here are drawn from the points made in earlier chapters and can be used to formulate a definition of autonomous weapons which will support regulatory efforts. The second set of recommendations briefly outlines a proposed regulatory response based on the approaches taken in existing Protocols of the CCW.
By the end of the nineteenth century, railway expansion had led to the formation of a technocratic bureaucracy in Chile and other countries in Latin America. Central to this formation were the engineers who oversaw and regulated both public and private railways. Recently, historians have begun to re-examine engineers’ roles in this period. By employing methods and theoretical framings from the history of technology, this article argues that engineering was an important framework through which state–capital relations evolved, making engineers pivotal actors in the evolution of political economy at the time.
Economic models play a central role in the decision-making process of the National Institute for Health and Care Excellence (NICE). Inadequate validation methods allow errors to be included in economic models. These errors may alter the final recommendations and have a significant impact on outcomes for stakeholders.
To describe the patterns of technical errors found in NICE submissions and to provide insight into the validation exercises carried out by companies prior to submission.
All forty-one single technology appraisals (STAs) completed by NICE in 2017, all of which concerned medicines, were reviewed. The frequency of errors and information on their type, magnitude, and impact were extracted from publicly available NICE documentation, along with details of the model validation methods used.
Two STAs (5 percent) had no reported errors, nineteen (46 percent) had between one and four errors, sixteen (39 percent) had between five and nine errors, and four (10 percent) had more than ten errors. The most common were transcription errors (29 percent), logic errors (29 percent), and computational errors (25 percent). All STAs went through at least one type of validation. Errors judged notable enough to be reported in the final appraisal document (FAD) appeared in eight (20 percent) of the STAs assessed, yet each of these eight STAs received a positive recommendation.
Technical errors are common in the economic models submitted to NICE. Some errors were considered important enough to be reported in the FAD. Improvements are needed in the model development process to ensure technical errors are kept to a minimum.
The tax system incentivizes automation, even in cases where it is not otherwise efficient. This is because the vast majority of tax revenue is derived from labor income. When an AI replaces a person, the government loses a substantial amount of tax revenue, potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once labor is capital. Robots are not good taxpayers. The solution is to change the tax system to be more neutral between AI and human workers and to limit automation’s impact on tax revenue. This would be best achieved by reducing taxes on human workers and increasing corporate and capital taxes.
This chapter explains the need for AI legal neutrality and discusses its benefits and limitations. It then provides an overview of its application in tax, tort, intellectual property, and criminal law. Law is vitally important to the development of AI, and AI will have a transformative effect on the law given that many legal rules are based on standards of human behavior that will be automated. As AI increasingly steps into the shoes of people, it will need to be treated more like a person, and more importantly, sometimes people will need to be treated more like AI.
To investigate the impact of the uncertainty stemming from products with European conditional marketing authorization (CMA) or authorization in exceptional circumstances (AEC) on the National Institute for Health and Care Excellence's (NICE) recommendations.
Products which received CMA/AEC from the European Medicines Agency (EMA) up to 1 December 2016 were identified and matched with corresponding NICE decisions issued by August 2017, and the status of these decisions was then traced to August 2019. We assessed whether the conversion of CMA to full marketing authorization triggered a review of a NICE decision. The odds of a recommendation carrying a commercial arrangement for products with and without CMA/AEC were calculated.
Fifty-four products were granted CMA/AEC by EMA. NICE conducted thirty evaluations of products with CMA/AEC. Twelve products were recommended by NICE by August 2017 and fourteen by August 2019. All recommendations had an associated commercial arrangement. The odds of carrying a commercial arrangement were higher for products with CMA/AEC compared to those with full authorization. Conversions from conditional to full authorization among products not recommended by NICE did not trigger an appraisal review.
Uncertainty, stemming from the lack of robust clinical data of products authorized with CMA/AEC, has a substantial impact on HTA recommendations, frequently requiring risk mitigation mechanisms such as commercial and data collection arrangements. Further analyses should be conducted to assess whether the benefits of early access strategies outweigh the risks for patients and the healthcare system.
Health technology assessment (HTA) conducted to inform developers of health technologies (development-focused HTA, DF-HTA) has a number of distinct features when compared with HTA conducted to inform usage decisions (use-focused HTA). Because DF-HTA analyses are often not published, analysts need to be aware of these distinct features in order to conduct DF-HTA effectively. Drawing on the literature and our own experience, we set out a framework of ten features: a target audience of developers and investors; an underlying objective of maximizing return on investment; a broad range of decisions to inform; a wide decision space; reduced available evidence; earlier timing of analysis; a fluid business model; constrained resources for analysis; a positive stance of analysis; and a “consumer”-specific burden of proof. The framework is intended both to initiate debate and to provide an introduction for analysts unfamiliar with the field.
To alert policy makers early about emerging health technologies that could significantly impact the healthcare system at the clinical, financial and organizational levels, the Agency for Care Effectiveness (ACE) in Singapore established a horizon scanning system (HSS) in 2019. This paper describes the development of the ACE HSS and showcases its application with cell and gene therapy products as the first example.
A literature review of existing HSS methods, including the processes of the EuroScan International Network and other overseas horizon scanning agencies, was conducted to inform the development of our horizon scanning framework. The framework was first applied to new and emerging cell and gene therapies.
Identification sources, filtration and prioritization criteria, and horizon scanning outputs for the HSS were developed in alignment with international best practices, with recommendations for technology uptake represented by a traffic light system. In the first horizon scanning exercise, on cell and gene therapies, forty therapies passed the filtration step, of which eight were prioritized for further assessment. The early reports developed were used to inform and prepare the healthcare system for their potential introduction, particularly the need to develop health and funding policies.
Early assessment of prioritized topics has provided support for strategic efforts within the Ministry of Health. Given that ACE's horizon scanning program is still in its infancy, the framework will continue to evolve to ensure relevance to our stakeholders so that it remains fit for purpose for our healthcare system.
My essay examines Bob Dylan’s relation to sound recording, still a neglected topic even with the recent explosion of scholarship on the legendary singer-songwriter. Drawing on historical accounts of Dylan and his career, as well as recent histories of studio recording, I trace the artist’s mercurial relation to record making as evidenced in his turbulent encounters with various record producers. My chief focus is on Dylan’s account of his late 1980s creative crisis in his 2004 memoir Chronicles: Volume 1, which I argue also provides a condensed account of the songwriter’s philosophy of sound recording.
Literature operates with voice, and voices the voices of the author/poet, of the characters (and the poem’s ‘speaker’), and of the reciter and reader. But literature also operates an extensive imaginary of voice. Voice is the ‘stuff’ of literature in two senses: its material support and an abiding theme. And criticism itself involves so many ‘voicings’ of literary texts, as we test out the texts’ possibilities, their reverberations, their potential afterlives. This chapter explores both the physiology and imaginary of voice across millennia, shaped by phenomena as various as the social function of poetry in predominantly, or exclusively, oral cultures, the class and race politics of accent, the technologies of sound recording, reproduction, transmission and processing. It approaches this history through a series of interconnecting fragments: Voice as origin to, and excess over, speech; the voices of the muses and of song, voices that come from within and without, voices that ‘possess’ us; prosthetics of voice, written or machinic; mnemonics of voice, in which voice is not just memorable but creates memory. In both its physiology and its imaginary, voice comes across as extravagant – as extravagance, even.
Neural machine translation (NMT) has recently shown promising results on publicly available benchmark datasets and is being rapidly adopted in various production systems. However, it requires a high-quality, large-scale parallel corpus, and such a corpus is not always available because building one takes time, money, and professional expertise. Hence, many existing large-scale parallel corpora are limited to specific languages and domains. In this paper, we propose an effective approach to improving an NMT system in a low-resource scenario without using any additional data. Our approach augments the original training data with parallel phrases extracted from that same training data using a statistical machine translation (SMT) system. Our NMT models are based on gated recurrent unit (GRU) and transformer networks. We choose the Hindi–English and Hindi–Bengali datasets for the Health, Tourism, and Judicial (Hindi–English only) domains and train NMT models for 10 translation directions, each using only 5–23k parallel sentences. Experiments show improvements in the range of 1.38–15.36 BiLingual Evaluation Understudy (BLEU) points over the baseline systems, and that transformer models perform better than GRU models in low-resource scenarios. In addition, we find that for some translation directions our proposed method outperforms SMT, which is known to work better than neural models in low-resource settings. To further show the effectiveness of the proposed model, we also apply our approach to another interesting NMT task, old-to-modern English translation, using a tiny parallel corpus of only 2.7k sentences. For this task, we use publicly available old–modern English text which is approximately 1,000 years old. Evaluation on this task shows significant improvement over the baseline NMT.
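To make the augmentation idea concrete, the sketch below treats parallel phrase pairs extracted by an SMT system (for example, from a Moses-style phrase table) as additional pseudo-sentence pairs appended to the NMT training corpus. The file names, score thresholds, and phrase-table layout are assumptions for illustration only; they are not taken from the paper.

```python
# Minimal sketch: augment NMT training data with SMT-extracted phrase pairs.
# Assumes a Moses-style phrase table: "src ||| tgt ||| scores ..." per line.

def augment_with_phrases(phrase_table_path, src_out, tgt_out,
                         min_prob=0.5, min_len=3):
    """Append high-probability phrase pairs to the source/target training files."""
    with open(phrase_table_path, encoding="utf-8") as table, \
         open(src_out, "a", encoding="utf-8") as src, \
         open(tgt_out, "a", encoding="utf-8") as tgt:
        for line in table:
            fields = line.split(" ||| ")
            if len(fields) < 3:
                continue
            source, target, scores = fields[0], fields[1], fields[2]
            # Assumed Moses score order: phi(f|e) lex(f|e) phi(e|f) lex(e|f) ...
            direct_prob = float(scores.split()[2])
            if direct_prob >= min_prob and len(source.split()) >= min_len:
                src.write(source.strip() + "\n")
                tgt.write(target.strip() + "\n")

# Hypothetical usage for a Hindi-English setup:
# augment_with_phrases("phrase-table.txt", "train.hi", "train.en")
```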
Introducing irrigation to smallholder farms in Sub-Saharan Africa (SSA) can increase food security, improve nutrition, and reduce poverty. To explore the possibility of using drip irrigation on smallholder farms in SSA, we present a feasibility study that views the design space through both a user-centered lens, explaining how drip might succeed in the future, and an engineering lens. Using a first-order model, we compare estimated capital costs of drip and sprinkler systems for various farm profiles and show that drip has the potential to be a viable technology for many farms in SSA.
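As an illustration of what a first-order capital-cost comparison of this kind can look like, the sketch below tallies rough component costs for a drip layout and a sprinkler layout over a range of plot sizes. All component costs, spacings, and plot sizes are hypothetical placeholders, not the paper's data or model.

```python
# Minimal sketch of a first-order capital-cost comparison (hypothetical numbers).

def drip_capital_cost(area_m2, row_spacing_m=1.0, lateral_cost_per_m=0.25,
                      emitter_spacing_m=0.3, emitter_cost=0.05, fixed_cost=150.0):
    """Rough drip-system capital cost: lateral pipe + emitters + fixed hardware."""
    lateral_length = area_m2 / row_spacing_m
    n_emitters = lateral_length / emitter_spacing_m
    return fixed_cost + lateral_length * lateral_cost_per_m + n_emitters * emitter_cost

def sprinkler_capital_cost(area_m2, coverage_per_head_m2=80.0,
                           head_cost=12.0, fixed_cost=200.0):
    """Rough sprinkler-system capital cost: number of heads + fixed hardware."""
    n_heads = area_m2 / coverage_per_head_m2
    return fixed_cost + n_heads * head_cost

for area in (500, 2000, 5000):  # example plot sizes in square metres
    print(area, round(drip_capital_cost(area), 1), round(sprinkler_capital_cost(area), 1))
```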
Sustainable Design Engineering (SDE) is an emerging research field, and the development of programmes aimed at educating sustainable design engineers is still very limited. One example is the SDE programme at Aalborg University in Copenhagen, which is based on a Problem Based Learning (PBL) model. In this article we address three research questions: (i) why Sustainable Design Engineering? (ii) what is Sustainable Design Engineering? and (iii) how can Sustainable Design Engineering be implemented? We address these questions by means of two examples from master’s thesis projects in the building and food sectors.
Platform design has been firmly established in the automotive industry as a strategy for providing wider product variety while maintaining cost-effective production. But this strategy can struggle to keep up with the pace and nature of emerging technologies. This paper reviews existing approaches to modelling product platforms and showcases the challenges OEMs face when introducing new technological innovations into their platforms. A gap is identified in methods for assessing the ability of existing platforms to integrate new technologies as they become available.
Design & Manufacture Knowledge Mapping is a critical activity in medium-to-large organisations, supporting many organisational functions. However, techniques for effective knowledge mapping often rely on interviews, consultations and appraisals. Although invaluable in providing expert insight, such methods are inherently intrusive and resource intensive. This paper presents word co-occurrence graphs as a means to automatically generate knowledge maps from technical documents and validates them against expert-generated knowledge maps.
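The sketch below shows one common way such a word co-occurrence graph can be built: terms appearing within a short token window of each other are linked, with edge weights counting how often they co-occur. The tokenisation rule, window size, and toy corpus are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch: build a word co-occurrence graph from technical documents.
import re
from collections import Counter

import networkx as nx

def cooccurrence_graph(documents, window=5, min_count=2):
    """Link terms that co-occur within `window` tokens; weight = co-occurrence count."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z][a-z\-]+", doc.lower())
        for i, term in enumerate(tokens):
            for other in tokens[i + 1:i + window]:
                if term != other:
                    counts[tuple(sorted((term, other)))] += 1
    graph = nx.Graph()
    for (a, b), weight in counts.items():
        if weight >= min_count:
            graph.add_edge(a, b, weight=weight)
    return graph

# Toy example (hypothetical sentences from maintenance documents):
docs = [
    "the fixture aligns the casting before machining",
    "machining the casting requires a rigid fixture",
]
g = cooccurrence_graph(docs, min_count=1)
print(sorted(g.edges(data="weight")))
```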
This paper proposes an analytical framework for estimating the domain within which a given technology can be used in a system. To this end, we elaborate on the concepts of ‘technology critical’, ‘technology sensitive’, and the ‘technology comfort zone’ to analytically assess the impact of a new technology in the early phases of system design. The result is a general method for indicating the range of requirements that can result in valid designs, a tool that can assist decision making for technology portfolio selection based on sustainable principles.
Technology Readiness Levels (TRLs) are a common form of technology maturity assessment and are crucial in the design process. Studies suggest that the TRL scale can be subjective and biased, and automating the assessment could reduce human bias. This paper highlights important challenges of automation by presenting data collected on 15 technologies from the nanotechnology sector. Our findings show that, contrary to claims in the literature, patent data exist for low-maturity technologies and may be useful for automation. We also found unexpected trends in data publications at TRLs 2, 3, and 4.