The rise of artificial intelligence is mainly associated with software-based robotic systems such as mobile robots, unmanned aerial vehicles, and increasingly, semi-autonomous cars. However, the large gap between the algorithmic and physical worlds leaves existing systems still far from the vision of intelligent and human-friendly robots capable of interacting with and manipulating our human-centered world. The emerging discipline of machine intelligence (MI), unifying robotics and artificial intelligence, aims for trustworthy, embodiment-aware artificial intelligence that is conscious both of itself and its surroundings, adapting its systems to the interactive body it is controlling. The integration of AI and robotics with control, perception and machine-learning systems is crucial if these truly autonomous intelligent systems are to become a reality in our daily lives. Following a review of the history of machine intelligence dating back to its origins in the twelfth century, this chapter discusses the current state of robotics and AI, reviews key systems and modern research directions, outlines remaining challenges and envisages a future of man and machine that is yet to be built.
As robots and intangible autonomous systems increasingly interact with humans, we wonder who should be held accountable when things go wrong. This chapter examines the extra-contractual liability of users, keepers and operators for wrongs committed by autonomous systems. It explores how the concept of ‘wrong’ can be defined with respect to autonomous systems and what standard of care can reasonably be expected of them. The chapter also looks at existing accountability rules for things and people in various legal orders and explains how they can be applied to autonomous systems. From there, various approaches to a new liability regime are explored. Neither product liability nor the granting of a legal persona to robots is an adequate response to the current challenges. Rather, both the keeper and the operator of the autonomous system should be held strictly liable for any wrong committed, opening up the possibility of privileges being granted to the operators of machine-learning systems that learn from data provided by the system’s users.
The possible emulation of human creativity by various models of artificial intelligence systems is discussed in this chapter. In some instances, the degree of originality of creations using algorithms may surprise even human beings themselves. For this reason, copyright protection of ‘works’ created by autonomous systems is proposed, which would take account of both the fundamental contributions of computer science researchers and the investment in human and economic resources that give rise to these ‘works’.
Rapid progress in AI and robotics is challenging the traditional boundaries of law. Algorithms are widely employed to make decisions that have an increasingly far-reaching impact on individuals and society, potentially leading to manipulation, biases, censorship, social discrimination, violations of privacy and property rights, and more. This has sparked a global debate on how to regulate AI and robotics.
Nowadays everything revolves around digital data. Data are, however, difficult to capture in legal terms due to their great variety: they may be valuable goods or completely useless, and they may be regarded as syntactic or semantic. It is particularly the sensitive data protected by data protection law that are highly valuable and of interest for data-trading, big-data and artificial-intelligence applications in the European data market. The European legislator appears to favour both a high level of protection of personal data, including the principle of ‘data minimisation’, and a free flow of data. The GDPR includes some free-flow elements, but legislation on the trading and usage of non-personal data, in particular, is still under discussion. The European legislator thus faces key challenges regarding the (partly) conflicting objectives reflected in data protection law and data economic law. This contribution assesses the current state of legal discussions and legislative initiatives at the European level.
Machine-learning algorithms are used to profile individuals and to make decisions based on those profiles. The European Union is a pioneer in the regulation of automated decision-making. The regime for solely automated decision-making under Article 22 of the General Data Protection Regulation (GDPR), including the interpretative guidance of the Article 29 Working Party (WP29, replaced by the European Data Protection Board under the GDPR), has become more substantial (i.e., less formalistic) than was the case under Article 15 of the Data Protection Directive. This has been achieved by: endorsing a non-strict concept of ‘solely’ automated decisions; explicitly recognising the enhanced protection required for vulnerable adults and children; linking the data subject’s right to an explanation to the right to challenge automated decisions; and validating the ‘general prohibition’ approach to Article 22(1). These positive developments enhance legal certainty and ensure higher levels of protection for individuals. They represent a step towards the development of a more mature and sophisticated regime for automated decision-making that is committed to helping individuals retain adequate levels of autonomy and control, whilst meeting the technology and innovation demands of the data-driven society.
This chapter introduces the notion of “wake neutrality” of artificial intelligence devices and reviews its implication for wake-word approaches in open conversational commerce (OCC) devices such as Amazon’s Alexa, Google Home and Apple’s Siri. Examples illustrate how neutrality requirements such as explainability, auditability, quality, configurability, institutionalization, and non-discrimination may impact the various layers of a complete artificial intelligence architecture stack. The legal programming implications of these requirements for algorithmic law enforcement are also analysed. The chapter concludes with a discussion of the possible role of standards bodies in setting a neutral, secure and open legal programming voice name system (VNS) for human-to-AI interactions to include an “emotional firewall.”
High-frequency trading has become important in financial markets and is one of the first areas of algorithmic trading to be intensively regulated. This chapter reviews the EU approach to the regulation of algorithmic trading, which can serve as a blueprint for the regulation of algorithms more broadly, focusing on organizational requirements such as pre- and post-trade controls and real-time monitoring.
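To make the notion of a pre-trade control concrete, the following is a minimal sketch of the kind of automated check such organizational requirements envisage: an order is validated against a size limit and a price collar before it may reach the market. The thresholds, function names, and reason strings are invented for illustration and do not reflect any actual regulatory values.

```python
# Hedged sketch of a pre-trade control for algorithmic trading:
# orders are checked against a size limit and a price collar before
# submission. All thresholds here are hypothetical.

MAX_ORDER_SIZE = 10_000   # hypothetical per-order quantity limit
PRICE_COLLAR = 0.05       # hypothetical: reject prices >5% from reference

def pre_trade_check(order_qty, order_price, reference_price):
    """Return (accepted, reason). A real system would also log rejections."""
    if order_qty > MAX_ORDER_SIZE:
        return False, "order size exceeds limit"
    if abs(order_price - reference_price) / reference_price > PRICE_COLLAR:
        return False, "price outside collar"
    return True, "ok"

print(pre_trade_check(500, 101.0, 100.0))   # (True, 'ok')
print(pre_trade_check(500, 110.0, 100.0))   # (False, 'price outside collar')
```

Post-trade controls and real-time monitoring would sit alongside such checks, observing executed orders rather than blocking them in advance.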
Despite their profound and growing influence on our lives, algorithms remain a partial “black box.” Keeping the risks that arise from rule-based and learning systems in check is a challenging task for both society and the legal system. This chapter examines existing and adaptable legal solutions and complements them with further proposals. It designs a regulatory model in four steps along the time axis: preventive regulation instruments; accompanying risk management; ex post facto protection; and an algorithmic responsibility code. Together, these steps form a legislative blueprint for the further regulation of artificial intelligence applications.
The legal classification of a robot as a ‘product’ has led to the application of civil liability rules for producers. Nevertheless, some aspects of the relevant European regulation suggest that this field warrants special attention and review in relation to robotics. Types of defect, the meanings of the term ‘producer’, the consumer expectation test and non-pecuniary damages are some of the aspects that could give rise to future debate. The inadequacy of the current Directive 85/374/EEC for regulating damages caused by robots, particularly those with self-learning capability, is highlighted by the document ‘Follow up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics’. Other relevant documents are the Report on “Liability for AI and other emerging digital technologies” prepared by the Expert Group on Liability and New Technologies, the “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics” [COM(2020) 64 final, 19.2.2020] and the White Paper “On Artificial Intelligence – A European approach to excellence and trust” [COM(2020) 65 final, 19.2.2020].
We investigate the use of semantic information for morphological segmentation, since words that are derived from each other remain semantically related. We use mathematical models such as maximum likelihood estimation (MLE) and maximum a posteriori estimation (MAP), incorporating semantic information obtained from dense word vector representations. Our approach does not require any annotated data, which makes it fully unsupervised; it requires only a small amount of raw data together with pretrained word embeddings for training. The results show that using dense vector representations helps morphological segmentation, especially for low-resource languages. We present results for Turkish, English, and German. Our semantic MLE model outperforms other unsupervised models for Turkish. Our proposed models could also be used for any other low-resource language with concatenative morphology.
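The core intuition, that a word and its true stem stay semantically close in embedding space, can be sketched as follows. This is an illustrative toy, not the authors' model: the vectors, words, and scoring function are invented, and a real system would use pretrained embeddings and integrate such scores into the MLE/MAP objective.

```python
# Hedged sketch: scoring a candidate morphological split by the semantic
# similarity between the full word and its candidate stem. The embeddings
# below are invented toy vectors, not pretrained ones.
import numpy as np

# Hypothetical embedding table (in practice: word2vec/fastText vectors).
EMB = {
    "walking": np.array([0.9, 0.1, 0.2]),
    "walk":    np.array([0.8, 0.2, 0.1]),  # plausible stem: similar vector
    "wal":     np.array([0.1, 0.9, 0.3]),  # implausible stem: dissimilar
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def split_score(word, stem):
    """Higher when the stem's vector stays close to the full word's vector."""
    if word not in EMB or stem not in EMB:
        return 0.0
    return cosine(EMB[word], EMB[stem])

# The split "walk" + "ing" should score higher than "wal" + "king".
print(split_score("walking", "walk") > split_score("walking", "wal"))  # True
```

In an unsupervised segmenter, such similarity scores can act as evidence for or against a split point without any annotated training data.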
Alleviating pain is good and abandoning hope is bad. We instinctively understand how words like alleviate and abandon affect the polarity of a phrase, inverting or weakening it. When these words are content words, such as verbs, nouns, and adjectives, we refer to them as polarity shifters. Shifters are a frequent occurrence in human language and an important part of successfully modeling negation in sentiment analysis; yet research on negation modeling has focused almost exclusively on a small handful of closed-class negation words, such as not, no, and without. A major reason for this is that shifters are far more lexically diverse than negation words, but no resources exist to help identify them. We seek to remedy this lack of shifter resources by introducing a large lexicon of polarity shifters that covers English verbs, nouns, and adjectives. Creating the lexicon entirely by hand would be prohibitively expensive. Instead, we develop a bootstrapping approach that combines automatic classification with human verification to ensure the high quality of our lexicon while reducing annotation costs by over 70%. Our approach leverages a number of linguistic insights; while some features are based on textual patterns, others use semantic resources or syntactic relatedness. The created lexicon is evaluated both on a polarity shifter gold standard and on a polarity classification task.
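The bootstrapping idea described above, automatic classification followed by human verification, can be sketched in miniature. The features, weights, and candidate words below are invented for illustration; the actual approach combines textual patterns, semantic resources, and syntactic relatedness in a trained classifier.

```python
# Hedged sketch of bootstrapping a shifter lexicon: an automatic scorer
# ranks candidate words, and only promising candidates are passed on to
# human verification. Features and weights are toy values.

def score_candidate(word, features):
    """Combine binary feature indicators into a score (invented weights)."""
    weights = {"pattern_match": 0.5, "semantic_neighbor": 0.3, "syntactic_cue": 0.2}
    return sum(weights[f] for f in features.get(word, []) if f in weights)

# Hypothetical feature assignments for three candidate words.
candidates = {
    "alleviate": ["pattern_match", "semantic_neighbor"],
    "abandon":   ["pattern_match", "syntactic_cue"],
    "table":     [],  # no shifter-like evidence
}

# Rank candidates; only the top-scoring ones would go to human annotators.
ranked = sorted(candidates, key=lambda w: score_candidate(w, candidates), reverse=True)
to_verify = [w for w in ranked if score_candidate(w, candidates) > 0]
print(to_verify)  # ['alleviate', 'abandon']
```

Filtering the candidate pool this way is what allows the reported reduction in annotation cost: annotators only ever see words the classifier already considers likely shifters.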
This article describes the criteria for identifying the focus of negation in Spanish. This work involved an in-depth linguistic analysis of the focus of negation, through which we identified ten types of criteria that account for a wide variety of constructions containing negation. These criteria cover all the cases that appear in the NewsCom corpus and were assessed in the annotation of this corpus. The NewsCom corpus consists of 2955 comments posted in response to 18 different news articles from online newspapers. It contains 2965 negative structures with their corresponding negation marker, scope, and focus. This is the first corpus annotated with focus in Spanish, and it is freely available. It is a valuable resource that can be used both for the training and evaluation of systems that aim to automatically detect the scope and focus of negation and for the linguistic analysis of negation grounded in real data.
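To illustrate what an annotation of marker, scope, and focus might look like in machine-readable form, here is a minimal sketch using character offsets. The Spanish sentence and the chosen focus are an invented example, not taken from the NewsCom corpus, and the representation is a plausible scheme rather than the corpus's actual format.

```python
# Hedged sketch: representing a negation annotation (marker, scope, focus)
# as character spans. Sentence and spans are an invented illustration.

sentence = "No compré el libro ayer"  # "I did not buy the book yesterday"

annotation = {
    "marker": (0, 2),    # "No" -- the negation cue
    "scope":  (0, 23),   # the whole clause affected by the negation
    "focus":  (19, 23),  # "ayer" -- the element most directly negated
}

def span_text(sent, span):
    """Recover the surface string for an annotated span."""
    start, end = span
    return sent[start:end]

print(span_text(sentence, annotation["marker"]))  # No
print(span_text(sentence, annotation["focus"]))   # ayer
```

Span-based representations like this make the annotations directly usable as training targets for systems that detect the scope and focus of negation.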