This book is devoted to five main principles of algorithm design: divide and conquer, greedy algorithms, thinning, dynamic programming, and exhaustive search. These principles are presented using Haskell, a purely functional language, leading to simpler explanations and shorter programs than would be obtained with imperative languages. Carefully selected examples, both new and standard, reveal the commonalities and highlight the differences between algorithms. The algorithm developments use equational reasoning where applicable, clarifying the applicability conditions and correctness arguments. Every chapter concludes with exercises (nearly 300 in total), each with complete answers, allowing the reader to consolidate their understanding and apply the techniques to a range of problems. The book serves students (both undergraduate and postgraduate), researchers, teachers, and professionals who want to know more about what goes into a good algorithm and how such algorithms can be expressed in purely functional terms.
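To give a flavor of the style the blurb describes, here is a minimal divide-and-conquer example in Haskell. It is an illustrative sketch, not an excerpt from the book: merge sort splits the input, solves each half recursively, and combines the sorted halves.

```haskell
-- Divide and conquer, illustrated by merge sort (generic sketch,
-- not taken from the book's text).
msort :: Ord a => [a] -> [a]
msort []  = []                          -- trivial cases: already sorted
msort [x] = [x]
msort xs  = merge (msort ys) (msort zs) -- divide, recurse, combine
  where (ys, zs) = splitAt (length xs `div` 2) xs

-- Combine two sorted lists into one sorted list.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

main :: IO ()
main = print (msort [3, 1, 4, 1, 5, 9, 2, 6])  -- prints [1,1,2,3,4,5,6,9]
```

The equational, pattern-matching definition is exactly what makes such designs short to state and amenable to the equational reasoning the book emphasizes.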
The tax system incentivizes automation, even in cases where it is not otherwise efficient. This is because the vast majority of tax revenue is derived from labor income. When an AI replaces a person, the government loses a substantial amount of tax revenue, potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once labor is capital. Robots are not good taxpayers. The solution is to change the tax system to be more neutral between AI and human workers and to limit automation’s impact on tax revenue. This would be best achieved by reducing taxes on human workers and increasing corporate and capital taxes.
This chapter explains the need for AI legal neutrality and discusses its benefits and limitations. It then provides an overview of its application in tax, tort, intellectual property, and criminal law. Law is vitally important to the development of AI, and AI will have a transformative effect on the law given that many legal rules are based on standards of human behavior that will be automated. As AI increasingly steps into the shoes of people, it will need to be treated more like a person, and more importantly, sometimes people will need to be treated more like AI.
This chapter defines artificial intelligence and discusses its history and evolution, explains the differences between major types of AI (symbolic/classical and connectionist), and describes AI’s most recent advances, applications, and impact. It also weighs in on the question of whether AI can “think,” noting that the question is less relevant to regulatory efforts, which should focus on promoting behaviors that improve social outcomes.
AI has the potential to be substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than people. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current laws, suppliers of AI tortfeasors are strictly responsible for their harms. A better system would hold them liable for harms caused by AI tortfeasors in negligence. Not only would this encourage the use of AI after it exceeds human performance, but also the liability test would focus on activity rather than design, which would be simpler to administer. More importantly, just as AI activity should be discouraged when it is less safe than a person, human activity should be discouraged when it is less safe than an AI. Once AI is safer than a person and automation is practicable, human tortfeasors should be held to the standard of AI behavior.
The impact of artificial inventors is only starting to be felt, but AI’s rapid improvement means that it may soon outdo people at solving problems in certain areas. This should revolutionize not only research and development but also patent law. The most important requirement for being granted a patent is that an invention must be nonobvious to a hypothetical skilled person who represents an average researcher. As AI increasingly augments average researchers, this should make them more knowledgeable and sophisticated. In turn, this should raise the bar to patentability. Once inventive AI moves from augmenting to automating average researchers, it should directly represent the skilled person in obviousness determinations. As inventive AI continues to improve, this should continue to raise the bar to patentability, eventually rendering innovative activities obvious. To a superintelligent AI, everything will be obvious.
One of the central logical ideas in Wittgenstein’s Tractatus logico-philosophicus is the elimination of the identity sign in favor of the so-called “exclusive interpretation” of names and quantifiers requiring different names to refer to different objects and (roughly) different variables to take different values. In this paper, we examine a recent development of these ideas in papers by Kai Wehmeier. We diagnose two main problems of Wehmeier’s account, the first concerning the treatment of individual constants, the second concerning so-called “pseudo-propositions” (Scheinsätze) of classical logic such as $a=a$ or $a=b \wedge b=c \rightarrow a=c$. We argue that overcoming these problems requires two fairly drastic departures from Wehmeier’s account: (1) Not every formula of classical first-order logic will be translatable into a single formula of Wittgenstein’s exclusive notation. Instead, there will often be a multiplicity of possible translations, revealing the original “inclusive” formulas to be ambiguous. (2) Certain formulas of first-order logic such as $a=a$ will not be translatable into Wittgenstein’s notation at all, being thereby revealed as nonsensical pseudo-propositions which should be excluded from a “correct” conceptual notation. We provide translation procedures from inclusive quantifier-free logic into the exclusive notation that take these modifications into account and define a notion of logical equivalence suitable for assessing these translations.
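To make the exclusive interpretation concrete, the standard schema relating inclusive and exclusive quantifiers (in essence Hintikka's reconstruction, which Wehmeier's work builds on) can be stated as follows; the particular formula is an illustrative example, not one drawn from the paper:

```latex
% Under the exclusive reading, distinct variables must take distinct
% values, so the classical (inclusive) existential pair is recovered
% by restoring the "diagonal" case with an explicit disjunct:
\exists x \exists y\, R(x,y)
  \;\equiv\;
  \underbrace{\exists x \exists y\, R(x,y)}_{\text{exclusive: } x \neq y}
  \;\vee\;
  \exists x\, R(x,x)
```

In this notation identity statements become redundant: co-reference is shown by reusing the same name rather than asserted by a sign $=$, which is why formulas like $a=a$ come under suspicion as pseudo-propositions.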
In this paper, a sliding mode control using a control point concept is proposed for an under-actuated quadrotor. The proposed controller controls the position of the control point, a displaced point from the quadrotor’s geometric center, and the yaw angle. This method solves singularity issues in control matrix inversion and enables the utilization of the multi-input multi-output equation to derive the control inputs. The sliding surface is designed to control four outputs while stabilizing roll and pitch angles. Simulation and experimental results show the effectiveness and robustness of the proposed controller in tracking a trajectory under parametric uncertainties.
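As a generic reminder of the technique, and not the paper's specific coupled four-output surface, a first-order sliding mode controller for a scalar tracking error $e$ has this shape:

```latex
% Generic sliding mode control sketch (illustrative only).
% Sliding surface combining error and error rate:
s = \dot{e} + \lambda e, \qquad \lambda > 0
% Control law: equivalent control plus a switching term that
% drives s to zero in finite time despite bounded uncertainty:
u = u_{\mathrm{eq}} - K\,\operatorname{sgn}(s), \qquad K > 0
% Once on the surface (s = 0), the error obeys \dot{e} = -\lambda e,
% so e \to 0 exponentially regardless of matched disturbances.
```

The switching term is what gives sliding mode control its robustness to parametric uncertainty, the property the abstract's experiments are testing.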
This chapter starts Part II (domain-dependent feature engineering) by describing the creation of the base WikiCities dataset employed here and in the next three chapters. This dataset is used to predict the population of cities using a semantic graph built from Wikipedia infoboxes. Semantic graphs were chosen as an example of handling and representing variable-length raw data as fixed-length feature vectors, particularly using the techniques discussed in Chapter 5. The intention with this dataset is to provide a task that can be attacked with regular features, time series features, textual features, and image features. The chapter discusses how the dataset came to be and presents an Exploratory Data Analysis over it, resulting in a base, incomplete featurization. From there, a first featurization is produced, followed by an error analysis process including feature ablation and mutual information feature utility. From this error analysis, a second featurization is proposed, and an error analysis using feature stability concludes the exercise. All insights are captured in the two final feature sets, one conservative and the other expected to have higher performance.
This chapter closes Part I by presenting advanced topics, including dealing with variable-length feature vectors, Feature Engineering and Deep Learning, and automatic Feature Engineering (either supervised or unsupervised). It bridges the purely domain-independent techniques of the earlier chapters toward problems of domain-specific importance. Variable-length feature vectors have always been a problem for ML methods that expect fixed-size vectors. In general, techniques involve truncation, computing the most general tree and encoding paths on it, or simply destructive projection onto a smaller plane. The chapter briefly delves into some Deep Learning concepts and what they entail for feature engineering. Automated Feature Learning using FeatureTools (the DataScience Machine) and genetic programming is covered, as are Instance Engineering and Unsupervised Feature Engineering (in the form of autoencoders).
This chapter concludes by responding to some of the controversies about artificial intelligence and possible criticisms of AI legal neutrality. It argues that AI legal neutrality is important regardless of whether AI broadly achieves superhuman performance, and that the law would not want to constrain AI development for protectionist reasons. It further argues that AI legal neutrality is a coherent principle for policymakers to apply, even though it allows the law to treat AI and people differently and will sometimes be at odds with other regulatory goals. Finally, it discusses some of the risks and dangers of AI and argues these are susceptible to management with appropriate legal frameworks.