Tech companies bypass privacy laws daily, creating harm for profit. The information economy is plagued with hidden harms to people's privacy, equality, finances, reputation, mental wellbeing, and even to democracy, produced by data breaches and data-fed business models. This book explores why this happens and proposes what to do about it. Legislators, policymakers, and judges are trapped in ineffective approaches to tackling digital harms because they work with tools unfit for the unique challenges of data ecosystems that leverage AI. People are powerless against inferences about them that they can't anticipate, interfaces that manipulate them, and digital harms they can't escape. Adopting a cross-jurisdictional scope, this book describes how laws and regulators can and should respond to these pervasive and expanding harms. In a world where data is everywhere, one of society's most pressing challenges is addressing the power discrepancies between the companies that profit from personal data and the people whose data produces that profit. Doing so requires creating accountability for the consequences of corporate data practices, not for the practices themselves. Laws can achieve this by creating a new type of liability that recognizes the social value of privacy, uncovering the dynamics between individual and collective digital harms.
Chapter 2 looks at transparency and fintech tools. The premise behind many so-called fintech innovations in consumer markets is to make more personalised financial products available to an often underserved and largely inexperienced cohort. Many consumers are not good at managing their day-to-day finances, selecting optimal credit products, or investing for the future. Fintech products, and the applications associated with them, are commonly promoted on the basis that they will use consumer data, AI capacities, and a lower cost base to promote competition and better serve consumers, including financially excluded or vulnerable consumers. Paterson, Miller, and Lyons challenge these premises by demystifying the kinds of capacities that are possible through the fintech technologies being offered to consumers. The most common fintech solutions offered to consumers are credit, budgeting, and investment tools. These typically do not disrupt existing service models through the use of deep-learning AI. Rather, they are commonly enabled by encoding the rules of thumb used by mortgage brokers and financial advisers. They make a return through methods criticised when deployed by social media platforms, namely on-selling data, targeted advertising, and commission-based sales. There is, moreover, little incentive for fintech providers to make products that benefit marginalised cohorts, for whom there is minimal relevant data and little likelihood of a lucrative return. The authors argue that greater transparency is required about what is being offered to consumers through fintech tools and who benefits from them, along with greater accountability for ill-founded and even sensationalised claims.
Chapter 6 explores a different path: building privacy law on liability. Liability for material and immaterial privacy harms would improve the protection system. To achieve meaningful liability, though, laws must compensate privacy harm itself, not just the material consequences that stem from it. Compensation for financial and physical harms produced by the collection, processing, or sharing of data is important but insufficient. The proposed liability framework would address informational exploitation by making companies internalize its risks. It would deter and remedy socially detrimental data practices, rather than chasing elusive individual-control aims. Courts can distinguish harmful losses from benign ones by examining them on the basis of contextual and normative social values. By focusing on harm, privacy liability would overcome its current problems of causation quagmires and frivolous lawsuits.
Governments are increasingly adopting artificial intelligence (AI) tools to assist, augment, and even replace human administrators. In this chapter, Paul Miller, the NSW Ombudsman, discusses how the well-established principles of administrative law and good decision-making apply, or may be extended, to control the use of AI and other automated decision-making (ADM) tools in administrative decision-making. The chapter highlights the importance of careful design, implementation and ongoing monitoring to mitigate the risk that ADM in the public sector could be unlawful or otherwise contravene principles of good decision-making – including consideration of whether express legislative authorisation for the use of ADM technologies may be necessary or desirable.
The increase in global population and urbanization presents significant challenges to society: space is becoming increasingly scarce, demand is exceeding the capacity of deteriorating infrastructure, transportation is fraught with congestion, and environmental impacts are accelerating. Underground space, and particularly tunnels, has a key role to play in tackling these challenges. However, the cost, risk, uncertainty, and complexity of the tunneling process have impeded its growth. In this paper, we envision several technological advancements that can potentially transform the mechanized tunneling industry: artificial intelligence (AI), autonomous systems, and bio-inspired systems. The proliferation of AI may assist human engineers and operators in making informed decisions systematically and quantitatively, based on massive real-time data collected during tunneling. Autonomous tunneling systems may enable precise and predictable tunneling operations with minimal human intervention and facilitate the construction of massive, large-scale underground infrastructure projects that were previously challenging or unfeasible using conventional methods. Bio-inspired systems may provide valuable references and strategies for more efficient tunneling design and construction concepts. While these technological advancements offer great promise, they also face considerable challenges, such as improving the accessibility and shareability of tunneling data; developing robust, reliable, and explainable machine learning systems; and scaling the mechanics of bio-inspired systems, and ensuring their applicability, from the prototype level to real-world applications. Addressing these challenges is imperative to the successful implementation of these innovations in future tunneling.
Given a fixed graph $H$ and a constant $c \in [0,1]$, we can ask what graphs $G$ with edge density $c$ asymptotically maximise the homomorphism density of $H$ in $G$. For all $H$ for which this problem has been solved, the maximum is always asymptotically attained on one of two kinds of graphs: the quasi-star or the quasi-clique. We show that for any $H$ the maximising $G$ is asymptotically a threshold graph, while the quasi-clique and the quasi-star are the simplest threshold graphs, having only two parts. This result gives us a unified framework to derive a number of results on graph homomorphism maximisation, some of which were also found quite recently and independently using several different approaches. We show that there exist graphs $H$ and densities $c$ such that the optimising graph $G$ is neither the quasi-star nor the quasi-clique (Day and Sarkar, SIAM J. Discrete Math. 35(1), 294–306, 2021). We also show that for $c$ large enough all graphs $H$ maximise on the quasi-clique (Gerbner et al., J. Graph Theory 96(1), 34–43, 2021), and for any $c \in [0,1]$ the density of $K_{1,2}$ is always maximised on either the quasi-star or the quasi-clique (Ahlswede and Katona, Acta Math. Hung. 32(1–2), 97–120, 1978). Finally, we extend our results to uniform hypergraphs.
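For orientation, the standard definitions behind this abstract (assumed here, not spelled out in it) can be stated compactly:

```latex
% hom(H, G): the number of graph homomorphisms from H to G.
% The homomorphism density normalises this count by all vertex maps:
\[
  t(H, G) \;=\; \frac{\hom(H, G)}{|V(G)|^{|V(H)|}}.
\]
% A quasi-clique is (asymptotically) a clique plus isolated vertices,
% with at most one partially joined vertex; a quasi-star is a set of
% dominating vertices joined to all others, the rest independent.
% Both are threshold graphs with just two parts.
```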
Artificial barriers in Learning Automata (LA) are a powerful yet under-explored concept, even though they were first proposed in the 1980s. Introducing artificial non-absorbing barriers makes LA schemes resilient to being trapped in absorbing barriers, a phenomenon often referred to as lock-in probability, which leads to an exclusive choice of one action after convergence. Within the field of LA, and reinforcement learning in general, there is a scarcity of theoretical work on, and applications of, schemes with artificial barriers. In this paper, we devise an LA with artificial barriers for solving a general form of stochastic bimatrix game. Classical LA schemes possess absorbing barriers; they are a powerful tool in game theory and were shown to converge to the game's Nash equilibrium under limited information. However, the existing stream of LA work on game-theoretical problems can only solve the case where the game's Saddle Point exists in pure strategies, and fails to reach a mixed Nash equilibrium when no such Saddle Point exists.
Furthermore, we provide experimental results that are in line with our theoretical findings.
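As a rough sketch of the core mechanism this abstract describes, the following toy linear reward-inaction automaton clips its action probabilities away from 0 and 1 so that no barrier is absorbing. The paper's actual scheme for bimatrix games is more elaborate; all names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lri_with_barriers(env_reward, n_actions=2, lr=0.01, eps=0.01, steps=50_000):
    """Linear reward-inaction (L_RI) automaton with artificial
    non-absorbing barriers: p is kept inside [eps, 1 - eps], so the
    scheme cannot lock in on one action.  A minimal sketch, not the
    authors' exact scheme."""
    p = np.full(n_actions, 1.0 / n_actions)       # action probabilities
    for _ in range(steps):
        a = rng.choice(n_actions, p=p)
        if env_reward(a):                         # reward: move p toward a
            p += lr * (np.eye(n_actions)[a] - p)
        # penalty: no update (reward-inaction); barriers via clipping
        p = np.clip(p, eps, 1.0 - eps)
        p /= p.sum()
    return p

# Toy environment: action 0 is rewarded w.p. 0.7, action 1 w.p. 0.4.
print(lri_with_barriers(lambda a: rng.random() < (0.7, 0.4)[a]))
```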
Most studies on path planning for robotic arms focus on obstacle avoidance at the end-effector while ignoring obstacle avoidance for the arm's joint linkages, and existing obstacle avoidance methods have low flexibility and adaptability. This paper proposes a path obstacle avoidance algorithm for the whole 6-DOF robotic arm based on an improved A* algorithm and the artificial potential field method. First, an improved A* algorithm is proposed to address the deficiencies of the conventional A* algorithm in robotic arm end path planning, such as a large number of search nodes and low computational efficiency. The enhanced A* algorithm introduces a new node search strategy and a local path optimization method, which significantly reduce the number of search nodes and enhance search efficiency. To enable the manipulator's joint linkages to avoid obstacles, a posture adjustment method based on the artificial potential field method is proposed. The efficiency and environmental adaptability of the proposed path planning algorithm are validated through three types of simulation analysis conducted in different environments. Finally, the AUBO-i10 robotic arm is used to conduct path avoidance tests. Experimental results demonstrate that the proposed method makes the manipulator move smoothly and effectively plans an obstacle-free path, proving the method's viability.
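For readers unfamiliar with the artificial potential field method on which the posture adjustment builds, here is a minimal point-robot sketch under assumed gains and geometry; the paper's joint-space formulation is not reproduced.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=0.3, step=0.01):
    """One gradient-descent step of a classic artificial potential field:
    attractive pull toward the goal plus repulsive push from obstacles
    within influence radius rho0.  A textbook sketch; q, goal, and
    obstacles are points in R^3, and all gains are illustrative."""
    force = k_att * (goal - q)                         # attractive force
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if d < rho0:                                   # repulsion only nearby
            force += k_rep * (1.0/d - 1.0/rho0) / d**2 * (q - obs) / d
    return q + step * force

q = np.array([0.0, 0.0, 0.0])
goal = np.array([1.0, 1.0, 0.5])
obstacles = [np.array([0.5, 0.4, 0.3])]
for _ in range(2000):
    q = apf_step(q, goal, obstacles)
print(q)  # converges near the goal while skirting the obstacle
```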
To answer database queries over incomplete data, the gold standard is finding certain answers: those that are true regardless of how the incomplete data is interpreted. Such answers can be found efficiently for conjunctive queries and their unions, even in the presence of constraints. Once negation is added, however, the problem becomes intractable. We concentrate on the complexity of certain answers under constraints, and on efficiently answering queries outside the usual classes of (unions of) conjunctive queries by means of rewriting into Datalog and first-order queries. We first observe that there are three different ways in which query answering can be cast as a decision problem. We complete the existing picture and provide precise complexity bounds on all versions of the decision problem, for certain and best answers. We then study a well-behaved class of queries that extends unions of conjunctive queries with a mild form of negation. We show that for these queries, certain answers can be expressed in Datalog with negation, even in the presence of functional dependencies, making them tractable in data complexity. We show that in general Datalog cannot be replaced by first-order logic, but in the absence of constraints such a rewriting can be done in first-order logic.
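A small illustration of certain answers for a conjunctive query, computed by naive evaluation: treat each labeled null as a constant that matches only itself, then discard answers containing nulls. The relation names and nulls below are invented for the example; the paper's Datalog rewritings for queries with negation go beyond this.

```python
# Incomplete relations R(x, y) and S(y, z); "_n1", "_n2" are labeled nulls.
R = {("alice", "_n1"), ("bob", "proj2")}
S = {("_n2", "hr"), ("proj2", "eng")}

# Conjunctive query  Q(x, z) :- R(x, y), S(y, z)
answers = {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

is_null = lambda v: v.startswith("_n")
certain = {t for t in answers if not any(is_null(v) for v in t)}
print(certain)  # {('bob', 'eng')} -- ('alice', 'hr') is merely possible
```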
Ultrafilters play a significant role in model theory to characterize logics having various compactness and interpolation properties. They also provide a general method to construct extensions of first-order logic having these properties. A main result of this paper is that every class $\Omega $ of uniform ultrafilters generates a $\Delta $-closed logic ${\mathcal {L}}_\Omega $. ${\mathcal {L}}_\Omega $ is $\omega $-relatively compact iff some $D\in \Omega $ fails to be $\omega _1$-complete iff ${\mathcal {L}}_\Omega $ does not contain the quantifier “there are uncountably many.” If $\Omega $ is a set, or if it contains a countably incomplete ultrafilter, then ${\mathcal {L}}_\Omega $ is not generated by Mostowski cardinality quantifiers. Assuming $\neg 0^\sharp $ or $\neg L^{\mu }$, if $D\in \Omega $ is a uniform ultrafilter over a regular cardinal $\nu $, then every family $\Phi $ of formulas in ${\mathcal {L}}_\Omega $ with $|\Phi |\leq \nu $ satisfies the compactness theorem. In particular, if $\Omega $ is a proper class of uniform ultrafilters over regular cardinals, ${\mathcal {L}}_\Omega $ is compact.
In this paper, we propose an enhanced version of the vanilla transformer for data-to-text generation and then use it as the generator of a conditional generative adversarial model to improve the semantic quality and diversity of output sentences. Specifically, by adding a diagonal mask matrix to the attention scores of the encoder and using the history of attention weights in the decoder, this enhanced transformer prevents semantic defects in the output text. Furthermore, using this enhanced transformer and a triplet network as, respectively, the generator and discriminator of a conditional generative adversarial network guarantees the diversity and semantic quality of the output sentences. To demonstrate the effectiveness of the proposed model, called conditional generative adversarial with enhanced transformer (CGA-ET), we perform experiments on three different datasets and observe that our model achieves better results than the baseline models on the BLEU, METEOR, NIST, ROUGE-L, CIDEr, BERTScore, and SER automatic evaluation metrics, as well as in human evaluation.
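The diagonal mask can be pictured with a single-head NumPy sketch: adding a large negative value on the diagonal of the score matrix stops each position from attending to itself. This is one reading of the mechanism, not the authors' code, and all shapes below are illustrative.

```python
import numpy as np

def masked_self_attention(x, W_q, W_k, W_v):
    """Scaled dot-product self-attention with a diagonal mask added to
    the scores, so no position attends to itself.  A single-head sketch
    of the mechanism the abstract describes, not the authors' model."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores += np.diag(np.full(len(x), -1e9))      # diagonal mask: -inf on self
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, model dim 8
W = [rng.normal(size=(8, 8)) for _ in range(3)]
print(masked_self_attention(x, *W).shape)         # (5, 8)
```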
Lower limb exoskeletons (LLEs) have demonstrated their potential for delivering quantified, repetitive gait training to individuals with gait impairments. A critical concern in robotic gait training is fostering active patient engagement, and a viable solution is to harness the patient's intrinsic effort to govern the control of the LLE. To address these challenges, this study presents an online gait learning approach with an appropriate control strategy for rehabilitation exoskeletons, based on dynamic movement primitives (DMP) and an Assist-As-Needed (AAN) control strategy, denoted DMP-AAN. Tailored to post-stroke patients, the approach learns the gait trajectory of the unaffected leg and then generates the reference gait trajectory for the affected leg, leveraging the learned model and the patient's own exertion. Compared to conventional AAN methodologies, the proposed DMP-AAN approach adapts to diverse scenarios encompassing varying gait patterns. Experimental validation has been performed on the lower limb rehabilitation exoskeleton HemiGo. The findings highlight the approach's ability to generate suitable control efforts for LLEs with reduced human-robot interaction force, enabling highly patient-controlled gait training sessions.
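As background on the DMP component, a textbook discrete dynamic movement primitive integrates a goal-directed spring-damper shaped by a learned forcing term. The sketch below uses standard gains and invented parameter names, and omits the paper's gait-learning pipeline.

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths, tau=1.0, dt=0.001,
                alpha=25.0, beta=25.0/4, alpha_x=3.0):
    """Integrate one discrete dynamic movement primitive (DMP):
    a spring-damper pulled toward goal g, shaped by a forcing term
    built from radial basis functions.  A standard textbook DMP,
    not the paper's specific formulation."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)        # RBF basis
        f = x * (g - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                    # canonical phase
        traj.append(y)
    return np.array(traj)

centers = np.linspace(0.0, 1.0, 10)
traj = dmp_rollout(y0=0.0, g=1.0, weights=np.zeros(10),
                   centers=centers, widths=np.full(10, 50.0))
print(traj[-1])  # close to 1.0: the primitive converges to the goal
```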
Accurate geospatial information about the causes and consequences of climate change, including energy systems infrastructure, is critical to planning climate change mitigation and adaptation strategies. When up-to-date spatial data on infrastructure are lacking, one approach to filling this gap is to learn from overhead imagery using deep-learning-based object detection algorithms. However, the performance of these algorithms can suffer when they are applied to diverse geographies, which is common in practice. We propose a technique to generate realistic synthetic overhead images of an object (e.g., a generator) to enhance the ability of these techniques to transfer across diverse geographic domains. Our technique blends example objects into unlabeled images from the target domain using generative adversarial networks. It requires minimal labeled examples of the target object and is computationally efficient enough to generate a large corpus of synthetic imagery. We show that including these synthetic images in the training of an object detection model improves its ability to generalize to new domains, measured in terms of average precision, compared to a baseline model and other relevant domain adaptation techniques.
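To make the synthetic-labeling idea concrete, here is a deliberately crude cut-and-paste stand-in for the paper's GAN-based blending: it drops an alpha-masked object crop into a target-domain background and emits the corresponding bounding-box label. All array shapes are illustrative.

```python
import numpy as np

def paste_object(background, obj_rgba, rng):
    """Naive cut-and-paste augmentation: blend an RGBA object crop at a
    random location in a target-domain background and return the image
    plus its bounding-box label.  A crude stand-in for the paper's
    GAN-based blending, shown only to illustrate synthetic labeling."""
    H, W, _ = background.shape
    h, w, _ = obj_rgba.shape
    y, x = rng.integers(0, H - h), rng.integers(0, W - w)
    alpha = obj_rgba[..., 3:] / 255.0
    out = background.copy()
    patch = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (alpha * obj_rgba[..., :3]
                             + (1 - alpha) * patch).astype(np.uint8)
    return out, (x, y, x + w, y + h)               # synthetic box label

rng = np.random.default_rng(0)
bg = rng.integers(0, 255, size=(256, 256, 3), dtype=np.uint8)
obj = rng.integers(0, 255, size=(32, 32, 4), dtype=np.uint8)
img, box = paste_object(bg, obj, rng)
print(box)
```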
Dive into the foundations of intelligent systems, machine learning, and control with this hands-on, project-based introductory textbook. Precise, clear introductions to core topics in fuzzy logic, neural networks, optimization, deep learning, and machine learning avoid the use of complex mathematical proofs and are supported by over 70 examples. Modular chapters built around a consistent learning framework enable tailored course offerings to suit different learning paths. Over 180 open-ended review questions support self-review and class discussion, over 120 end-of-chapter problems cement student understanding, and over 20 hands-on Arduino assignments connect theory to practice, supported by downloadable Matlab and Simulink code. Comprehensive appendices review the fundamentals of modern control and contain practical information on implementing the hands-on assignments using Matlab, Simulink, and Arduino. Accompanied by solutions for instructors, this is the ideal guide for senior undergraduate and graduate engineering students, and for professional engineers, looking for an engaging and practical introduction to the field.
This paper examines Smalley’s preliminary taxonomy of the sound shape and the subsequent application of graphical notation in electroacoustic music. It will demonstrate ways in which spatial categorisations of the morphological sound shape have remained relatively untouched in academia, despite the codependency of frequency, space and time. Theoretical examples and existing visualisations of the sound shape will be considered as a starting point, to determine why a holistic visualisation of space is warranted. A notational system addressing the codependency between spatial and spectral sound shapes will be presented, with reference to its context in Cartesian-coordinate sound environments. This method of electroacoustic notation will incorporate the visualisation of Smalley’s categorisation of spatial sound shapes and ideas of spatial gesture, texture and distribution within Smalley’s composed and listening spaces. This visualisation and notation of composed and listening spaces will demonstrate that audio technologies are imperative drivers in the future analysis and understanding of the sound shape. It will measure the modulation of spatial sound shape properties in Cartesian (height, width, depth) and spherical (azimuth and altitude) coordinates across linear temporality, to better represent the complete form of Smalley’s sound shape. This spatial notation will aid the rounded visualisation of Smalley’s morphology, motion, texture, gesture, structure and form. Use of this notational framework will illustrate ways in which a new tool for scoring electroacoustic sound shapes can inform new practices in computer music composition.
Several governmental organizations around the world aim for algorithmic accountability of artificial intelligence systems. However, there are few specific proposals on how exactly to achieve it. This article provides an extensive overview of possible transparency and inspectability mechanisms that contribute to accountability for the technical components of an algorithmic decision-making system. Following the different phases of a generic software development process, we identify and discuss several such mechanisms. For each of them, we estimate the time and monetary costs that might be associated with it.