The last chapter ended on a down note, when we realized that the standard binary search tree can’t guarantee O(log n) performance if it isn’t balanced. This chapter introduces self-balancing search trees. All three of the trees we’ll examine – 2-3-4 trees, B-trees, and red–black trees – implement search tree operations, but perform extra work to ensure that the tree stays balanced.
The responsibilities and liability of the persons and organisations involved in the development of AI systems are not clearly identified. The assignment of liability will require governments to move from a risk-based to a responsibility-based system. One possible approach would be to establish a pan-EU compensation fund for damages caused by digital technologies and AI, financed by the industry and insurance companies.
Arrays are Java’s fundamental low-level data structure, used to manage fixed-size collections of items. Chapter 2 introduced ArrayList, which implemented a resizable sequential collection of data items, similar to Python’s lists. Arrays are lower-level, but they’re often the best choice for representing fixed-size collections of items, such as matrices. Arrays are also the basic building block of many higher-level data structures. Therefore, understanding how to create and manipulate basic arrays is an essential skill.
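As a quick illustration of the kind of manipulation the chapter covers, here is a minimal sketch (the class and method names are our own, not taken from the text) that creates a fixed-size array and a small matrix:

```java
public class ArrayBasics {
    // Sum the elements of a fixed-size array.
    public static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] values = {3, 1, 4, 1, 5};      // fixed-size, length 5
        System.out.println(sum(values));     // 14

        // A matrix is simply an array of arrays.
        int[][] identity = new int[2][2];
        identity[0][0] = 1;
        identity[1][1] = 1;
        System.out.println(identity[0][0]);  // 1
    }
}
```

Unlike an ArrayList, the array's length is fixed at creation; growing it means allocating a new array and copying the elements over.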
This chapter examines diverse aspects of new technologies that are disrupting traditional consumer protection, including phenomena such as consumer profiling and the commercialization of data. It concludes that artificial intelligence represents a particular challenge for consumer law and policy. Consumer law should be technologically neutral: irrespective of the technology deployed, the level of consumer protection must always be maintained. However, consumer law requirements must never be seen as obstacles to innovation and the development of new technologies, and establishing the right balance between these two values remains a particular challenge.
The evolutionary development of advanced systems (AS) requires a rethinking of how they can be supported methodically and procedurally in product development. Advanced systems engineering (ASE) offers a novel, holistically adaptive approach to addressing such challenges in a structured way. However, many ASE use cases concern the development of systems as products, product networks, or individual projects. Additionally considering entire modular product families within AS offers a further decisive advantage for companies, organisations, and the people involved in ASE. By considering modular product families along the entire life cycle in product family engineering (PFE), the approaches of ASE can extend their impact and potential to the additional system levels that arise when product families are considered. The systems, which become complex through variety and collaboration, are broken down into their system elements in a structured way and prepared for a common interdisciplinary understanding, as conveyed by ASE. In this paper, PFE is presented in excerpts, using examples from various aspects and points in time of the product's life, as a complementary approach to ASE.
Recursion is a fundamental concept in computer science. A recursive algorithm is one that defines a solution to a problem in terms of itself. That is, recursive techniques solve large problems by building up solutions of smaller instances of the same problem. This turns out to be a powerful technique, because many advanced algorithmic problems and data structures are fundamentally self-similar.
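The self-referential definition can be made concrete with the classic factorial function; this sketch (our own example, not necessarily the one the chapter uses) shows how each call solves a smaller instance of the same problem:

```java
public class Factorial {
    // Recursive definition: n! = n * (n-1)!, with base case 0! = 1.
    public static long factorial(int n) {
        if (n == 0) {
            return 1;                     // base case stops the recursion
        }
        return n * factorial(n - 1);      // smaller instance of the same problem
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```

Every recursive algorithm needs a base case that is solved directly; without it, the chain of smaller instances never terminates.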
The chapter addresses the notion of psychological harm inflicted upon consumers by AI systems. It ponders what phenomena could be considered psychological harm, analyzes how AI systems could be causing them, and provides an overview of the legal strategies for combating them. It demonstrates that the risk posed to consumers’ mental health by AI systems is real and should be addressed, yet the approach taken by the EU in its AIA Proposal is suboptimal.
Pinterest is a social media platform that allows users to assemble images or other media into customized lists, then share those lists with others. Pinterest calls these lists “pinboards” and the items added to each board “pins,” analogous to real-world physical bulletin boards. Like other social media systems, Pinterest wants to recommend new content to its users to keep them engaged with the service. In 2018, Pinterest introduced a system called Pixie as a component of their overall recommendation infrastructure (Eksombatchai et al., 2018). It uses a graph model to represent the connections among items, then explores that graph in a randomized way to generate recommendations. In this chapter, we’ll build our own system based on the graph algorithms used by Pixie.
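This is not Pinterest's actual code, but the core idea — a random walk over an adjacency-list graph that tallies how often each node is visited — can be sketched as follows (all class, method, and node names are our own illustrative choices):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomWalk {
    // Take `steps` random steps from `start`, counting visits to each node.
    public static Map<String, Integer> walk(Map<String, List<String>> graph,
                                            String start, int steps, long seed) {
        Random rng = new Random(seed);
        Map<String, Integer> visits = new HashMap<>();
        String current = start;
        for (int i = 0; i < steps; i++) {
            List<String> neighbors = graph.get(current);
            if (neighbors == null || neighbors.isEmpty()) {
                break;                    // dead end: stop the walk
            }
            current = neighbors.get(rng.nextInt(neighbors.size()));
            visits.merge(current, 1, Integer::sum);
        }
        return visits;  // frequently visited nodes are candidate recommendations
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = new HashMap<>();
        graph.put("boardA", List.of("pin1", "pin2"));
        graph.put("pin1", List.of("boardA"));
        graph.put("pin2", List.of("boardA"));
        System.out.println(walk(graph, "boardA", 100, 42L));
    }
}
```

Nodes that the walk reaches often are, intuitively, the ones most strongly connected to the starting item, which is what makes them good recommendation candidates.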
We live in a networked world. Professional networks, social networks, neural networks – we’re all familiar with the idea that connections matter. This chapter introduces graphs, our last major topic. Graphs are the primary tool for modeling connections or relationships among a set of items; binary trees, for example, are a special type of graph. Graph models illustrate the power of abstraction: They capture the underlying structure of a network, independent of what the elements actually represent. Therefore, graph algorithms are flexible – they’re not tied to one particular application or problem domain.
So far, we’ve considered four data structures: arrays, lists, stacks, and queues. All four could be described as linear, in that they maintain their items as ordered sequences: arrays and lists are indexed by position, stacks are LIFO, and queues are FIFO. In this chapter, we’ll consider the new problem of building a lookup structure, like a table, that can take an input called the key and return its associated value. For example, we might fetch a record of information about a museum artifact given its ID number as the key. None of our previous data structures are a good fit for this problem.
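To make the lookup problem concrete, here is a small sketch using Java's built-in HashMap, applied to the museum-artifact example from the text (the IDs and descriptions are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class ArtifactLookup {
    // Build a small catalog keyed by artifact ID.
    public static Map<Integer, String> buildCatalog() {
        Map<Integer, String> artifacts = new HashMap<>();
        artifacts.put(1042, "Bronze astrolabe");
        artifacts.put(2171, "Illuminated manuscript");
        return artifacts;
    }

    public static void main(String[] args) {
        Map<Integer, String> artifacts = buildCatalog();
        // Lookup is by key, not by position -- unlike an array or list.
        System.out.println(artifacts.get(1042));          // Bronze astrolabe
        System.out.println(artifacts.containsKey(9999));  // false
    }
}
```

The key point is the interface: the structure is addressed by key rather than by position, which none of the four linear structures supports directly.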
The purpose of this chapter is to determine how the emergence of digital delegates would affect the process of contract conclusion and how consumer law might need to be supplemented to strike an appropriate balance between utilising the potential for automation, where desired, and the ability of consumers to remain in control.
First-of-a-kind engineered systems often burst with complexity – a major cause of project budget and time overruns. Our particular concern is the structural complexity of nuclear fusion devices, which is determined by the amount and entanglement of components. We seek to understand how this complexity rises during the development phase and how to manage it. This paper formulates a theory around the interplay between a problem-solving design process and an evolving design model. The design process introduces new elements that solve problems but also increase the quantifiable complexity of the model. Those elements may lead to new problems, extending the design process. We capture these causal effects in a hierarchy of problems and introduce two metrics of the impact of design decisions on complexity. By incorporating the Function-Behavior-Structure (FBS) paradigm, we create a new problem-solving method. This method frames formulation, synthesis, and analysis activities as transitions from problems to solutions. We demonstrate our method for a nuclear fusion measurement system. Exploring different design trajectories leads to alternative design models with varying degrees of complexity. Furthermore, we visualize the time-evolution of complexity during the design process. Analysis of individual design decisions emphasizes the high impact of early design decisions on the final system complexity.
This chapter identifies three shortcomings in our preparedness for the governance of future worlds of consumers and AI. If our governance is to be smart, there must first be a systematic gathering of regulatory intelligence (to understand what does and does not work). AI governance will require new institutions that are geared for the kind of conversations that humans will need to have in the future to adjust to a radically different approach to governance.
Logic Theorist was the first artificially intelligent program, created in 1955 by Allen Newell and Herbert Simon, and actually predating the term “artificial intelligence,” which was introduced the next year. Logic Theorist could apply the rules of symbolic logic to prove mathematical theorems – the first time a computer accomplished a task considered solely within the domain of human intelligence. Given a starting statement, it applied logical laws to generate a set of new statements, then recursively continued the process. Eventually, this procedure would discover a chain of logical transformations that connected the starting statement to the desired final statement. Applied naively, this process would generate an intractable number of possible paths, but Logic Theorist had the ability to detect and discard infeasible paths that couldn’t lead to a solution.
Very often, software developers need to evaluate the trade-offs between different approaches to solving a problem. Do you want the fastest solution, even if it’s difficult to implement and maintain? Will your code still be useful if you have to process 100 times as much data? What if an algorithm is fast for some inputs but terrible for others? Algorithm analysis is the framework that computer scientists use to understand the trade-offs between algorithms. Algorithm analysis is primarily theoretical: It focuses on the fundamental properties of algorithms, and not on systems, languages, or any particular details of their implementations.
This chapter introduces the key concepts of algorithm analysis, starting from the practical example of searching an array for a value of interest. We’ll start by making experimental comparisons between two searching methods: a simple linear search and the more complex binary search. The second part of the chapter introduces one of the most important mathematical tools in computer science, Big-O notation, the primary tool for algorithm analysis.
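The two searching methods compared in the chapter can be sketched side by side (this is a generic illustration, not the book's exact code): linear search examines every element in turn, while binary search repeatedly halves a sorted range.

```java
public class SearchComparison {
    // Linear search: check every element in turn -- O(n) comparisons.
    public static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;
        }
        return -1;  // not found
    }

    // Binary search: repeatedly halve a *sorted* range -- O(log n) comparisons.
    public static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (a[mid] == target) {
                return mid;
            } else if (a[mid] < target) {
                lo = mid + 1;               // discard the lower half
            } else {
                hi = mid - 1;               // discard the upper half
            }
        }
        return -1;  // not found
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 13, 21, 34};
        System.out.println(linearSearch(sorted, 13));  // 3
        System.out.println(binarySearch(sorted, 13));  // 3
    }
}
```

Note the precondition: binary search is only correct on a sorted array, which is part of the trade-off the chapter's experiments explore.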
This chapter argues that the influences on consumer choice should be revisited because the digital environment and the use of AI increase the urgency of having a clear criterion with which to distinguish permitted influences from prohibited ones. The current emphasis on either rational consumers or behaviourally influenced consumers operates with an ideal of unencumbered choice which has no place in reality and overlooks the fact that the law allows many subtle or not-so-subtle attempts to influence the actual behaviour of consumers. To effectively stand up to the force of AI-driven sales techniques, it may be necessary to update the existing framework of consumer protection.
In this paper, a novel tensioning and relaxing wearable system is introduced to improve the wearing comfort and load-bearing capabilities of knee exoskeletons. The research prototype of the novel system, which features a distinctive overrunning clutch drive, is presented. Through co-simulation with ANSYS, MATLAB, and SOLIDWORKS software, a comprehensive multi-objective optimization is performed to enhance the dynamic performance of the prototype. First, the wearing contact stiffness of the prototype and the mechanical parameters of the relevant materials are simulated and fitted based on the principle of functional equivalence, and its equivalent nonlinear circumferential stiffness model is obtained. Second, to enhance the wearing comfort of the exoskeleton, a novel comprehensive performance evaluation index, termed wearing comfort, is introduced. The index considers multiple factors such as the duration of vibration transition, the acceleration encountered during wear, and the average pressure applied. Finally, using this index, the system's dynamic performance is optimized via multi-platform co-simulation, and the simulation results validate the effectiveness of the research method and the proposed wearing comfort index. This provides a theoretical basis for subsequent research on the load-bearing effectiveness of the prototype.