So far, we’ve considered four data structures: arrays, lists, stacks, and queues. All four could be described as linear, in that they maintain their items as ordered sequences: arrays and lists are indexed by position, stacks are LIFO, and queues are FIFO. In this chapter, we’ll consider the new problem of building a lookup structure, like a table, that can take an input called the key and return its associated value. For example, we might fetch a record of information about a museum artifact given its ID number as the key. None of our previous data structures are a good fit for this problem.
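Java’s standard library already provides exactly this kind of lookup structure in java.util.HashMap, so a minimal sketch can make the goal concrete before we build one ourselves. The artifact IDs and descriptions below are invented purely for illustration.

    import java.util.HashMap;

    public class ArtifactLookup {
        public static void main(String[] args) {
            // Map each artifact's ID number (the key) to a descriptive record (the value).
            // The IDs and descriptions are hypothetical examples.
            HashMap<Integer, String> artifacts = new HashMap<>();
            artifacts.put(4017, "Bronze astrolabe, 14th century");
            artifacts.put(5290, "Ceramic amphora, 5th century BCE");

            // Given a key, the structure returns its associated value.
            System.out.println(artifacts.get(4017));  // Bronze astrolabe, 14th century
        }
    }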
The purpose of this chapter is to determine how the emergence of digital delegates would affect the process of contract conclusion and how consumer law might need to be supplemented to strike an appropriate balance between utilising the potential for automation, where desired, and the ability of consumers to remain in control.
First-of-a-kind engineered systems often burst with complexity – a major cause of project budget and time overruns. Our particular concern is the structural complexity of nuclear fusion devices, which is determined by the amount and entanglement of components. We seek to understand how this complexity rises during the development phase and how to manage it. This paper formulates a theory around the interplay between a problem-solving design process and an evolving design model. The design process introduces new elements that solve problems but also increase the quantifiable complexity of the model. Those elements may lead to new problems, extending the design process. We capture these causal effects in a hierarchy of problems and introduce two metrics of the impact of design decisions on complexity. By incorporating the Function-Behavior-Structure (FBS) paradigm, we create a new problem-solving method. This method frames formulation, synthesis and analysis activities as transitions from problems to solutions. We demonstrate our method for a nuclear fusion measurement system. Exploring different design trajectories leads to alternative design models with varying degrees of complexity. Furthermore, we visualize the time-evolution of complexity during the design process. Analysis of individual design decisions emphasizes the high impact of early design decisions on the final system complexity.
This chapter identifies three shortcomings in our preparedness for the governance of future worlds of consumers and AI. If our governance is to be smart, there must first be a systematic gathering of regulatory intelligence (to understand what does and does not work). AI governance will also require new institutions that are geared for the kind of conversations that humans will need to have in the future in order to adjust to a radically different approach to governance.
Logic Theorist was the first artificially intelligent program, created in 1955 by Allen Newell and Herbert Simon, and actually predating the term “artificial intelligence,” which was introduced the next year. Logic Theorist could apply the rules of symbolic logic to prove mathematical theorems – the first time a computer accomplished a task considered solely within the domain of human intelligence. Given a starting statement, it applied logical laws to generate a set of new statements, then recursively continued the process. Eventually, this procedure would discover a chain of logical transformations that connected the starting statement to the desired final statement. Applied naively, this process would generate an intractable number of possible paths, but Logic Theorist had the ability to detect and discard infeasible paths that couldn’t lead to a solution.
Very often, software developers need to evaluate the trade-offs between different approaches to solving a problem. Do you want the fastest solution, even if it’s difficult to implement and maintain? Will your code still be useful if you have to process 100 times as much data? What if an algorithm is fast for some inputs but terrible for others? Algorithm analysis is the framework that computer scientists use to understand the trade-offs between algorithms. Algorithm analysis is primarily theoretical: It focuses on the fundamental properties of algorithms, and not on systems, languages, or any particular details of their implementations.
This chapter introduces the key concepts of algorithm analysis, starting from the practical example of searching an array for a value of interest. We’ll begin by making experimental comparisons between two searching methods: a simple linear search and the more complex binary search. The second part of the chapter introduces Big-O notation, one of the most important mathematical tools in computer science and the primary tool for algorithm analysis.
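As a preview of that comparison, here is a minimal sketch of both methods in Java; the method names and the example array are illustrative rather than the chapter’s own code. Linear search scans elements one at a time, while binary search repeatedly halves the search range of a sorted array.

    public class SearchDemo {
        // Linear search: examine every element until the target is found.
        static int linearSearch(int[] a, int target) {
            for (int i = 0; i < a.length; i++) {
                if (a[i] == target) return i;
            }
            return -1;  // not found
        }

        // Binary search: repeatedly halve the search range of a sorted array.
        static int binarySearch(int[] a, int target) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (a[mid] == target) return mid;
                if (a[mid] < target) lo = mid + 1;
                else hi = mid - 1;
            }
            return -1;  // not found
        }

        public static void main(String[] args) {
            int[] data = {2, 5, 8, 13, 21, 34, 55};      // already sorted
            System.out.println(linearSearch(data, 21));  // prints 4
            System.out.println(binarySearch(data, 21));  // prints 4
        }
    }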
This chapter argues that the influences on consumer choice should be revisited because the digital environment and the use of AI increase the urgency of having a clear criterion with which to distinguish permitted influences from prohibited ones. The current emphasis on either rational consumers or behaviourally influenced consumers operates with an ideal of unencumbered choice which has no place in reality and overlooks the fact that the law allows many subtle or not-so-subtle attempts to influence the actual behaviour of consumers. To effectively stand up to the force of AI-driven sales techniques, it may be necessary to update the existing framework of consumer protection.
In this paper, a novel tensioning and relaxing wearable system is introduced to improve the wearing comfort and load-bearing capabilities of knee exoskeletons. The research prototype of the novel system, which features a distinctive overrunning clutch drive, is presented. Through co-simulation with ANSYS, MATLAB, and SOLIDWORKS software, a comprehensive multi-objective optimization is performed to enhance the dynamic performance of the prototype. Firstly, the wearing contact stiffness of the prototype and the mechanical parameters of the relevant materials are simulated and fitted based on the principle of functional equivalence, and its equivalent nonlinear circumferential stiffness model is obtained. Secondly, to enhance the wearing comfort of the exoskeleton, a novel comprehensive performance evaluation index, termed wearing comfort, is introduced. The index considers multiple factors such as the duration of vibration transition, the acceleration encountered during wear, and the average pressure applied. Finally, using this index, the system’s dynamic performance is optimized via multi-platform co-simulation, and the simulation results validate the effectiveness of the research method and the proposed wearing comfort index. This provides a theoretical basis for subsequent research on the prototype’s weight-bearing effectiveness.
Crowd monitoring for sports games is important for improving public safety, game experience, and venue management. Recent crowd-crushing incidents (e.g., the Kanjuruhan Stadium disaster) have caused 100+ deaths, calling for advancements in crowd-monitoring methods. Existing monitoring approaches include manual observation, wearables, and video-, audio-, and WiFi-based sensing. However, few meet practical needs due to their limitations in cost, privacy protection, and accuracy.
In this paper, we introduce a novel crowd monitoring method that leverages floor vibrations to infer crowd reactions (e.g., clapping) and traffic (i.e., the number of people entering) in sports stadiums. Our method allows continuous crowd monitoring in a privacy-friendly and cost-effective way. Unlike monitoring one person, crowd monitoring involves a large population, leading to high uncertainty in the vibration data. To overcome the challenge, we bring in the context of crowd behaviors, including (1) temporal context to inform crowd reactions to the highlights of the game and (2) spatial context to inform crowd traffic in relation to the facility layouts. We deployed our system at Stanford Maples Pavilion and Michigan Stadium for real-world evaluation, which shows a 14.7% and 12.5% error reduction compared to the baseline methods without the context information.
No other computational problem has been studied in more depth, or yielded a greater number of useful solutions, than sorting. Historically, business computers spent 25% of their time doing nothing but sorting data (Knuth, 2014c), and many advanced algorithms start by sorting their inputs. Dozens of algorithms have been proposed over the last 80-odd years, but there is no “best” solution to the sorting problem. Although many popular sorting algorithms were known as early as the 1940s, researchers are still designing improved versions – Python’s default algorithm was only implemented in the early 2000s and Java’s current version in the 2010s.
This chapter examines the effects that legally oriented AI developments will have on consumer protection and on consumers’ need for legal advice and representation. The chapter provides a brief survey of the many possible ways in which AI may influence consumers’ legal needs. It provides a comparative analysis of the benefits and risks of the use of AI in the legal sphere, discusses the state of regulation in this area, and argues in favor of a new regulatory framework.
Computer animators have always sought to push boundaries and create impressive, realistic visual effects, but some processes are too demanding to model exactly. Effects like fire, smoke, and water have complex fluid dynamics and amorphous boundaries that are hard to recreate with standard physical calculations. Instead, animators might turn to another approach to create these effects: particle systems. Bill Reeves, a graphics researcher and animator, began experimenting with particle-based effects in the early 1980s while making movies at Lucasfilm. For a scene in Star Trek II: The Wrath of Khan (1982), he needed to create an image of explosive fire spreading across the entire surface of a planet. Reeves used thousands of independent particles, each one representing a tiny piece of fire (Reeves, 1983). The fire particles were created semi-randomly, with attributes for their 3D positions, velocities, and colors. Reeves’ model governed how particles appeared, moved, and interacted to create a realistic effect that could be rendered on an early 1980s computer. Reeves would go on to work on other Lucasfilm productions, including Return of the Jedi (1983), before joining Pixar, where his credits include Toy Story (1995) and Finding Nemo (2003).
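The core pattern is simple enough to sketch in a few lines of Java. The class and field names below are invented for illustration and bear no relation to Reeves’ production code; they only show the cycle of spawning particles semi-randomly, updating them each frame, and discarding them when they expire.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // A toy particle with a position, a velocity, and a remaining lifetime.
    class Particle {
        double x, y, vx, vy;
        int life;

        Particle(Random rng) {
            // Spawn near the origin with a small random velocity --
            // the "semi-random" creation step described above.
            x = rng.nextDouble();
            y = rng.nextDouble();
            vx = rng.nextDouble() - 0.5;
            vy = rng.nextDouble() - 0.5;
            life = 20 + rng.nextInt(30);
        }

        void update() {
            x += vx;   // move by the current velocity
            y += vy;
            life--;    // age the particle
        }
    }

    public class ParticleSystemSketch {
        public static void main(String[] args) {
            Random rng = new Random(42);
            List<Particle> particles = new ArrayList<>();
            for (int i = 0; i < 1000; i++) particles.add(new Particle(rng));

            // One simulation step: update every particle, then discard dead ones.
            particles.forEach(Particle::update);
            particles.removeIf(p -> p.life <= 0);
            System.out.println(particles.size() + " particles still alive");
        }
    }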
Java is an object-oriented programming language. Java programs are implemented as collections of classes and objects that interact with each other to deliver the functionality that the programmer wants. So far, we’ve used “class” as being roughly synonymous with “program,” and all of our programs have consisted of one public class with a main method that may call additional methods. We’ve also talked about how to use the new keyword to initialize objects like Scanner that can perform useful work. It’s now time to talk about the concepts of objects and classes in more depth and then learn how to write customized classes.
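As a taste of what’s ahead, here is a minimal custom class; the Artifact name and its fields are invented for illustration. It bundles data (fields) with behavior (methods), and new creates objects from it just as it does for library classes such as Scanner.

    // A small custom class: each Artifact object carries its own data
    // and the methods that operate on that data.
    public class Artifact {
        private int id;
        private String description;

        public Artifact(int id, String description) {
            this.id = id;
            this.description = description;
        }

        public String describe() {
            return id + ": " + description;
        }

        public static void main(String[] args) {
            // 'new' creates an object -- an instance of the class.
            Artifact vase = new Artifact(5290, "Ceramic amphora");
            System.out.println(vase.describe());  // 5290: Ceramic amphora
        }
    }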
The previous two chapters showed how the concept of last-in-first-out data processing is surprisingly powerful. We’ll now consider the stack’s counterpart, the queue. Like a waiting line, a queue stores a set of items and returns them in first-in-first-out (FIFO) order. Pushing to the queue adds a new item to the back of the line and pulling retrieves the oldest item from the front. Queues have a lower profile than stacks, and are rarely the centerpiece of an algorithm. Instead, queues tend to serve as utility data structures in a larger system.
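For reference, Java’s standard library exposes this first-in-first-out behavior through the Queue interface; the short sketch below is illustrative only and may not match the implementation the chapter develops.

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class QueueDemo {
        public static void main(String[] args) {
            // ArrayDeque is one standard implementation of the Queue interface.
            Queue<String> line = new ArrayDeque<>();

            // Adding ("pushing") items places them at the back of the line.
            line.offer("first customer");
            line.offer("second customer");
            line.offer("third customer");

            // Removing ("pulling") retrieves items from the front, in FIFO order.
            System.out.println(line.poll());  // first customer
            System.out.println(line.poll());  // second customer
        }
    }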
The aspirations-ability framework proposed by Carling has begun to place the question of who aspires to migrate at the center of migration research. In this article, building on key determinants assumed to impact individual migration decisions, we investigate their prediction accuracy when observed in the same dataset and in different mixed-migration contexts. In particular, we use a rigorous model selection approach and develop a machine learning algorithm to analyze two original cross-sectional face-to-face surveys conducted in Turkey and Lebanon among Syrian migrants and their respective host populations in early 2021. Studying similar nationalities in two hosting contexts with a distinct history of both immigration and emigration and large shares of assumed-to-be mobile populations, we illustrate that a) (im)mobility aspirations are hard to predict even under ‘ideal’ methodological circumstances, b) commonly referenced “migration drivers” fail to perform well in predicting migration aspirations in our study contexts, while c) aspects relating to social cohesion, political representation and hope play an important role that warrants more emphasis in future research and policymaking. Methodologically, we identify key challenges in quantitative research on predicting migration aspirations and propose a novel modeling approach to address these challenges.
A hash, in culinary terms, is a dish made of mixed foods – often including corned beef and onions – chopped into tiny pieces. In the early twentieth century, it became a shorthand for something of dubious origin, probably unwise to consume. In computer science, a hash function is an operation that rearranges, mixes, and combines data to produce a single fixed-size output. Unlike their culinary namesake, hash functions are wonderfully useful. A hash value is like a “fingerprint” of the input used to calculate it. Hash functions have applications to security, distributed systems, and – as we’ll explore – data structures.
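To make the “fingerprint” idea concrete, here is a toy string hash in Java. The starting value 17 and multiplier 31 are conventional illustrative choices, not the chapter’s own function; Java’s built-in String.hashCode() works in a similar spirit.

    public class HashSketch {
        // A toy polynomial string hash: mix each character into a running
        // value so that small changes to the input scramble the output.
        static int toyHash(String s) {
            int h = 17;                      // arbitrary starting value (illustrative)
            for (int i = 0; i < s.length(); i++) {
                h = 31 * h + s.charAt(i);    // mix in the next character
            }
            return h;
        }

        public static void main(String[] args) {
            // Similar inputs produce very different fixed-size "fingerprints".
            System.out.println(toyHash("museum"));
            System.out.println(toyHash("museums"));
            System.out.println("museum".hashCode());  // Java's own version
        }
    }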
This technical note shows how we have combined prescriptive type checking and constraint solving to increase automation during software verification. We do so by defining a type system and implementing a typechecker for $\{log\}$ (read ‘setlog’), a Constraint Logic Programming language and satisfiability solver based on set theory. The constraint solver is proved to be safe w.r.t. the type system. Two industrial-strength case studies are presented where this combination is used with very good results.