This chapter deals with how public policy can steer AI by examining how policy can shape the use of big data, one of the key inputs required for AI. Essentially, public policy can steer AI by placing conditions and limitations on data. But data itself can also help improve public policy, including in the area of economic policymaking. Hence, this chapter touches on the future potential of economic policy improvements through AI. More specifically, we discuss under what conditions the availability of large data sets can support and enhance public policy effectiveness – including in the use of AI – along two main directions. First, we analyze how big data can improve the effectiveness of existing policy measures; second, we discuss how the availability of big data can suggest new, not yet implemented, policy solutions that improve upon existing ones. The key message of this chapter is that the desirability of big data and AI for enhancing policymaking depends on the goal of public authorities, and on aspects such as the cost of data collection and storage and the complexity and importance of the policy issue.
In this chapter, we describe the development of AI since World War II, noting various AI “winters” and tracing the current boom in AI back to around 2006/2007. We provide various metrics describing the nature of this AI boom. We then provide a summary and discussion of the salient research relevant to the economics of AI and outline some recent theoretical advances.
This chapter provides a motivation for this book, outlining the interests of economists in artificial intelligence, describing who this book is aimed at, and laying out the structure of the book.
In this chapter, we take the production function enriched with AI abilities from Chapter 4 and apply it to study the implications of progress in AI for growth and inequality. The crucial finding we discuss in this chapter is that understanding the nature of AI as narrow ML, and its effect on key macroeconomic outcomes, depends on having appropriate assumptions in growth models. In particular, we discuss the appropriateness of assuming, as most standard endogenous growth models today do, that economies are supply driven. If they are not supply driven, then demand constraints, which can arise from the diffusion of AI, may restrict growth. Through this, we show why expectations that AI may lead to “explosive” economic growth are unlikely to materialize. We show that by considering the nature of AI as specific (and not general) AI and making assumptions that reflect the digital AI economy better, economic outcomes may be characterized by slow growth, rising inequality, and relatively full employment – conditions that rather well describe economies in the West.
In this chapter, we consider the future of AI. We base our speculation on informed discussions of the implications of current socioeconomic and technological trends, and on our understanding of past digital revolutions. This allows us to provide insights on where the economy is heading, and what this may imply for economics as a science. Future avenues for research are identified.
A linear equation $E$ is said to be sparse if there is $c\gt 0$ so that every subset of $[n]$ of size $n^{1-c}$ contains a solution of $E$ in distinct integers. The problem of characterising the sparse equations, first raised by Ruzsa in the 90s, is one of the most important open problems in additive combinatorics. We say that $E$ in $k$ variables is abundant if every subset of $[n]$ of size $\varepsilon n$ contains at least $\text{poly}(\varepsilon )\cdot n^{k-1}$ solutions of $E$. It is clear that every abundant $E$ is sparse, and Girão, Hurley, Illingworth, and Michel asked if the converse implication also holds. In this note, we show that this is the case for every $E$ in four variables. We further discuss a generalisation of this problem which applies to all linear equations.
In the field of laparoscopic surgery, research is currently focusing on the development of new robotic systems to assist practitioners in complex operations, improving the precision of their medical gestures. In this context, the performance of these robotic platforms can be conditioned by various factors, such as the robot’s accessibility and dexterity in the task workspace. In this paper, we present a new strategy for improving the kinematic and dynamic performance of a 7-degree-of-freedom robot-assisted camera-holder system for laparoscopic surgery. This approach involves the simultaneous optimization of the robot base placement and the laparoscope mounting orientation. To do so, a general robot capability representation approach is implemented in an innovative multiobjective optimization algorithm. The obtained results are first evaluated in simulation and then validated experimentally by comparing the robot’s performance under both the existing and the optimized solutions. The optimization led to a 2% improvement in the accessibility index and a 14% enhancement in manipulability. Furthermore, the dynamic performance criteria showed a substantial 43% reduction in power consumption.
The pharmaceutical distribution routing problem is a key problem for pharmaceutical enterprises, since efficient schedules can enhance resource utilization and reduce operating costs. It is also a complicated combinatorial optimization problem. Existing research has mainly focused on minimizing delivery route lengths or distribution costs, while seldom considering customer priority and carbon emissions simultaneously. However, considering customer priority and carbon emissions simultaneously not only helps to enhance customer satisfaction but also helps to reduce carbon emissions. In this article, we consider customer priority and carbon emission minimization simultaneously in the pharmaceutical distribution routing problem; the corresponding problem is named the pharmaceutical distribution routing problem considering customer priority and carbon emissions. A corresponding mathematical model is formulated, whose objectives are minimizing fixed cost, refrigeration cost, fuel consumption cost, carbon emission cost, and the penalty cost for violating time windows. Moreover, a hybrid genetic algorithm (HGA) is proposed to solve the problem. The framework of the proposed HGA is a genetic algorithm (GA), into which an effective local search based on variable neighborhood search (VNS) is specially designed and incorporated to improve intensification. In the proposed HGA, crossover and mutation with adaptive probabilities are utilized to enhance algorithm performance. Finally, the proposed HGA is compared with four optimization algorithms, and experimental results demonstrate its effectiveness.
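The overall structure described in this abstract – a GA framework with adaptive crossover/mutation probabilities and a VNS-based local search for intensification – can be sketched in miniature. This is a minimal illustration on a plain routing objective, not the authors' implementation: all function names, the neighborhood moves (swap and reversal), and the linear adaptive-rate schedule are assumptions for the sketch, and the actual model also prices refrigeration, emissions, and time-window penalties.

```python
import random

def route_cost(route, dist):
    """Total length of a single closed route starting/ending at depot 0."""
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def vns_local_search(route, dist):
    """VNS-style intensification: alternate swap and segment-reversal moves
    until no improving move exists in either neighborhood."""
    best = list(route)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                for move in ("swap", "reverse"):
                    cand = list(best)
                    if move == "swap":
                        cand[i], cand[j] = cand[j], cand[i]
                    else:
                        cand[i:j + 1] = reversed(cand[i:j + 1])
                    if route_cost(cand, dist) < route_cost(best, dist):
                        best, improved = cand, True
    return best

def ordered_crossover(a, b, rng):
    """Order crossover (OX): copy a segment of parent a, fill the rest from b."""
    i, j = sorted(rng.sample(range(len(a)), 2))
    segment = a[i:j + 1]
    rest = [x for x in b if x not in segment]
    return rest[:i] + segment + rest[i:]

def hga(dist, pop_size=16, generations=30, seed=1):
    """GA framework with adaptive operator probabilities and VNS on the elite."""
    rng = random.Random(seed)
    n = len(dist) - 1  # customers are numbered 1..n; node 0 is the depot
    pop = [rng.sample(range(1, n + 1), n) for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=lambda r: route_cost(r, dist))
        # Adaptive rates: more exploration early, more exploitation late.
        p_cx = 0.9 - 0.4 * gen / generations
        p_mut = 0.05 + 0.3 * (1 - gen / generations)
        nxt = pop[:2]  # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            child = ordered_crossover(a, b, rng) if rng.random() < p_cx else list(a)
            if rng.random() < p_mut:
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        nxt[0] = vns_local_search(nxt[0], dist)  # intensify the best individual
        pop = nxt
    return min(pop, key=lambda r: route_cost(r, dist))
```

In the full problem the `route_cost` function would be replaced by the weighted sum of fixed, refrigeration, fuel, carbon, and time-window penalty costs described in the abstract.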
In practical applications, many robots equipped with embedded devices have limited computing capabilities. These limitations often hinder the performance of existing dynamic SLAM algorithms, especially when faced with occlusions or processor constraints. Such challenges lead to subpar positioning accuracy and efficiency. This paper introduces a novel lightweight dynamic SLAM algorithm designed primarily to mitigate the interference caused by moving object occlusions. Our proposed approach combines a deep learning object detection algorithm with a Kalman filter. This combination offers prior information about dynamic objects for each SLAM algorithm frame. Leveraging geometric techniques like RANSAC and the epipolar constraint, our method filters out dynamic feature points, focuses on static feature points for pose determination, and enhances the SLAM algorithm’s robustness in dynamic environments. We conducted experimental validations on the TUM public dataset, which demonstrated that our approach elevates positioning accuracy by approximately 54% and boosts the running speed by 75.47% in dynamic scenes.
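The dynamic-point rejection step described above – discard feature matches that fall inside a detected moving-object box or violate the epipolar constraint, and keep the rest as static points for pose estimation – can be sketched as follows. This is an illustrative outline under assumed interfaces, not the paper's code: the bounding boxes stand in for detector-plus-Kalman-filter output, the fundamental matrix `F` is assumed given (in practice it would come from RANSAC), and all names and the threshold are hypothetical.

```python
import math

def in_dynamic_box(pt, boxes):
    """True if a 2-D feature point falls inside any dynamic-object bounding box."""
    x, y = pt
    return any(x1 <= x <= x2 and y1 <= y <= y2 for (x1, y1, x2, y2) in boxes)

def epipolar_distance(p1, p2, F):
    """Distance of p2 from the epipolar line F @ p1 (pixel coordinates)."""
    x1 = (p1[0], p1[1], 1.0)
    x2 = (p2[0], p2[1], 1.0)
    line = [sum(F[i][k] * x1[k] for k in range(3)) for i in range(3)]
    num = abs(sum(line[i] * x2[i] for i in range(3)))
    return num / math.hypot(line[0], line[1])

def filter_static_matches(matches, F, boxes, thresh=1.0):
    """Keep matches outside dynamic boxes that satisfy the epipolar constraint;
    the survivors are treated as static points for pose determination."""
    return [(p1, p2) for (p1, p2) in matches
            if not in_dynamic_box(p2, boxes)
            and epipolar_distance(p1, p2, F) < thresh]
```

For example, under pure horizontal camera translation the epipolar lines are horizontal, so a match whose vertical coordinate changes between frames is flagged as dynamic even if it lies outside every detection box.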
The naive combination of polymorphic effects and polymorphic type assignment is well known to break type safety. In the literature, there are two kinds of approaches to this problem: one is to restrict how effects are triggered and the other is to restrict how they are implemented. This work explores a new approach to ensuring the safety of polymorphic effects under polymorphic type assignment. A novelty of our work is to restrict effect interfaces. To formalize our idea, we employ algebraic effects and handlers, where an effect interface is given by a set of operations coupled with type signatures. We propose signature restriction, a new notion that restricts the type signatures of operations, and show that signature restriction ensures the type safety of a language equipped with polymorphic effects and unrestricted polymorphic type assignment. We also develop a type-and-effect system that enables the use, in a single program, of both operations that satisfy the signature restriction and those that do not.
In this qualitative systematic meta-synthesis study, 57 studies from the international literature published between 2010 and 2024 on the use of voice-based artificially intelligent chatbots in English language learning were analyzed. The present study aimed to explore the most recent studies on this topic by investigating the theoretical frameworks, methodological and technological properties, user reports of chatbot usage experience, and pedagogical implementations. It sought to identify research and implementation trends for voice-based chatbots via qualitative data analysis methods. Based on the reviewed studies, this paper presents data-based pedagogical implications that align with the latest voice-based AI chatbot research trends.
A decoupling method is proposed for the elastic stiffness modeling of hybrid robots based on the rigidity principle, screw theory, strain energy, and Castigliano’s second theorem. It enables the decoupling of parallel and serial modules, as well as of the individual contributions of each elastic component to the mechanism’s stiffness performance. The method is implemented as follows: (1) formulate the limb constraint wrenches and the corresponding limb stiffness matrices based on screw theory and strain energy, (2) formulate the overall stiffness matrices of the parallel and serial modules, corresponding to the end of the hybrid robot, based on the rigidity principle, the principle of virtual work, the wrench transfer formula, and strain energy methods, and (3) obtain and decouple the overall stiffness matrix and deflection of the robot based on Castigliano’s second theorem. Finally, a planar hybrid structure and the 4SRRR + 6R hybrid robot are used as illustrative examples to implement the proposed method. The results indicate that selectively enhancing the stiffness performance of the mechanism is the most effective approach.
Suicide is a leading cause of death in the United States, particularly among adolescents. In recent years, suicidal ideation, attempts, and fatalities have increased. Systems maps can effectively represent complex issues such as suicide, thus providing decision-support tools for policymakers to identify and evaluate interventions. While network science has served to examine systems maps in fields such as obesity, there is limited research at the intersection of suicidology and network science. In this paper, we apply network science to a large causal map of adverse childhood experiences (ACEs) and suicide to address this gap. The National Center for Injury Prevention and Control (NCIPC) within the Centers for Disease Control and Prevention recently created a causal map that encapsulates ACEs and adolescent suicide in 361 concept nodes and 946 directed relationships. In this study, we examine this map and three similar models through three related questions: (Q1) how do existing network-based models of suicide differ in terms of node- and network-level characteristics? (Q2) Using the NCIPC model as a unifying framework, how do current suicide intervention strategies align with prevailing theories of suicide? (Q3) How can the use of network science on the NCIPC model guide suicide interventions?
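The node-level characteristics mentioned in (Q1) can be illustrated on a toy causal map. The sketch below computes in- and out-degree for a directed map given as (cause, effect) pairs and ranks concepts by out-degree as candidate upstream drivers; the edge list, function names, and the use of out-degree as a proxy for intervention leverage are illustrative assumptions, not the NCIPC model or the paper's analysis.

```python
from collections import defaultdict

def node_level_stats(edges):
    """In-/out-degree for each node of a directed causal map
    given as an iterable of (cause, effect) pairs."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    nodes = set()
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        nodes.update((u, v))
    return {n: {"in": indeg[n], "out": outdeg[n]} for n in nodes}

def top_drivers(edges, k=3):
    """Concepts with the highest out-degree: candidate upstream factors
    that influence many downstream concepts in the map."""
    stats = node_level_stats(edges)
    return sorted(stats, key=lambda n: stats[n]["out"], reverse=True)[:k]
```

A full analysis of a 361-node, 946-edge map would add path- and centrality-based measures, but the degree profile alone already distinguishes upstream drivers from downstream outcomes.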