Quick and accurate forecasts of incidence and mortality trends for the near future are particularly useful for the immediate allocation of available public health resources, as well as for understanding the long-term course of the pandemic. The surveillance data used for predictions, however, may be subject to reporting delays. Consequently, auxiliary data sources that are available immediately can provide valuable additional information for recent time periods for which surveillance data have not yet become fully available. In this work, a set of Google search queries by individual users related to COVID-19 incidence and mortality is collected and analyzed, with the aim of improving quick forecasts. First, the identified search-query keywords were ranked according to their predictive ability for reported incidence and mortality. Then, ARIMA, Prophet, and XGBoost models were fitted to generate forecasts using either the reported incidence and mortality alone (baseline model) or together with combinations of search keywords selected by their predictive ability (predictors model). In summary, including top-ranked keywords as predictors significantly enhanced prediction accuracy for the majority of scenarios, ranging from 50% to 90% across all considered models, and is recommended for future use. Including low-ranked keywords provided no such improvement. In general, the ranking of predictors and the corresponding forecast improvements were more pronounced for incidence than for mortality.
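As a concrete illustration of the baseline-versus-predictors comparison described above, the sketch below fits an ARIMA model with and without an exogenous search-keyword series using statsmodels. The synthetic incidence and keyword series, the (1, 1, 1) order, and the 14-day horizon are all illustrative assumptions, not the study's actual setup.

```python
# Baseline ARIMA vs. ARIMA with a search-keyword regressor (all data synthetic).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 200
incidence = pd.Series(100 + np.cumsum(rng.normal(0, 5, n)))  # reported incidence
keyword = incidence.shift(3).bfill() + rng.normal(0, 2, n)   # a leading search-query signal

train, horizon = n - 14, 14

# Baseline model: ARIMA on reported incidence only.
baseline = SARIMAX(incidence[:train], order=(1, 1, 1)).fit(disp=False)
fc_base = baseline.forecast(steps=horizon)

# Predictors model: the same ARIMA plus the keyword series as an exogenous regressor.
with_exog = SARIMAX(incidence[:train], exog=keyword[:train], order=(1, 1, 1)).fit(disp=False)
fc_exog = with_exog.forecast(steps=horizon, exog=keyword[train:])

for name, fc in [("baseline", fc_base), ("with keyword", fc_exog)]:
    mae = np.mean(np.abs(fc.values - incidence[train:].values))
    print(f"{name}: MAE = {mae:.2f}")
```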
A finite point set in $\mathbb{R}^d$ is in general position if no $d + 1$ points lie on a common hyperplane. Let $\alpha_d(N)$ be the largest integer such that any set of $N$ points in $\mathbb{R}^d$, with no $d + 2$ members on a common hyperplane, contains a subset of size $\alpha_d(N)$ in general position. Using the method of hypergraph containers, Balogh and Solymosi showed that $\alpha_2(N) < N^{5/6 + o(1)}$. In this paper, we also use the container method to obtain new upper bounds for $\alpha_d(N)$ when $d \geq 3$. More precisely, we show that if $d$ is odd, then $\alpha_d(N) < N^{\frac{1}{2} + \frac{1}{2d} + o(1)}$, and if $d$ is even, we have $\alpha_d(N) < N^{\frac{1}{2} + \frac{1}{d-1} + o(1)}$. We also study the classical problem of determining $a(d,k,n)$, the maximum number of points selected from the grid $[n]^d$ such that no $k + 2$ members lie on a $k$-flat, and improve the previously best known bound for $a(d,k,n)$, due to Lefmann in 2008, by a polynomial factor when $k \equiv 2$ or $3 \pmod 4$.
Successful synthesis of a new design requires balancing trade-offs that arise from multiple competing design objectives and constraints. Early-stage design synthesis typically does not consider detailed technical constraints, a task left to late-stage mathematical design optimisation, which refines an already-determined configuration. The recently developed Multi-Objective Monotonicity Analysis (MOMA) has shown that design optimisation can be used successfully in configuration redesign. This article extends the MOMA approach to early-stage design. Synthesis of an aptly named ideal design is achieved by focusing on the avoidance or reduction of trade-offs and by managing active constraints across all stages of the design process. The ideal design meets a set of formal conditions, which provide the basis for a systematic collection of corresponding design principles that can be selectively combined to create new embodiments, avoiding overly restrictive trade-offs and constraints. These principles are consistent with the decision making of experienced mechanical designers, as shown here in industrial practice for the design of drug delivery devices.
Generative artificial intelligence (GenAI) applications in job scheduling are expected to help schedulers embed their requirements into scheduling models in a more user-friendly way and generate customized scheduling results. However, there are still very few such applications, while using existing general-purpose GenAI services is inconvenient and prone to data leakage risks. To solve these problems, this study establishes a GenAI job scheduling system. By hosting the system locally, schedulers avoid the leakage of order- or recipe-related information that may occur when uploading to a cloud-based GenAI service. In the GenAI job scheduling system, a user interface is designed for users to enter queries in natural language. Each query is then analyzed to extract the user's requirements for the scheduling task, from which an extended three-field notation (ETFN) of the scheduling problem is built. A customized genetic algorithm (GA) is then generated, by updating invalid code or adding new code in a basic GA application, to help solve the mathematical programming (MP) model corresponding to the ETFN. The effectiveness of the GenAI job scheduling system has been tested in a flexible job shop case.
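To make the pipeline concrete, here is a minimal sketch of the kind of customized GA such a system might generate. The permutation encoding, the single-machine total-tardiness objective, and the operator choices are illustrative assumptions, not the system's actual ETFN-derived MP model.

```python
# A toy permutation GA minimizing total tardiness on one machine (all data hypothetical).
import random

jobs = {0: (4, 9), 1: (2, 5), 2: (6, 11), 3: (3, 7)}  # job -> (processing time, due date)

def total_tardiness(perm):
    t, cost = 0, 0
    for j in perm:
        p, d = jobs[j]
        t += p
        cost += max(0, t - d)  # lateness beyond the due date
    return cost

def order_crossover(a, b):
    # Copy a random slice from parent a, fill the remaining jobs in parent b's order.
    i, k = sorted(random.sample(range(len(a)), 2))
    child = a[i:k]
    rest = [j for j in b if j not in child]
    return rest[:i] + child + rest[i:]

def mutate(perm, rate=0.2):
    perm = perm[:]
    if random.random() < rate:  # occasionally swap two positions
        i, k = random.sample(range(len(perm)), 2)
        perm[i], perm[k] = perm[k], perm[i]
    return perm

pop = [random.sample(list(jobs), len(jobs)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=total_tardiness)            # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2))) for _ in range(10)]

best = min(pop, key=total_tardiness)
print(best, total_tardiness(best))
```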
Designed for educators, researchers, and policymakers, this insightful book equips readers with practical strategies, critical perspectives, and ethical insights into integrating AI in education. First published in Swedish in 2023, and here translated, updated, and adapted for an English-speaking international audience, it provides a user-friendly guide to the digital and AI-related challenges and opportunities in today's education systems. Drawing upon cutting-edge research, Thomas Nygren outlines how technology can be usefully integrated into education, not as a replacement for humans, but as a tool that supports and reinforces students' learning. Written in accessible language, the book covers topics including AI literacy, source awareness, and subject-specific opportunities. The central role of the teacher is emphasized throughout, as is the importance of thoughtful engagement with technology. By guiding the reader through the fast-evolving digital transformation in education globally, it ultimately enables students to become informed participants in the digital world.
In recent years, the use of AI has skyrocketed. The introduction of widely available generative AI, such as ChatGPT, has reinvigorated concerns about harm caused to users. Yet so far, government bodies and the scholarly literature have failed to determine a governance structure that minimizes the risks associated with AI and big data. Despite the recent consensus among tech companies and governments that AI needs to be regulated, there has been no agreement on what a framework of functional AI governance should look like. This volume assesses the role of law in governing AI applications in society. While exploring the intersection of law and technology, it argues that getting the mix of AI governance structures right, both inside and outside of the law, while balancing the importance of innovation against the risks to human dignity and democratic values, is one of the most important legal-social determinations of our times.
The sixth chapter provides a deeper exploration of probabilistic models, building upon concepts encountered earlier in the text. The chapter illustrates how to construct diverse models, particularly by employing the notion of conditional independence. It also outlines standard methods for estimating parameters and hidden states, as well as techniques for sampling. The chapter concludes by discussing and implementing applications such as Kalman filtering and Gibbs sampling. The chapter covers a range of topics, including parametric families of probability distributions, maximum likelihood estimation, modeling complex dependencies using conditional independence and marginalization, and applications such as linear-Gaussian models and Kalman filtering.
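As a small worked example of the Kalman filtering application mentioned above, the sketch below filters a 1-D linear-Gaussian random walk, $x_t = x_{t-1} + w_t$, $y_t = x_t + v_t$. The model and the noise variances are illustrative assumptions, not the chapter's exact example.

```python
# Scalar Kalman filter for a random-walk state observed in Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.1, 1.0                                      # process and observation noise variances
x_true = np.cumsum(rng.normal(0, np.sqrt(q), 100))   # latent random walk
y = x_true + rng.normal(0, np.sqrt(r), 100)          # noisy observations

m, p = 0.0, 1.0                                      # posterior mean and variance
means = []
for obs in y:
    p += q                                           # predict: variance grows by process noise
    k = p / (p + r)                                  # Kalman gain
    m += k * (obs - m)                               # update: move mean toward the observation
    p *= (1 - k)                                     # update: shrink the posterior variance
    means.append(m)

print("last estimate:", means[-1], "truth:", x_true[-1])
```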
This chapter introduces the mathematics of data through the example of clustering, a fundamental technique in data analysis and machine learning. The chapter begins with a review of essential mathematical concepts, including matrix and vector algebra, differential calculus, optimization, and elementary probability, with practical Python examples. The chapter then delves into the k-means clustering algorithm, presenting it as an optimization problem and deriving Lloyd's algorithm for its solution. A rigorous analysis of the algorithm's convergence properties is provided, along with a matrix formulation of the k-means objective. The chapter concludes with an exploration of high-dimensional data, demonstrating through simulations and theoretical arguments how the "curse of dimensionality" can affect clustering outcomes.
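A minimal NumPy sketch of Lloyd's algorithm for the k-means objective may help fix ideas; the two-cluster synthetic data and the random initialization below are illustrative choices, not the chapter's exact example.

```python
# Lloyd's algorithm: alternate nearest-center assignment and center updates.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
k = 2
centers = X[rng.choice(len(X), k, replace=False)]   # initialize at random data points

for _ in range(100):
    # Assignment step: each point joins its nearest center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Update step: each center moves to the mean of its points
    # (a production implementation would also guard against empty clusters).
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centers, centers):           # converged: assignments are stable
        break
    centers = new_centers

print(centers)
```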
This chapter educates the reader on the main ideas that have enabled various advancements in Artificial Intelligence (AI) and Machine Learning (ML). Using various examples, and taking the reader on a journey through history, it showcases how the main ideas developed by the pioneers of AI and ML are being used in our modern era to make the world a better place. It communicates that our lives are surrounded by algorithms that work based on a few main ideas. It also discusses recent advancements in Generative AI, including the main ideas that led to the creation of Large Language Models (LLMs) such as ChatGPT. The chapter also discusses various societal considerations in AI and ML and ends with technological advancements that could further improve our ability to use the main ideas.
A major challenge in laryngeal surgery today is the limited flexibility of surgical operations. To address this limitation, this paper proposes a novel continuum robot (CR) system with enhanced dexterity, a robust inverse kinematics algorithm, and a sensorless automatic calibration method. The proposed CR possesses 4 flexible degrees of freedom, allowing for control based on angles and end-effector position. Compared with traditional Jacobian-based methods, the proposed inverse kinematics algorithm effectively addresses the singularity issue arising from curvature hypotheses. Mitigating the singularity is crucial for ensuring continuous and stable motion planning. The calibration method enables automatic initialization without additional sensors, a capability not previously reported in the literature. The efficient and automatic calibration reduces preparation time for laryngeal surgery. Compared with manual calibration, which requires approximately 210 s, the proposed method reduces the calibration time by 160 s, a 76.19% improvement in efficiency. Taking the damped least squares method as the baseline, the inverse kinematics algorithm reduces the maximum solving error from 7.36 mm to just 0.05 mm. Furthermore, the CR is capable of dexterous motion within narrow and curved cavities. Phantom and animal experiment results demonstrate the practicality and reliability of the proposed CR system in laryngeal surgery.
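For readers unfamiliar with the damped least squares baseline mentioned above, the sketch below applies it to a 2-link planar arm; the arm model, link lengths, and damping factor are illustrative stand-ins, not the continuum robot's actual kinematics.

```python
# Damped least squares IK: dq = J^T (J J^T + lambda^2 I)^-1 e, iterated to the target.
import numpy as np

l1, l2 = 1.0, 1.0  # link lengths (hypothetical)

def fk(q):
    # Forward kinematics of a 2-link planar arm.
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q, target, lam = np.array([0.3, 0.3]), np.array([1.2, 0.8]), 0.1
for _ in range(100):
    e = target - fk(q)                  # task-space error
    if np.linalg.norm(e) < 1e-6:
        break
    J = jacobian(q)
    # Damping keeps the update bounded near singular configurations.
    dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
    q = q + dq

print(q, fk(q))
```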
This chapter discusses the broader role and impact of analytics science in improving various aspects of society. It introduces what the book is about and what the reader should expect to learn from it. It also discusses the analytics revolution in the private and public sectors, and introduces a key element of the book — insight-driven problem solving — by highlighting its vital role in addressing various societal problems.
In this chapter, the reader learns about the main ideas developed by analytics scientists in problem solving that requires decision-making over time. The reader realizes that various decision-making problems, including those made in everyday life such as which parking spot to take or which job offer to accept, can be addressed using these main ideas. The chapter also illustrates how the same ideas have been used to improve the world around us by citing various examples, including assisting policymakers to gain insights into the impact of different social mobility policies or to find better lockdown policies during a pandemic such as COVID-19.
This chapter explores the behavior of random walks on graphs, framed within the broader context of Markov chains. It introduces finite-state Markov chains, explaining key concepts such as transition matrices, the Markov property, and the computation of stationary distributions. The chapter then discusses the long-term behavior of Markov chains, including the convergence to equilibrium under conditions of irreducibility and aperiodicity. The chapter delves into the application of random walks on graphs, particularly in the context of PageRank, a method for identifying central nodes in a network. It also discusses Markov chain Monte Carlo (MCMC) methods, specifically the Metropolis–Hastings algorithm and Gibbs sampling, which are used to generate samples from complex probability distributions. The chapter concludes by illustrating the application of Gibbs sampling to generate images of handwritten digits using a restricted Boltzmann machine.
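As a concrete instance of the PageRank discussion, the sketch below computes the stationary distribution of a damped random walk on a small directed graph by power iteration; the 4-node graph and the damping factor 0.85 are the usual illustrative choices, not taken from the chapter.

```python
# PageRank via power iteration on the damped transition matrix.
import numpy as np

# Adjacency: entry (i, j) = 1 means page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix of the walk
d, n = 0.85, len(A)
G = d * P + (1 - d) / n                # damped matrix: teleport uniformly with prob. 1 - d

pi = np.full(n, 1 / n)                 # start from the uniform distribution
for _ in range(100):
    pi = pi @ G                        # one step of the walk's distribution

print(pi)                              # stationary distribution = PageRank scores
```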
Chapter 2 explores the fundamental concept of least squares, covering its geometric, algebraic, and numerical aspects. The chapter begins with a review of vector spaces and matrix inverses, then introduces the geometry of least squares through orthogonal projections. It presents the QR decomposition and Householder transformations as efficient methods for solving least-squares problems. The chapter concludes with an application to regression analysis, demonstrating how to fit linear and polynomial models to data. Key topics include the normal equations, orthogonal decomposition, and the Gram–Schmidt algorithm. The chapter also addresses the issue of overfitting in polynomial regression, highlighting the importance of model selection in data analysis. The chapter includes practical Python implementations and numerical examples to reinforce the theoretical concepts.
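To illustrate the QR route to least squares alongside the normal equations, here is a short NumPy sketch; the synthetic linear-regression data are illustrative.

```python
# Least-squares regression solved two ways: QR decomposition vs. normal equations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, 50)   # noisy samples of a known line

X = np.column_stack([np.ones_like(x), x])    # design matrix for intercept + slope

# QR route: X = QR, then solve the triangular system R beta = Q^T y.
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)

# Normal equations route: (X^T X) beta = X^T y (less stable when X is ill-conditioned).
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)

print(beta_qr, beta_ne)   # both close to the true coefficients (2, 3)
```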
This chapter introduces the foundational mathematical concepts behind neural networks, backpropagation, and stochastic gradient descent (SGD). It begins by generalizing the Chain Rule and providing a brief overview of automatic differentiation, which is essential for efficiently computing derivatives in machine learning models. The chapter then explains backpropagation within the context of multilayer neural networks, specifically focusing on multilayer perceptrons (MLPs). It covers the implementation of SGD, highlighting its advantages in optimizing large datasets. Practical examples using the PyTorch library are provided, including the classification of images from the Fashion-MNIST dataset. The chapter provides a solid foundation in the mathematical tools and techniques that underpin modern AI.
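A minimal PyTorch sketch of an MLP trained with SGD, in the spirit of the chapter's Fashion-MNIST example, is given below; random tensors stand in for the real dataset so the sketch stays self-contained.

```python
# One MLP, one loss, one optimizer: a few SGD steps on a stand-in mini-batch.
import torch
import torch.nn as nn

model = nn.Sequential(                 # a small multilayer perceptron
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 1, 28, 28)         # stand-in mini-batch of 28x28 images
y = torch.randint(0, 10, (64,))        # stand-in class labels

for step in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)        # forward pass
    loss.backward()                    # backpropagation via autograd
    opt.step()                         # one SGD update
    print(step, loss.item())
```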