We show that recent approaches to quantitative analysis based on non-idempotent typing systems can be extended to programming languages with effects. In particular, we consider two cases: the weak open call-by-name (CBN) and call-by-value (CBV) variants of the $\lambda$-calculus, equipped with operations to write and read from a global state. In order to capture quantitative information with respect to time and space for both CBN and CBV, we design for each of them a quantitative type system based on a (tight) multi-type system. One key observation of this work is how CBN and CBV influence the composition of state types. That is, each type system is developed by taking into account how each language manages the global state: in CBN, the composition of state types is almost straightforward, since function application does not require evaluation of its argument; in CBV, however, the interaction between functions and arguments makes the composition of state types more subtle since only values can be passed as actual arguments. The main contribution of this paper is the design of type systems capturing quantitative information about effectful CBN and CBV programming languages. Indeed, we develop type systems that are qualitatively and quantitatively sound and complete.
This chapter focuses on the core concepts of optimization theory and its application in data science and AI. It begins with a review of differentiable functions of several variables, including the gradient and Hessian matrices, and key results like the Chain Rule and the Mean Value Theorem. The chapter then introduces optimality conditions for unconstrained optimization, explaining first-order and second-order conditions, and the role of convexity in ensuring global optimality. A detailed discussion of the gradient descent algorithm is provided, including its convergence analysis under different assumptions. The chapter concludes with an application to logistic regression, demonstrating how gradient descent is used to optimize the cross-entropy loss function in a supervised learning context. Practical Python examples are integrated throughout to illustrate the theoretical concepts.
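As a rough illustration of the kind of Python example the chapter integrates, the sketch below runs plain gradient descent on the cross-entropy loss of logistic regression; the step size, iteration count, and synthetic data are illustrative assumptions rather than the chapter's own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(w, X, y):
    # average cross-entropy loss for labels y in {0, 1}
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def gradient(w, X, y):
    # gradient of the average cross-entropy loss with respect to w
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def gradient_descent(X, y, step=0.1, iters=1000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= step * gradient(w, X, y)
    return w

# illustrative synthetic data (assumed, not taken from the chapter)
rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]   # intercept + 2 features
true_w = np.array([-0.5, 2.0, -1.0])
y = (rng.uniform(size=200) < sigmoid(X @ true_w)).astype(float)

w_hat = gradient_descent(X, y)
print("estimated weights:", w_hat, "loss:", cross_entropy(w_hat, X, y))
```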
This article explores the intersection of electroacoustic music in multichannel immersive audio environments and trauma-informed therapy practices through gestural mediation in artistic works. Drawing on Jean-François Augoyard’s definition of anamnesis, the article examines how spatial audio can evoke memories and mirror the psychological landscapes associated with trauma. The research is centred around the composition Crumble, which uses spatial sound to articulate fragmented mental states of individuals processing traumatic experiences. Through an analysis of the spatial placement of sound, gestural technologies and embodied interaction, the article demonstrates how these elements converge to create immersive environments that facilitate an understanding of trauma and memory. The findings highlight the role of embodied cognition in music performance, showing how Crumble’s integration of body, space and sound fosters audience connection through immersive spatialisation and gestural mediation. By integrating these practices, this study offers new insights into the therapeutic potential of immersive electroacoustic music as a medium for expressing complex emotional states. Combining embodied sound, gestural control and spatial audio promotes the exploration of space and memory, encourages personal agency and supports reintegration of body and mind, aligning with trauma-informed practices. It suggests avenues for future exploration in the intersection of music, psychology and immersive technologies.
This chapter is focused on game theory and mechanism design, presenting them as an important branch of analytics science that has impacted our world. Like all other chapters, it starts by presenting the big-picture ideas and showcasing various real-world examples in which those ideas have been impactful. It educates the reader through various familiar examples, from the simple decisions involved in cutting a cake to more critical decision-making scenarios such as what happened during the Battle of the Bismarck Sea or finding ways to reduce racial segregation in society, and from policies that revolutionized life-saving ideas for those in dire need of transplantation to governments’ complex efforts to improve voting mechanisms. The chapter provides engaging stories showcasing how the main ideas in game theory and mechanism design have been impactful in a myriad of ways.
This chapter is devoted to understanding how the main ideas in graph theory and combinatorial optimization can assist insight-driven problem solving and thereby create public impact. The reader sees how such ideas have laid the foundation for apps such as Google Maps and how they are being used to improve our understanding of social networks, design transportation networks, create efficient schedules for sports events, enhance cryptosystems, and improve the efficiency of supply chains. The reader also learns how analytics scientists have been able to learn from nature’s amazing problem-solving ability (swarm intelligence) and use it to develop effective insight-driven problem-solving approaches that yield powerful insights in addressing complex societal problems.
This chapter starts by communicating how various aspects of our lives involve interacting with queues. It then provides a brief history of the inception of queueing theory and its main governing principles, and discusses how it has impacted various aspects of our lives. It educates the reader about the main ideas and principles in queueing theory and also elaborates on the psychological aspects of waiting in queues. Showcasing various examples of how the main ideas in queueing theory have enabled important improvements, ranging from what happened during Queen Elizabeth II’s memorial, to the creation of the internet and modern telephones, to our experiences in airports or on roads, the chapter presents queueing theory as a potent branch of analytics science that has enabled scholars to make the world a better place. The chapter also discusses the vital interplay between queueing theory, public policy, and technology.
This chapter is devoted to data analysis and its critical role in analytics science. The reader is introduced to the science of inference from observations and experiments and learns about the main ideas in data analysis that have been influential in addressing societal problems. Real-world examples are used throughout to convey the main ideas and illustrate why data analyses performed without sufficient care can yield wrong insights. Successful examples of insight-driven problem solving in data analysis are contrasted with flawed ones, and the reader is taken on an engaging yet educational journey that depicts how and why such data-driven approaches can have significant public impact.
The ongoing development of a Swiss Health Data Space (SHDS) presents an opportunity to transform health delivery and care by enabling large-scale secondary health research. The successful implementation of the SHDS depends on its trustworthiness, as public trust is closely linked to public participation in data-sharing initiatives. We conducted four focus groups across the German-, French-, and Italian-speaking regions of Switzerland to identify public expectations and requirements related to the attributes that define a trustworthy SHDS. The participants discussed four fictitious case studies on: (1) consent management; (2) record linkage via the national social security number; (3) national data coordination center; and (4) cross-border data exchange. To best inform Swiss policy, we held a panel discussion with patient experts and healthcare professionals to translate the focus group findings into governance and public communication recommendations. Policy recommendations are proposed based on insights from the fictitious case studies discussed with participants, accompanied by guidance on implementation measures that contribute to proactively building trust in the development of the SHDS. Communication recommendations are further provided, highlighting that the success of the SHDS will depend on early and continuous trustworthy public communication efforts that actively engage the Swiss public, address their concerns, and foster support throughout its development. Overarching these efforts will be a foundational governance approach that meaningfully involves relevant stakeholders and members of the Swiss public, while allocating appropriate responsibility to maintain trustworthiness of the SHDS.
This chapter introduces the reader to the big picture of what analytics science is. What is analytics science? What types does it have, and what is its scope? How can analytics science be used to improve various tasks that society needs to carry out? Is analytics science all about using data? Or can it work without data? What is the role of data versus models? How can one develop and rely on a model to answer essential questions when the model can be wrong due to its assumptions? What is ambiguity in analytics science? Is that different from risk? And how do analytics scientists address ambiguity? What is the role of simulation in analytics science? These are some of the questions that the chapter addresses. Finally, the chapter discusses the notion of "centaurs" and how a successful use of analytics science often requires combining human intuition with the power of strong analytical models.
The fifth chapter explores the application of spectral graph theory to network data analysis. The chapter begins with an introduction to fundamental graph theory concepts, including undirected and directed graphs, graph connectivity, and matrix representations such as the adjacency and Laplacian matrices. It then discusses the variational characterization of eigenvalues and their significance in understanding the structure of graphs. The chapter highlights the spectral properties of the Laplacian matrix, particularly its role in graph connectivity and partitioning. Key applications, such as spectral clustering for community detection and the analysis of random graph models like Erdős–Rényi random graphs and stochastic blockmodels, are presented. The chapter concludes with a detailed exploration of graph partitioning algorithms and their practical implementations using Python.
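As a rough sketch of the kind of Python implementation the chapter describes, the snippet below bipartitions a small graph by the sign of the Fiedler vector, the eigenvector of the Laplacian associated with its second-smallest eigenvalue; the example graph of two triangles joined by an edge is an illustrative assumption.

```python
import numpy as np

def laplacian(A):
    # unnormalized graph Laplacian L = D - A
    return np.diag(A.sum(axis=1)) - A

def spectral_bipartition(A):
    # split vertices by the sign of the Fiedler vector
    L = laplacian(A)
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                # vector for the second-smallest eigenvalue
    return fiedler >= 0

# illustrative graph: two triangles joined by a single edge (assumed example)
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

print(spectral_bipartition(A))   # expected: one triangle on each side of the cut
```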
The fourth chapter introduces the singular value decomposition (SVD), a fundamental matrix factorization with broad applications in data science. The chapter begins by reviewing key linear algebra concepts, including matrix rank and the spectral theorem. It then explores the problem of finding the best low-dimensional approximating subspace to a set of data points, leading to the formal definition of the SVD. The power iteration method is presented as an efficient way to compute the top singular vectors and values. The chapter then demonstrates the application of SVD to principal components analysis (PCA), a dimensionality reduction technique that identifies the directions of maximum variance in data. Further applications of the SVD are discussed, including low-rank matrix approximations and ridge regression, a regularization technique for handling multicollinearity in linear systems.
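A minimal sketch of the power iteration method the chapter presents is given below, assuming a generic data matrix; the iteration count, random seed, and test matrix are illustrative choices rather than the chapter's own code.

```python
import numpy as np

def top_singular_pair(A, iters=500, seed=0):
    # power iteration on A^T A to recover the top right singular vector
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(A @ v)     # top singular value
    u = (A @ v) / sigma               # top left singular vector
    return u, sigma, v

# quick check against NumPy's full SVD on a random matrix (illustrative data)
A = np.random.default_rng(1).normal(size=(50, 8))
u, sigma, v = top_singular_pair(A)
print(sigma, np.linalg.svd(A, compute_uv=False)[0])   # should agree closely
```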