Higher-order constructs enable more expressive and concise code by allowing procedures to be parameterized by other procedures. Assertions allow expressing partial program specifications, which can be verified either at compile time (statically) or run time (dynamically). In higher-order programs, assertions can also describe higher-order arguments. While in the context of (constraint) logic programming ((C)LP), run-time verification of higher-order assertions has received some attention, compile-time verification remains relatively unexplored. We propose a novel approach for statically verifying higher-order (C)LP programs with higher-order assertions. Although we use the Ciao assertion language for illustration, our approach is quite general, and we believe it is applicable in similar contexts. Higher-order arguments are described using predicate properties – a special kind of property that exploits the (Ciao) assertion language. We refine the syntax and semantics of these properties and introduce an abstract criterion to determine conformance to a predicate property at compile time, based on a semantic order relation comparing the predicate property with the predicate assertions. We then show how to handle these properties using an abstract interpretation-based static analyzer for programs with first-order assertions by reducing predicate properties to first-order properties. Finally, we report on a prototype implementation and evaluate it through various examples within the Ciao system.
The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner in the creative process. However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs, and there is no established framework describing the process of co-ideation between a designer and ChatGPT. This research therefore aimed to explore how LLMs can act as codesigners and influence the creative ideation processes of industrial designers, and whether a designer's ideation performance can be improved by employing the proposed framework for co-ideation with a custom GPT. A survey was first conducted to examine how LLMs influence the creative ideation processes of industrial designers and to understand the problems designers face when using ChatGPT to ideate. Then, a framework based on mapping content was proposed to guide co-ideation between humans and a custom GPT (named Co-Ideator). Finally, a design case study, followed by a survey and an interview, was conducted to evaluate the ideation performance of the custom GPT and the framework compared with traditional ideation methods. The effect of the custom GPT on co-ideation was also compared with a non-artificial-intelligence (AI) condition. The findings indicate that co-ideation with the custom GPT produced ideas of higher novelty and quality than traditional ideation.
In this paper, we study ordering properties of vectors of order statistics and sample ranges arising from bivariate Pareto random variables. Assume that $(X_1,X_2)\sim\mathcal{BP}(\alpha,\lambda_1,\lambda_2)$ and $(Y_1,Y_2)\sim\mathcal{BP}(\alpha,\mu_1,\mu_2).$ We then show that $(\lambda_1,\lambda_2)\stackrel{m}{\succ}(\mu_1,\mu_2)$ implies $(X_{1:2},X_{2:2})\ge_{st}(Y_{1:2},Y_{2:2}).$ Under bivariate Pareto distributions, we prove that the reciprocal majorization order between the two vectors of parameters is equivalent to the hazard rate and usual stochastic orders between sample ranges. We also show that the weak majorization order between two vectors of parameters is equivalent to the likelihood ratio and reversed hazard rate orders between sample ranges.
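For reference (a recap added here, not part of the abstract, and assuming the weak order refers to weak submajorization): for two-dimensional parameter vectors the majorization order used above reduces to
\[
(\lambda_1,\lambda_2)\stackrel{m}{\succ}(\mu_1,\mu_2) \iff \max(\lambda_1,\lambda_2)\ge\max(\mu_1,\mu_2) \ \text{ and } \ \lambda_1+\lambda_2=\mu_1+\mu_2,
\]
while weak submajorization replaces the equality of totals by $\lambda_1+\lambda_2\ge\mu_1+\mu_2$. Conventions for the reciprocal majorization order differ slightly across the literature, so it is not restated here.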
Asymptotic dimension and Assouad–Nagata dimension are measures of the large-scale shape of a class of graphs. Bonamy, Bousquet, Esperet, Groenland, Liu, Pirot, and Scott [J. Eur. Math. Soc.] showed that any proper minor-closed class has asymptotic dimension 2, dropping to 1 only if the treewidth is bounded. We improve this result by showing it also holds for the stricter Assouad–Nagata dimension. We also characterise when subdivision-closed classes of graphs have bounded Assouad–Nagata dimension.
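For context (a recap added here, not part of the abstract): a metric space $X$ – here a graph with its shortest-path metric – has asymptotic dimension at most $n$ if for every $r>0$ it can be covered by $n+1$ families of uniformly bounded, pairwise $r$-disjoint sets. The Assouad–Nagata dimension imposes the stronger requirement that the diameter bound grow at most linearly in $r$, say $\operatorname{diam}(U)\le c\,r+c$ for a constant $c$ independent of $r$; hence a bound on the Assouad–Nagata dimension implies the same bound on the asymptotic dimension, which is why the result above strengthens the earlier one.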
This article develops a model to explain the emergence and persistence of shared memory, providing a practical toolkit for empirical research in memory studies. It begins with a review of the concepts of individual and collective memory, highlighting their limitations. In response, the article introduces two alternative concepts – subjectivised memory and hegemonic memory – that capture the interdependence of individual and collective memory while moving beyond their dichotomy. These concepts form the theoretical basis of the proposed model. The article applies the model to the example of Holocaust remembrance in Germany, illustrating how memory becomes hegemonic and persists over time.
This groundbreaking volume is designed to meet the burgeoning needs of the research community and industry. This book delves into the critical aspects of AI's self-assessment and decision-making processes, addressing the imperative for safe and reliable AI systems in high-stakes domains such as autonomous driving, aerospace, manufacturing, and military applications. Featuring contributions from leading experts, the book provides comprehensive insights into the integration of metacognition within AI architectures, bridging symbolic reasoning with neural networks, and evaluating learning agents' competency. Key chapters explore assured machine learning, handling AI failures through metacognitive strategies, and practical applications across various sectors. Covering theoretical foundations and numerous practical examples, this volume serves as an invaluable resource for researchers, educators, and industry professionals interested in fostering transparency and enhancing reliability of AI systems.
Recent neuro-symbolic approaches have successfully extracted symbolic rule-sets from Convolutional Neural Network-based models to enhance interpretability. However, applying similar techniques to Vision Transformers (ViTs) remains challenging due to their lack of modular concept detectors and reliance on global self-attention mechanisms. We propose a framework for symbolic rule extraction from ViTs by introducing a sparse concept layer inspired by Sparse Autoencoders (SAEs). This linear layer operates on attention-weighted patch representations and learns a disentangled, binarized representation in which individual neurons activate for high-level visual concepts. To encourage interpretability, we apply a combination of L1 sparsity, entropy minimization, and supervised contrastive loss. These binarized concept activations are used as input to the FOLD-SE-M algorithm, which generates a rule-set in the form of a logic program. Our method achieves better classification accuracy than the standard ViT while enabling symbolic reasoning. Crucially, the extracted rule-set is not merely post-hoc but acts as a logic-based decision layer that operates directly on the sparse concept representations. The resulting programs are concise and semantically meaningful. This work is the first to extract executable logic programs from ViTs using sparse symbolic representations, providing a step forward in interpretable and verifiable neuro-symbolic AI.
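The sparse concept layer and its regularizers can be pictured with a short sketch. The following PyTorch fragment is illustrative only: the dimensions, loss weights, and the sigmoid/thresholding scheme are assumptions, the supervised contrastive term is omitted, and it is not the authors' implementation.

```python
# Illustrative sparse concept layer on attention-weighted ViT patch tokens
# (assumed dimensions and weights; supervised contrastive loss omitted).
import torch
import torch.nn as nn

class SparseConceptLayer(nn.Module):
    def __init__(self, embed_dim: int = 768, n_concepts: int = 128):
        super().__init__()
        self.proj = nn.Linear(embed_dim, n_concepts)  # linear concept dictionary

    def forward(self, patch_tokens, attn_weights):
        # patch_tokens: (B, N, D) patch embeddings; attn_weights: (B, N) CLS-to-patch attention
        pooled = (attn_weights.unsqueeze(-1) * patch_tokens).sum(dim=1)  # (B, D)
        acts = torch.sigmoid(self.proj(pooled))   # soft concept activations in (0, 1)
        binarized = (acts > 0.5).float()          # hard activations handed to FOLD-SE-M
        return acts, binarized

def concept_regularizer(acts, l1_weight=1e-3, ent_weight=1e-3):
    l1 = acts.abs().mean()                        # L1 sparsity: few active concepts
    eps = 1e-6                                    # entropy term: push activations to 0 or 1
    ent = -(acts * (acts + eps).log() + (1 - acts) * (1 - acts + eps).log()).mean()
    return l1_weight * l1 + ent_weight * ent
```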
The mandible is crucial for human physiological functions, as well as facial esthetics and expressions. Mandibular reconstruction surgery faces the dual challenge of restoring both facial form and physiological function, which demands high precision in positioning and orienting the bone graft. Traditional manual surgery relies heavily on the surgeon’s experience. Although computer image-guided surgery improves positioning accuracy, precise spatial orientation of objects is still difficult to achieve by manual manipulation, so the preoperative surgical design is often executed unsatisfactorily during the operation. This paper integrates computer image navigation and robotic technology to assist mandibular reconstruction surgery, enabling surgeons to achieve precise spatial localization and orientation adjustment of bone grafts. A kinematic analysis is conducted, and an improved Iterative Closest Point (ICP) algorithm is proposed for spatial registration. A novel hand-eye calibration method for the multi-arm robot and a spatial registration method for free bone blocks are proposed. Precision experiments on the image-guided navigation and animal experiments are carried out, and the impact of the number of registration points on spatial registration accuracy is analyzed. The results show the feasibility of robot-assisted navigation for mandibular reconstruction surgery. The robotic system improves the orientation accuracy of bone blocks, enhancing the effectiveness of the surgery.
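For background on the registration step, here is a minimal, textbook point-to-point ICP sketch (nearest-neighbour correspondences plus an SVD-based rigid fit). It is not the paper's improved ICP variant, and the tolerances are arbitrary.

```python
# Classic point-to-point ICP: iteratively match nearest neighbours and solve
# for the rigid transform via the Kabsch/SVD method (illustrative sketch only).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares R, t such that dst ~ R @ src + t (row-wise 3D points)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, max_iter=50, tol=1e-6):
    tree, src = cKDTree(target), source.copy()
    R_total, t_total, prev_err = np.eye(3), np.zeros(3), np.inf
    for _ in range(max_iter):
        dists, idx = tree.query(src)                    # closest-point matches
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                             # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        if abs(prev_err - dists.mean()) < tol:
            break
        prev_err = dists.mean()
    return R_total, t_total
```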
Digital Twinning (DT) has become a key instrument for Industry 4.0 and the digital transformation of manufacturing and industrial processes. In this statement paper, we elaborate on the potential of DT as a valuable tool in support of the management of intelligent infrastructures throughout all stages of their life cycle. We highlight the associated needs, opportunities, and challenges from both the research and applied perspectives. We elucidate the transformative impact of digital twin applications for strategic decision-making, discussing their potential for situation awareness as well as enhancement of system resilience, with a particular focus on applications that necessitate efficient, often real-time or near-real-time, diagnostic and prognostic processes. In doing so, we elaborate on the separate classes of DT, ranging from simple images of a system all the way to interactive replicas that are continually updated to reflect the monitored system at hand. We root our approach in the adoption of hybrid modeling as a seminal tool for facilitating twinning applications. Hybrid modeling refers to the synergistic use of data with models that carry engineering or empirical intuition about the system behavior. We postulate that modern infrastructures can be viewed as cyber-physical systems comprising, on the one hand, an array of heterogeneous data of diversified granularity and, on the other, a model (analytical, numerical, or other) that carries information on the system behavior. We therefore propose hybrid digital twins (HDT) as the main enabler of smart and resilient infrastructures.
Good test-suites are an important tool for checking the correctness of programs. They are also essential in unsupervised educational settings, such as automatic grading or students checking their solutions to programming tasks by themselves. For most Haskell programming tasks, one can easily provide high-quality test-suites using standard tools like QuickCheck. Unfortunately, this is no longer the case once we leave the purely functional world and enter the lands of console I/O. Nonetheless, understanding console I/O is an important part of learning Haskell, and we would like to provide students the same support as with other subject matters. The difficulty in testing console I/O programs arises from the standard tools’ lack of support for specifying intended console interactions as simple declarative properties. These interactions are, however, essential in order to determine whether a program behaves as desired. We describe the console interactions of a program by tracing its text input and output actions. In order to describe which traces match the intended behavior of the program under test, we present a formal specification language. The language is designed to capture, as far as possible, the interactive behavior found in commonly used textbook exercises and examples, as well as in our own teaching, while retaining simple and clear specifications. We intentionally restrict the language, ensuring that expressed behavior is truly interactive and not simply a pure string-builder function in disguise. Based on this specification language, we build a testing framework that allows testing against specifications in an automated way. A central feature of the testing procedure is the use of a constraint solver to find meaningful input sequences for the program under test.
In today's digital world, platforms are everywhere, shaping our social and cultural landscapes. This groundbreaking book shows how platforms are not just technical systems, but complex networks involving diverse people, practices and values. It explores a wide range of digital platforms, using insights from science and technology studies, anthropology, sociology and cultural theories to offer fresh perspectives on how platforms, media and devices function and evolve. Blending ethnographic work with technical analysis, this is essential reading for anyone wanting a deeper understanding of the digital age.
Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. The book carefully parses attacks against social media coming from both the political left and right to demonstrate how most of these critiques are overblown or without empirical support. The work analyzes regulations directed at social media in the United States and European Union, including efforts to amend Section 230 of the Communications Decency Act. It argues that many of these proposals not only raise serious free-speech concerns, but also likely have unintended and perverse public policy consequences. Killing the Messenger concludes by identifying specific regulations of social media that are justified by serious, demonstrated harms, and that can be implemented without jeopardizing the profoundly democratizing impact social media platforms have had on public discourse. This title is also available as open access on Cambridge Core.
The Kerridge [(1961). Inaccuracy and inference. Journal of the Royal Statistical Society: Series B 23(1): 184-194] inaccuracy measure is the mathematical expectation, taken under the true distribution, of the information content of an assumed distribution, reflecting the inaccuracy introduced when the assumed distribution is used in place of the true one. Analyzing the dispersion of information around such measures helps us understand their consistency. The study of the dispersion of information around the inaccuracy measure is termed varinaccuracy. Recently, Balakrishnan et al. [(2024). Dispersion indices based on Kerridge inaccuracy measure and Kullback–Leibler divergence. Communications in Statistics – Theory and Methods 53(15): 5574-5592] introduced varinaccuracy to compare models, with lower variance indicating greater precision. As interval inaccuracy is crucial for analyzing the evolution of system reliability over time, examining its variability strengthens the validity of the extracted information. This article introduces the varinaccuracy measure for doubly truncated random variables and demonstrates its significance. The measure is studied under transformations, and bounds are provided to broaden its applicability where direct evaluation is challenging. Additionally, an estimator for the measure is proposed, and its consistency is analyzed on simulated data using a kernel-smoothed nonparametric estimation technique. The estimator is validated on real data sets of COVID-19 mortality rates for Mexico and Italy. Furthermore, the article illustrates the practical value of the measure in selecting the best alternative to a given distribution within an interval, following the minimum information discrimination principle, thereby highlighting the effectiveness of the study.
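For reference (a recap added here, not part of the abstract): for a true density $f$ and an assumed density $g$, the Kerridge inaccuracy is the cross-entropy
\[
K(f,g)=E_f\!\left[-\log g(X)\right]=-\int f(x)\log g(x)\,dx,
\]
which reduces to the Shannon entropy when $g=f$. The varinaccuracy studied above is then, presumably, the corresponding dispersion $\operatorname{Var}_f\!\left[-\log g(X)\right]$, here adapted to doubly truncated (interval) data.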
Abductive reasoning is a popular non-monotonic paradigm that aims to explain observed symptoms and manifestations. It has many applications, such as diagnosis and planning in artificial intelligence and database updates. In propositional abduction, we focus on specifying knowledge by a propositional formula. The computational complexity of tasks in propositional abduction has been systematically characterized – even with detailed classifications for Boolean fragments. Unsurprisingly, the most insightful reasoning problems (counting and enumeration) are computationally highly challenging. Therefore, we consider reasoning between decisions and counting, allowing us to understand explanations better while maintaining favorable complexity. We introduce facets for propositional abduction, which are literals that occur in some explanation (relevant) but not in all explanations (dispensable). Reasoning with facets provides a more fine-grained understanding of the variability of explanations (heterogeneity). In addition, we consider the distance between two explanations, enabling a better understanding of heterogeneity/homogeneity. We comprehensively analyze facets of propositional abduction in various settings, including an almost complete characterization in Post’s framework.
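A toy illustration of facets (an example added here, not from the paper): take the knowledge base $\{a \rightarrow m,\ b \rightarrow m\}$ with hypotheses $\{a,b\}$ and observed manifestation $m$. The subset-minimal explanations are $\{a\}$ and $\{b\}$, so each of $a$ and $b$ occurs in some explanation but not in all of them; both are therefore facets, whereas a hypothesis contained in every explanation would not be.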
We present the solver asp-fzn for Constraint Answer Set Programming (CASP), which extends ASP with linear constraints. Our approach is based on translating CASP programs into the solver-independent FlatZinc language, which supports several Constraint Programming and Integer Programming backend solvers. Our solver supports a rich language of linear constraints, including some common global constraints. In our evaluation, we show that asp-fzn is competitive with state-of-the-art ASP solvers on benchmarks taken from past ASP competitions. Furthermore, we evaluate it on several CASP problems from the literature and compare its performance with clingcon, a prominent CASP solver that supports most of the asp-fzn language. The performance of asp-fzn is very promising, as it is already competitive on plain ASP and even outperforms clingcon on some CASP benchmarks.
In the design of integrated circuits, one critical metric is the maximum delay introduced by combinational modules within the circuit. This delay is crucial because it represents the time required to perform a computation: in an Arithmetic Logic Unit, for example, it is the maximum time the circuit takes to perform an arithmetic operation. When such a circuit is part of a larger, synchronous system, like a CPU, the maximum delay directly limits the maximum clock frequency of the entire system. Typically, hardware designers use static timing analysis to compute an upper bound on the maximum delay, because such a bound can be determined in polynomial time. However, relying on this upper bound can lead to suboptimal processor speeds, thereby missing performance opportunities. In this work, we tackle the challenging task of computing the actual maximum delay, rather than an approximate value. Since the problem is computationally hard, we model it in answer set programming (ASP), a logic language featuring extremely efficient solvers. We propose non-trivial encodings of the problem into ASP. Experimental results show that ASP is a viable solution to address complex problems in hardware design.
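To make the contrast concrete, the polynomial-time upper bound from static timing analysis is just a longest path through the gate-level DAG, as in the hedged sketch below (gate names and delays are invented); the actual maximum delay can be smaller because the longest structural path may never be sensitized by any input vector.

```python
# Static timing analysis bound: longest path through the combinational DAG,
# computed in linear time by memoized arrival times (illustrative sketch).
def sta_upper_bound(gates, primary_inputs):
    """gates: dict gate -> (delay, [fanin names]); returns worst arrival time."""
    arrival = {pi: 0.0 for pi in primary_inputs}

    def arrival_time(node):
        if node not in arrival:
            delay, fanins = gates[node]
            arrival[node] = delay + max(arrival_time(f) for f in fanins)
        return arrival[node]

    return max(arrival_time(g) for g in gates)

# Tiny example circuit with invented delays.
gates = {
    "n1": (1.0, ["a", "b"]),
    "n2": (2.0, ["b", "c"]),
    "out": (1.0, ["n1", "n2"]),
}
print(sta_upper_bound(gates, ["a", "b", "c"]))  # 3.0
```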
For decades, American lawyers have enjoyed a monopoly over legal services, built upon strict unauthorized practice of law rules and prohibitions on nonlawyer ownership of law firms. Now, though, this monopoly is under threat, challenged by the one-two punch of new AI-driven technologies and a staggering access-to-justice crisis, which sees most Americans priced out of the market for legal services. At this pivotal moment, this volume brings together leading legal scholars and practitioners to propose new conceptual frameworks for reform, drawing lessons from other professions, industries, and places, both within the United States and across the world. With critical insights and thoughtful assessments, Rethinking the Lawyers' Monopoly seeks to help shape and steer the coming revolution in the legal services marketplace. This title is also available as open access on Cambridge Core.
Networks describe complex relationships between individual actors. In this work, we address the question of how to determine whether a parametric model, such as a stochastic block model or latent space model, fits a data set well, and will extrapolate to similar data. We use recent results in random matrix theory to derive a general goodness-of-fit (GoF) test for dyadic data. We show that our method, when applied to a specific model of interest, provides a straightforward, computationally fast way of selecting parameters in a number of commonly used network models. For example, we show how to select the dimension of the latent space in latent space models. Unlike other network GoF methods, our general approach does not require simulating from a candidate parametric model, which can be cumbersome with large graphs, and eliminates the need to choose a particular set of statistics on the graph for comparison. It also allows us to perform GoF tests on partial network data, such as Aggregated Relational Data. We show with simulations that our method performs well in many situations of interest. We analyze several empirically relevant networks and show that our method leads to improved community detection algorithms.
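One way to picture a random-matrix-based GoF check (a hedged sketch in the spirit of spectral tests such as Lei's for block models, not necessarily the authors' statistic): standardize the residuals of the fitted edge probabilities and inspect the extreme eigenvalue, which stays near the bulk edge of roughly 2 when the model fits.

```python
# Spectral goodness-of-fit sketch for a fitted dyadic model (illustrative only).
# A: symmetric 0/1 adjacency matrix; P_hat: fitted edge probabilities.
import numpy as np

def spectral_gof_statistic(A, P_hat):
    n = A.shape[0]
    denom = np.sqrt(np.clip((n - 1) * P_hat * (1.0 - P_hat), 1e-12, None))
    R = (A - P_hat) / denom           # entrywise standardized residuals
    np.fill_diagonal(R, 0.0)          # ignore self-loops
    return float(np.max(np.abs(np.linalg.eigvalsh(R))))

# Usage idea: when selecting the latent-space dimension, increase it until the
# statistic drops to roughly 2, i.e. until no low-rank signal is left in the residuals.
```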
Visual exploration is a task in which a camera-equipped robot seeks to efficiently visit all navigable areas of an environment in the shortest possible time. Most existing visual exploration methods rely on a camera rigidly fixed to the robot’s body and plan only the robot’s own movements. However, coupling the camera orientation to the robot’s body gives up extra degrees of freedom that could be used to gather more visual information. In this work, we adjust the camera orientation during robot motion using a novel camera view planning (CVP) policy to improve exploration efficiency. Specifically, we reformulate the CVP problem as a reinforcement learning problem. However, two new challenges must be addressed: 1) how to learn an effective CVP policy in complex indoor environments and 2) how to synchronize it with the robot’s motion. To solve these issues, we design a reward function that accounts for the explored area, observed semantic objects, and motion conflicts between the camera and the robot’s body. Moreover, to better coordinate the policies of the camera and the robot’s body, the CVP policy takes the body actions and egocentric 2D spatial maps with exploration, occupancy, and trajectory information into account when making motion decisions. Experimental results show that with the proposed CVP policy, the explored area is expanded on average by 21.72% in small-scale indoor scenes with few structured obstacles and by 25.6% in large-scale indoor scenes with cluttered obstacles.
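As a purely hypothetical sketch of the kind of reward described above (weights, names, and scales are invented, not the paper's implementation):

```python
# Hypothetical CVP reward combining the factors named in the abstract:
# newly explored area, newly observed semantic objects, and a penalty for
# conflicting camera/body motions.
def cvp_reward(new_area_m2, new_semantic_objects, camera_body_conflict,
               w_area=1.0, w_sem=0.5, w_conflict=0.2):
    return (w_area * new_area_m2
            + w_sem * new_semantic_objects
            - w_conflict * camera_body_conflict)

print(cvp_reward(1.8, 2, 0.3))  # 1.8 + 1.0 - 0.06 = 2.74
```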
This paper introduces Setting-Driven Design (SDD) and a supporting tool – the Behaviour Setting Canvas (BSC) – which together address a critical gap in behavioural design by shifting the focus from individual behaviour to the broader context in which behaviour occurs. Rooted in behaviour setting theory, SDD is a powerful approach to behavioural design that offers an end-to-end structure for understanding and intervening in a behavioural design challenge. The process comprises three iterative phases: scoping the behavioural challenge, understanding the setting, and developing the intervention. The process structure revolves around the BSC, a tool for mapping key contextual elements such as roles, motives, norms and routines. While SDD is particularly effective for behaviour change interventions, its utility extends to other design challenges, including introducing new products, shifting social norms and enhancing existing systems where behaviour remains constant. The approach integrates a theory of change to guide intervention development, prototyping and evaluation, ensuring alignment with behavioural objectives and contextual realities. A case study on handwashing in low-income Tanzanian households illustrates the method’s utility, culminating in the creation of Tab Soap, a single-use, biodegradable soap designed to improve hygiene behaviours. The study demonstrates how SDD facilitates insight generation and iterative refinement and complements user-centred design. SDD advances behavioural design by combining theoretical rigour with practical application, offering a scalable and adaptable framework for addressing complex design challenges across diverse fields.