Descriptions of various subsets of $SO(3)$ are encountered frequently in robotics, for example, in the context of specifying the orientation workspaces of manipulators. Often, the Cartesian concept of a cuboid is extended into the domain of Euler angles, even though the physical implications of this practice are not documented. Motivated by this lacuna in the existing literature, this article studies sets of rotations described by such cuboids by mapping them to the space of Rodrigues parameters, where a physically meaningful measure of distance from the origin is available and the spherical geometry is intrinsically pertinent. It is established that the planar faces of the said cuboid transform into hyperboloids of one sheet and, hence, that the cuboid itself maps into a solid of complicated, non-convex shape. To quantify the extents of these solids, the largest spheres contained within them are computed analytically. It is expected that this study will help in the design and path planning of spatial robots, especially those of parallel architecture, by providing a better, quantitative understanding of their orientation workspaces.
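As a concrete illustration of the mapping involved, the following minimal Python sketch (ours, not the authors' code) converts one corner of an Euler-angle cuboid into the corresponding Rodrigues (Gibbs) vector, whose norm tan(θ/2) serves as the distance from the identity rotation; the angle range is invented for illustration.

```python
# Minimal sketch: Euler angles -> Rodrigues (Gibbs) parameters,
# where r = tan(theta/2) * axis gives a physically meaningful
# distance from the identity rotation.
import numpy as np
from scipy.spatial.transform import Rotation

def euler_to_rodrigues(angles, order="xyz"):
    """Map Euler angles (radians) to the Rodrigues/Gibbs vector."""
    rotvec = Rotation.from_euler(order, angles).as_rotvec()  # angle * axis
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        return np.zeros(3)
    return np.tan(theta / 2.0) * (rotvec / theta)

# Image of one corner of an (assumed) Euler-angle "cuboid" [-pi/6, pi/6]^3:
corner = np.array([np.pi / 6, np.pi / 6, np.pi / 6])
r = euler_to_rodrigues(corner)
print(r, np.linalg.norm(r))  # distance from the origin in Rodrigues space
```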
Hybrid MKNF Knowledge Bases (HMKNF-KBs) constitute a formalism for tightly integrated reasoning over closed-world rules and open-world ontologies. This approach allows for accurate modeling of real-world systems, which often rely on both categorical and normative reasoning. Conflict-driven solving is the leading approach for computationally hard problems, such as satisfiability (SAT) and answer set programming (ASP), in which MKNF is rooted. This paper investigates the theoretical underpinnings required for a conflict-driven solver of HMKNF-KBs. The approach defines a set of completion and loop formulas, whose satisfaction characterizes MKNF models. This forms the basis for a set of nogoods, which in turn can be used as the backbone for a conflict-driven solver.
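To make the link between completion formulas and nogoods concrete, here is a toy sketch under classical ASP-style assumptions (not the paper's HMKNF constructions): for the single rule `a :- b`, the completion `a <-> b` is violated by exactly two nogoods.

```python
# Illustrative sketch only: how a completion formula induces nogoods.
# For the rule  a :- b  (and no other rule for a), completion gives a <-> b,
# violated exactly by the assignments below.
nogoods = [
    {"a": True,  "b": False},   # a true without support
    {"a": False, "b": True},    # b true but a not derived
]

def violates(assignment, nogood):
    """An assignment violates a nogood if it agrees on all its literals."""
    return all(assignment.get(atom) == val for atom, val in nogood.items())

assignment = {"a": True, "b": True}  # a model of the completion
print(any(violates(assignment, ng) for ng in nogoods))  # False -> consistent
```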
FOLD-RM is an explainable machine learning classification algorithm that uses training data to create a set of classification rules. In this paper, we introduce CON-FOLD which extends FOLD-RM in several ways. CON-FOLD assigns probability-based confidence scores to rules learned for a classification task. This allows users to know how confident they should be in a prediction made by the model. We present a confidence-based pruning algorithm that uses the unique structure of FOLD-RM rules to efficiently prune rules and prevent overfitting. Furthermore, CON-FOLD enables the user to provide preexisting knowledge in the form of logic program rules that are either (fixed) background knowledge or (modifiable) initial rule candidates. The paper describes our method in detail and reports on practical experiments. We demonstrate the performance of the algorithm on benchmark datasets from the UCI Machine Learning Repository. For that, we introduce a new metric, Inverse Brier Score, to evaluate the accuracy of the produced confidence scores. Finally, we apply this extension to a real-world example that requires explainability: marking of student responses to a short-answer question from the Australian Physics Olympiad.
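The paper defines the Inverse Brier Score precisely; the sketch below assumes one plausible reading, namely one minus the classical Brier score, so that higher values reward accurate, well-calibrated confidence. Consult the paper for the exact definition.

```python
# A minimal sketch of a Brier-style evaluation of confidence scores.
# Assumption: Inverse Brier Score = 1 - Brier score (higher is better).
import numpy as np

def inverse_brier_score(confidences, outcomes):
    """confidences: predicted probabilities; outcomes: 1 if correct, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    brier = np.mean((confidences - outcomes) ** 2)
    return 1.0 - brier

print(inverse_brier_score([0.9, 0.8, 0.6], [1, 1, 0]))  # -> ~0.863
```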
There is an unavoidable time offset between the camera stream and the inertial measurement unit (IMU) data due to sensor triggering and transmission delays, which seriously affects the accuracy of visual-inertial odometry (VIO). A novel online time calibration framework via a double-stage EKF for VIO is proposed in this paper. First, the first-stage complementary Kalman filter is constructed by exploiting the complementary characteristics of the accelerometer and the gyroscope in the IMU: the rotation predicted by the gyroscope is corrected through the measurement of the accelerometer, so that the IMU can output a more accurate initial pose. Second, the unknown time offset is added to the state vector of the VIO system. The estimated pose of the IMU is used as the prediction information, and the reprojection error of multiple cameras on the same feature point is used as the constraint information. During the operation of the VIO system, the time offset is continuously calculated and superimposed on the IMU timestamp to obtain data synchronized between the IMU and the camera. The Schur complement model is used to marginalize the camera states that carry less information in the system state, avoiding the loss of prior information between images and improving the accuracy of camera pose estimation. Finally, the effectiveness of the proposed algorithm is verified using the EuRoC dataset and real experimental data.
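The following is a minimal single-axis sketch of the complementary principle behind the first stage (the paper uses a full EKF formulation; the gain and function names here are assumptions for illustration): the gyroscope is trusted over short horizons, the accelerometer's gravity direction over long ones.

```python
# Minimal sketch of a complementary filter fusing gyro and accelerometer
# for one tilt axis. Illustrative only; ALPHA and the names are assumed.
import math

ALPHA = 0.98  # trust in the integrated gyro rate (high-pass component)

def complementary_update(angle, gyro_rate, accel, dt):
    """angle: current pitch estimate (rad); accel: (ax, ay, az) in g."""
    gyro_angle = angle + gyro_rate * dt           # short-term: integrate gyro
    accel_angle = math.atan2(accel[0], accel[2])  # long-term: gravity direction
    return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle

angle = 0.0
angle = complementary_update(angle, gyro_rate=0.01, accel=(0.05, 0.0, 0.99), dt=0.005)
print(angle)
```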
Informal digital learning of English (IDLE) is a promising way of learning English that has received growing attention in recent years. It has positive effects on English as a foreign language (EFL) learners and also creates valuable opportunities for EFL teachers to improve their teaching skills. However, there has been a lack of a valid and reliable scale to measure IDLE among teachers in EFL contexts. To address this lacuna, this study aims to develop and validate a scale to measure IDLE for EFL teachers in Iran. For this purpose, a nine-step rigorous validation procedure was undertaken: administering pilot interviews; creating the first item pool; running expert judgment; running interviews and a think-aloud protocol; running the pilot study; performing exploratory factor analysis, Cronbach’s alpha, and confirmatory factor analysis; creating the second item pool; conducting expert reviews; and performing translation and a translation quality check. Findings yielded a 41-item scale with six subscales: IDLE-enhanced benefits (12 items), IDLE practice (five items), support from others (nine items), authentic L2 experience (three items), resources and cognition (four items), and frequency and device (eight items). The scale demonstrated satisfactory psychometric properties, indicating that it can be used for research and educational purposes in the future.
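One of the psychometric steps mentioned, Cronbach's alpha for the internal consistency of a subscale, can be sketched as follows (the item data are synthetic; this is not the study's analysis code).

```python
# Minimal sketch of Cronbach's alpha: k/(k-1) * (1 - sum(item vars)/total var).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
responses = latent + 0.5 * rng.normal(size=(100, 5))  # 5 correlated items
print(cronbach_alpha(responses))  # high alpha for internally consistent items
```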
Answer set programming (ASP) has demonstrated its potential as an effective tool for concisely representing and reasoning about real-world problems. In this paper, we present an application in which ASP has been successfully used in the context of dynamic traffic distribution for urban networks, within a more general framework devised for solving such a real-world problem. In particular, ASP has been employed for the computation of the “optimal” routes for all the vehicles in the network. We also provide an empirical analysis of the performance of the whole framework, and of the part in which ASP is employed, on two European urban areas; the results show the viability of the framework and the contribution ASP can make.
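For a flavour of how ASP can encode route computation, here is a toy program run through the clingo Python API; the `edge/3` facts and the cost aggregate are invented for illustration and unrelated to the paper's actual encoding.

```python
# Toy sketch: pick a minimum-cost set of edges reaching node c from node a.
import clingo

PROGRAM = """
edge(a,b,3). edge(b,c,2). edge(a,c,7).
{ go(X,Y) } :- edge(X,Y,_).
reach(a). reach(Y) :- reach(X), go(X,Y).
:- not reach(c).
#minimize { C,X,Y : go(X,Y), edge(X,Y,C) }.
#show go/2.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))  # best: go(a,b) go(b,c)
```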
This article presents a domain-specific language for writing highly structured multilevel system specifications. The language effectively bridges the gap between requirements engineering and systems architecting by enabling the direct derivation of a dependency graph from the system specifications. The dependency graph allows for the easy manipulation, visualization and analysis of the system architecture, ensuring consistency between written system specifications and visual system architecture models. The system architecture models provide direct feedback on the completeness of the system specifications. The language and associated tooling have been made publicly available and have been applied in several industrial case studies. In this article, the fundamental concepts and way of working of the language are explained using an illustrative example.
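The derivation of a dependency graph from specifications might look roughly like the following sketch, which assumes a simple record format of our own invention (the article's actual language and tooling differ).

```python
# Minimal sketch: build a dependency graph from specification records and
# flag unresolved references as completeness feedback. Format is assumed.
import networkx as nx

specs = {
    "SYS-001": {"text": "The system shall move the stage.",   "depends_on": []},
    "SUB-010": {"text": "The drive shall deliver 5 N.",       "depends_on": ["SYS-001"]},
    "SUB-011": {"text": "The sensor shall sample at 1 kHz.",  "depends_on": ["SYS-999"]},
}

graph = nx.DiGraph()
for spec_id, spec in specs.items():
    graph.add_node(spec_id, text=spec["text"])
    for parent in spec["depends_on"]:
        graph.add_edge(parent, spec_id)

# Dependencies that point at no written specification:
dangling = [n for n in graph.nodes if n not in specs]
print("unresolved references:", dangling)  # -> ['SYS-999']
```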
DatalogMTL is an extension of Datalog with metric temporal operators that has found an increasing number of applications in recent years. Reasoning in DatalogMTL is, however, of high computational complexity, which makes reasoning in modern data-intensive applications challenging. In this paper we present a practical reasoning algorithm for the full DatalogMTL language, which we have implemented in a system called MeTeoR. Our approach effectively combines an optimised (but generally non-terminating) materialisation (a.k.a. forward chaining) procedure, which provides scalable behaviour, with an automata-based component that guarantees termination and completeness. To ensure favourable scalability of the materialisation component, we propose a novel seminaïve materialisation procedure for DatalogMTL enjoying the non-repetition property, which ensures that each rule instance will be applied at most once throughout its entire execution. Moreover, our materialisation procedure is enhanced with additional optimisations which further reduce the number of redundant computations performed during materialisation by disregarding rules as soon as it is certain that they cannot derive new facts in subsequent materialisation steps. Our extensive evaluation supports the practicality of our approach.
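The non-repetition idea behind seminaïve materialisation can be illustrated on plain Datalog (the paper's procedure extends it to DatalogMTL's temporal setting): each round fires rules only on the delta of facts derived in the previous round.

```python
# Minimal sketch of classical seminaive materialisation for transitive
# closure: path(X,Y) :- edge(X,Y).  path(X,Z) :- path(X,Y), edge(Y,Z).
def seminaive_closure(edges):
    path = set(edges)   # initial facts
    delta = set(edges)  # facts derived in the previous round
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path  # only genuinely new facts feed the next round
        path |= delta
    return path

print(sorted(seminaive_closure({("a", "b"), ("b", "c"), ("c", "d")})))
```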
We investigate the number of maximal cliques, that is, cliques that are not contained in any larger clique, in three network models: Erdős–Rényi random graphs, inhomogeneous random graphs (IRGs) (also called Chung–Lu graphs), and geometric inhomogeneous random graphs (GIRGs). For sparse and not-too-dense Erdős–Rényi graphs, we give linear and polynomial upper bounds on the number of maximal cliques. For the dense regime, we give super-polynomial and even exponential lower bounds. Although (G)IRGs are sparse, we give super-polynomial lower bounds for these models as well. This stems from the fact that these graphs have a power-law degree distribution, which leads to a dense subgraph in which we find many maximal cliques. These lower bounds seem to contradict previous empirical evidence that (G)IRGs have only a few maximal cliques. We resolve this contradiction by providing experiments indicating that, even for large networks, the linear lower-order terms dominate, and the super-polynomial asymptotic behavior kicks in only for networks of extreme size.
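The kind of experiment described can be reproduced in a few lines with networkx, which enumerates maximal cliques via a Bron–Kerbosch-style algorithm (the sizes and edge probability here are illustrative, not the paper's setup).

```python
# Count maximal cliques in sparse Erdos-Renyi graphs of growing size.
import networkx as nx

for n in (100, 500, 1000):
    g = nx.gnp_random_graph(n, p=0.05, seed=42)     # sparse regime
    n_maximal = sum(1 for _ in nx.find_cliques(g))  # maximal cliques
    print(n, n_maximal)
```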
This study explores the integration of generative artificial intelligence (GenAI) in informal digital learning of English (IDLE) practices, focusing on its potential to enhance language learning outcomes and on the technological challenges language teachers face in utilising AI-based tools to facilitate second language acquisition. Based on the research context of IDLE and holistic learning ecology, and drawing on the theoretical frameworks of technological pedagogical and content knowledge and social cognitive theory, we performed a mixed-methods investigation: an empirical experiment to assess the effectiveness of GenAI, followed by semi-structured interviews. The results suggest that GenAI-mediated IDLE practices effectively improve college students’ oral proficiency in English from both technological and humanistic perspectives. However, the results also indicate that a GenAI conversational partner alone is not adequate to provoke continuous extramural GenAI-mediated IDLE practices. We discuss the theoretical and pragmatic significance of GenAI-mediated IDLE for educational equity and reform.
The importance of automating pavement maintenance tasks for highway systems has garnered interest from both industry and academia. Despite significant research efforts and promising demonstrations of semi-automation featuring digital sensing and inspection, site maintenance work still requires manual processes using special vehicles and equipment, leaving a clear gap on the path to fully autonomous maintenance. This paper reviews the current progress in pavement maintenance automation in terms of inspection and repair operations, followed by a discussion of three key technical challenges related to robotic sensing, control, and actuation. To address these challenges, we propose a conceptual solution we term the Autonomous Maintenance Plant (AMP), consisting mainly of five modules for sensing, actuation, control, power supply, and mobility. The AMP concept is part of the “Digital Roads” project’s cyber-physical platform, in which a road digital twin (DT) is created from its physical counterpart to enable real-time condition monitoring, sensory data processing, maintenance decision making, and repair operation execution. In this platform, the AMP conducts high-resolution surveys and autonomous repair operations enabled (instructed) by the road DT. This process is unmanned and completely autonomous, with the expectation of creating a fully robotized highway pavement maintenance system.
It’s less than a year since OpenAI’s board voted to fire Sam Altman as CEO, in a palace coup that lasted just a weekend before Altman was reinstated. That weekend and subsequent events in OpenAI’s storyline provide all the ingredients for a soap opera. So, just in case Netflix is interested, here’s a stab at a synopsis of what might be just the first of many seasons of ‘The Generative AI Wars’.
This article outlines a human-centered approach to developing digital patient stories that share patients’ experiences in health care while preserving the privacy of patients and others. Employing a research-through-design approach, the study proposes a design solution using visualization and digital storytelling to document patients’ and families’ experiences and emotions, as well as their interactions with healthcare professionals in the postnatal unit. By transforming selected observational data into animated stories, this approach has the potential to elicit empathy, stimulate stakeholder engagement, and serve as a practical training tool for clinicians. This work was conducted as part of a broader study that aims to contribute to the existing knowledge base by advancing our understanding of stakeholder needs in birthing facilities and through postpartum discharge. This study focuses primarily on strategies for the development of digital stories and summarizes the factors that contributed to their production within the context of sensitive data. It may serve as a valuable resource for students, researchers and practitioners interested in utilizing digital stories to encourage discussion and education and, ultimately, to enhance systems of health care for respect, equity and support.
Data mining and techniques for analyzing big data play a crucial role in various practical fields, including financial markets. However, only a few quantitative studies have focused on predicting daily stock market returns, and the data mining methods used in previous studies are either incomplete or inefficient. This study used the FPC clustering algorithm alongside prominent clustering algorithms such as K-means, IPC, FDPC, and GOPC for clustering stock market data. The stock market data utilized in this study comprise data from cement companies listed on the Tehran Stock Exchange. These data, concerning capital returns and price fluctuations, are examined and analyzed to guide investment decisions. The analysis process involves extracting the stock market data of these companies over the past two years. Subsequently, these companies are categorized based on two criteria, profitability percentage and short-term and long-term price fluctuations, using the FPC clustering algorithm and the aforementioned algorithms. The results of these clustering analyses are then compared against each other using standard, recognized evaluation criteria to assess the quality of the clustering. The findings of this investigation indicate that the FPC algorithm provides more favorable results than the other algorithms. Based on the results, companies demonstrating profitability, stability, and loss within short-term (weekly and monthly) and long-term (three-month, six-month, and one-year) time frames are placed within their respective clusters and introduced accordingly.
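For illustration, one of the baseline clustering steps could be sketched as follows with scikit-learn's K-means (FPC itself is not a standard library algorithm, and the feature values here are invented).

```python
# Minimal sketch: cluster companies by return/volatility features with K-means.
import numpy as np
from sklearn.cluster import KMeans

# rows: companies; columns: [annual return %, short-term vol, long-term vol]
features = np.array([
    [12.0, 0.8, 1.1],
    [-3.5, 2.1, 2.6],
    [10.5, 0.9, 1.0],
    [ 0.4, 1.5, 1.9],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # e.g., a profitable/stable cluster vs. a loss-making cluster
```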
Under the umbrella concepts of upscaling and emerging technology, a wide variety of phenomena related to technology development and deployment in society are examined to meet societal imperatives (e.g., environment, safety, social justice). The design literature does not provide an explicit common theoretical and practical framework clarifying how to assess “an” upscaling. In this nebulous context, designers struggle to identify the characteristics needed to anticipate the consequences of emerging technology upscaling. This article therefore first proposes a structuring framework to analyze the literature across a wide range of industrial sectors (energy, chemistry, building, etc.). This characterization brought to light five prevalent archetypes clarifying the concepts of upscaling and emerging technology. A synthesis of invariants and methodological requirements for designers is then proposed for dealing with upscaling assessment according to each archetype, based on a literature review of existing design methods. This review showed a disparity in how some archetypes are treated, depending on the industrial sector. A discussion is consequently proposed in the conclusion to guide design practices.
To improve understanding of prototyping practice at the fuzzy front end of the design process, this article presents an analysis of a prototyping dataset captured during the IDEA challenge – a 4-day virtually hosted hackathon – using Pro2booth, a web-based prototype capture tool. The dataset comprised 203 prototypes created by four independent teams working in university labs across Europe, supplemented by interviews carried out with each of the teams after the event. The results of the study provide nine key findings about prototyping at hackathons. These include elucidation of the purposes of prototypes in the physical, digital and sketch domains and characterisation of teams’ prototyping practices and strategies. The most successful strategy focused on learning about the problem or solution space, often via physical prototypes, rather than following more prescriptive ‘theoretical’ methodologies. Recommendations on prototyping strategies in hackathons or similar scenarios are provided, highlighting the importance of practical strategies that prioritise learning and adaptation. The results of this study raise the broader question for the wider research community of how design research and teaching should balance high-level strategic approaches with more hands-on ‘operational’ prototyping.
While governments have long discussed the promise of delegating important decisions to machines, actual use often lags. Consequently, we know little about the variation in the deployment of such delegations across large numbers of similar governmental organizations. Using data from crime laboratories in the United States, we examine the uneven distribution over time of a specific, well-known expert system for ballistics imaging across a large sample of local and regional public agencies; an expert system is an inference engine joined with a knowledge base. Our statistical model is informed by the push-pull-capability theory of innovation in the public sector. We test hypotheses about the probability of deployment and provide evidence that the use of this expert system varies with the pull of agency task environments and the enabling support of organizational resources, and that the impacts of those factors have changed over time. Within this context, we also present evidence that general knowledge of the use of expert systems has supported the use of this specific expert system in many agencies. This empirical case and this theory of innovation provide broad evidence about the historical utilization of expert systems as algorithms in public sector applications.
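For readers unfamiliar with the term, a toy sketch of "an inference engine joined with a knowledge base" is shown below: forward chaining over if-then rules. The rules are invented and unrelated to the actual ballistics-imaging system studied.

```python
# Toy expert system: a knowledge base of rules plus a forward-chaining engine.
RULES = [
    ({"striation_match", "caliber_match"}, "same_firearm_candidate"),
    ({"same_firearm_candidate", "db_hit"}, "flag_for_examiner"),
]

def forward_chain(facts):
    """Apply rules until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"striation_match", "caliber_match", "db_hit"}))
```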
This book is meant for the serious practitioner-to-be of constructing intelligent machines: machines that are aware of the world around them, that have goals to achieve, and that have the ability to imagine the future and make appropriate choices to achieve those goals. It is an introduction to a fundamental building block of artificial intelligence (AI). As the book shows, search is central to intelligence.
Clearly, AI is not one monolithic algorithm but a collection of processes working in tandem, an idea espoused by Marvin Minsky in his book The Society of Mind (1986). Human problem solving has three critical components: the ability to make use of experiences stored in memory; the ability to reason and make inferences from what one knows; and the ability to search through the space of possibilities. This book focuses on the last of these. In the real world, we sense our surroundings using vision, sound, touch, and smell; an autonomous agent will need to be able to do so as well. Language, and the written word, is perhaps a distinguishing feature of the human species. It is the key to communication, which means that human knowledge becomes pervasive and is shared with future generations. The development of the mathematical sciences has sharpened our understanding of the world and allows us to compute probabilities over choices to take calculated risks. All these abilities and more are needed by an autonomous agent.
Can one massive neural network be the embodiment of AI? Certainly, the human brain as a seat of intelligence suggests that. Everything we humans do has its origin in activity in our brains, which we call the mind. Perched on the banks of a stream in the mountains we perceive the world around us and derive a sense of joy and well-being. In a fit of contented creativity, we may pen an essay or a poem using our faculty of language. We may call a friend on the phone and describe the scene around us, allowing the friend to visualize the serene surroundings. She may reflect upon her own experiences and recall a holiday she had on the beach. You might start humming your favourite song and then be suddenly jolted out of your reverie remembering that friends are coming over for dinner. You get up and head towards your home with cooking plans brewing in your head.
Having introduced the machinery needed for search in the last chapter, we look at approaches to informed search. The algorithms introduced in the last chapter were blind, or uninformed, taking no cognizance at all of the actual problem instance to be solved and behaving in the same bureaucratic manner wherever the goal might be. In this chapter we introduce the idea of heuristic search, which uses domain-specific knowledge to guide exploration. This is done by devising a heuristic function that estimates the distance to the goal for each candidate in OPEN.
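A minimal sketch of the idea, assuming a toy grid domain and a Manhattan-distance heuristic (illustrative, not the book's code): candidates on OPEN are ordered by the heuristic estimate h of the distance to the goal.

```python
# Greedy best-first search: pick from OPEN the candidate with the lowest h.
import heapq

def best_first_search(start, goal, neighbours, h):
    """neighbours(s) -> successor states; h(s) -> estimated distance to goal."""
    open_list = [(h(start), start)]  # OPEN, ordered by heuristic value
    parent = {start: None}
    while open_list:
        _, state = heapq.heappop(open_list)
        if state == goal:
            path = []
            while state is not None:     # reconstruct path via parents
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbours(state):
            if nxt not in parent:        # acts as the CLOSED check
                parent[nxt] = state
                heapq.heappush(open_list, (h(nxt), nxt))
    return None

# Toy grid: move right/up from (0,0) to (3,2), h = Manhattan distance.
goal = (3, 2)
moves = lambda s: [(s[0] + 1, s[1]), (s[0], s[1] + 1)]
h = lambda s: abs(goal[0] - s[0]) + abs(goal[1] - s[1])
print(best_first_search((0, 0), goal, moves, h))
```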
When heuristic functions are not very accurate, search complexity is still exponential, as revealed by experiments. We then investigate local search methods that do not maintain an OPEN list, and study gradient-based methods to optimize the heuristic value.
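Correspondingly, a minimal hill-climbing sketch, keeping only the current candidate rather than an OPEN list (again illustrative, with an invented objective):

```python
# Hill climbing: greedily move to the best-valued neighbour; no OPEN list.
def hill_climb(start, neighbours, h, max_steps=1000):
    """Greedily minimise h; may stop at a local minimum."""
    current = start
    for _ in range(max_steps):
        best = min(neighbours(current), key=h, default=None)
        if best is None or h(best) >= h(current):
            return current  # no improving neighbour: stop
        current = best
    return current

# Minimise h(x) = (x - 7)^2 over the integers by stepping left or right.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2))  # -> 7
```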
Knowledge is necessary for intelligence. Without knowledge, problem solving with search is blind. We saw this in the last chapter. In general, knowledge is that sword in the armoury of a problem solver that can cut through the complexity. Knowledge accrues over time, either distilled from our own experiences or assimilated from interaction with others – parents, teachers, authors, coaches, and friends. Knowledge is the outcome of learning and exists in diverse forms, varying from tacit to explicit. When we learn to ride a bicycle, we know it but are unable to articulate our knowledge. We are concerned with explicit knowledge. Most textbook knowledge is explicit, for example, knowing how to implement a leftist heap data structure.
In a well-known incident from ancient Greece, it is said that Archimedes, considered by many to be the greatest scientist of the third century BC, ran naked onto the streets of Syracuse. King Hieron II was suspicious that a goldsmith had cheated him by adulterating a bar of gold given to him for making a crown, and asked Archimedes to investigate without damaging the crown. Stepping into his bathtub, Archimedes noticed the water spilling out and realized in a flash that if the gold were adulterated with silver, the crown would displace more water, since silver is less dense than gold. This was his epiphany: he had discovered what we now know as the Archimedes principle, and he ran onto the streets shouting ‘Eureka, eureka!’ We now call such an enlightening moment a Eureka moment.
Within Holocaust studies, there has been an increasingly uncritical acceptance that, by engaging with social media, Holocaust memory has shifted from the ‘era of the witness’ to the ‘era of the user’ (Hogervorst 2020). This paper starts by problematising this proposition. The claim to a paradigmatic shift implies that (1) the user somehow replaces the witness as an authority of memory, which neglects the wealth of digital recordings of witnesses now circulating in digital spaces, and (2) agency online is solely human-centric, a position that ignores the complex negotiations between corporations, individuals, and computational logics that shape our digital experiences. This article proposes instead that we take a posthumanist approach to understanding Holocaust memory on, and with, social media. Adapting Barad’s (2007) work on entanglement to memory studies, we analyse two case studies on TikTok, the #WeRemember campaign and the docuseries How To: Never Forget, to demonstrate (1) the usefulness of reading Holocaust memory on social media through the lens of entanglement, which offers a methodology that accounts for the complex network of human and non-human actants involved in the production of this phenomenon while simultaneously being shaped by it, and (2) that professional memory institutions and organisations are increasingly acknowledging the use of social media for the sake of Holocaust memory. Nevertheless, we observe that, in practice, the significance of technical actancy is still undervalued in this context.