‘Design cognition’ refers to the mental processes and representations involved in designing, and has been a prominent area of interest since the emergence of design research in the 1960s. The field now faces significant challenges moving into the future, with major change required to overcome stagnation in research topics and methodologies. Tackling these challenges requires us to understand the past and present of design cognition research, and to open fresh discussions on its future. This thematic collection aims to address this need by taking stock of current approaches, exploring emerging topics and methodologies, and identifying future directions for enquiry. In this editorial, we examine key issues regarding both what we investigate and how we conduct this research. We present a vision formed from a structured literature review, the work of authors in the collection, and the views of a broad cross-section of the design cognition community. This vision is formalized in a roadmap from the present to the near and far future, highlighting key topics and research questions for the field. Ultimately, ecological measurement, new applications of artificial intelligence, and a move towards theory construction and research maturation constitute key long-term challenges for the design cognition community.
Generalising and re-using knowledge learned while solving one problem instance has been neglected by state-of-the-art answer set solvers. We suggest a new approach that generalises learned nogoods for re-use, speeding up the solving of future problem instances. Our solution combines well-known ASP solving techniques with deductive logic-based machine learning. Solving performance can be improved by adding learned non-ground constraints to the original program. We demonstrate the effects of our method by means of realistic examples, showing that our approach requires low computational cost to learn constraints that yield significant performance benefits in our test cases. These benefits can be seen with ground-and-solve systems as well as lazy-grounding systems. However, ground-and-solve systems suffer from additional grounding overhead induced by the added constraints in some cases. By means of conflict minimization, non-minimal learned constraints can be reduced, which can result in significant reductions of grounding and solving effort, as our experiments show.
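To make the mechanism concrete, the following is a minimal sketch (not the authors' system; the constraint shown is hand-written and purely illustrative) of how adding a non-ground constraint to a program can prune the search, using clingo's Python API:

```python
# Minimal sketch: solve a colouring program with and without an added
# non-ground constraint (hand-written here; a learner would derive it).
from clingo import Control

base = """
node(1..4). edge(1,2). edge(2,3). edge(3,4). edge(4,1).
col(r;g;b).
1 { assign(N,C) : col(C) } 1 :- node(N).
:- edge(X,Y), assign(X,C), assign(Y,C).
"""
# An illustrative generalised constraint: break colour symmetry at node 1.
learned = ":- assign(1,C), C != r."

for name, program in (("original", base), ("with learned constraint", base + learned)):
    ctl = Control(["0"])                       # "0" = enumerate all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    models = []
    ctl.solve(on_model=lambda m: models.append(m.symbols(shown=True)))
    print(name, "->", len(models), "answer sets")
```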
Recently there has been increasing interest in frameworks extending Dung’s abstract Argumentation Framework (AF). Popular extensions include bipolar AFs and AFs with recursive attacks and necessary supports. Although the relationships between AF semantics and Partial Stable Models (PSMs) of logic programs have been deeply investigated, this is not the case for more general frameworks extending AF.
In this paper we explore the relationships between AF-based frameworks and PSMs. We show that every AF-based framework Δ can be translated into a logic program PΔ so that the extensions prescribed by different semantics of Δ coincide with subsets of the PSMs of PΔ. We provide a logic programming approach that characterizes, in an elegant and uniform way, the semantics of several AF-based frameworks. This result also makes it possible to define the semantics of new AF-based frameworks, such as AFs with recursive attacks and recursive deductive supports.
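For the plain Dung case, the classical translation is easy to sketch (our illustration; the paper's construction extends it to the more general frameworks): each argument becomes an atom whose rule negates all of its attackers, so that stable models coincide with stable extensions.

```python
# Sketch of the classical AF-to-logic-program translation (plain AFs only):
# in(x) :- not in(y1), ..., not in(yn).   for the attackers yi of x.
from clingo import Control

args = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "a"), ("a", "c")]

rules = []
for x in args:
    attackers = [y for (y, z) in attacks if z == x]
    body = ", ".join(f"not in({y})" for y in attackers)
    rules.append(f"in({x}) :- {body}." if body else f"in({x}).")

ctl = Control(["0"])
ctl.add("base", [], "\n".join(rules))
ctl.ground([("base", [])])
# Prints the two stable extensions of this AF: {a} and {b, c}.
ctl.solve(on_model=lambda m: print("stable extension:", m.symbols(shown=True)))
```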
We focus on the problem of inducing logic programs that explain models learned by the support vector machine (SVM) algorithm. Top-down sequential covering inductive logic programming (ILP) algorithms (e.g., FOIL) apply hill-climbing search using heuristics from information theory. A major issue with this class of algorithms is getting stuck in local optima. In our new approach, the data-dependent hill-climbing search is replaced with a model-dependent search: a globally optimal SVM model is trained first, then the algorithm treats the support vectors as the most influential data points in the model and induces a clause covering each support vector and the points most similar to it. Instead of defining a fixed hypothesis search space, our algorithm makes use of SHAP, an example-specific interpreter from explainable AI, to determine a relevant set of features. This approach yields an algorithm that captures the SVM model’s underlying logic and outperforms other ILP algorithms in terms of the number of induced clauses and classification evaluation metrics.
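The pipeline can be sketched as follows (standard scikit-learn and shap APIs; the dataset is a stand-in and the clause-construction step of the actual algorithm is omitted):

```python
# Sketch: train an SVM, take a support vector as the seed example, and use
# SHAP to select the features a candidate clause would test.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
clf = SVC(kernel="rbf").fit(X, y)

background = shap.sample(X, 50)                    # background data for the explainer
explainer = shap.KernelExplainer(clf.decision_function, background)

seed = clf.support_vectors_[0]                     # an influential data point
phi = explainer.shap_values(seed, nsamples=200)    # per-feature attributions
relevant = np.argsort(-np.abs(phi))[:3]            # features for the clause body
print("clause would test features", relevant, "around values", seed[relevant])
```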
In this paper, we study the problem of formal verification for Answer Set Programming (ASP), namely, obtaining a formal proof showing that the answer sets of a given (non-ground) logic program P correctly correspond to the solutions to the problem encoded by P, regardless of the problem instance. To this end, we use a formal specification language based on ASP modules, so that each module can be proved to capture some informal aspect of the problem in isolation. This specification language relies on a novel definition of (possibly nested, first-order) program modules that may incorporate local hidden atoms at different levels. Verifying the logic program P then amounts to proving a certain kind of equivalence between P and its modular specification.
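As a down-to-earth counterpart of such an equivalence (a testing sketch on one fixed instance, not the formal proof machinery the paper develops), one can compare the answer sets of a program and of its specification, projected onto the visible atoms:

```python
# Compare the visible answer sets of two encodings on a fixed instance.
from clingo import Control

def visible_answer_sets(program, instance):
    ctl = Control(["0"])
    ctl.add("base", [], instance + program)
    ctl.ground([("base", [])])
    models = set()
    ctl.solve(on_model=lambda m: models.add(
        frozenset(str(s) for s in m.symbols(shown=True))))
    return models

instance = "node(1..3). edge(1,2). edge(2,3)."
impl = "reach(X,Y) :- edge(X,Y). reach(X,Z) :- reach(X,Y), edge(Y,Z). #show reach/2."
spec = "reach(X,Y) :- edge(X,Y). reach(X,Z) :- edge(X,Y), reach(Y,Z). #show reach/2."

assert visible_answer_sets(impl, instance) == visible_answer_sets(spec, instance)
print("programs agree on this instance")
```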
In this paper we develop a concept-aware multi-preferential semantics for dealing with typicality in description logics, where preferences are associated with concepts, starting from a collection of ranked TBoxes containing defeasible concept inclusions. Preferences are combined to define a preferential interpretation in which defeasible inclusions can be evaluated. The construction of the concept-aware multipreference semantics is related to Brewka’s framework for qualitative preferences. We exploit Answer Set Programming (in particular, asprin) to achieve defeasible reasoning under the multipreference approach for the lightweight description logic $\mathcal{EL}^+_\bot$.
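For intuition, consider a standard typicality example (ours, not taken from the paper): given $\mathit{Penguin} \sqsubseteq \mathit{Bird}$, $\mathbf{T}(\mathit{Bird}) \sqsubseteq \mathit{Fly}$ and $\mathbf{T}(\mathit{Penguin}) \sqcap \mathit{Fly} \sqsubseteq \bot$, a concept-aware multipreference semantics keeps separate preference orders for $\mathit{Bird}$ and for $\mathit{Penguin}$ and then combines them, so that typical penguins do not fly while typical birds still do, the more specific defaults not being drowned out by the more general ones.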
Efficient decision-making over continuously changing data is essential for many application domains, such as cyber-physical systems and industry digitalization. Modern stream reasoning frameworks allow one to model and solve various real-world problems using incremental and continuous evaluation of programs as new data arrives in the stream. Applied techniques use, e.g., Datalog-like materialization or truth maintenance algorithms to avoid costly re-computations, thus ensuring low latency and high throughput of a stream reasoner. However, the expressiveness of existing approaches is quite limited and they cannot, for example, be used to encode problems with constraints, which often appear in practice. In this paper, we suggest a novel approach that uses Conflict-Driven Constraint Learning (CDCL) to efficiently update previously computed solutions through intelligent management of learned constraints. In particular, we study the applicability of reinforcement learning to continuously assess the utility, for the current invocation of the solving algorithm, of learned constraints computed in previous invocations. Evaluations conducted on real-world reconfiguration problems show that providing a CDCL algorithm with relevant learned constraints from previous iterations results in significant performance improvements in stream reasoning scenarios.
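A highly simplified sketch of the utility-tracking idea (assumed reward scheme and names; the paper's reinforcement learning formulation is more elaborate): score each learned constraint by an exponential moving average of its recent activity, and carry only the best-scoring ones into the next solver invocation.

```python
# Bandit-style utility tracking for learned constraints across invocations.
class ConstraintUtility:
    def __init__(self, alpha=0.3, keep=100):
        self.alpha, self.keep = alpha, keep
        self.utility = {}             # constraint id -> EMA of activity

    def update(self, activity):       # activity: id -> count in last invocation
        for cid, a in activity.items():
            old = self.utility.get(cid, 0.0)
            self.utility[cid] = (1 - self.alpha) * old + self.alpha * a

    def select(self):                 # constraints to seed the next invocation
        ranked = sorted(self.utility, key=self.utility.get, reverse=True)
        return ranked[: self.keep]
```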
Answer-Set Programming (ASP) is a powerful and expressive knowledge representation paradigm with a significant number of applications in logic-based AI. The traditional ground-and-solve approach, however, requires ASP programs to be grounded upfront and thus suffers from the so-called grounding bottleneck (i.e., ASP programs easily exhaust all available memory and thus become unsolvable). As a remedy, lazy-grounding ASP solvers have been developed, but many state-of-the-art techniques for grounded ASP solving have not been available to them. In this work we present, for the first time, adaptations of many important techniques to the lazy-grounding setting, including restarts, phase saving, domain-independent heuristics, and learned-clause deletion. Furthermore, we investigate their effects: in general we observe a large improvement in solving capability, but we also uncover negative effects in certain cases, indicating the need for portfolio solving as known from other solvers.
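Two of the adapted techniques are easy to show in isolation (an illustrative sketch, not code from a lazy-grounding solver): the Luby sequence commonly used to schedule restarts, and phase saving, which remembers the last polarity assigned to each atom.

```python
# Luby restart sequence (1,1,2,1,1,2,4,...): restart after luby(i)*base conflicts.
def luby(x):
    size, seq = 1, 0
    while size < x + 1:             # find the smallest 2^seq - 1 >= x + 1
        seq += 1
        size = 2 * size + 1
    while size - 1 != x:            # descend into the sub-sequence containing x
        size = (size - 1) >> 1
        seq -= 1
        x %= size
    return 1 << seq

print([luby(i) for i in range(9)])  # [1, 1, 2, 1, 1, 2, 4, 1, 1]

# Phase saving: on restart, reuse the last polarity chosen for each atom.
saved_phase = {}
def on_assign(atom, value):
    saved_phase[atom] = value
def pick_phase(atom, default=False):
    return saved_phase.get(atom, default)
```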
Designing agents that reason and act upon the world has always been one of the main objectives of the Artificial Intelligence community. While for planning in “simple” domains agents can rely solely on facts about the world, in several contexts, e.g., economy, security, justice, and politics, mere knowledge of the world can be insufficient to reach a desired goal. In these scenarios, epistemic reasoning, i.e., reasoning about agents’ beliefs about themselves and about other agents’ beliefs, is essential to design winning strategies. This paper addresses the problem of reasoning in multi-agent epistemic settings by exploiting declarative programming techniques. In particular, the paper presents an implementation of a multi-shot Answer Set Programming-based planner that can reason in multi-agent epistemic settings, called PLATO (ePistemic muLti-agent Answer seT programming sOlver). The ASP paradigm enables a concise and elegant design of the planner with respect to other, imperative implementations, facilitating formal verification of its correctness. The paper shows how the planner, exploiting an ad hoc epistemic state representation and the efficiency of ASP solvers, achieves competitive performance on benchmarks collected from the literature.
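The multi-shot idiom underlying such a planner can be sketched with clingo's Python API (a bare skeleton over a toy action domain; PLATO's epistemic state representation is omitted): ground one step at a time and re-solve until the goal query becomes satisfiable.

```python
# Multi-shot ASP skeleton: extend the horizon step by step and re-solve.
from clingo import Control, Function, Number

ctl = Control()
ctl.add("base", [], "holds(off,0).")
ctl.add("step", ["t"], """
{ do(toggle,t) }.
holds(on,t)  :- do(toggle,t), holds(off,t-1).
holds(off,t) :- do(toggle,t), holds(on,t-1).
holds(F,t)   :- holds(F,t-1), not do(toggle,t).
""")
ctl.add("check", ["t"], "#external query(t). :- query(t), not holds(on,t).")

ctl.ground([("base", [])])
t, result = 0, None
while result is None or not result.satisfiable:
    t += 1
    ctl.ground([("step", [Number(t)]), ("check", [Number(t)])])
    if t > 1:                                   # retract the previous goal query
        ctl.release_external(Function("query", [Number(t - 1)]))
    ctl.assign_external(Function("query", [Number(t)]), True)
    result = ctl.solve(on_model=lambda m: print("plan:", m.symbols(shown=True)))
```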
The development of autonomous greenhouses has caught the interest of many researchers and industry practitioners, given their potential to offer an optimal environment for the growth of high-quality crops with minimal resources. Since an autonomous greenhouse is a mechatronic system, considering its subsystem (e.g., heating systems) and component (e.g., actuators and sensors) interactions early in the design phase can ease the product development process. Indeed, this consideration can shorten the design process, reduce the number of redesign loops, and improve the performance of the overall mechatronic system. In the case of a greenhouse, it would lead to higher crop quality and better management of resources. In this work, the layout design of a general autonomous greenhouse is translated into an optimization problem statement while considering product-related dependencies. Then, a genetic algorithm is used to carry out the optimization of the layout design. The methodology is applied to the design of a fully autonomous greenhouse (45 cm × 30 cm × 30 cm) for the growth of plants in space. Although some objectives conflict, the developed algorithm proposes a compromise to obtain a near-optimal feasible layout design. The algorithm was also able to optimize the volume of components (occupied space) while considering the energy consumption and the overall mass; their respective summed values are 2844.32 cm³ (7% of the total volume), 5.86 W, and 655.8 g.
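A bare-bones genetic algorithm for a toy version of such a layout problem looks as follows (our illustration, with assumed encoding and penalty weights; the paper's objectives and dependency handling are richer):

```python
# Toy GA: place components on a discrete grid inside the enclosure,
# penalising overlaps and rewarding compactness via a weighted sum.
import random

N_COMPONENTS, GRID, POP, GENS = 6, 5, 40, 200

def random_layout():
    return [tuple(random.randrange(GRID) for _ in range(3))
            for _ in range(N_COMPONENTS)]

def fitness(layout):
    overlap = N_COMPONENTS - len(set(layout))          # components sharing a cell
    spread = sum(x + y + z for x, y, z in layout)      # proxy for occupied space
    return -(1000 * overlap + spread)                  # hard penalty on overlaps

def crossover(p, q):
    cut = random.randrange(1, N_COMPONENTS)
    return p[:cut] + q[cut:]

def mutate(layout, rate=0.2):
    return [tuple(random.randrange(GRID) for _ in range(3))
            if random.random() < rate else cell for cell in layout]

pop = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - 10)]
best = max(pop, key=fitness)
print("best layout:", best, "fitness:", fitness(best))
```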
The repeated execution of reasoning tasks is desirable in many application scenarios, such as stream reasoning and event processing. When using answer set programming in such contexts, one can avoid the iterative generation of ground programs, thus achieving a significant payoff in terms of computing time. However, this may require some additional memory and/or the manual addition of operational directives to the declarative knowledge base at hand. We introduce a new strategy for generating series of monotonically growing propositional programs. The proposed overgrounded programs with tailoring (OPTs) can be updated and reused in combination with consecutive inputs. With respect to earlier approaches, our tailored simplification technique reduces the size of instantiated programs. A maintained OPT grows slowly in size from one iteration to the next while the update cost decreases, especially in later iterations. In this paper we formally introduce tailored embeddings, a family of equivalence-preserving ground programs which form the theoretical basis of OPTs, and describe their properties. We then illustrate an OPT update algorithm and report on our implementation and its performance.
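The reuse idea can be caricatured in a few lines (a toy sketch, not the OPT algorithm itself): ground rule instances accumulate monotonically, and each update instantiates only the joins that involve new input facts.

```python
# Toy incremental grounder for the single rule
#   reach(X,Z) :- edge(X,Y), edge(Y,Z).
# Ground instances accumulate across updates; only joins touching a new
# fact are instantiated at each step (semi-naive style).
class IncrementalGrounder:
    def __init__(self):
        self.edge = set()
        self.ground_rules = set()      # grows monotonically across updates

    def update(self, new_edges):
        new_edges = set(new_edges) - self.edge
        all_edges = self.edge | new_edges
        for (x, y) in new_edges:
            for (a, b) in all_edges:
                if y == a:             # new edge feeds the first body atom
                    self.ground_rules.add((("reach", x, b), (x, y), (a, b)))
                if b == x:             # new edge feeds the second body atom
                    self.ground_rules.add((("reach", a, y), (a, b), (x, y)))
        self.edge = all_edges
        return self.ground_rules

g = IncrementalGrounder()
g.update({(1, 2), (2, 3)})       # instantiates reach(1,3)
print(len(g.update({(3, 4)})))   # adds only the join involving (3,4): 2 rules
```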
Qualitative reasoning involves expressing and deriving knowledge based on qualitative terms such as natural language expressions, rather than strict mathematical quantities. Well over 40 qualitative calculi have been proposed so far, mostly in the spatial and temporal domains, with several practical applications such as naval traffic monitoring, warehouse process optimisation and robot manipulation. Although a number of specialised qualitative reasoning tools have been developed, an important barrier to their wider adoption is that they natively support only qualitative reasoning, whereas real-world problems most often require a combination of qualitative and other forms of reasoning. In this work, we propose to overcome this barrier by using ASP as a unifying formalism for problems that require qualitative reasoning in addition to non-qualitative reasoning. A family of ASP encodings is proposed which can handle any qualitative calculus with binary relations. These encodings are experimentally evaluated on a real-world dataset based on a case study of determining optimal coverage of telecommunication antennas, and compared with two well-known dedicated reasoners. Experimental results show that the proposed encodings outperform one of the two reasoners but fall behind the other, an acceptable trade-off given the added benefits of handling any type of reasoning as well as the interpretability of logic programs.
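The shape of such an encoding is easy to illustrate (a toy point algebra with relations <, =, > rather than a full calculus; names are ours): guess one base relation per pair of entities and enforce closure under the composition table, alongside any non-qualitative constraints.

```python
# Generic shape of an ASP encoding for a binary qualitative calculus,
# instantiated with the point algebra {lt, eq, gt}.
from clingo import Control

encoding = """
baserel(lt;eq;gt).
node(a;b;c).
% guess exactly one base relation between each ordered pair
1 { rel(X,Y,R) : baserel(R) } 1 :- node(X), node(Y), X < Y.
% composition table of the point algebra
comp(lt,lt,lt). comp(lt,eq,lt). comp(eq,lt,lt). comp(eq,eq,eq).
comp(gt,gt,gt). comp(gt,eq,gt). comp(eq,gt,gt).
comp(lt,gt,lt). comp(lt,gt,eq). comp(lt,gt,gt).  % lt then gt: no information
comp(gt,lt,lt). comp(gt,lt,eq). comp(gt,lt,gt).  % gt then lt: no information
% path consistency as a constraint
:- rel(X,Y,R1), rel(Y,Z,R2), rel(X,Z,R3), not comp(R1,R2,R3).
% problem-specific, non-qualitative constraints mix in freely, e.g.:
:- rel(a,b,gt).
"""
ctl = Control(["0"])
ctl.add("base", [], encoding)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))
```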
A practical bit condition monitoring system is a necessary component of autonomous drilling. Tricone bits are widely used in blasthole drilling in mining. Bits experience a variety of wear mechanisms during operation, and rolling element failure is the dominant catastrophic failure mode of tricone bits. Bit lifetime and performance vary significantly with working conditions, and the critical components of the bit, i.e., the rolling elements, are invisible to direct condition monitoring systems. At McGill University, extensive research has been conducted to develop an indirect bit condition monitoring and failure prediction approach relying on vibration signals; the technology is currently patent pending. This article presents real-world experimental evidence showing the unreliability of a conservative bit-changing strategy based on bit operating life or a drop in the rate of penetration (ROP), and the ineffectiveness of direct wear monitoring techniques in covering the dominant failure mode.
Objective
To demonstrate the unreliability of tricone bit replacement relying on bit operating life or ROP measurement, and the ineffectiveness of vision-based monitoring techniques for autonomous drilling.
In Probabilistic Logic Programming (PLP) the most commonly studied inference task is computing the marginal probability of a query given a program. In this paper, we consider two other important tasks in the PLP setting: the Maximum-A-Posteriori (MAP) inference task, which determines the most likely values for a subset of the random variables given evidence on other variables, and the Most Probable Explanation (MPE) task, the instance of MAP where the query variables are the complement of the evidence variables. We present a novel algorithm, included in the PITA reasoner, which tackles these tasks by representing each problem as a Binary Decision Diagram and applying a dynamic programming procedure to it. We compare our algorithm with the version of ProbLog that admits annotated disjunctions and can perform MAP and MPE inference. Experiments on several synthetic datasets show that PITA outperforms ProbLog in many cases.
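The dynamic programming step is simple to sketch on a hand-rolled BDD (illustrative only; PITA's actual implementation works over BDDs for multi-valued variables built via a BDD package): at each node, compare the probability-weighted best values of the two branches.

```python
# MPE by bottom-up dynamic programming over a BDD for independent Boolean
# facts. Simplifying assumption: a variable skipped on a path would need
# an extra max(p, 1-p) factor, which this sketch omits.
prob = {"a": 0.4, "b": 0.7}
bdd = ("ite", "a", ("ite", "b", True, False), False)   # encodes: a and b

def mpe(node):
    if node is True:
        return 1.0, {}
    if node is False:
        return 0.0, {}
    _, var, hi, lo = node
    p_hi, asg_hi = mpe(hi)
    p_lo, asg_lo = mpe(lo)
    take_hi = prob[var] * p_hi                 # best value if var is true
    take_lo = (1.0 - prob[var]) * p_lo         # best value if var is false
    if take_hi >= take_lo:
        return take_hi, {var: True, **asg_hi}
    return take_lo, {var: False, **asg_lo}

print(mpe(bdd))    # (0.28, {'a': True, 'b': True})
```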
Concolic testing is a popular software verification technique based on a combination of concrete and symbolic execution. Its main focus is finding bugs and generating test cases with the aim of maximizing code coverage. A previous approach to concolic testing in logic programming was not sound because it only dealt with positive constraints (by means of substitutions) but could not represent negative constraints. In this paper, we present a novel framework for concolic testing of CLP programs that generalizes the previous technique. In the CLP setting, one can represent both positive and negative constraints in a natural way, thus giving rise to a sound and (potentially) more efficient technique. Defining verification and testing techniques for CLP programs is increasingly relevant since this framework is becoming popular as an intermediate representation to analyze programs written in other programming paradigms.
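The flavour of the technique can be shown on an imperative toy program rather than CLP (our sketch, using the z3 solver; in the CLP setting both positive and negative constraints come directly from the program rather than from a symbolic executor):

```python
# Minimal concolic loop: run concretely, record the path condition, then
# negate each branch in turn and ask z3 for an input reaching a new path.
from z3 import Int, Not, Solver, sat

X = Int("x")

def path_condition(x):          # symbolic mirror of: if x > 10: if x < 20: bug()
    cs = [X > 10 if x > 10 else Not(X > 10)]
    if x > 10:
        cs.append(X < 20 if x < 20 else Not(X < 20))
    return cs

seen_paths, worklist = set(), [0]
while worklist:
    x = worklist.pop()
    cs = path_condition(x)
    sig = tuple(str(c) for c in cs)
    if sig in seen_paths:
        continue
    seen_paths.add(sig)
    print("input", x, "covers path", sig)
    for i in range(len(cs)):    # flip the i-th branch, keep the prefix
        s = Solver()
        for c in cs[:i]:
            s.add(c)
        s.add(Not(cs[i]))
        if s.check() == sat:
            worklist.append(s.model().eval(X, model_completion=True).as_long())
```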
To address the challenges of limited language proficiency and provide necessary feedback in the implementation of task-based language teaching (TBLT), a mobile-supported TBLT application was developed to provide linguistic and task scaffolding. Sixty-six English as a foreign language (EFL) university learners participated in a three-week experiment as part of a general English course. They were assigned to either an experimental group (mobile-supported TBLT), which received TBLT with scaffolds built into the application, or a control group (traditional TBLT), which received traditional paper-based TBLT without the scaffolds. At the end of the experiment, an English achievement test of vocabulary, grammar, and conversation comprehension was administered to determine if the technological scaffolds enhanced the learning outcomes for the course. Students’ self-perceived use of oral communication strategies was also measured to explore how these scaffolds affected the conversational interaction essential for task performance. Results showed that the mobile-supported TBLT group outperformed the traditional TBLT group on the vocabulary and conversation comprehension tests, but less so on the grammar test. Also, the mobile-supported TBLT group reported greater awareness of fluency- and accuracy-oriented strategies for speaking than the traditional TBLT group. Implications for designing mobile learning to enhance TBLT in an EFL setting are drawn.
A variety of machines are currently used for mechanical excavation in the mining and civil industries. A series of research works have been conducted at McGill University over the past decade to study the effects of microwave (MW) irradiation on rock mechanical properties. The idea is to enhance excavation performance by improving the rate of penetration and decreasing the wear rate on the cutting tools; these two effects would eventually translate into economic benefits for mine operators. The effectiveness of MW irradiation in weakening rocks is proven; however, the most efficient method of employing MW in mines is still under investigation. This article presents experimental results on the effects of cooling rate on rock strength. The Brazilian Tensile Strength (BTS) of microwave-treated samples was compared under natural air-cooled and rapid water-cooled conditions.
The influence of nutrient loading and other anthropogenic stressors is thought to be greater in low-inflow, microtidal estuaries, where there is limited water exchange. This 11-month study compared spatial changes in macrofaunal communities adjacent to regions that varied in land cover in Oso Bay, Texas, an estuarine secondary bay with inflow dominated by hypersaline discharge, in addition to discharge from multiple municipal wastewater treatment plants. Macrofauna communities changed in composition with distance away from a wastewater treatment plant in Oso Bay, with the western region of the bay containing different communities than the head and the inlet of the bay. Ostracods were numerically dominant close to the wastewater discharge point. Macrobenthic community composition is most highly correlated with silicate concentrations in the water column. Silicate is negatively correlated with salinity and dissolved oxygen, and positively correlated with nutrients within the bay. Results are relevant for environmental management purposes by demonstrating that point-source discharges can still have ecological effects in hydrologically altered estuaries.
There are numerous associations between psychological characteristics and political values, but it is unclear whether messages tailored to these psychological characteristics can influence political decisions. Two studies (N = 398, N = 395) tested whether psychologically based argument tailoring could influence participants’ decision-making. We constructed arguments based on the 2016 Brexit referendum; Remain supporters were presented with four arguments supporting the Leave campaign, tailored to reflect the participant’s strongest (or weakest) moral foundation (Loyalty or Fairness) or personality trait (Conscientiousness or Openness). We tested whether individuals scoring high on a trait would find the tailored arguments more persuasive than individuals scoring low on the same trait. We found clear evidence for targeting, particularly for Loyalty, but no evidence, or only weak evidence in the case of Conscientiousness, for tailoring. Overall, the results suggest that targeting political messages could be effective, but provide no, or only weak, evidence that tailoring these messages influences political decision-making.
Earlier work by the authors suggested that the formation of molten eutectic regions in Mg-Ca binary alloys causes a discrepancy in ignition temperature when different heating rates are used. This effect was observed for alloys with a Ca content greater than 1 wt%. In this work, the effect of two heating rates (25 °C/min and 45 °C/min) on the ignition resistance of Mg-3Ca is evaluated in terms of oxide growth using X-ray photoelectron spectroscopy. It is found that the molten eutectic regions develop a thin, Ca-rich oxide scale of ~100 nm at either heating rate. The results show that, under the high heating rate, solid intermetallics are oxidized, forming CaO nodules at the metal/oxide interface that eventually contribute to the formation of a thick and non-protective oxide scale in the liquid state.