Data-based methods have gained increasing importance in engineering. Success stories are prevalent in areas such as data-driven modeling, control, and automation, as well as surrogate modeling for accelerated simulation. Beyond engineering, generative and large language models are increasingly helping with tasks that were previously associated solely with creative human processes. It therefore seems timely to seek artificial-intelligence support for engineering design tasks, to automate, assist with, or accelerate purpose-built designs of engineering systems, for instance in mechanics and dynamics, where design still requires a great deal of specialized knowledge. Compared with established, predominantly first-principles-based methods, the datasets used for training, validation, and testing become an almost inherent part of the overall methodology. Data publishing therefore becomes just as important in (data-driven) engineering science as appropriate descriptions of conventional methodology have been in past publications. In mechanics and dynamics, however, traditional publishing practices still largely prevail and do not yet account for the rising role of data to the extent already common in purely data-scientific research. This article analyzes the value and challenges of data publishing in mechanics and dynamics, in particular for engineering design tasks, showing that the latter raise challenges and considerations not typical of the fields where data-driven methods originally boomed. Researchers currently find barely any guidance for overcoming these challenges. We therefore discuss ways to address them, and a set of examples across different design problems shows how data publishing can be put into practice.
We provide a fine classification of bisimilarities between states of possibly different labelled Markov processes (LMP). We show that a bisimilarity relation proposed by Panangaden that uses direct sums coincides with “event bisimilarity” from his joint work with Danos, Desharnais, and Laviolette. We also extend Giorgio Bacci’s notions of bisimilarity between two different processes to the case of nondeterministic LMP and generalize the game characterization of state bisimilarity by Clerc et al. for the latter.
Online customer feedback management (CFM) is becoming increasingly important for businesses. Providing timely and effective responses to guest reviews can be challenging, especially as the volume of reviews grows. This paper explores the response process and the potential for artificial intelligence (AI) augmentation in response formulation. We propose an orchestration concept for human–AI collaboration in co-writing within the hospitality industry, supported by a novel NLP-based solution that combines the strengths of both human and AI. Although complete automation of the response process remains out of reach, our findings offer practical implications for improving response speed and quality through human–AI collaboration. Additionally, we formulate policy recommendations for businesses and regulators in CFM. Our study provides transferable design knowledge for developing future CFM products.
In the wake of the recent resurgence of the Datalog database language, together with its extensions for ontological reasoning settings, this work aims to bridge the gap between the theoretical studies of DatalogMTL (Datalog extended with metric temporal logic) and the development of production-ready reasoning systems. In particular, we lay out the functional and architectural desiderata of a modern reasoner and propose our system, Temporal Vadalog. Leveraging the vast amount of experience from the database community, we go beyond the typical chase-based implementations of reasoners and propose a set of novel techniques and a system that adopts a modern data pipeline architecture. We discuss crucial architectural choices, such as how to guarantee termination when infinitely many time intervals may be generated, how to merge intervals, and how to sustain a limited memory footprint. We discuss advanced features of the system, such as support for time series, and present an extensive experimental evaluation. This paper is a substantially extended version of “The Temporal Vadalog System” as presented at RuleML+RR ’22.
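The interval-merging requirement mentioned in the abstract can be illustrated with a minimal sketch: coalescing overlapping or adjacent time intervals into a canonical minimal set. This is a generic illustration of the problem, with a function name of our choosing, not Temporal Vadalog's actual implementation.

```python
def coalesce(intervals):
    """Merge overlapping or adjacent closed intervals (start, end)
    into a minimal, sorted list. Illustrative sketch only."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

A reasoner must apply such a step eagerly, since otherwise the set of derived intervals can grow without bound even when their union is finitely representable.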
The formal theory of monads shows that much of the theory of monads can be developed in the abstract at the level of 2-categories. This means that results about monads can be established once and for all and simply instantiated in settings such as enriched category theory.
Unfortunately, these results can be hard to reason about as they involve more abstract machinery. In this paper, we present the formal theory of monads in terms of string diagrams — a graphical language for 2-categorical calculations. Using this perspective, we show that many aspects of the theory of monads, such as the Eilenberg–Moore and Kleisli resolutions of monads, liftings, and distributive laws, can be understood in terms of systematic graphical calculational reasoning.
This paper will serve as an introduction both to the formal theory of monads and to the use of string diagrams, in particular, their application to calculations in monad theory.
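For readers new to the subject, the equations that string diagrams render graphical are the usual monad laws: for a monad $(T, \eta, \mu)$ on a category, the associativity and unit axioms read as follows.

```latex
% Monad laws for a monad (T, \eta, \mu):
\mu \circ T\mu = \mu \circ \mu T \qquad \text{(associativity)}
\qquad
\mu \circ \eta T = \mathrm{id}_T = \mu \circ T\eta \qquad \text{(unit)}
```

In string-diagram notation, $\eta$ and $\mu$ become nodes in a planar diagram and these equations become permitted deformations of diagrams, which is what makes the calculational reasoning described above systematic.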
This study introduces a deep learning method for intelligent healthcare emotion analysis, specifically targeting the recognition of pain from facial expressions. The approach combines cloud-based mobile applications, utilising separate front-end and back-end components to optimise data processing. The main contributions comprise a Smart Automated System (SASys) that integrates statistical and deep learning methods for feature extraction, ensuring both robustness and efficiency. Image preprocessing encompasses face detection and normalisation, which is crucial for accurate feature extraction. Statistical feature representations using the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP), paired with machine learning classifiers, are compared against an enhanced deep learning approach with an integrated multi-tasking feature, a multi-task convolutional neural network; the outcomes support the superiority of the convolutional architecture. Combining the statistical and deep learning classification scores further improves the system’s overall performance. The experiments show that the method is effective, outperforming traditional classifiers and achieving accuracy comparable to state-of-the-art healthcare systems.
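As an illustration of the statistical feature branch, the Local Binary Pattern descriptor mentioned above can be sketched in a few lines of NumPy. This is a generic 8-neighbour LBP histogram under our own naming, not the paper's implementation.

```python
import numpy as np

def lbp_histogram(img):
    """Compute a basic 8-neighbour Local Binary Pattern histogram
    for a 2-D grayscale image. Illustrative sketch only."""
    c = img[1:-1, 1:-1]  # centre pixels (border excluded)
    # Eight neighbours of each centre pixel, clockwise from top-left.
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set bit if neighbour >= centre, building an 8-bit code.
        codes |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised 256-bin feature vector
```

The resulting 256-dimensional vector is the kind of statistical feature that can be fed to a conventional classifier and later fused with deep-learning scores, as the abstract describes.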
In this paper, by means of upper approximation operators in rough set theory, we study representations for sL-domains and their special subclasses. We introduce the concepts of sL-approximation spaces, L-approximation spaces, and bc-approximation spaces, which are special types of CF-approximation spaces. We prove that the collection of CF-closed sets in an sL-approximation space (resp., an L-approximation space, a bc-approximation space) ordered by set-theoretic inclusion is an sL-domain (resp., an L-domain, a bc-domain); conversely, every sL-domain (resp., L-domain, bc-domain) is order-isomorphic to the collection of CF-closed sets of an sL-approximation space (resp., an L-approximation space, a bc-approximation space). Consequently, we establish an equivalence between the category of sL-domains (resp., L-domains) with Scott continuous mappings and that of sL-approximation spaces (resp., L-approximation spaces) with CF-approximable relations.
With advancements in industrial robot technology and ongoing enhancements in control system performance, the demand for precise robot motion is increasing. Generally, an increased number of interpolation points enhances the precision of robot movement, but excessive points can lead to jittering and out-of-step issues. This paper investigates the relationship between the number of motion interpolation points, the response time of the control system, and the robot’s terminal velocity, based on the theoretical calculation and experimental analysis of the limit interpolation points for the control system of a self-developed 6-DOF (Six Degree of Freedom) robot. The method for calculating limit interpolation points is refined using the least squares method, and equations are derived for different control-system response times and robot terminal velocities. The validity of the prediction curves is verified through experimental analysis.
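A least-squares refinement of the kind described above can be sketched as follows. All measurement values and the inverse-proportional model form are hypothetical placeholders for illustration; the paper's actual data and fitted equations are not reproduced here.

```python
import numpy as np

# Hypothetical measurements: control-system response time (ms) vs.
# the limit number of interpolation points before jitter appears.
response_ms = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
limit_points = np.array([980.0, 510.0, 260.0, 130.0, 70.0])

# Assume limit_points ~ a / response_ms + b and fit a, b by
# linear least squares on the basis functions [1/t, 1].
A = np.column_stack([1.0 / response_ms, np.ones_like(response_ms)])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, limit_points, rcond=None)
a, b = coeffs
```

Given such a fit, a prediction curve for the limit interpolation points at any response time follows directly, which is the shape of result the experimental verification in the paper targets.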
Automated Agencies is the definitive account of how automation is transforming government explanations of the law to the public. Joshua D. Blank and Leigh Osofsky draw on extensive research regarding the federal government's turn to automated legal guidance through chatbots, virtual assistants, and other online tools. Blank and Osofsky argue that automated tools offer administrative benefits for both the government and the public in terms of efficiency and ease of use, yet these automated tools may also mislead members of the public. Government agencies often exacerbate this problem by making guidance seem more personalized than it is, not recognizing how users may rely on the guidance, and not disclosing that the guidance cannot be relied upon as a legal matter. After analyzing the potential costs and benefits of the use of automated legal guidance by government agencies, Automated Agencies charts a path forward for policymakers by offering detailed policy recommendations.
There is a canonical and efficient way to extend a convergent presentation of a category by a 2-polygraph into a coherent one. Precisely, the 3-cells used in this extension procedure are in one-to-one correspondence with the confluence diagrams of critical branchings in the polygraph. Now, if the polygraph is finite, so is the set of its critical branchings, and therefore the set of 3-cells generating coherence can be taken to be finite. In such a situation, the polygraph is said to have finite derivation type, or FDT. The relevance of this concept, introduced by Squier, lies in the following invariance property: if a category admits a finite presentation having finite derivation type, then all finite presentations of that category also have FDT. This invariance will prove essential to show that some finitely presented categories do not admit convergent presentations. Using this property, Squier managed to produce an explicit example of a finitely presented monoid, with decidable word problem, but having no finite convergent presentation. This provides a negative answer to the question of the universality of finite convergent rewriting.
This chapter presents techniques for proving the termination of 3-polygraphs. A first method is based on a certain type of well-founded orders called reduction orders. Attention then turns to functorial interpretations: these amount to constructing a functor from the underlying category to another category which already bears a reduction order. This covers quite a few useful examples. To address more complex cases, a powerful technique, due to Guiraud, is presented, based on the construction of a derivation from the polygraph. Here, termination is obtained by specifying quantities on 2-cells which decrease during rewriting, based on information propagated by the 2-cells themselves.
The study of universal algebra, that is, the description of algebraic structures by means of symbolic expressions subject to equations, dates back to the end of the 19th century. It was motivated by the large number of fundamental mathematical structures fitting into this framework: groups, rings, lattices, and so on. From the 1970s on, the algorithmic aspect became prominent and led to the notion of term rewriting system. This chapter briefly revisits these ideas from a polygraphic viewpoint, introducing only what is strictly necessary for understanding. Term rewriting systems are introduced as presentations of Lawvere theories, which are particular cartesian categories. It is shown that a term rewriting system can also be described by a 3-polygraph in which variables are handled explicitly, i.e., by taking into account their duplication and erasure. Finally, a precise meaning is given to the statement that term rewriting systems are "cartesian polygraphs".
This appendix provides an explicit description of the free n-category generated by an n-polygraph. This section is mostly inspired by the work of Makkai. A formal definition of the syntax of n-categories is first provided, describing morphisms in an (n+1)-category freely generated by an n-polygraph and allowing reasoning by induction on its terms to prove results on free categories. It turns out that this syntax for n-categories, which corresponds to the one used throughout the book, is very "redundant", in the sense that there are many ways to express a composite of cells which give rise to the same result, and for this reason it is sometimes not very practical. An alternative syntax, which suffers less from these problems, is provided by restricting compositions. Finally, a brief mention of the word problem for free n-categories is made.
This chapter is dedicated to the definition of 2-polygraphs, a 2-dimensional generalization of 1-polygraphs. Before introducing this notion, a refined viewpoint on 1-polygraphs is given. Instead of merely focusing on the set presented by a 1-polygraph, as a set of equivalence classes of generators modulo the relations, the free category generated by the polygraph is now considered. The notion of 2-polygraph naturally appears as soon as arbitrary, not necessarily free, small categories are considered. To present such a category, one starts with a polygraph whose 1-generators generate the morphisms of the category, but one must now take into account the relations that the category induces among the morphisms of the free category generated by the resulting 1-polygraph. These relations will be generated by a set of 2-generators, consisting of certain pairs of morphisms intended to be equalized in the category. Following the same pattern, it will be explained that a 2-polygraph can also be seen as a system of generators for a free 2-category, thus preparing the study of 3-polygraphs. The variant where a (2,1)-category is freely generated is also examined.
This chapter introduces in full generality the central concept of this book, namely the notion of polygraph. Given an n-category, a cellular extension of it consists of attaching cells of dimension n+1 between certain pairs of parallel n-cells. This operation freely generates an (n+1)-category. Polygraphs are then obtained by starting with a set, considered as a 0-category, and inductively repeating the above process in all dimensions. The construction yields a fundamental triangle of adjunctions between omega-categories, polygraphs, and globular sets. A brief description of (n,p)-polygraphs, that is, the notion of polygraph adapted to (n,p)-categories, concludes the chapter.