Link traversal-based query processing (LTQP), in which a SPARQL query is evaluated over a web of documents rather than a single dataset, is often seen as a theoretically interesting yet impractical technique. However, at a time when the hypercentralization of data has increasingly come under scrutiny, a decentralized Web of Data with a simple document-based interface is appealing, as it enables data publishers to control their data and access rights. While LTQP allows evaluating complex queries over such webs, it suffers from performance issues (due to the high number of documents containing data) as well as information quality concerns (due to the many sources providing such documents). In existing LTQP approaches, the burden of finding sources to query lies entirely with the data consumer. In this paper, we argue that to solve these issues, data publishers should also be able to suggest sources of interest and guide the data consumer toward relevant and trustworthy data. We introduce a theoretical framework that enables such guided link traversal and study its properties. We illustrate with a theoretical example that this can improve query results and reduce the number of network requests. We evaluate our proposal experimentally on a virtual linked web with specifications and indeed observe that not only the data quality but also the efficiency of querying improves.
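As a rough illustration of the difference between unguided and guided traversal, the sketch below runs a breadth-first traversal over a mock in-memory web of documents; the document URLs, triples, and the suggested_sources field are hypothetical placeholders, and no real SPARQL or HTTP machinery is involved.

```python
from collections import deque

# Hypothetical web: each document exposes triples, outgoing links, and
# (optionally) publisher-suggested sources that guide the traversal.
WEB = {
    "https://alice.example/profile": {
        "triples": [("alice", "knows", "bob")],
        "links": ["https://bob.example/profile"],
        "suggested_sources": ["https://alice.example/friends"],
    },
    "https://alice.example/friends": {
        "triples": [("alice", "knows", "carol")],
        "links": [],
        "suggested_sources": [],
    },
    "https://bob.example/profile": {
        "triples": [("bob", "knows", "dave")],
        "links": [],
        "suggested_sources": [],
    },
}

def traverse(seeds, follow_all_links=False):
    """Collect triples from documents reachable from the seeds.

    When follow_all_links is False, only publisher-suggested sources are
    followed, modelling the guided variant of link traversal.
    """
    seen, triples = set(), []
    queue = deque(seeds)
    while queue:
        url = queue.popleft()
        if url in seen or url not in WEB:
            continue
        seen.add(url)
        doc = WEB[url]
        triples.extend(doc["triples"])
        next_links = list(doc["suggested_sources"])
        if follow_all_links:
            next_links += doc["links"]
        queue.extend(next_links)
    return triples, len(seen)

# The guided traversal dereferences fewer documents for the same query intent.
print(traverse(["https://alice.example/profile"]))
print(traverse(["https://alice.example/profile"], follow_all_links=True))
```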
This paper demonstrates that “completely jigless” assembly of a model product requiring industrial-level fitting accuracy is possible using a universal hand with four parallel stick fingers mounted on a conventional position-control-based industrial robot. Assuming that each part is taken out of the parts bin and temporarily placed on the work table, the accuracy required for precise fitting cannot be achieved with a vision sensor alone. By introducing an appropriate grasping strategy, the initial position error of the part is absorbed through self-alignment during grasping. Once the alignment is completed, the pose of the grasped part is fixed and jigless assembly becomes possible with a conventional industrial robot, which has high repeatability. In this paper, we use a gear unit as an example of an industrial product and present several grasping strategies with the universal hand. We also propose subsequent assembly strategies for shafts and gears. Using these grasping and assembly strategies, jigless assembly of the gear unit was completed successfully in the experiments. Although the target product in this paper is specific, its assembly elements, such as shaft screwing, bearing insertion, and gear meshing, are also found in many other products. Therefore, the methods shown in this paper can be applied to other products.
Selective compliance articulated robot arm (SCARA) manipulators find wide use in industry. A nonlinear optimal control approach is proposed for the dynamic model of the 4-degree-of-freedom (DOF) SCARA robotic manipulator. The dynamic model of the SCARA robot undergoes approximate linearization around a temporary operating point that is recomputed at each time-step of the control method. The linearization relies on Taylor series expansion and on the associated Jacobian matrices. For the linearized state-space model of the system, a stabilizing optimal (H-infinity) feedback controller is designed. To compute the controller’s feedback gains, an algebraic Riccati equation is solved at each iteration of the control algorithm. The stability properties of the control method are proven through Lyapunov analysis. The proposed control method is advantageous because: (i) unlike the popular computed torque method for robotic manipulators, it is characterized by optimality and is also applicable when the number of control inputs is not equal to the robot’s number of DOFs and (ii) it achieves fast and accurate tracking of reference setpoints under minimal energy consumption by the robot’s actuators. The nonlinear optimal controller for the 4-DOF SCARA robot is finally compared against a flatness-based controller implemented in successive loops.
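The per-step pattern of linearizing and solving an algebraic Riccati equation can be sketched as follows; the toy single-joint dynamics, weights, and LQR-style gain below are illustrative stand-ins (not the paper's 4-DOF SCARA model or its full H-infinity design).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def f(x, u):
    # x = [angle, angular velocity]; simple nonlinear joint dynamics (toy model)
    return np.array([x[1], -9.81 * np.sin(x[0]) + u[0]])

def jacobians(x):
    # Taylor-series linearization around the current operating point
    A = np.array([[0.0, 1.0], [-9.81 * np.cos(x[0]), 0.0]])
    B = np.array([[0.0], [1.0]])
    return A, B

Q = np.diag([10.0, 1.0])   # state tracking weight
R = np.array([[0.1]])      # control effort weight
x = np.array([0.5, 0.0])   # initial state
x_ref = np.array([0.0, 0.0])
dt = 0.01

for _ in range(500):
    A, B = jacobians(x)                     # re-linearize at the operating point
    P = solve_continuous_are(A, B, Q, R)    # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)         # optimal state-feedback gain
    u = -K @ (x - x_ref)                    # feedback control law
    x = x + dt * f(x, u)                    # integrate the nonlinear plant

print("final state:", x)  # should approach the reference setpoint
```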
We present recent results on the model companions of set theory, placing them in the context of a current debate in the philosophy of mathematics. We start by describing the dependence of the notion of model companionship on the signature, and then we analyze this dependence in the specific case of set theory. We argue that the most natural model companions of set theory describe (as the signature in which we axiomatize set theory varies) theories of $H_{\kappa ^+}$, as $\kappa $ ranges among the infinite cardinals. We also single out $2^{\aleph _0}=\aleph _2$ as the unique solution of the continuum problem which can (and does) belong to some model companion of set theory (enriched with large cardinal axioms). While doing so we bring to light that set theory enriched by large cardinal axioms in the range of supercompactness has as its model companion (with respect to its first order axiomatization in certain natural signatures) the theory of $H_{\aleph _2}$ as given by a strong form of Woodin’s axiom $(*)$ (which holds assuming $\mathsf {MM}^{++}$). Finally this model-theoretic approach to set-theoretic validities is explained and justified in terms of a form of maximality inspired by Hilbert’s axiom of completeness.
The notion of a cross-intersecting set pair system of size $m$, $(\{A_i\}_{i=1}^m, \{B_i\}_{i=1}^m)$ with $A_i\cap B_i=\emptyset$ and $A_i\cap B_j\ne \emptyset$ for $i\ne j$, was introduced by Bollobás, and it has become an important tool of extremal combinatorics. His classical result states that $m\le\binom{a+b}{a}$ if $|A_i|\le a$ and $|B_i|\le b$ for each $i$. Our central problem is to see how this bound changes with the additional condition $|A_i\cap B_j|=1$ for $i\ne j$. Such a system is called $1$-cross-intersecting. We show that these systems are related to perfect graphs, clique partitions of graphs, and finite geometries. We prove that their maximum size is
at least $5^{n/2}$ for $n$ even, $a=b=n$,
equal to $\bigl (\lfloor \frac{n}{2}\rfloor +1\bigr )\bigl (\lceil \frac{n}{2}\rceil +1\bigr )$ if $a=2$ and $b=n\ge 4$,
at most $|\cup _{i=1}^m A_i|$,
asymptotically $n^2$ if $\{A_i\}$ is a linear hypergraph ($|A_i\cap A_j|\le 1$ for $i\ne j$),
asymptotically ${1\over 2}n^2$ if $\{A_i\}$ and $\{B_i\}$ are both linear hypergraphs.
The dynamical movement primitives (DMPs) method is a useful tool for efficient robot skill learning from human demonstrations. However, the DMPs method requires the task constraints to be specified in advance. One flexible solution is to introduce superior human experience as part of the input. In this paper, we propose a framework for robot learning based on demonstration and supervision. Superior experience supplied through teleoperation is introduced to deal with unknown environmental constraints and to correct the demonstration for the next execution. A DMPs model with an integral barrier Lyapunov function is used to handle the constraints in robot learning. Additionally, a radial basis function neural network-based controller is developed for the teleoperation system and the robot to track the generated motions. We then prove convergence of the generated path and the controller. Finally, we deploy the novel framework with two touch robots to verify its effectiveness.
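For reference, a minimal one-dimensional DMP in its standard form (without the integral barrier Lyapunov function extension described above) can be sketched as follows; the demonstration trajectory and gains are hypothetical.

```python
import numpy as np

# Standard DMP: a demonstrated trajectory is encoded as a forcing term
# and then reproduced by integrating the transformation system.
alpha, beta, tau = 25.0, 25.0 / 4.0, 1.0
T, dt = 1.0, 0.002
t = np.arange(0.0, T, dt)

# Hypothetical demonstration: smooth motion from 0 to 0.3 m (minimum-jerk shape)
y_demo = 0.3 * (10 * (t / T) ** 3 - 15 * (t / T) ** 4 + 6 * (t / T) ** 5)
yd_demo = np.gradient(y_demo, dt)
ydd_demo = np.gradient(yd_demo, dt)
y0, g = y_demo[0], y_demo[-1]

# Forcing term implied by the demonstration (inverting the transformation system)
f_target = tau ** 2 * ydd_demo - alpha * (beta * (g - y_demo) - tau * yd_demo)

# Reproduce the motion; for simplicity the forcing term is looked up by time
# index rather than via the canonical phase variable and a function approximator.
y, v = y0, 0.0
reproduction = []
for k in range(len(t)):
    vdot = (alpha * (beta * (g - y) - v) + f_target[k]) / tau   # transformation system
    v += vdot * dt
    y += (v / tau) * dt
    reproduction.append(y)

print("goal:", g, "reached:", reproduction[-1])
```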
State-of-the-art machine-learning-based models are a popular choice for modeling and forecasting energy behavior in buildings because, given enough data, they are good at finding spatiotemporal patterns and structures even in scenarios where the complexity prohibits analytical descriptions. However, their architecture typically does not hold physical correspondence to mechanistic structures linked with governing physical phenomena. As a result, their ability to successfully generalize for unobserved timesteps depends on the representativeness of the dynamics underlying the observed system in the data, which is difficult to guarantee in real-world engineering problems such as control and energy management in digital twins. In response, we present a framework that combines lumped-parameter models in the form of linear time-invariant (LTI) state-space models (SSMs) with unsupervised reduced-order modeling in a subspace-based domain adaptation (SDA) approach, which is a type of transfer-learning (TL) technique. Traditionally, SDA is adopted for exploiting labeled data from one domain to predict in a different but related target domain for which labeled data is limited. We introduce a novel SDA approach where, instead of labeled data, we leverage the geometric structure of the LTI SSM, governed by well-known heat transfer ordinary differential equations, to forecast unobserved timesteps beyond the available measurement data by geometrically aligning the physics-derived and data-derived embedded subspaces. In this initial exploration, we evaluate the physics-based SDA framework on a demonstrative heat conduction scenario by varying the thermophysical properties of the source and target systems to demonstrate the transferability of mechanistic models from physics to observed measurement data.
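As a rough illustration of the subspace-alignment idea underlying SDA, the sketch below aligns a synthetic "source" feature set with a "target" one via their principal subspaces; the data, dimensions, and recipe (a standard PCA-based subspace alignment) are placeholders, not the paper's physics-derived embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: "source" features (e.g. simulated temperatures from an
# LTI SSM) and "target" features (e.g. measured data with a domain shift).
X_source = rng.normal(size=(200, 6))
X_target = X_source @ np.diag([1.2, 0.8, 1.0, 0.9, 1.1, 1.0]) + 0.05 * rng.normal(size=(200, 6))

def pca_basis(X, d):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                     # top-d principal directions

d = 3
P_s = pca_basis(X_source, d)            # source subspace
P_t = pca_basis(X_target, d)            # target subspace
M = P_s.T @ P_t                         # alignment matrix between the subspaces

Z_source = (X_source - X_source.mean(axis=0)) @ P_s @ M   # aligned source embedding
Z_target = (X_target - X_target.mean(axis=0)) @ P_t        # target embedding

# A model fit on Z_source can now be applied to Z_target.
print(Z_source.shape, Z_target.shape)
```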
A proliferation of data-generating devices, sensors, and applications has led to unprecedented amounts of digital data. We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. The potential of data is evident in the possibilities offered by open data and data collaboratives, both instances of how wider access to data can lead to positive and often dramatic social transformation. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and a psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods to limit asymmetries (such as open data or consent), as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems), are limited in their ability to address the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required. The study and practice of DSD remain in their infancy. The characteristics we have outlined here are only exploratory, and much work remains to be done to better understand what works and what does not. We suggest the need for a new research framework or agenda to explore DSD and how it can address the asymmetries, imbalances, and inequalities, both in data and in society more generally, that are emerging as key public policy challenges of our era.
Due to the ever-increasing demand for food commodities and the issues arising in their transport from rural to urban areas, commercial agricultural practices based on vertical farming are being taken up near urban regions. For the realization of agricultural practices on high-rise vertical farms, where human intervention is quite laborious, robotic assistance would be an effective solution to perform agricultural processes such as seeding, transplanting, harvesting, health monitoring, and nutrient-water supply. The requirements and complexities of these tasks differ in aspects such as the end-effector needed, the required payload capacity, and the amount of clutter encountered while performing the task. In such cases, a single robotic configuration cannot serve all purposes, and each task may require a different configuration. Purchasing a large number of configurations as required is uneconomical and also increases maintenance costs. Thus, this work proposes the design of a reconfigurable robot manipulator that can cater to modular layouts. A thorough study of the processes involved in the farming of leafy vegetables is carried out and the tasks to be performed by the manipulator are identified. Constrained optimization is performed based on reachability, while minimizing DoF, for the tasks of transplanting, plant health monitoring, and harvesting, to find the optimal configurations which can perform the given tasks. The study resulted in 5-DoF, 4-DoF, and 6-DoF configurations for transplanting, plant health monitoring, and harvesting, respectively, thus emphasizing the need for a reconfigurable solution. The configurations are realized using a modular library and verified to satisfy reachability, providing a complete solution.
Given a graph $G$ and an integer $\ell \ge 2$, we denote by $\alpha _{\ell }(G)$ the maximum size of a $K_{\ell }$-free subset of vertices in $V(G)$. A recent question of Nenadov and Pehova asks for determining the best possible minimum degree conditions forcing clique-factors in $n$-vertex graphs $G$ with $\alpha _{\ell }(G) = o(n)$, which can be seen as a Ramsey–Turán variant of the celebrated Hajnal–Szemerédi theorem. In this paper we find the asymptotically sharp minimum degree threshold for $K_r$-factors in $n$-vertex graphs $G$ with $\alpha _\ell (G)=n^{1-o(1)}$ for all $r\ge \ell \ge 2$.
To address coupling motion issues and realize a large constant force range in microgrippers, we present a serial two-degree-of-freedom compliant constant force microgripper (CCFMG) in this paper. To realize a large output displacement in a compact structure, Scott–Russell displacement amplification mechanisms, bridge-type displacement amplification mechanisms, and lever amplification mechanisms are combined to compensate for the limited stroke of the piezoelectric actuators. In addition, constant force modules are utilized to achieve a constant force output. We investigate the CCFMG’s performance by means of pseudo-rigid-body models and finite element analysis. Simulation results show that the proposed CCFMG has a stroke of 781.34 $\mu \mathrm{m}$ in the X-direction and a stroke of 258.05 $\mu \mathrm{m}$ in the Y-direction, and the decoupling rates in the two directions are 1.1% and 0.9%, respectively. The average constant output force of the clamp is 37.49 N. The amplification ratios of the bridge-type amplifier and the Scott–Russell amplifier are 7.02 and 3, respectively. Through finite element analysis-based optimization, the constant force stroke of the CCFMG is increased from the initial 1.6 mm to 3 mm.
A random two-cell embedding of a given graph $G$ is obtained by choosing a random local rotation around every vertex. We analyse the expected number of faces of such an embedding, which is equivalent to studying its average genus. In 1991, Stahl [5] proved that the expected number of faces in a random embedding of an arbitrary graph of order $n$ is at most $n\log (n)$. While there are many families of graphs whose expected number of faces is $\Theta (n)$, none are known where the expected number would be super-linear. This led the authors of [1] to conjecture that there is a linear upper bound. In this note we confirm their conjecture by proving that for any $n$-vertex multigraph, the expected number of faces in a random two-cell embedding is at most $2n\log (2\mu )$, where $\mu$ is the maximum edge-multiplicity. This bound is best possible up to a constant factor.
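The quantity studied here can be estimated empirically by sampling rotation systems and tracing faces; the following Monte Carlo sketch (with $K_4$ as a hypothetical example graph) illustrates the computation and is not part of the proof.

```python
import random
from collections import defaultdict

def expected_faces(edges, samples=2000, seed=0):
    """Estimate the expected number of faces of a random two-cell embedding."""
    rng = random.Random(seed)
    # Darts: (edge_index, end); dart (i, 0) leaves edges[i][0], (i, 1) leaves edges[i][1].
    darts_at = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        darts_at[u].append((i, 0))
        darts_at[v].append((i, 1))
    total = 0
    for _ in range(samples):
        # Random local rotation: a random cyclic order of the darts at each vertex.
        succ = {}
        for ds in darts_at.values():
            order = ds[:]
            rng.shuffle(order)
            for k, d in enumerate(order):
                succ[d] = order[(k + 1) % len(order)]
        # Face tracing: from a dart, jump to its reverse dart, then take the
        # next dart in the rotation at that endpoint; faces are the orbits.
        unused = set(succ)
        faces = 0
        while unused:
            start = next(iter(unused))
            d = start
            while True:
                unused.discard(d)
                i, end = d
                d = succ[(i, 1 - end)]
                if d == start:
                    break
            faces += 1
        total += faces
    return total / samples

# Example: all edges of the complete graph K4
K4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(expected_faces(K4))
```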
One of the drivers behind the push for open data as a form of corruption control is the belief that making government operations more transparent makes it possible to hold public officials accountable for how public resources are spent. These large datasets would then be open to the public for scrutiny and analysis, resulting in lower levels of corruption. Though data quality has been widely studied and many advancements have been made, it has not been extensively applied to open data, with some aspects of data quality receiving more attention than others. One key aspect, however, accuracy, seems to have been overlooked. This gap resulted in our inquiry: how is accurate open data produced, and how might breakdowns in this process introduce opportunities for corruption? We study a government agency situated within the Brazilian Federal Government in order to understand in what ways accuracy is compromised. Adopting a distributed cognition (DCog) theoretical framework, we found that the production of open data is not a neutral activity; instead, it is a distributed process performed by individuals and artifacts. This distributed cognitive process creates opportunities for data to be concealed and misrepresented. Two models mapping data production were generated, the combination of which provided insight into how cognitive processes are distributed, how data flow and are transformed, stored, and processed, and where opportunities for data inaccuracies and misrepresentations arise. The results obtained have the potential to aid policymakers in improving data accuracy.
Idea evaluation is used to identify and select ideas for development as future innovations. However, approaching idea evaluation as a decision gate can limit the role of the person evaluating ideas, create fixation bias, and underutilise the person’s creative potential. Although studies show that during evaluation experts are able to engage in design activities, it is still not clear how they design and develop ideas. The aim of this study was to understand how experts develop ideas during evaluation. Using the think-aloud technique, we identify different ways in which experts develop ideas. Specifically, we show how experts transform initial idea concepts using iterative steps of elaboration and transformation of different idea components. Then, relying on concept-knowledge theory (C-K theory), we identify six types of reasoning that the experts use during idea evaluation. This helps us to distinguish between three different roles that experts can move between during evaluation: gatekeeper, designer managing fixation, and designer managing defixation. These findings suggest that there is value in viewing idea evaluation as a design process because it allows us to identify and leverage the experts’ knowledge and creativity to a fuller extent.
Development of robust concrete mixes with a lower environmental impact is challenging due to natural variability in constituent materials and a multitude of possible combinations of mix proportions. Making reliable property predictions with machine learning can facilitate performance-based specification of concrete, reducing material inefficiencies and improving the sustainability of concrete construction. In this work, we develop a machine learning algorithm that can utilize intermediate target variables and their associated noise to predict the final target variable. We apply the methodology to specify a concrete mix that has high resistance to carbonation, and another concrete mix that has low environmental impact. Both mixes also fulfill targets on the strength, density, and cost. The specified mixes are experimentally validated against their predictions. Our generic methodology enables the exploitation of noise in machine learning, which has a broad range of applications in structural engineering and beyond.
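One simple way to picture the use of a noisy intermediate target is a two-stage regression in which the first stage's predictive uncertainty is propagated into the second; the sketch below uses synthetic mix features and placeholder property names, and is not the authors' exact algorithm.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
# Synthetic stand-ins: mix proportions, a noisy intermediate property
# (here called "density"), and a final property (here called "strength").
X = rng.uniform(size=(300, 4))
density = 2000 + 400 * X[:, 0] - 150 * X[:, 1] + rng.normal(0, 20, 300)
strength = 0.03 * density - 10 * X[:, 2] + rng.normal(0, 2, 300)

stage1 = BayesianRidge().fit(X, density)                     # predict intermediate target
stage2 = BayesianRidge().fit(np.column_stack([X, density]),  # final model also sees the
                             strength)                       # intermediate variable

def predict_final(x_new, n_samples=500):
    """Propagate the intermediate prediction's noise into the final estimate."""
    mu, sigma = stage1.predict(x_new, return_std=True)
    draws = rng.normal(mu, sigma, size=(n_samples, len(x_new)))
    preds = np.array([
        stage2.predict(np.column_stack([x_new, d])) for d in draws
    ])
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = predict_final(rng.uniform(size=(5, 4)))
print(mean, std)
```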
For a $k$-uniform hypergraph $\mathcal{H}$ on vertex set $\{1, \ldots, n\}$ we associate a particular signed incidence matrix $M(\mathcal{H})$ over the integers. For $\mathcal{H} \sim \mathcal{H}_k(n, p)$ an Erdős–Rényi random $k$-uniform hypergraph, ${\mathrm{coker}}(M(\mathcal{H}))$ is then a model for random abelian groups. Motivated by conjectures from the study of random simplicial complexes we show that for $p = \omega (1/n^{k - 1})$, ${\mathrm{coker}}(M(\mathcal{H}))$ is torsion-free.
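Concretely, the cokernel of an integer matrix can be read off its Smith normal form; the sketch below builds a small signed incidence matrix under one natural sign convention (the paper's precise convention is not restated in the abstract) and computes that form.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def signed_incidence(n, edges):
    """Signed incidence matrix: rows are vertices, columns are hyperedges,
    with alternating signs over the sorted vertices of each edge
    (an illustrative convention)."""
    M = [[0] * len(edges) for _ in range(n)]
    for j, e in enumerate(edges):
        for i, v in enumerate(sorted(e)):
            M[v][j] = (-1) ** i
    return Matrix(M)

edges = [(0, 1, 2), (1, 2, 3), (0, 2, 3), (0, 1, 3)]   # a small 3-uniform hypergraph
M = signed_incidence(4, edges)
D = smith_normal_form(M, domain=ZZ)

# Nonzero diagonal entries d_i give coker(M) ≅ Z^(rows − rank) ⊕ (⊕ Z/d_i);
# torsion appears exactly when some d_i > 1.
print(D)
```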