Computational simplification tools can make complex information sources easier to read for engineering designers. To guide and evaluate such approaches, it is necessary to understand how designers process information and how that information can be enhanced and measured. Here, we establish an approach for enhancing and measuring the comprehensibility of technical information for engineering designers. It is grounded in theories of document search and comprehension and provides theoretically supported principles for enhancing information, together with methods for measuring comprehension experimentally. It is tailored for engineering design in that it (i) does not summarize or remove potentially important information, (ii) is suitable for long, complex sources of information, (iii) can be applied in experiments that simulate real-life information-sharing scenarios, and (iv) enables the measurement of domain-specific comprehension. The feasibility of the approach was tested using patent documents as a test case, since they represent a valuable but underutilized source of technical information. A 2 (patent documents) × 2 (conditions: control vs. modified) experiment was conducted with 28 professional engineering designers. Two patent documents were modified with six information design principles. Comprehension scores were higher for the modified patent than for the control, but the change was not statistically significant (p = 0.073). We attribute this either to redundancy effects causing a smaller-than-expected overall improvement in performance, or to differences in prior knowledge for each patent. Overall, this approach offers a novel method for investigating and measuring information comprehensibility in engineering design; however, its effectiveness in enhancing information comprehensibility remains unvalidated.
One of the elegant achievements in the history of proof theory is the characterization of the provably total recursive functions of an arithmetical theory by its proof-theoretic ordinal, as a way to measure the time complexity of the functions. Unfortunately, the machinery is not sufficiently fine-grained to be applicable to weak theories, on the one hand, or to capture the bounded functions with bounded definitions of strong theories, on the other. In this paper, we develop such a machinery to address the bounded theorems of both strong and weak theories of arithmetic. In the first part, we provide a refined version of ordinal analysis to capture the feasibly definable and bounded functions that are provably total in $\textrm{PA}+\bigcup _{\beta \prec \alpha } \textrm{TI}({\prec_{\beta}})$, the extension of Peano arithmetic by transfinite induction up to the ordinals below $\alpha$. Roughly speaking, we identify these functions as the ones that are computable by a sequence of $\textrm{PV}$-provable polynomial-time modifications on an initial polynomial-time value, where the computational steps are indexed by the ordinals below $\alpha$, decreasing with the modifications. In the second part, choosing $l \leq k$, we use a similar technique to capture the functions with bounded definitions in the theory $T^k_2$ (resp. $S^k_2$) as the functions computable by an exponentially (resp. polynomially) long sequence of $\textrm{PV}_{k-l+1}$-provable reductions between $l$-turn games, starting with an explicit $\textrm{PV}_{k-l+1}$-provable winning strategy for the first game.
This paper studies a bi-dimensional compound risk model with quasi-asymptotically independent and consistently varying-tailed random numbers of claims and establishes an asymptotic formula for the finite-time sum-ruin probability. Additionally, some results related to tail probabilities of random sums are presented, which are of significant interest in their own right. Some numerical studies are carried out to check the accuracy of the asymptotic formula.
We propose a systematic design approach for the precast concrete industry to promote sustainable construction practices. By employing a holistic optimization procedure, we combine the concrete mixture design and structural simulations in a joint, forward workflow that we ultimately seek to invert. In this manner, new mixtures beyond standard ranges can be considered. Any design effort should account for the presence of uncertainties, which can be aleatoric or epistemic, as when data are used to calibrate physical models or to identify models that fill missing links in the workflow. Inverting the established causal relations poses several challenges, especially when they involve physics-based models which, more often than not, do not provide derivatives/sensitivities, or when design constraints are present. To this end, we advocate Variational Optimization, with proposed extensions and appropriately chosen heuristics to overcome the aforementioned challenges. The proposed approach of treating the design process as a workflow, learning the missing links from data/models, and finally performing global optimization over the workflow is transferable to several other materials, structural, and mechanical problems. In the present work, the efficacy of the method is illustrated using the design of a precast concrete beam, with the objective of minimizing the global warming potential while satisfying a number of constraints associated with its load-bearing capacity after 28 days according to the Eurocode, the demolding time as computed by a complex nonlinear finite element model, and the maximum temperature during hydration.
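To make the core idea concrete, the following is a minimal, self-contained sketch of Variational Optimization on a black-box workflow. The workflow and constraint here are toy analytic stand-ins (not the paper's finite element model), and all parameter values are illustrative assumptions: the design variables are modeled with a Gaussian whose mean is updated via a score-function gradient estimator, so no derivatives of the workflow are needed.

import numpy as np

rng = np.random.default_rng(0)

def workflow(x):
    # Toy stand-in for the forward workflow: in the paper's setting the
    # objective would be the global warming potential and the constraint
    # a load-bearing / demolding-time / temperature requirement; here we
    # use simple analytic surrogates so the sketch runs as-is.
    gwp = np.sum((x - 1.0) ** 2)               # objective to minimize
    violation = 2.0 - np.sum(x)                # feasible when <= 0
    return gwp, violation

def penalized(x, rho=10.0):
    f, g = workflow(x)
    return f + rho * max(g, 0.0) ** 2          # quadratic constraint penalty

# Variational Optimization: minimize E_{x ~ N(mu, sigma^2 I)}[penalized(x)]
# with the score-function (log-derivative) gradient estimator, which needs
# only function evaluations of the workflow, never its derivatives.
mu, sigma = np.zeros(2), 1.0
for _ in range(500):
    eps = rng.standard_normal((64, 2))         # 64 samples per iteration
    fs = np.array([penalized(mu + sigma * e) for e in eps])
    baseline = fs.mean()                       # simple variance reduction
    grad_mu = ((fs - baseline)[:, None] * eps).mean(axis=0) / sigma
    mu -= 0.05 * grad_mu                       # gradient descent on mu
    sigma = max(0.995 * sigma, 1e-2)           # slowly anneal exploration

print("design:", mu, "objective, violation:", workflow(mu))

Because only evaluations of workflow are used, the same loop applies unchanged when the objective and constraints come from non-differentiable simulators.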
Retrieval-augmented generation (RAG) adds a simple but powerful feature to chatbots: the ability to upload files just-in-time. Chatbots are trained on large quantities of public data; just-in-time uploads make it possible to reduce hallucinations by filling in gaps in the knowledge base that go beyond the public training data, such as private data and recent events. For example, in a customer service scenario with RAG, we can upload your private bill, and the bot can then discuss questions about your bill, as opposed to generic FAQ questions about bills in general. This tutorial shows how to upload files and generate responses to prompts; see https://github.com/kwchurch/RAG for multiple solutions based on tools from OpenAI, LangChain, HuggingFace transformers, and VecML.
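As a flavor of what such a solution looks like, here is a minimal retrieve-then-generate loop using the OpenAI Python client. This is one illustrative pattern, not code from the linked repository; the model names, the file name, and the naive paragraph-level chunking are assumptions.

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# "Upload" a file just-in-time: split it into chunks and index them.
chunks = open("my_bill.txt").read().split("\n\n")   # hypothetical file
index = embed(chunks)

def answer(question, k=3):
    # Retrieve the k chunks most similar to the question (cosine similarity),
    # then pass them to the chat model as context.
    q = embed([question])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is the total amount due on this bill?"))

The same pattern carries over to the other toolchains: a vector store (LangChain, VecML) replaces the in-memory index, and a local HuggingFace model can replace the hosted chat endpoint.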
Given a family of graphs $\mathcal{F}$ and an integer $r$, we say that a graph is $r$-Ramsey for $\mathcal{F}$ if any $r$-colouring of its edges admits a monochromatic copy of a graph from $\mathcal{F}$. The threshold for the classic Ramsey property, where $\mathcal{F}$ consists of one graph, in the binomial random graph was located in the celebrated work of Rödl and Ruciński.
In this paper, we offer a twofold generalisation to the Rödl–Ruciński theorem. First, we show that the list-colouring version of the property has the same threshold. Second, we extend this result to finite families $\mathcal{F}$, where the threshold statements might also diverge. This also confirms further special cases of the Kohayakawa–Kreuter conjecture. Along the way, we supply a short(-ish), self-contained proof of the $0$-statement of the Rödl–Ruciński theorem.
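For reference, in the single-graph case the Rödl–Ruciński theorem locates the threshold at the inverse maximum 2-density; this is standard background, not a restatement of the new results:
\[
\hat{p}(n) \;=\; n^{-1/m_2(F)},
\qquad
m_2(F) \;=\; \max_{\substack{H \subseteq F\\ v(H) \geq 3}} \frac{e(H) - 1}{v(H) - 2},
\]
where $v(H)$ and $e(H)$ denote the numbers of vertices and edges of a subgraph $H$. For finite families, as noted above, the $0$- and $1$-statement thresholds may diverge.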
This work investigates the use of a fuzzy logic controller (FLC) for trajectory tracking control of a two-wheeled differential drive mobile robot. Due to the inherent complexity of tuning the membership functions of an FLC, this work employs a particle swarm optimization algorithm to optimize the parameters of these functions. To automate the design of the rule base and reduce the number of rules, a genetic algorithm is also employed. The effectiveness of the proposed approach is validated through MATLAB simulations involving diverse path-tracking scenarios. The performance of the FLC is compared against established controllers, including minimum norm solution, closed-loop inverse kinematics, and Jacobian transpose-based controllers. The results demonstrate that the FLC offers accurate trajectory tracking with reduced root mean square error and controller effort. An experimental, hardware-based investigation is also performed for further verification of the proposed system. In addition, the simulation is conducted for various paths in the presence of noise in order to assess the proposed controller’s robustness; the outcomes show that the proposed method is resilient against noise and disturbances.
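The PSO step can be sketched in a few lines. The cost function below is a synthetic stand-in for the simulated tracking error of the robot under an FLC parameterized by membership-function centers and widths; the swarm size, bounds, and PSO gains are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def tracking_cost(params):
    # Stand-in for the simulated RMS tracking error of the robot under an
    # FLC whose triangular membership-function parameters are `params`;
    # a synthetic quadratic bowl keeps the sketch runnable as-is.
    target = np.array([0.2, 0.5, 0.8, 0.1, 0.3])
    return float(np.sum((params - target) ** 2))

# Canonical PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
dim, n = 5, 30
x = rng.uniform(0, 1, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pcost = np.array([tracking_cost(p) for p in x])
g = pbest[np.argmin(pcost)]
w, c1, c2 = 0.7, 1.5, 1.5

for _ in range(200):
    r1, r2 = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 1.0)          # keep membership params in range
    cost = np.array([tracking_cost(p) for p in x])
    improved = cost < pcost
    pbest[improved], pcost[improved] = x[improved], cost[improved]
    g = pbest[np.argmin(pcost)]

print("tuned membership parameters:", g)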
In this study, we present a hybrid kinematic modeling approach for serial robotic manipulators that offers improved accuracy compared to conventional methods. Our method integrates the geometric properties of the robot with ground-truth data, resulting in enhanced modeling precision. The proposed forward kinematic model combines classical kinematic modeling techniques with neural networks trained on accurate ground-truth data. This fusion enables us to minimize modeling errors effectively. To address the inverse kinematic problem, we utilize the forward hybrid model as feedback within a non-linear optimization process. Unlike previous works, our formulation incorporates the rotational component of the end effector, which is beneficial for applications involving orientation, such as inspection tasks. Furthermore, our inverse kinematic strategy can handle multiple possible solutions. We demonstrate the effectiveness of hybrid models as a high-accuracy kinematic modeling strategy, surpassing traditional physical models in terms of positioning accuracy.
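The structure of such a hybrid model can be sketched as follows. A 3-link planar arm stands in for the manipulator, the learned correction is a placeholder function rather than a trained network, and the inverse problem, including the orientation component, is solved by nonlinear least squares with the hybrid model in the loop; link lengths and targets are assumptions.

import numpy as np
from scipy.optimize import least_squares

def fk_geometric(q):
    # Classical kinematics of a 3-link planar arm (nominal geometric model).
    l = np.array([0.5, 0.4, 0.2])             # assumed link lengths
    a = np.cumsum(q)                          # absolute link angles
    return np.array([np.sum(l * np.cos(a)),   # x
                     np.sum(l * np.sin(a)),   # y
                     a[-1]])                  # end-effector orientation

def residual_net(q):
    # Stand-in for the neural network trained on ground-truth poses to
    # predict the error of the geometric model; a real implementation
    # would be a small regression network.
    return 1e-3 * np.array([np.sin(q[0]), np.cos(q[1]), 0.0])

def fk_hybrid(q):
    return fk_geometric(q) + residual_net(q)

def ik(target, q0):
    # Inverse kinematics as nonlinear least squares with the hybrid
    # forward model in the loop, including the orientation component.
    return least_squares(lambda q: fk_hybrid(q) - target, q0).x

target = np.array([0.6, 0.3, 0.9])            # desired x, y, orientation
for q0 in (np.array([0.1, 0.5, 0.2]), np.array([1.5, -1.0, 0.5])):
    q = ik(target, q0)                        # different initial guesses can
    print(q, fk_hybrid(q))                    # land on different IK branches

Seeding the optimizer with several initial guesses, as in the last loop, is one simple way to recover multiple inverse-kinematic solutions.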
This paper introduces a simplified matrix method for balancing forces and moments in planar parallel manipulators. The method relies on Newton’s second law and the concept of the angular momentum vector, yet it does not require velocity and acceleration analyses, tasks that were normally unavoidable in seminal contributions. With the introduction of natural matrices, the proposed balancing method is independent of time and of the trajectory generated by the moving links of parallel manipulators. The effectiveness of the method is exemplified by balancing two planar parallel manipulators.
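For orientation, the classical conditions that any such balancing method enforces can be stated as follows (standard background, not the paper’s natural-matrix formulation): force balancing requires the total center of mass of the moving links to remain stationary, and moment balancing requires the total angular momentum to remain constant,
\[
\sum_i m_i\, \mathbf{r}_i \;=\; \text{const},
\qquad
H \;=\; \sum_i \left( m_i\,(\mathbf{r}_i \times \dot{\mathbf{r}}_i)\cdot \hat{\mathbf{k}} + I_i\, \omega_i \right) \;=\; \text{const},
\]
where $m_i$, $I_i$, $\mathbf{r}_i$, and $\omega_i$ are the mass, centroidal moment of inertia, center-of-mass position, and angular velocity of moving link $i$.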
The authors have studied models and control methods for legged robots without active ankle joints that can not only walk efficiently but also stop, and have developed a method for generating a gait that starts from an upright stationary state and returns to the same state in one step for a simple walker with one control input. It was found, however, that achieving a perfect upright stationary state, including the zero dynamics, is impossible. Based on this observation, in this paper we propose a novel robotic walker with parallel linkage legs that can return to a perfect stationary standing posture in one step while simultaneously controlling the stance-leg motion and zero-moment point (ZMP) using two control inputs. First, we introduce a model of a planar walker that consists of two eight-legged rimless wheels, a body frame, a reaction wheel, and massless rods, and we describe the system dynamics. Second, we consider two target control conditions: one is control of the stance-leg motion, and the other is control of the ZMP to stabilize the zero dynamics. We then determine the control input based on the two conditions, with the target control period derived from the linearized model, and consider adding a sinusoidal control input with an offset to correct the resultant terminal state of the reaction wheel. The validity of the proposed method is investigated through numerical simulations.
We take another look at the construction by Hofmann and Streicher of a universe $(U,{\mathcal{E}l})$ for the interpretation of Martin-Löf type theory in a presheaf category $[{{{\mathbb{C}}}^{\textrm{op}}},\textsf{Set}]$. It turns out that $(U,{\mathcal{E}l})$ can be described as the nerve of the classifier $\dot{{\textsf{Set}}}^{\textsf{op}} \rightarrow{{\textsf{Set}}}^{\textsf{op}}$ for discrete fibrations in $\textsf{Cat}$, where the nerve functor is right adjoint to the so-called “Grothendieck construction” taking a presheaf $P :{{{\mathbb{C}}}^{\textrm{op}}}\rightarrow{\textsf{Set}}$ to its category of elements $\int _{\mathbb{C}} P$. We also consider change of base for such universes, as well as universes of structured families, such as fibrations.
The snake robot can be used to monitor and maintain underwater structures and environments. The motion of a snake robot is achieved by lateral undulation, which is called the gait pattern of the snake robot. The parameters of a gait pattern need to be adjusted to compensate for environmental uncertainties. In this work, the 3D motion dynamics of a snake robot in the underwater environment are developed, with vertical motion achieved using a buoyancy variation technique and horizontal motion using lateral undulation. The previous assumptions that the neutrally buoyant snake robot moves in a hypothetical plane and that the added-mass effect is negligible are removed in this work. Two different control algorithms are designed for the horizontal and vertical motions. The existing super-twisting sliding mode control (STSMC) is used for the horizontal serpentine motion of the snake robot. The control law is designed on a reduced-order dynamic system based on virtual holonomic constraints. The vertical motion is achieved by controlling the mass variation using a pump. The water pumps are controlled using an event-based controller or a proportional-derivative (PD) controller. The results of the proposed control technique are verified with various external environmental disturbances and uncertainties to check the robustness of the control approach for various path-following cases. Moreover, the results of the STSMC scheme are compared with an SMC scheme to check the effectiveness of STSMC. A practical implementation of the work is also performed using the Simscape Multibody environment, where the designed control algorithm is deployed on a virtual snake robot.
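The super-twisting law itself is compact enough to sketch. Below it is applied to a double integrator with a bounded matched disturbance, a toy stand-in for the reduced-order snake dynamics rather than the paper’s model; the gains and the sliding-variable coefficient are illustrative assumptions.

import numpy as np

lam, k1, k2 = 2.0, 2.0, 1.5               # sliding and super-twisting gains
dt, T = 1e-3, 10.0
x, xdot, w = 1.0, 0.0, 0.0                # tracking error, its rate, integral term

for k in range(int(T / dt)):
    t = k * dt
    s = xdot + lam * x                    # sliding variable
    # Super-twisting law: continuous term + integral of the sign, so the
    # discontinuity is hidden inside the integrator (chattering reduction).
    u = -lam * xdot - k1 * np.sqrt(abs(s)) * np.sign(s) + w
    w += -k2 * np.sign(s) * dt
    d = 0.3 * np.sin(t)                   # unknown bounded disturbance
    xddot = u + d                         # double-integrator plant
    x += xdot * dt
    xdot += xddot * dt

print("final error and rate:", x, xdot)   # both driven close to zero

Once $s = 0$ is reached in finite time, the error obeys $\dot{x} = -\lambda x$ and decays exponentially despite the disturbance.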
This systematic review maps the trends of computer-assisted pronunciation training (CAPT) research based on the pedagogy of second language (L2) pronunciation instruction and assessment. The review was limited to empirical studies investigating the effects of CAPT on healthy L2 learners’ pronunciation. Thirty peer-reviewed journal articles published between 1999 and 2022 were selected based on specific inclusion and exclusion criteria. Data were collected about the studies’ contexts, participants, experimental designs, CAPT systems, pronunciation training scopes and approaches, pronunciation assessment practices, and learning measures. Using a pedagogically informed codebook, the pronunciation training and assessment practices were classified and evaluated based on established L2 pronunciation teaching guidelines. The findings indicated that most of the studies focused on the pronunciation training of adult English learners with an emphasis on the production of segmental features (i.e. vowels and consonants) rather than suprasegmental features (i.e. stress, intonation, and rhythm). Despite the innovation promised by CAPT technology, pronunciation practice in the studies reviewed was characterized by the predominant use of drilling through listen-and-repeat and read-aloud activities. As for assessment, most CAPT studies relied on human listeners to measure the accurate production of discrete pronunciation features (i.e. segmental and suprasegmental accuracy). Meanwhile, few studies employed global pronunciation learning measures such as intelligibility and comprehensibility. Recommendations for future research are provided based on the discussion of these results.
Social media challenge several established concepts of memory research. In particular, the day-to-day mundane discourse of social media blurs the essential distinction between commemorative and non-commemorative memory. We address these challenges by presenting a methodological framework that explores the dynamics of social memory on various social media. Our method combines top-down data mining with a bottom-up analysis tailored to each platform. We demonstrate the application of our approach by studying how the Holocaust is remembered in different corpora, including a dataset of 5.3 million Facebook posts and comments collected between 2015 and 2017 and a dataset of 5 million tweets and retweets collected in 2021. We first identify the mnemonic agents initiating the discussion of the memory of the Holocaust and those responding to it. Second, we compare the macro-rhythms of Holocaust discourse on the two platforms, identifying peaks and mundane discussions that extend beyond commemorative occasions. Third, we identify distinctive language and cultural norms specific to the memorialization of the Holocaust on each platform. We conceptualize these dynamics as ‘Mnemonic Markers’ and discuss them as potential pathways for memory researchers who wish to explore the unique memory dynamics afforded by social media.
This article investigates memory practices in connection with retrospective Facebook groups created for remembering specific aspects of the past. It focuses on how members of these groups experience and deal with how Facebook's interface and algorithms enable, shape, and interfere with memory practices. From this point of departure, the article discusses and nuances the idea that a ‘connective turn’ has brought with it an ontological shift in memory culture (Hoskins 2017a) and a ‘greying’ of memories (Hoskins and Halstead 2021). Theoretically, the article draws on Deborah Lupton's (2020) concept of ‘data selves’, which offers an account of how people interact with data and technology. This concept views data practices not as immaterial but as material, corporeal, and affective, thus prompting an understanding of memory practices as hybrid processes where offline and online practices intersect (Gajjala 2019; Merrill forthcoming/2024). In this qualitative study, nine members of retrospective Facebook groups were chosen to participate in semi-structured interviews. The analysis explains the importance of viewing contemporary memory practices as hybrid, showing a greying effect within the affordances of Facebook that shapes both which memories are shared and how they are shared. In addition, the analysis nuances the idea of an ontological shift in memory culture and the greying of memories by investigating how the interviewees deal and struggle with the affordances of the platform in their memory practices.
The authors’ primary goal in this paper is to enhance the study of $T_0$ topological spaces by using the order of specialization of a $T_0$-space to introduce the lower topology (with a subbasis of closed sets $\mathord{\uparrow } x$) and studying the interaction of the original topology and the lower topology. Using the lower topology, one can define and study new properties of the original space that provide deeper insight into its structure. One focus of study is the property R, which asserts that if the intersection of a family of finitely generated sets $\mathord{\uparrow } F$, $F$ finite, is contained in an open set $U$, then the same is true for finitely many of the family. We first show that property R is equivalent to several other interesting properties, for example, the property that all closed subsets of the original space are compact in the lower topology. We then find conditions under which these spaces are compact, well-filtered, and coherent, a weaker variant of stably compact spaces. We also investigate what have been called strong $d$-spaces, develop some of their basic properties, and make connections with the earlier considerations involving spaces satisfying property R. Two key results we obtain are that if a dcpo $P$ with the Scott topology is a strong $d$-space, then it is well-filtered, and if additionally the Scott topology of the product $P\times P$ is the product of the Scott topologies of the factors, then the Scott space of $P$ is sober. We also exhibit connections of this work with de Groot duality.
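Stated symbolically, property R is the following finite-refinement condition (a direct transcription of the prose definition above): for every family $(F_i)_{i \in I}$ of finite sets and every open set $U$,
\[
\bigcap_{i \in I} \mathord{\uparrow} F_i \subseteq U
\quad\Longrightarrow\quad
\bigcap_{i \in J} \mathord{\uparrow} F_i \subseteq U \ \text{ for some finite } J \subseteq I .
\]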
Autonomous underwater vehicles (AUVs) have played a pivotal role in advancing ocean exploration and exploitation. However, traditional AUVs face limitations when executing missions at minimal or near-zero forward velocities due to the ineffectiveness of their control surfaces, considerably constraining their potential applications. To address this challenge, this paper introduces an innovative vectored thruster system based on a 3RRUR parallel manipulator tailored for micro-sized AUVs. The incorporation of a vectored thruster enhances the performance of micro-sized AUVs operating at minimal and low forward speeds. The kinematics of the thrust-vectoring mechanism is explored comprehensively through theoretical analysis and experimental validation, and the findings affirm the feasibility of the devised mechanism. Precise control of the vectoring device is studied using a physics-informed neural network combined with model predictive control (PINN-MPC). Through the adoption of this thrust-vectoring mechanism rooted in the 3RRUR parallel manipulator, AUVs can efficiently and effectively generate the requisite motion for thrust-vectoring propulsion, overcoming the limitations of traditional AUVs and expanding their potential applications across various domains.
We present a new explicit formula for the determinant that contains superexponentially fewer terms than the usual Leibniz formula. As an immediate corollary of our formula, we show that the tensor rank of the $n \times n$ determinant tensor is no larger than the $n$-th Bell number, which is much smaller than the previously best-known upper bounds when $n \geq 4$. Over fields of non-zero characteristic we obtain even tighter upper bounds, and we also slightly improve the known lower bounds. In particular, we show that the $4 \times 4$ determinant over ${\mathbb{F}}_2$ has tensor rank exactly equal to $12$. Our results also improve upon the best-known upper bound for the Waring rank of the determinant when $n \geq 17$, and lead to a new family of axis-aligned polytopes that tile ${\mathbb{R}}^n$.
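For scale (standard values, not results of the paper): the Leibniz formula
\[
\det A \;=\; \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}
\]
has $n!$ terms, whereas the Bell numbers grow far more slowly, e.g. $4! = 24$ versus $B_4 = 15$, and $10! = 3{,}628{,}800$ versus $B_{10} = 115{,}975$.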
This work proposes a novel grasp detection method, the Efficient Grasp Aware Network (EGA-Net), for robotic visual grasp detection. Our method obtains semantic information for grasping through feature extraction. It efficiently obtains feature channel weights related to grasping tasks through the constructed ECA-ResNet module, which smooths the network’s learning. Meanwhile, we use concatenation to obtain low-level features with rich spatial information. Our method takes an RGB-D image as input and outputs grasp poses and their quality scores. The EGA-Net is trained and tested on the Cornell and Jacquard datasets, achieving 98.9% and 95.8% accuracy, respectively. The proposed method takes only 24 ms to process an RGB-D image, enabling real-time performance. Moreover, our method achieves better results than competing approaches in comparative experiments. In real-world grasp experiments, we use a 6-degree-of-freedom (DOF) UR-5 robotic arm to demonstrate robust grasping of unseen objects in various scenes. We also demonstrate that our model can successfully grasp different types of objects without any advance processing. The experimental results validate our model’s exceptional robustness and generalization.
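The channel-weighting idea can be sketched with the standard Efficient Channel Attention block (Wang et al., 2020), which a module named ECA-ResNet presumably builds on; how the paper wraps it around residual layers is not specified here, so the details below are assumptions.

import torch
import torch.nn as nn

class ECA(nn.Module):
    # Standard ECA block: global average pooling, a 1-D convolution across
    # the channel dimension to model local channel interactions, and a
    # sigmoid gate that reweights the feature channels.
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                         # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                    # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # cross-channel interaction
        w = torch.sigmoid(y)[:, :, None, None]    # per-channel weights
        return x * w                              # reweighted features

feats = torch.randn(2, 64, 32, 32)
print(ECA()(feats).shape)                         # torch.Size([2, 64, 32, 32])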
Visual odometry (VO) is a key technology for estimating camera motion from captured images. In this paper, we propose a novel RGB-D visual odometry method that constructs and matches features at the superpixel level, offering better adaptability to different environments than state-of-the-art solutions. Superpixels are content-sensitive and perform well in information aggregation; they can thus characterize the complexity of the environment. First, we design the superpixel-based feature SegPatch and its corresponding 3D representation, MapPatch. By using neighboring information, SegPatch robustly represents its distinctiveness in various environments with different texture densities. Due to the inclusion of depth measurements, the MapPatch constructs the scene structurally. Then, the distance between SegPatches is defined to characterize regional similarity, and a graph search method in scale space is used for searching and matching. As a result, the accuracy and efficiency of the matching process are improved. Additionally, we minimize the reprojection error between the matched SegPatches and estimate camera poses through all these correspondences. Our proposed VO is evaluated on the TUM dataset both quantitatively and qualitatively, showing a good ability to adapt to the environment under different realistic conditions.
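The final pose-estimation step is the standard reprojection-error minimization (generic background; the paper’s SegPatch-specific weighting is not reproduced here):
\[
(R^\ast, t^\ast) \;=\; \arg\min_{R,\,t} \sum_{i} \big\| \, u_i - \pi( R\, P_i + t ) \, \big\|^2 ,
\]
where $P_i$ is the 3D location of a matched MapPatch, $u_i$ its observed image location, and $\pi$ the camera projection function.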