One common approach to solving multi-objective reinforcement learning (MORL) problems is to extend conventional Q-learning by using vector Q-values in combination with a utility function. However, issues can arise with this approach in the context of stochastic environments, particularly when optimising for the scalarised expected reward (SER) criterion. This paper extends prior research, providing a detailed examination of the factors influencing the frequency with which value-based MORL Q-learning algorithms learn the SER-optimal policy for an environment with stochastic state transitions. We empirically examine several variations of the core multi-objective Q-learning algorithm as well as reward engineering approaches and demonstrate the limitations of these methods. In particular, we highlight the critical impact of the noisy Q-value estimates issue on the stability and convergence of these algorithms.
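The core idea of the vector-valued approach can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's algorithm: it assumes a toy single-state (bandit) task with two objectives and a hypothetical product utility, and applies the utility to the learned expected-reward vector (the SER-style scalarisation the abstract discusses).

```python
import random

def utility(v):
    # Hypothetical nonlinear utility for illustration: product of the
    # two objectives, so neither objective can be sacrificed entirely.
    return v[0] * v[1]

def morl_q_bandit(arms, episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Tabular multi-objective Q-learning on a single-state (bandit) task.

    `arms` maps each action to a list of (probability, (r0, r1)) outcomes,
    so every action has a stochastic two-objective reward. Q-values are
    2-vectors; under the SER criterion the utility is applied to the
    learned estimate of the *expected* reward vector.
    """
    rng = random.Random(seed)
    q = {a: [0.0, 0.0] for a in arms}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(list(arms))
        else:
            # Greedy action: scalarise each vector estimate via the utility.
            a = max(q, key=lambda arm: utility(q[arm]))
        # Sample one stochastic outcome for the chosen arm.
        p, acc = rng.random(), 0.0
        for prob, r in arms[a]:
            acc += prob
            if p <= acc:
                break
        # Per-objective update toward the sampled reward vector.
        for i in (0, 1):
            q[a][i] += alpha * (r[i] - q[a][i])
    return q
```

Because `utility` is applied to a noisy estimate of the expected vector, the greedy choice can flip between actions whose true utilities are close, which is one face of the noisy-Q-value-estimates issue the paper examines.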
Poor socket fit is the leading cause of prosthetic limb discomfort. However, currently clinicians have limited objective data to support and improve socket design. Finite element analysis predictions might help improve the fit, but this requires internal and external anatomy models. While external 3D surface scans are often collected in routine clinical computer-aided design practice, detailed internal anatomy imaging (e.g., MRI or CT) is not. We present a prototype statistical shape model (SSM) describing the transtibial amputated residual limb, generated using a sparse dataset of 33 MRI and CT scans. To describe the maximal shape variance, training scans are size-normalized to their estimated intact tibia length. A mean limb is calculated, and principal component analysis is used to extract the principal modes of shape variation. In an illustrative use case, the model is interrogated to predict internal bone shapes given a skin surface shape. The model attributes ~52% of shape variance to amputation height and ~17% to slender-bulbous soft tissue profile. In cross-validation, left-out shapes influenced the mean by 0.14–0.88 mm root mean square error (RMSE) surface deviation (median 0.42 mm), and left-out shapes were recreated with 1.82–5.75 mm RMSE (median 3.40 mm). Linear regression between mode scores from skin-only- and full-model SSMs allowed prediction of bone shapes from the skin with 3.56–10.9 mm RMSE (median 6.66 mm). The model showed the feasibility of predicting bone shapes from surface scans, which addresses a key barrier to implementing simulation within clinical practice, and enables more representative prosthetic biomechanics research.
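The mean-plus-modes construction behind such an SSM can be sketched with a few lines of linear algebra. The following is an illustrative sketch on synthetic data, not the paper's pipeline: it assumes landmark-corresponded, already size-normalised shape vectors and uses an SVD to obtain the PCA modes.

```python
import numpy as np

def build_ssm(shapes):
    """Build a linear point-distribution SSM from corresponded shapes.

    `shapes` is (n_subjects, n_points * 3): flattened landmark coordinates,
    assumed already size-normalised (per the abstract, hypothetically by
    estimated intact tibia length). Returns the mean shape, the principal
    modes (rows), and the per-mode variances.
    """
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # SVD of the centred data matrix yields the PCA modes directly.
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    variances = s**2 / (shapes.shape[0] - 1)
    return mean, vt, variances

def project(shape, mean, modes, k):
    """Mode scores of one shape in the first k modes."""
    return modes[:k] @ (shape - mean)

def reconstruct(scores, mean, modes):
    """Rebuild a shape from its mode scores."""
    return mean + scores @ modes[:len(scores)]
```

The abstract's bone-from-skin use case then amounts to regressing full-model mode scores on skin-only mode scores; a plain least-squares fit between the two score vectors would be the minimal version of that step.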
We present a novel approach to synthesizing recursive functional programs from input–output examples. Synthesizing a recursive function is challenging because recursive subexpressions should be constructed while the target function has not been fully defined yet. We address this challenge by using a new technique we call block-based pruning. A block refers to a recursion- and conditional-free expression (i.e., straight-line code) that yields an output from a particular input. We first synthesize as many blocks as possible for each input–output example, and then we explore the space of recursive programs, pruning candidates that are inconsistent with the blocks. Our method is based on efficient version space learning, thereby effectively dealing with a possibly enormous number of blocks. In addition, we present a method that uses sampled input–output behaviors of library functions to enable a goal-directed search for a recursive program using the library. We have implemented our approach in a system called Trio and evaluated it on synthesis tasks from prior work and on new tasks. Our experiments show that Trio significantly outperforms prior work.
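The block-synthesis step can be illustrated with a toy enumerator. The grammar below is a hypothetical stand-in for Trio's actual component set: it enumerates recursion- and conditional-free arithmetic expressions and keeps those consistent with a single input–output example.

```python
from itertools import product

def enumerate_blocks(example, max_depth=2):
    """Enumerate straight-line expressions ("blocks") consistent with one
    input-output example.

    Toy grammar (an illustrative stand-in for a real synthesizer's DSL):
        e ::= x | 0 | 1 | e + e | e * e
    Returns the set of expression strings whose value on the input equals
    the output. Expressions are grouped by value, a simple form of version
    space: equivalent blocks share one entry instead of being re-derived.
    """
    x, out = example
    exprs = {x: {"x"}, 0: {"0"}, 1: {"1"}}
    for _ in range(max_depth):  # rounds of bottom-up combination
        new = {v: set(es) for v, es in exprs.items()}
        for (v1, es1), (v2, es2) in product(exprs.items(), repeat=2):
            for op, val in (("+", v1 + v2), ("*", v1 * v2)):
                new.setdefault(val, set()).update(
                    f"({a} {op} {b})" for a in es1 for b in es2)
        exprs = new
    return exprs.get(out, set())
```

In the full approach, a candidate recursive program can then be pruned whenever the trace it induces on an example fails to match any synthesized block; this sketch shows only the block-enumeration half of that loop.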
In order to be effective mathematics educators, teachers need more than content knowledge: they need to be able to make mathematics comprehensible and accessible to their students. Teaching Key Concepts in the Australian Mathematics Curriculum Years 7 to 10 ensures that pre-service and practising teachers in Australia have the tools and resources required to teach lower secondary mathematics. By simplifying the underlying concepts of mathematics, this book equips teachers to design and deliver mathematics lessons at the lower secondary level. The text provides a variety of practical activities and teaching ideas that translate the latest version of the Australian Curriculum into classroom practice. Whether educators have recently studied more complicated mathematics or are teaching out of field, they are supported to recall ideas and concepts that they may have forgotten – or that may not have been made explicit in their own education.
We give a simple diagrammatic proof of the Frobenius property for generic fibrations that does not depend on any additional structure on the interval object such as connections.
Understanding the properties of lower-carbon concrete products is essential for their effective utilization. Insufficient empirical test data hinders practical adoption of these emerging products, and a lack of training data limits the effectiveness of current machine learning approaches for property prediction. This work employs a random forest machine learning model combined with a just-in-time approach, utilizing newly available data throughout the concrete lifecycle to enhance predictions of 28- and 56-day concrete strength. The machine learning hyperparameters and inputs are optimized through a novel unified metric that combines prediction accuracy and uncertainty estimates through the coefficient of determination and the distribution of uncertainty quality. This study concludes that optimizing solely for accuracy selects a different model than optimizing with the proposed unified accuracy and uncertainty metric. Experimental validation compares the 56-day strength of two previously unseen concrete mixes to the machine learning predictions. Even with the sparse dataset, predictions of 56-day strength for the two mixes were experimentally validated to within a 90% confidence interval when using slump as an input and further improved by using 28-day strength.
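The idea of a unified accuracy-and-uncertainty metric can be sketched without any particular model. The combination below is illustrative only (the paper's exact formula is not given in the abstract): it scores a predictor by the coefficient of determination, penalised by how far the empirical coverage of its uncertainty intervals falls from the nominal coverage.

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def calibration_score(y_true, y_pred, y_std, z=1.96):
    """Fraction of residuals inside the nominal 95% interval; a perfectly
    calibrated predictor scores about 0.95 here."""
    hits = sum(abs(t - p) <= z * s for t, p, s in zip(y_true, y_pred, y_std))
    return hits / len(y_true)

def unified_metric(y_true, y_pred, y_std, target=0.95):
    """Illustrative accuracy-plus-uncertainty score: R^2 weighted by how
    close empirical coverage is to nominal coverage. The point is that
    model selection ranks candidates on both criteria at once, so the
    winner can differ from the accuracy-only winner."""
    acc = r_squared(y_true, y_pred)
    cal = 1.0 - abs(calibration_score(y_true, y_pred, y_std) - target)
    return acc * cal
```

For a random forest, `y_std` would typically come from the spread of per-tree predictions, which is one common (assumed, not paper-confirmed) way to obtain uncertainty estimates from the ensemble.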
We develop an approximation for the buffer overflow probability of a stable tandem network in dimensions three or more. The overflow event in terms of the constrained random walk representing the network is the following: the sum of the components of the process hits n before hitting 0. This is one of the most commonly studied rare events in the context of queueing systems and the constrained processes representing them. The approximation is valid for almost all initial points of the process and its relative error decays exponentially in n. The analysis is based on an affine transformation of the process and the problem; as $n\rightarrow \infty$ the transformed process converges to an unstable constrained random walk. The approximation formula consists of the probability of the limit unstable process hitting a limit boundary in finite time. We give an explicit formula for this probability in terms of the utilization rates of the nodes of the network.
Keeping an up-to-date three-dimensional (3D) representation of buildings is a crucial yet time-consuming step for Building Information Modeling (BIM) and digital twins. To address this issue, we propose the ICON (Intelligent CONstruction) drone, an unmanned aerial vehicle (UAV) designed to navigate indoor environments autonomously and generate point clouds. The ICON drone is built on a 250 mm quadcopter frame with a Pixhawk flight controller, and is equipped with an onboard computer, a Red Green Blue-Depth (RGB-D) camera, and an inertial measurement unit (IMU). The UAV navigates autonomously using visual-inertial odometry and frontier-based exploration. The RGB images collected during the flight are used for 3D reconstruction and semantic segmentation. To improve reconstruction accuracy in weak-texture areas of indoor environments, we propose depth-regularized planar-based Gaussian splatting reconstruction, which uses monocular depth estimation as extra supervision for weak-texture areas. The final outputs are point clouds with building component and material labels. We tested the UAV in three scenes in an educational building: a classroom, a lobby, and a lounge. Results show that the ICON drone could: (1) explore all three scenes autonomously, (2) generate absolute-scale point clouds with F1-scores of 0.5806, 0.6638, and 0.8167 relative to point clouds collected with a high-fidelity terrestrial LiDAR scanner, and (3) label the point clouds with the corresponding building components and materials with mean intersection-over-union scores of 0.588 and 0.629. The reconstruction algorithm is further evaluated on ScanNet, and results show that our method outperforms previous methods by a large margin on 3D reconstruction quality.
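The F1-score used to compare reconstructed and LiDAR point clouds is the standard precision/recall-style reconstruction metric. The brute-force sketch below is illustrative (real evaluations use KD-trees and the paper's own distance threshold, which the abstract does not state):

```python
def point_cloud_f1(pred, truth, tau=0.05):
    """F1-score between two point clouds at distance threshold tau.

    Precision = fraction of predicted points within tau of some ground-truth
    point; recall = the same with the roles swapped; F1 is their harmonic
    mean. Brute-force O(len(pred) * len(truth)) for clarity only.
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    def coverage(src, dst):
        # Fraction of src points with a dst point within tau.
        hits = sum(1 for p in src
                   if any(dist2(p, q) <= tau * tau for q in dst))
        return hits / len(src)

    precision = coverage(pred, truth)
    recall = coverage(truth, pred)
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```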
Within the broad context of design research, joint attention within co-creation represents a critical component, linking cognitive actors through dynamic interactions. This study introduces a novel approach employing deep learning algorithms to objectively quantify joint attention, offering a significant advancement over traditional subjective methods. We developed an optimized deep learning algorithm, YOLO-TP, to identify participants’ engagement in design workshops accurately. Our research methodology involved video recording of design workshops and subsequent analysis using the YOLO-TP algorithm to track and measure joint attention instances. Key findings demonstrate that the algorithm effectively quantifies joint attention with high reliability and correlates well with known measures of intersubjectivity and co-creation effectiveness. This approach not only provides a more objective measure of joint attention but also allows for the real-time analysis of collaborative interactions. The implications of this study are profound, suggesting that the integration of automated human activity recognition in co-creation can significantly enhance the understanding and facilitation of collaborative design processes, potentially leading to more effective design outcomes.
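One way such detections could be turned into joint-attention counts is sketched below. This operationalization is an assumption, not the paper's method: it treats a joint-attention instance as a sustained run of frames in which two participants' detected attention regions overlap by more than an IoU threshold.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def joint_attention_episodes(tracks_a, tracks_b, thr=0.3, min_len=3):
    """Count joint-attention episodes from two per-frame detection tracks.

    An episode is a run of at least `min_len` consecutive frames in which
    both participants have a detection (None marks a lost frame) and the
    detected regions overlap with IoU above `thr`. Threshold and minimum
    duration are hypothetical parameters.
    """
    episodes, run = 0, 0
    for a, b in zip(tracks_a, tracks_b):
        run = run + 1 if (a and b and iou(a, b) > thr) else 0
        if run == min_len:  # count each qualifying run exactly once
            episodes += 1
    return episodes
```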
The Erdős–Sós Conjecture states that every graph with average degree exceeding $k-1$ contains every tree with $k$ edges as a subgraph. We prove that there are $\delta \gt 0$ and $k_0\in \mathbb N$ such that the conjecture holds for every tree $T$ with $k \ge k_0$ edges and every graph $G$ with $|V(G)| \le (1+\delta )|V(T)|$.
Advances in generative artificial intelligence (AI) have driven a growing effort to create digital duplicates. These semi-autonomous recreations of living and dead people can be used for many purposes. Some of these purposes include tutoring, coping with grief, and attending business meetings. However, the normative implications of digital duplicates remain obscure, particularly considering the possibility of them being applied to genocide memory and education. To address this gap, we examine normative possibilities and risks associated with the use of more advanced forms of generative AI-enhanced duplicates for transmitting Holocaust survivor testimonies. We first review the historical and contemporary uses of survivor testimonies. Then, we scrutinize the possible benefits of using digital duplicates in this context and apply the Minimally Viable Permissibility Principle (MVPP). The MVPP is an analytical framework for evaluating the risks of digital duplicates. It includes five core components: the need for authentic presence, consent, positive value, transparency, and harm-risk mitigation. Using MVPP, we identify potential harms digital duplicates might pose to different actors, including survivors, users, and developers. We also propose technical and socio-technical mitigation strategies to address these harms.
Climate change will impact wind and, therefore, wind power generation with largely unknown effects and magnitude. Climate models can provide insight and should be used for long-term power planning. In this work, we use Gaussian processes to predict power output given wind speeds from a global climate model. We validate the aggregated predictions from past climate model data with actual power generation, which supports using CMIP6 climate model data for multi-decadal wind power predictions and highlights the importance of being location-aware. We find that wind power projections for the two in-between climate scenarios, SSP2–4.5 and SSP3–7.0, closely align with actual wind power generation between 2015 and 2023. Our location-aware future predictions up to 2050 reveal only minor changes in yearly wind power generation. Our analysis also reveals larger uncertainty associated with Germany’s coastal areas in the North than Germany’s South, motivating wind power expansion in regions where the future wind is likely more reliable. Overall, our results indicate that wind energy will likely remain a reliable energy source.
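The wind-speed-to-power mapping can be sketched with exact Gaussian process regression. The model below is a minimal illustration, not the paper's: the kernel, hyperparameters, and power-curve data are assumed for the example.

```python
import numpy as np

def rbf(a, b, ell=2.0, sf=1.0):
    """Squared-exponential (RBF) kernel between 1-D input vectors."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Exact GP regression: posterior mean and standard deviation of power
    output given wind speed. `noise` is the observation-noise variance.
    """
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior standard deviation is what supports the location-aware uncertainty statements in the abstract: regions whose projected wind speeds fall in well-covered parts of the training data yield tighter power predictions.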
Limited research has explored the delivery of sustainable design in higher education globally. Therefore, the aim of this paper is to investigate educational practices on the topic. Through an online survey, we investigated numerous aspects of units of study exposing topics related to sustainable design with a focus on contents, teaching methods and educational objectives. The survey was accessed by almost 400 educators in the field of sustainable design. The data show that a variety of teaching methods are used, with a critical role played by project-based learning in addition to traditional lectures. Most respondents rated all investigated intended learning outcomes as relevant or very relevant. In terms of contents and methods treated by the respondents, product eco-design and design for X are the most frequently taught methods. Educational approaches and teaching objectives are poorly affected by the discipline of the degree in which units of study are taught. In terms of contents, design degrees include approaches to sustainable design at the spatio-social level more frequently than engineering degrees do.
The Erdős–Simonovits stability theorem is one of the most widely used theorems in extremal graph theory. We obtain an Erdős–Simonovits type stability theorem in multi-partite graphs. Different from the Erdős–Simonovits stability theorem, our stability theorem in multi-partite graphs says that if the number of edges of an $H$-free graph $G$ is close to the extremal graphs for $H$, then $G$ has a well-defined structure but may be far away from the extremal graphs for $H$. As applications, we strengthen a theorem of Bollobás, Erdős, and Straus and solve, in a stronger form, a conjecture posed by Han and Zhao concerning the maximum number of edges in multi-partite graphs which do not contain vertex-disjoint copies of a clique.
In this study, we introduce a real-time pose estimation method for a class of mobile robots with rectangular bodies (e.g., common automatic guided vehicles), by integrating odometry and RGB-D images. First, a lightweight object detection model is designed based on the visual information. Then, a pose estimation algorithm is proposed based on the depth value variations within the target region that exhibit specific patterns due to the robot’s three-dimensional geometry and the observation perspective (termed “differentiated depth information”). To improve the robustness of object detection and pose estimation, a Kalman filter is further constructed by incorporating odometry data. Finally, a series of simulations and experiments are conducted to demonstrate the method’s effectiveness. Experiments show that the proposed algorithm achieves a speed of over 20 frames per second (FPS) together with good estimation accuracy on a mobile robot equipped with an Nvidia Jetson Nano Developer Kit.
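The odometry-plus-vision fusion step can be illustrated with a scalar Kalman filter. This is a deliberately minimal stand-in for the paper's full pose filter: one state variable, odometry increments as the process input, and vision-based pose estimates as measurements.

```python
def kalman_1d(x0, p0, odometry, measurements, q=0.01, r=0.25):
    """Scalar Kalman filter fusing odometry with noisy visual pose fixes.

    `odometry` and `measurements` are equal-length sequences; a measurement
    of None means the detector lost the target that frame, in which case
    the filter dead-reckons on odometry alone. `q` and `r` are assumed
    process- and measurement-noise variances.
    """
    x, p = x0, p0
    track = []
    for u, z in zip(odometry, measurements):
        # Predict: dead-reckon with the odometry increment.
        x, p = x + u, p + q
        if z is not None:
            # Update: blend in the vision-based pose estimate.
            k = p / (p + r)                 # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
        track.append(x)
    return track
```

Handling dropped detections by skipping the update step is exactly what makes the fusion robust: the estimate degrades gracefully on odometry alone until the detector reacquires the robot.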
We consider the hypergraph Turán problem of determining $ex(n, S^d)$, the maximum number of facets in a $d$-dimensional simplicial complex on $n$ vertices that does not contain a simplicial $d$-sphere (a homeomorph of $S^d$) as a subcomplex. We show that if there is an affirmative answer to a question of Gromov about sphere enumeration in high dimensions, then $ex(n, S^d) \geq \Omega (n^{d + 1 - (d + 1)/(2^{d + 1} - 2)})$. Furthermore, this lower bound holds unconditionally for 2-LC (locally constructible) spheres, which includes all shellable spheres and therefore all polytopes. We also prove an upper bound on $ex(n, S^d)$ of $O(n^{d + 1 - 1/2^{d - 1}})$ using a simple induction argument. We conjecture that the upper bound can be improved to match the conditional lower bound.
Kinematically redundant parallel mechanisms (PMs) have attracted extensive attention from researchers due to their advantages in avoiding singular configurations and expanding the reachable workspace. However, kinematic redundancy introduces multiple inverse kinematics solutions, leading to uncertainty in the mechanism’s motion state. Therefore, this article proposes a method to optimize the inverse kinematics solutions based on motion/force transmission performance. By dividing the kinematically redundant PM into hierarchical levels and decomposing the redundancy, the transmission wrench screw systems of general redundant limbs and closed-loop redundant limbs are obtained. Then, input, output, and local transmission indices are calculated, respectively, to evaluate the motion/force transmission performance of such mechanisms. To address the problem of multiple inverse kinematics solutions, the local optimal transmission index is employed as a criterion to select the optimal motion/force transmission solution corresponding to a specific pose of the moving platform. By comparing performance atlases before and after optimization, it is demonstrated that the optimized inverse kinematics solutions enlarge the reachable workspace and significantly improve the motion/force transmission performance of the mechanism.
QuickSelect (also known as Find), introduced by Hoare ((1961) Commun. ACM 4, 321–322), is a randomized algorithm for selecting a specified order statistic from an input sequence of $n$ objects, or rather their identifying labels usually known as keys. The keys can be numeric or symbol strings, or indeed any labels drawn from a given linearly ordered set. We discuss various ways in which the cost of comparing two keys can be measured, and we measure the efficiency of the algorithm by the total cost of such comparisons.
We define and discuss a closely related algorithm known as QuickVal and a natural probabilistic model for the input to this algorithm; QuickVal searches (almost surely unsuccessfully) for a specified population quantile $\alpha \in [0, 1]$ in an input sample of size $n$. Call the total cost of comparisons for this algorithm $S_n$. We discuss a natural way to define the random variables $S_1, S_2, \ldots$ on a common probability space. For a general class of cost functions, Fill and Nakama ((2013) Adv. Appl. Probab. 45, 425–450) proved under mild assumptions that the scaled cost $S_n / n$ of QuickVal converges in $L^p$ and almost surely to a limit random variable $S$. For a general cost function, we consider what we term the QuickVal residual:
\begin{equation*} \rho _n \,{:\!=}\, \frac {S_n}n - S. \end{equation*}
The residual is of natural interest, especially in light of the previous analogous work on the sorting algorithm QuickSort (Bindjeme and Fill (2012) 23rd International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods for the Analysis of Algorithms (AofA'12), Discrete Mathematics and Theoretical Computer Science Proceedings, AQ, Association: Discrete Mathematics and Theoretical Computer Science, Nancy, pp. 339–348; Neininger (2015) Random Struct. Algorithms 46, 346–361; Fuchs (2015) Random Struct. Algorithms 46, 677–687; Grübel and Kabluchko (2016) Ann. Appl. Probab. 26, 3659–3698; Sulzbach (2017) Random Struct. Algorithms 50, 493–508). In the case $\alpha = 0$ of QuickMin with unit cost per key-comparison, we are able to calculate, à la Bindjeme and Fill (2012) for QuickSort, the exact (and asymptotic) $L^2$-norm of the residual. We take the result as motivation for the scaling factor $\sqrt {n}$ for the QuickVal residual for general population quantiles and for general cost. We then prove in general (under mild conditions on the cost function) that $\sqrt {n}\,\rho _n$ converges in law to a scale mixture of centered Gaussians, and we also prove convergence of moments.
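The $\alpha = 0$ case with unit cost per comparison can be simulated directly. The sketch below counts key comparisons made by QuickSelect when finding the minimum (QuickMin); note that the limit $S$ of $S_n/n$ is a random variable, so the scaled cost varies across independent runs even for large $n$, although its mean is close to 2 (the expected cost of QuickMin is asymptotically $2n$).

```python
import random

def quickmin_cost(keys, rng):
    """Number of key comparisons made by QuickSelect when finding the
    minimum of distinct keys (QuickMin, unit cost per comparison)."""
    comparisons = 0
    keys = list(keys)
    while len(keys) > 1:
        pivot = keys[rng.randrange(len(keys))]
        comparisons += len(keys) - 1        # pivot vs. every other key
        smaller = [k for k in keys if k < pivot]
        # The minimum lies among the keys below the pivot, if any remain;
        # otherwise the pivot itself is the minimum.
        keys = smaller if smaller else [pivot]
    return comparisons

# Empirical look at the scaled cost S_n / n for a few independent samples.
rng = random.Random(1)
n = 20000
costs = [quickmin_cost(rng.sample(range(10**6), n), rng) / n
         for _ in range(5)]
```

Running this, each entry of `costs` is a realization of $S_n/n$; the run-to-run spread (rather than convergence to a constant) is exactly the behaviour that makes the residual $\rho_n = S_n/n - S$ the natural object of study.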