Understanding the properties of lower-carbon concrete products is essential for their effective utilization. Insufficient empirical test data hinders practical adoption of these emerging products, and a lack of training data limits the effectiveness of current machine learning approaches for property prediction. This work employs a random forest machine learning model combined with a just-in-time approach, using newly available data throughout the concrete lifecycle to enhance predictions of 28- and 56-day concrete strength. The machine learning hyperparameters and inputs are optimized through a novel unified metric that combines prediction accuracy and uncertainty estimates via the coefficient of determination and the distribution of uncertainty quality. This study concludes that optimizing solely for accuracy selects a different model than optimizing with the proposed unified accuracy-and-uncertainty metric. Experimental validation compares the 56-day strength of two previously unseen concrete mixes to the machine learning predictions. Even with the sparse dataset, predictions of 56-day strength for the two mixes were experimentally validated to within the 90% confidence interval when using slump as an input, and were further improved by using 28-day strength.
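The abstract does not give the exact form of the unified metric, but the idea of blending the coefficient of determination with an uncertainty-quality term can be sketched as follows. The equal weighting, the 95% coverage target, and the calibration score are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def unified_metric(y_true, y_pred, y_std, w=0.5):
    """Hypothetical unified score: blends R^2 (accuracy) with an
    interval-calibration term (fraction of residuals inside the
    nominal 95% interval). Weights and targets are assumptions."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # empirical coverage of the +/-1.96*sigma prediction interval
    coverage = np.mean(np.abs(y_true - y_pred) <= 1.96 * y_std)
    uq = 1.0 - abs(coverage - 0.95)  # penalize miscalibrated intervals
    return w * r2 + (1.0 - w) * uq
```

A model selected by this score must both fit well and report honest uncertainty, which is why it can prefer a different random forest configuration than R² alone.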
We develop an approximation for the buffer overflow probability of a stable tandem network in dimensions three or more. The overflow event in terms of the constrained random walk representing the network is the following: the sum of the components of the process hits n before hitting 0. This is one of the most commonly studied rare events in the context of queueing systems and the constrained processes representing them. The approximation is valid for almost all initial points of the process and its relative error decays exponentially in n. The analysis is based on an affine transformation of the process and the problem; as $n\rightarrow \infty$ the transformed process converges to an unstable constrained random walk. The approximation formula consists of the probability of the limit unstable process hitting a limit boundary in finite time. We give an explicit formula for this probability in terms of the utilization rates of the nodes of the network.
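The overflow event itself is easy to state in simulation terms. As a minimal illustration only (the paper's approximation targets three or more dimensions and avoids simulation entirely), a naive Monte Carlo estimate for a two-node tandem queue, with assumed arrival and service rates, might look like:

```python
import random

def overflow_prob(lam=0.2, mu1=0.4, mu2=0.4, n=6, trials=4000, x0=(1, 0)):
    """Naive Monte Carlo estimate of P(total queue length hits n before 0)
    for a stable two-node tandem queue (uniformized: lam + mu1 + mu2 = 1).
    Rates, n, and the start state are illustrative assumptions."""
    hits = 0
    for _ in range(trials):
        q1, q2 = x0
        while 0 < q1 + q2 < n:
            u = random.random()
            if u < lam:                      # arrival to node 1
                q1 += 1
            elif u < lam + mu1 and q1 > 0:   # service at node 1 -> node 2
                q1 -= 1
                q2 += 1
            elif u >= lam + mu1 and q2 > 0:  # service at node 2 -> departure
                q2 -= 1
            # otherwise: dummy transition (server idle)
        hits += (q1 + q2 >= n)
    return hits / trials
```

Because the event is rare, the relative error of such naive estimates blows up as n grows, which is precisely what motivates analytical approximations with exponentially decaying relative error.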
Keeping an up-to-date three-dimensional (3D) representation of buildings is a crucial yet time-consuming step for Building Information Modeling (BIM) and digital twins. To address this issue, we propose the ICON (Intelligent CONstruction) drone, an unmanned aerial vehicle (UAV) designed to navigate indoor environments autonomously and generate point clouds. The ICON drone is built on a 250 mm quadcopter frame with a Pixhawk flight controller and is equipped with an onboard computer, a Red-Green-Blue-Depth (RGB-D) camera, and an Inertial Measurement Unit (IMU). The UAV navigates autonomously using visual-inertial odometry and frontier-based exploration. The RGB images collected during the flight are used for 3D reconstruction and semantic segmentation. To improve reconstruction accuracy in weak-texture areas of indoor environments, we propose depth-regularized planar-based Gaussian splatting reconstruction, in which monocular depth estimation serves as extra supervision for weak-texture areas. The final outputs are point clouds with building-component and material labels. We tested the UAV in three scenes in an educational building: a classroom, a lobby, and a lounge. Results show that the ICON drone could: (1) explore all three scenes autonomously, (2) generate absolute-scale point clouds with F1-scores of 0.5806, 0.6638, and 0.8167 compared to point clouds collected using a high-fidelity terrestrial LiDAR scanner, and (3) label the point clouds with the corresponding building components and materials with mean intersection-over-union scores of 0.588 and 0.629. The reconstruction algorithm is further evaluated on ScanNet, and results show that our method outperforms previous methods by a large margin on 3D reconstruction quality.
Within the broad context of design research, joint attention within co-creation represents a critical component, linking cognitive actors through dynamic interactions. This study introduces a novel approach employing deep learning algorithms to objectively quantify joint attention, offering a significant advancement over traditional subjective methods. We developed an optimized deep learning algorithm, YOLO-TP, to identify participants’ engagement in design workshops accurately. Our research methodology involved video recording of design workshops and subsequent analysis using the YOLO-TP algorithm to track and measure joint attention instances. Key findings demonstrate that the algorithm effectively quantifies joint attention with high reliability and correlates well with known measures of intersubjectivity and co-creation effectiveness. This approach not only provides a more objective measure of joint attention but also allows for the real-time analysis of collaborative interactions. The implications of this study are profound, suggesting that the integration of automated human activity recognition in co-creation can significantly enhance the understanding and facilitation of collaborative design processes, potentially leading to more effective design outcomes.
The Erdős-Sós Conjecture states that every graph with average degree exceeding $k-1$ contains every tree with $k$ edges as a subgraph. We prove that there are $\delta \gt 0$ and $k_0\in \mathbb N$ such that the conjecture holds for every tree $T$ with $k \ge k_0$ edges and every graph $G$ with $|V(G)| \le (1+\delta )|V(T)|$.
Advances in generative artificial intelligence (AI) have driven a growing effort to create digital duplicates. These semi-autonomous recreations of living and dead people can be used for many purposes, including tutoring, coping with grief, and attending business meetings. However, the normative implications of digital duplicates remain obscure, particularly given their possible application to genocide memory and education. To address this gap, we examine normative possibilities and risks associated with the use of more advanced forms of generative AI-enhanced duplicates for transmitting Holocaust survivor testimonies. We first review the historical and contemporary uses of survivor testimonies. Then, we scrutinize the possible benefits of using digital duplicates in this context and apply the Minimally Viable Permissibility Principle (MVPP), an analytical framework for evaluating the risks of digital duplicates. It includes five core components: the need for authentic presence, consent, positive value, transparency, and harm-risk mitigation. Using the MVPP, we identify potential harms digital duplicates might pose to different actors, including survivors, users, and developers. We also propose technical and socio-technical mitigation strategies to address these harms.
Climate change will impact wind and, therefore, wind power generation, with largely unknown effects and magnitude. Climate models can provide insight and should be used for long-term power planning. In this work, we use Gaussian processes to predict power output given wind speeds from a global climate model. We validate the aggregated predictions from past climate model data against actual power generation, which supports using CMIP6 climate model data for multi-decadal wind power predictions and highlights the importance of being location-aware. We find that wind power projections for the two intermediate climate scenarios, SSP2–4.5 and SSP3–7.0, closely align with actual wind power generation between 2015 and 2023. Our location-aware future predictions up to 2050 reveal only minor changes in yearly wind power generation. Our analysis also reveals larger uncertainty associated with Germany’s coastal areas in the North than with Germany’s South, motivating wind power expansion in regions where the future wind is likely more reliable. Overall, our results indicate that wind energy will likely remain a reliable energy source.
Limited research has explored the delivery of sustainable design in higher education globally. Therefore, the aim of this paper is to investigate educational practices on the topic. Through an online survey, we investigated numerous aspects of units of study addressing topics related to sustainable design, with a focus on content, teaching methods, and educational objectives. The survey was accessed by almost 400 educators in the field of sustainable design. The data show that a variety of teaching methods are used, with a critical role played by project-based learning in addition to traditional lectures. Most respondents rated all investigated intended learning outcomes as relevant or very relevant. Among the contents and methods covered by the respondents, product eco-design and design for X are the most frequently taught. Educational approaches and teaching objectives are largely unaffected by the discipline of the degree in which units of study are taught. In terms of content, design degrees include approaches to sustainable design at the spatio-social level more frequently than engineering degrees do.
The Erdős–Simonovits stability theorem is one of the most widely used theorems in extremal graph theory. We obtain an Erdős–Simonovits type stability theorem in multi-partite graphs. Different from the Erdős–Simonovits stability theorem, our stability theorem in multi-partite graphs says that if the number of edges of an $H$-free graph $G$ is close to the extremal graphs for $H$, then $G$ has a well-defined structure but may be far away from the extremal graphs for $H$. As applications, we strengthen a theorem of Bollobás, Erdős, and Straus and solve a conjecture in a stronger form posed by Han and Zhao concerning the maximum number of edges in multi-partite graphs which does not contain vertex-disjoint copies of a clique.
In this study, we introduce a real-time pose estimation method for a class of mobile robots with rectangular bodies (e.g., common automated guided vehicles) by integrating odometry and RGB-D images. First, a lightweight object detection model is designed based on the visual information. Then, a pose estimation algorithm is proposed based on the depth-value variations within the target region, which exhibit specific patterns due to the robot’s three-dimensional geometry and the observation perspective (termed “differentiated depth information”). To improve the robustness of object detection and pose estimation, a Kalman filter is further constructed by incorporating odometry data. Finally, a series of simulations and experiments is conducted to demonstrate the method’s effectiveness. Experiments show that the proposed algorithm achieves a speed of over 20 frames per second (FPS) together with good estimation accuracy on a mobile robot equipped with an Nvidia Jetson Nano Developer Kit.
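A single predict/update cycle of the kind of filter described, with the odometry increment driving the prediction and the vision-based pose serving as the measurement, can be sketched as follows. Identity motion and measurement models are a simplifying assumption; the paper's actual filter design is not specified at this level of detail:

```python
import numpy as np

def kf_step(x, P, u, z, Q, R):
    """One Kalman predict/update cycle (identity models assumed):
    odometry increment u predicts the pose, a vision-based pose
    measurement z corrects it."""
    # predict with odometry
    x_pred = x + u
    P_pred = P + Q
    # update with the camera-derived pose measurement
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(len(x)) - K) @ P_pred
    return x_new, P_new
```

Fusing the two sources this way is what gives robustness when the visual detection momentarily fails: the odometry prediction carries the estimate through dropped frames.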
We consider the hypergraph Turán problem of determining $ex(n, S^d)$, the maximum number of facets in a $d$-dimensional simplicial complex on $n$ vertices that does not contain a simplicial $d$-sphere (a homeomorph of $S^d$) as a subcomplex. We show that if there is an affirmative answer to a question of Gromov about sphere enumeration in high dimensions, then $ex(n, S^d) \geq \Omega (n^{d + 1 - (d + 1)/(2^{d + 1} - 2)})$. Furthermore, this lower bound holds unconditionally for 2-LC (locally constructible) spheres, which includes all shellable spheres and therefore all polytopes. We also prove an upper bound on $ex(n, S^d)$ of $O(n^{d + 1 - 1/2^{d - 1}})$ using a simple induction argument. We conjecture that the upper bound can be improved to match the conditional lower bound.
Kinematically redundant parallel mechanisms (PMs) have attracted extensive attention from researchers due to their advantages in avoiding singular configurations and expanding the reachable workspace. However, kinematic redundancy introduces multiple inverse kinematics solutions, leading to uncertainty in the mechanism’s motion state. Therefore, this article proposes a method to optimize the inverse kinematics solutions based on motion/force transmission performance. By dividing the kinematically redundant PM into hierarchical levels and decomposing the redundancy, the transmission wrench screw systems of general redundant limbs and closed-loop redundant limbs are obtained. Then, input, output, and local transmission indices are calculated to evaluate the motion/force transmission performance of such mechanisms. To address the problem of multiple inverse kinematics solutions, the local optimal transmission index is employed as a criterion to select the optimal motion/force transmission solution corresponding to a specific pose of the moving platform. By comparing performance atlases before and after optimization, it is demonstrated that the optimized inverse kinematics solutions enlarge the reachable workspace and significantly improve the motion/force transmission performance of the mechanism.
QuickSelect (also known as Find), introduced by Hoare ((1961) Commun. ACM 4 321–322), is a randomized algorithm for selecting a specified order statistic from an input sequence of $n$ objects, or rather their identifying labels usually known as keys. The keys can be numeric or symbol strings, or indeed any labels drawn from a given linearly ordered set. We discuss various ways in which the cost of comparing two keys can be measured, and we measure the efficiency of the algorithm by the total cost of such comparisons.
We define and discuss a closely related algorithm known as QuickVal and a natural probabilistic model for the input to this algorithm; QuickVal searches (almost surely unsuccessfully) for a specified population quantile $\alpha \in [0, 1]$ in an input sample of size $n$. Call the total cost of comparisons for this algorithm $S_n$. We discuss a natural way to define the random variables $S_1, S_2, \ldots$ on a common probability space. For a general class of cost functions, Fill and Nakama ((2013) Adv. Appl. Probab. 45 425–450) proved under mild assumptions that the scaled cost $S_n / n$ of QuickVal converges in $L^p$ and almost surely to a limit random variable $S$. For a general cost function, we consider what we term the QuickVal residual:
\begin{equation*} \rho_n := \frac{S_n}{n} - S. \end{equation*}
The residual is of natural interest, especially in light of previous analogous work on the sorting algorithm QuickSort (Bindjeme and Fill (2012) 23rd International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods for the Analysis of Algorithms (AofA'12), Discrete Mathematics and Theoretical Computer Science Proceedings, AQ, Association: Discrete Mathematics and Theoretical Computer Science, Nancy, pp. 339–348; Neininger (2015) Random Struct. Algorithms 46 346–361; Fuchs (2015) Random Struct. Algorithms 46 677–687; Grübel and Kabluchko (2016) Ann. Appl. Probab. 26 3659–3698; Sulzbach (2017) Random Struct. Algorithms 50 493–508). In the case $\alpha = 0$ of QuickMin with unit cost per key-comparison, we are able to calculate, à la Bindjeme and Fill for QuickSort, the exact (and asymptotic) $L^2$-norm of the residual. We take the result as motivation for the scaling factor $\sqrt{n}$ for the QuickVal residual for general population quantiles and for general cost. We then prove in general (under mild conditions on the cost function) that $\sqrt{n}\,\rho_n$ converges in law to a scale mixture of centered Gaussians, and we also prove convergence of moments.
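The quantity $S_n$ for QuickMin with unit cost per key-comparison is concrete enough to simulate. A minimal sketch (the recursive structure follows the standard QuickSelect partitioning; this is an illustration, not the paper's analysis):

```python
import random

def quickmin_cost(keys):
    """Total key comparisons made by QuickSelect when finding the minimum
    (QuickMin) with unit cost per comparison: each partitioning step
    compares every other key to the pivot, then the search continues
    among the keys smaller than the pivot."""
    cost = 0
    while len(keys) > 1:
        pivot = random.choice(keys)
        cost += len(keys) - 1                  # compare all others to pivot
        keys = [k for k in keys if k < pivot]  # recurse into the lower part
        # if nothing is smaller, the pivot itself is the minimum
    return cost
```

Averaging `quickmin_cost` over many random inputs shows $S_n/n$ settling near its limiting mean (about 2 for uniform random keys), while the fluctuations around the limit are exactly the residual $\rho_n$ studied in the abstract.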
The rise of artificial intelligence is challenging the foundations of intellectual property. In AI versus IP: Rewriting Creativity, science writer Robin Feldman offers a balanced perspective as she explains how artificial intelligence (AI) threatens to erode all of intellectual property (IP) – patents, trademarks, copyrights, trade secrets, and rights of publicity. Using analogies to the Bridgerton fantasy series and the Good Housekeeping 'Seal of Approval,' Professor Feldman also offers solutions to ensure a peaceful coexistence between AI and IP. And if you've ever wanted to understand just how modern AI programs like ChatGPT, Claude, Gemini, Grok, Meta AI, and others work, AI versus IP: Rewriting Creativity explains it all in simple language, no math required. AI and IP can coexist, Feldman argues, but only if we fully understand them and only with considerable effort and forethought.
This handbook offers an important exploration of generative AI and its legal and regulatory implications from interdisciplinary perspectives. The volume is divided into four parts. Part I provides the necessary context and background to understand the topic, including its technical underpinnings and societal impacts. Part II probes the emerging regulatory and policy frameworks related to generative AI and AI more broadly across different jurisdictions. Part III analyses generative AI's impact on specific areas of law, from non-discrimination and data protection to intellectual property, corporate governance, criminal law and more. Part IV examines the various practical applications of generative AI in the legal sector and public administration. Overall, this volume provides a comprehensive resource for those seeking to understand and navigate the substantial and growing implications of generative AI for the law.
Data Rights in Transition maps the development of data rights that formed and reformed in response to the socio-technical transformations of the postwar twentieth century. The authors situate these rights, with their early pragmatic emphasis on fair information processing, as different from and less symbolically powerful than utopian human rights of older centuries. They argue that, if an essential role of human rights is 'to capture the world's imagination', the next generation of data rights needs to come closer to realising that vision – even while maintaining their pragmatic focus on effectiveness. After a brief introduction, the sections that follow focus on socio-technical transformations, emergence of the right to data protection, and new and emerging rights such as the right to be forgotten and the right not to be subject to automated decision-making, along with new mechanisms of governance and enforcement.
An original family of labelled sequent calculi $\mathsf {G3IL}^{\star }$ for classical interpretability logics is presented, modularly designed on the basis of Verbrugge semantics (a.k.a. generalised Veltman semantics) for those logics. We prove that each of our calculi enjoys excellent structural properties, namely, admissibility of weakening, contraction and, more relevantly, cut. A complexity measure of the cut is defined by extending the notion of range previously introduced by Negri with respect to a labelled sequent calculus for Gödel–Löb provability logic, and a cut-elimination algorithm is discussed in detail. To our knowledge, this is the most extensive and structurally well-behaved class of analytic proof systems for modal logics of interpretability currently available in the literature.