This paper presents a concurrent optimization approach to the design and motion of a quadruped to achieve energy-efficient cyclic behaviors. Computational techniques are applied to improve the development of a novel quadruped prototype. The scale of the robot and its actuators are optimized for energy efficiency using the complete actuator model, including friction, torque, and bandwidth limitations. The method and the optimal bounding trajectories are tested on the first (non-optimized) prototype design iteration, showing that our formulation produces a trajectory that (i) can be easily replayed on the real robot and (ii) reduces power consumption with respect to hand-tuned motion heuristics. Power consumption is then optimized for several periodic tasks with co-design. Our results include, among others, a bounding task and a backflip task. It appears that robots with longer thighs perform better at jumping forward, while longer shanks are better suited to backflips. To explore the tradeoff between these designs, a Pareto set is constructed to guide the next iteration of the prototype. On this set, we find a new design, to be produced in future work, showing an improvement of at least 52% on each separate task.
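The Pareto-set construction in the last step can be sketched in a few lines. The candidate designs and their per-task costs below are illustrative placeholders, not values from the paper:

```python
# Minimal sketch of extracting a Pareto set from co-design results.
# Each design has a cost for two tasks (bounding, backflip); lower is better.

def pareto_set(designs):
    """Return the names of designs not dominated in both task costs."""
    front = []
    for name, costs in designs:
        dominated = any(
            all(o <= c for o, c in zip(other, costs)) and other != costs
            for _, other in designs
        )
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("long-thigh", (10.0, 18.0)),   # good at bounding, poor at backflip
    ("long-shank", (17.0, 11.0)),   # the reverse
    ("compromise", (12.0, 13.0)),   # balanced trade-off
    ("baseline",   (16.0, 17.0)),   # dominated by "compromise"
]
print(pareto_set(candidates))  # baseline is dominated; the rest remain
```

Designs on the resulting front represent the trade-off curve between the two tasks; a next prototype iteration would be picked from this set.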
The thinking and emotional brains work together to help lawyers develop expertise through a process called memory consolidation. Information enters the thinking brain through the senses, such as the eyes and ears, and travels to the memory-processing hippocampus. Newer memories are recalled from the network of brain cells that loops between the thinking brain and the hippocampus in the emotional brain. Stable memory, a lawyer’s hard-earned expertise, is recalled from the connectome, which is the unique architecture of neurons in the lawyer’s thinking brain.
The Legal Brain is an essential guide for legal professionals seeking to understand the impact of chronic stress on their brain and mental health. Drawing on the latest neuroscience and psychology research, the book translates complex scientific concepts into actionable advice for legal professionals looking to enhance their well-being and thrive amidst the demands and stressors of the profession. Chapters cover optimizing cognitive fitness and performance, avoiding or healing cognitive damage, and protecting “the lawyer brain.” Whether you are a law student, practicing lawyer, judge, or leader of a legal organization, this book provides valuable insights and strategies for building resilience, maintaining peak performance, and protecting your most important asset: your brain.
In this paper, we present a constructive and proof-relevant development of graph theory in homotopy type theory (HoTT), including the notion of maps, their faces, and maps of graphs embedded in the sphere. This allows us to provide an elementary characterisation of planarity for locally directed finite and connected multigraphs that takes inspiration from topological graph theory, particularly from combinatorial embeddings of graphs into surfaces. A graph is planar if it has a map and an outer face with which any walk in the embedded graph is walk-homotopic to another. One result is that, for a given graph, this type of planar maps forms a homotopy set. As a way to construct examples of planar graphs inductively, extensions of planar maps are introduced. We formalise the essential parts of this work in the proof assistant Agda with support for HoTT.
Traditionally, electricity distribution networks were designed for unidirectional power flow, without the need to accommodate generation installed at the point of use. However, with the increase in Distributed Energy Resources and other Low Carbon Technologies, the role of distribution networks is changing. This shift brings challenges, including the need for intensive metering and more frequent reconfiguration to identify threats from voltage and thermal violations. Mitigating action through reconfiguration is informed by State Estimation, which is especially challenging for low voltage distribution networks, where the constraints of low observability, non-linear load relationships, and highly unbalanced systems all contribute to the difficulty of producing accurate state estimates. To counter low observability, this paper proposes the application of a novel transfer learning methodology, based upon the concept of conditional online Bayesian transfer, to make forward predictions of bus pseudo-measurements. Day-ahead load forecasts at a fully observed point on the network are adjusted using the intraday residuals at other points in the network, providing those points with load forecasts without the need for a complete set of forecast models at all substations. These form pseudo-measurements that then inform the state estimates at future time points. The methodology is demonstrated on both a representative IEEE test network and an actual GB 11 kV feeder network.
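The general idea of adjusting a day-ahead forecast with smoothed intraday residuals can be sketched as follows. This is an illustrative simplification, not the paper's conditional online Bayesian formulation; the smoothing and scaling parameters are assumptions:

```python
# Sketch: as the day unfolds, observed forecast residuals are exponentially
# smoothed and used to nudge a day-ahead forecast at an unmetered bus,
# forming pseudo-measurements for the state estimator.

def transfer_forecast(day_ahead, observed, base_forecast, scale=0.8, alpha=0.5):
    """day_ahead: forecasts for the unmetered bus, one per time step.
    observed / base_forecast: actuals and forecasts at a metered point,
    available only up to the current time step."""
    smoothed = 0.0
    pseudo = []
    for t, f in enumerate(day_ahead):
        if t < len(observed):  # intraday: residual information available
            residual = observed[t] - base_forecast[t]
            smoothed = alpha * residual + (1 - alpha) * smoothed
        pseudo.append(f + scale * smoothed)  # adjusted pseudo-measurement
    return pseudo
```

With no observations yet, the pseudo-measurements reduce to the day-ahead forecast; as residuals arrive, the forecast drifts toward observed behaviour.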
This paper mainly studies an autonomous path-planning and real-time path-tracking optimization method for snake robots. Snake robots can perform search and rescue, exploration, and other tasks in a variety of complex environments. Robots with sensors such as LiDAR can avoid obstacles in the environment through autonomous navigation to reach the target point. However, in an unstructured environment, the navigation of a snake robot is easily affected by the external environment, causing the robot to deviate from the planned path. To address the poor path-following ability that results from these environmental disturbances, this paper uses the LOS (line-of-sight) algorithm combined with steering control to plan the robot’s path and adjust its steering parameters in real time, ensuring that the robot can stably follow the planned path.
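The line-of-sight guidance idea can be sketched in a few lines: steer toward a lookahead point on the planned path and correct heading proportionally. The lookahead distance and steering gain are illustrative assumptions, not the paper's values:

```python
import math

# Minimal LOS path-following sketch: pick the first waypoint at least a
# lookahead distance away and steer the heading toward it.

def los_heading(pos, path, lookahead=0.5):
    """Desired heading toward the first waypoint >= `lookahead` from `pos`."""
    for wx, wy in path:
        if math.hypot(wx - pos[0], wy - pos[1]) >= lookahead:
            return math.atan2(wy - pos[1], wx - pos[0])
    wx, wy = path[-1]  # all waypoints close: aim at the final one
    return math.atan2(wy - pos[1], wx - pos[0])

def steering_command(heading, desired, gain=1.5):
    """Proportional steering on the wrapped heading error."""
    err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return gain * err
```

Wrapping the error to [-pi, pi) prevents the robot from turning the long way around when the desired heading crosses the +/-pi boundary.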
Online algorithms are a rich area of research with widespread applications in scheduling, combinatorial optimization, and resource allocation problems. This lucid textbook provides an accessible but rigorous introduction to online algorithms for graduate and senior undergraduate students. In-depth coverage of most of the important topics is presented, with special emphasis on elegant analysis. The book starts with classical online paradigms such as ski rental, paging, list accessing, and bin packing, where the performance of online algorithms is studied under worst-case inputs, and moves on to newer paradigms such as 'beyond worst case', where online algorithms are augmented with predictions from machine learning. The book goes on to cover applied problems such as routing in communication networks, server provisioning in cloud systems, communication with energy harvested from renewable sources, and submodular partitioning. Finally, a wide range of solved examples and practice exercises is included, allowing hands-on exposure to the concepts.
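As a taste of the classical material, the deterministic break-even strategy for ski rental (rent until the cumulative rent would match the purchase price, then buy) achieves a competitive ratio strictly below 2:

```python
# Break-even ski rental: rent for buy_cost - 1 days, then buy.
# Against an adversary who chooses the season length n, this classical
# deterministic strategy pays at most (2B - 1)/B times the optimum.

def break_even_cost(n_days, buy_cost, rent_cost=1):
    """Online cost: rent until day buy_cost - 1, then buy."""
    threshold = buy_cost - 1
    if n_days <= threshold:
        return n_days * rent_cost   # season ended before we bought
    return threshold * rent_cost + buy_cost

def opt_cost(n_days, buy_cost, rent_cost=1):
    """Offline optimum: rent the whole season or buy on day one."""
    return min(n_days * rent_cost, buy_cost)

B = 10
ratios = [break_even_cost(n, B) / opt_cost(n, B) for n in range(1, 100)]
print(max(ratios))  # (2B - 1) / B = 1.9
```

For short seasons the online cost matches the optimum; the worst case is a season ending just after the purchase, giving cost 2B - 1 against an optimum of B.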
The atomic bomb uses fission of heavy elements to produce a large amount of energy. It was designed and deployed during World War II by the United States military. The first test of an atomic bomb occurred in July 1945 in New Mexico and was given the name Trinity; this test was not declassified until 1949. In that year, Geoffrey Ingram Taylor released two papers detailing his method for calculating the energy yield of the atomic bomb from pictures of the Trinity explosion alone. Many scientists made similar calculations concurrently, although Taylor is often credited with them. Since then, many scientists have attempted to calculate the yield through various methods. This paper walks through these methods with a focus on Taylor’s method, based on first principles, and redoes the calculations that he performed with modern tools. We make use of state-of-the-art computer vision tools to obtain a more precise measurement of the blast radius, together with curve-fitting and numerical-integration methods. With more precise measurements, we are able to follow in Taylor’s footsteps toward a more accurate approximation.
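Taylor's dimensional-analysis estimate can be reproduced in a few lines. The radius and time below are illustrative values of the kind read from the declassified photographs, not this paper's refined measurements, and the dimensionless constant is taken as unity:

```python
# Taylor's blast-wave scaling for an intense point explosion:
#   R(t) ~ S * (E * t**2 / rho) ** (1/5),  so  E ~ rho * R**5 / (S**5 * t**2).

RHO_AIR = 1.25   # ambient air density, kg/m^3
S = 1.0          # dimensionless constant, near unity for gamma = 1.4

def taylor_yield(radius_m, time_s, rho=RHO_AIR, s=S):
    """Energy estimate in joules from one (radius, time) photograph."""
    return rho * radius_m**5 / (s**5 * time_s**2)

# Illustrative data point: fireball radius ~140 m at t = 25 ms.
E = taylor_yield(140.0, 0.025)
print(E / 4.184e12)  # in kilotons of TNT: roughly 20-30 kt
```

The fifth-power dependence on radius is why precise radius measurements (here, from computer vision) matter so much: a few percent of radius error becomes a large error in yield.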
This paper proposes a virtual reality-based dual-mode teleoperation architecture to assist human operators in remotely operating robotic manipulation systems in a safe and flexible way. The architecture, implemented via a finite state machine, enables the operator to switch between two operational modes: the Approach mode, where the operator indirectly controls the robotic system by specifying its target configuration via the immersive virtual reality (VR) interface, and the Telemanip mode, where the operator directly controls the robot end-effector motion via input devices. The two independent control modes were tested on the task of reaching for a glass on a table by a sample population of 18 participants. Two groups were considered, distinguishing users with previous experience of VR technologies from novices. The results of the user study presented in this work show the potential of the proposed architecture in terms of usability, physical and mental workload, and user satisfaction. Finally, a statistical analysis showed no significant differences in these three metrics between the two groups, demonstrating the ease of use of the proposed architecture by users both with and without previous experience of VR.
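The two-mode state machine can be sketched as follows. The mode names follow the abstract; the event names and transition triggers are illustrative assumptions:

```python
from enum import Enum, auto

# Minimal sketch of a dual-mode teleoperation state machine.

class Mode(Enum):
    APPROACH = auto()    # operator specifies a target configuration in VR
    TELEMANIP = auto()   # operator directly drives the end-effector

class Teleop:
    def __init__(self):
        self.mode = Mode.APPROACH   # start in the indirect mode

    def on_event(self, event):
        """Advance the state machine on a (hypothetical) named event."""
        if self.mode is Mode.APPROACH and event == "target_reached":
            self.mode = Mode.TELEMANIP   # hand over direct control
        elif self.mode is Mode.TELEMANIP and event == "release":
            self.mode = Mode.APPROACH    # back to indirect control
        return self.mode
```

Unrecognised events leave the mode unchanged, which keeps the operator in a well-defined control state at all times.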
In this work, we consider two sets of dependent variables $\{X_{1},\ldots,X_{n}\}$ and $\{Y_{1},\ldots,Y_{n}\}$, where $X_{i}\sim EW(\alpha_{i},\lambda_{i},k_{i})$ and $Y_{i}\sim EW(\beta_{i},\mu_{i},l_{i})$, for $i=1,\ldots, n$, which are coupled by Archimedean copulas with different generators. We then establish different inequalities between the extremes, namely, $X_{1:n}$ and $Y_{1:n}$, and $X_{n:n}$ and $Y_{n:n}$, in terms of the usual stochastic, star, Lorenz, hazard rate, reversed hazard rate, and dispersive orders. Several examples and counterexamples are presented to illustrate the results established here. Some of the results extend the existing results of [5] (Barmalzan, G., Ayat, S.M., Balakrishnan, N., & Roozegar, R. (2020). Stochastic comparisons of series and parallel systems with dependent heterogeneous extended exponential components under Archimedean copula. Journal of Computational and Applied Mathematics 380: Article No. 112965).
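For context, such comparisons rest on the standard closed forms for the extremes under an Archimedean copula with generator $\psi$ (a sketch of the textbook identities, in generic notation rather than the paper's):

```latex
% Distribution of the maximum when the X_i are coupled by an Archimedean
% copula C(u_1,\ldots,u_n) = \psi(\psi^{-1}(u_1)+\cdots+\psi^{-1}(u_n)):
F_{X_{n:n}}(x) = C\bigl(F_{X_1}(x),\ldots,F_{X_n}(x)\bigr)
             = \psi\Bigl(\textstyle\sum_{i=1}^{n}\psi^{-1}\bigl(F_{X_i}(x)\bigr)\Bigr),
% and, when the survival copula is Archimedean with generator \psi,
% survival function of the minimum:
\bar F_{X_{1:n}}(x) = \psi\Bigl(\textstyle\sum_{i=1}^{n}\psi^{-1}\bigl(\bar F_{X_i}(x)\bigr)\Bigr).
```

Stochastic-order comparisons between $X_{1:n}$ and $Y_{1:n}$ (or $X_{n:n}$ and $Y_{n:n}$) then reduce to comparing these compositions for the two generators and marginal families.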
Thermohaline staircases are a widespread stratification feature that impacts the vertical transport of heat and nutrients and are consistently observed throughout the Canada Basin of the Arctic Ocean. Observations of staircases from the same time period and geographic region form clusters in temperature-salinity (T–S) space. Here, for the first time, we use an automated clustering algorithm, hierarchical density-based spatial clustering of applications with noise (HDBSCAN), to detect and connect individual well-mixed staircase layers across profiles from ice-tethered profilers. Our application only requires an estimate of the typical layer thickness and expected salinity range of staircases. We compare this method to two previous studies that used different approaches to detect layers and reproduce several results, including the mean lateral density ratio $ {R}_L $ and the finding that the difference in salinity between neighboring layers is an order of magnitude larger than the salinity variance within a layer. We find that we can accurately and automatically track individual layers in coherent staircases across time and space between different profiles. In evaluating the algorithm’s performance, we find evidence of different physical features, namely splitting or merging layers and remnant intrusions. Further, we find a dependence of $ {R}_L $ on pressure, whereas previous studies have reported constant $ {R}_L $. Our results demonstrate that clustering algorithms are an effective and parsimonious method of identifying staircases in ocean profile data.
Based on the theories of radical education, this article discusses the education of electronic music in Venezuela. After a historiographical review of the state of music education in the country, which shows that there is little information on the subject, the institutional life that has promoted electroacoustic music in Venezuela is approached from a critical perspective. This documentary research gathers and analyses data provided by a bibliographic review and unstructured interviews with experts in the field. Among the most salient findings are the discontinuity in the teaching of electroacoustic music, as well as a critical review of the notions of radical education in the case of Venezuela, where the educational system has stagnated relative to the global context.
Autonomous navigation has been a long-standing research topic, and researchers have worked on many challenging problems in indoor and outdoor environments. One application area for navigation solutions is material handling in industrial environments. With Industry 4.0, the simple transport problem of traditional factories has evolved into the use of autonomous mobile robots within flexible production islands that make their own decisions. The two main stages of such a navigation system are the safe transport of the vehicle from one point to another and the reaching of destinations to industrial standards. The main concern in the former is roughly determining the vehicle’s pose to follow the route, while the latter aims to reach the target with high accuracy and precision. Often, it may not be possible, or may require extra effort, to satisfy both requirements with a single localization method. Therefore, a multi-stage localization approach is proposed in this study. Particle filter-based large-scale localization approaches are utilized during the vehicle’s movement from one point to another, while scan-matching-based methods are used in the docking stage. The localization system enables the appropriate approach based on the vehicle’s status and task through a decision-making mechanism. This mechanism uses a similarity metric obtained through the correntropy criterion to decide when and how to switch from large-scale to precise localization. The feasibility and performance of the developed method are corroborated through field tests, which demonstrate that the proposed method accomplishes tasks with sub-centimeter and sub-degree accuracy and precision without affecting the real-time operation of the navigation algorithms.
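The correntropy-based similarity that drives the switching decision can be sketched as follows. The kernel width and threshold are illustrative assumptions, not the paper's tuned values:

```python
import math

# Sample correntropy: the mean of a Gaussian kernel applied to pointwise
# differences. Unlike a plain mean-squared error, large outlier differences
# saturate the kernel, making the metric robust to spurious scan points.

def correntropy(x, y, sigma=1.0):
    """Correntropy between two equal-length sequences of readings."""
    return sum(math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
               for a, b in zip(x, y)) / len(x)

def should_dock(scan, reference, threshold=0.9):
    """Hypothetical switching rule: move to scan-matching-based precise
    localization once the current scan resembles the docking reference."""
    return correntropy(scan, reference) >= threshold
```

The metric equals 1 for identical sequences and decays toward 0 as the sequences diverge, giving a bounded similarity score for the decision-making mechanism.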
Addressing the question of how to deliver time-sensitive software for cyber-physical systems (CPS) requires a range of modelling and analysis techniques to be developed and integrated. A number of these required techniques are unique to time-sensitive software, where timeliness is a correctness property rather than a performance attribute. This paper focuses on how to obtain worst-case estimates of the software’s execution time; in particular, it considers how workload models are derived from assumptions about the system’s run-time behaviour. The specific contribution of this paper is the exploration of the notion that a system can be subject to more than one workload model. Examples illustrate how such multi-models can lead to improved schedulability and hence more efficient CPS. An important property of the approach is that the derived analysis exhibits model-bounded behaviour. This ensures that the maximum load on the system is never higher than that implied by the individual models.
Artificial intelligence and cognitive science are two core research areas in design. Artificial intelligence offers the capability of analysing massive amounts of data, which supports making predictions, uncovering patterns, and generating insights across design activities, while cognitive science provides the advantage of revealing the inherent mental processes and mechanisms of humans in design. Both artificial intelligence and cognitive science in design research are focused on delivering more innovative and efficient design outcomes and processes. Therefore, this thematic collection on “Applications of Artificial Intelligence and Cognitive Science in Design” brings together state-of-the-art research in artificial intelligence and cognitive science to showcase the emerging trend of applying artificial intelligence techniques and neurophysiological and biometric measures in design research. By analysing the collected research papers, three promising future research directions are suggested: (1) human-in-the-loop AI for design, (2) multimodal measures for design, and (3) AI for design cognitive data analysis and interpretation. A framework for the integration of artificial intelligence and cognitive science in design, incorporating these three directions, is proposed to inspire and guide design researchers in exploring human-centred design methods, strategies, solutions, tools, and systems.