Real-time and efficient path planning is critical for all robotic systems. It is especially important for industrial robots, since the overall planning and execution time directly impacts cycle time and automation economics in production lines. While the problem may not be complex in static environments, classical approaches are inefficient in high-dimensional environments in terms of planning time and optimality. Collision checking poses another challenge to obtaining a real-time solution for path planning in complex environments. To address these issues, we propose an end-to-end learning-based framework, the Path Planning and Collision checking Network (PPCNet). The PPCNet generates a path by computing waypoints sequentially using two networks: the first network generates a waypoint, and the second determines whether the waypoint lies on a collision-free segment of the path. The end-to-end training process is based on imitation learning, using data aggregated from the experience of an expert planner to train the two networks simultaneously. We utilize two approaches for training a network that efficiently approximates the exact geometrical collision-checking function. Finally, the PPCNet is evaluated in two simulation environments and in a practical implementation on a robotic arm for a bin-picking application. Compared with state-of-the-art path-planning methods, our results show a significant performance improvement: planning time is greatly reduced while success rates and path lengths remain comparable.
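The sequential waypoint scheme described in the abstract can be sketched with simple stand-in functions in place of the two trained networks. Everything below is an illustrative assumption, not the paper's architecture: `planner_net` just steps toward the goal, and `collision_net` checks segment midpoints against one fixed circular obstacle.

```python
import numpy as np

def planner_net(state, goal):
    # Hypothetical stand-in for PPCNet's waypoint-generation network:
    # step a fixed fraction of the way toward the goal.
    return state + 0.25 * (goal - state)

def collision_net(a, b):
    # Hypothetical stand-in for the collision-checking network:
    # reject segments whose midpoint enters a circular obstacle of
    # radius 0.5 at the origin. True means collision-free.
    mid = 0.5 * (a + b)
    return np.linalg.norm(mid) > 0.5

def plan(start, goal, max_steps=50, tol=1e-2):
    # Generate waypoints sequentially; each proposed waypoint is
    # accepted only if its segment passes the collision check.
    goal = np.asarray(goal, dtype=float)
    path = [np.asarray(start, dtype=float)]
    for _ in range(max_steps):
        wp = planner_net(path[-1], goal)
        if not collision_net(path[-1], wp):
            return None  # segment in collision; a real system would replan
        path.append(wp)
        if np.linalg.norm(wp - goal) < tol:
            break
    return path

path = plan([2.0, 2.0], [3.0, 3.0])
```

The point of the sketch is the control flow: waypoint generation and collision checking are separate learned components queried in alternation, which is what lets the collision network replace exact geometric checks.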
Globally, forests are net carbon sinks that partly mitigate anthropogenic climate change. However, there is evidence of increasing weather-induced tree mortality, which needs to be better understood to improve forest management under future climate conditions. Disentangling drivers of tree mortality is challenging because of their interacting behavior over multiple temporal scales. In this study, we take a data-driven approach to the problem. We generate hourly temperate weather data using a stochastic weather generator to simulate 160,000 years of beech, pine, and spruce forest dynamics with a forest gap model. These data are used to train a generative deep learning model (a modified variational autoencoder) to learn representations of three-year-long monthly weather conditions (precipitation, temperature, and solar radiation) in an unsupervised way. We then associate these weather representations with years of high biomass loss in the forests and derive weather prototypes associated with such years. The identified prototype weather conditions are associated with 5–22% higher median biomass loss compared to the median of all samples, depending on the forest type and the prototype. When prototype weather conditions co-occur, these numbers increase to 10–25%. Our research illustrates how generative deep learning can discover compounding weather patterns associated with extreme impacts.
Researchers have encountered many issues while studying rare illnesses, such as a lack of information, limited sample sizes, difficulty in diagnosis, and more. Perhaps the biggest challenge, however, is recruiting a large enough sample size for clinical studies; obtaining chronological data for these patients is more difficult still. This motivated us to implement a decentralized, crowdsourced medical data-sharing platform to obtain chronological data for certain rare diseases, providing both patients and other stakeholders an easier and more secure way of trading medical data by utilizing blockchain technology. The platform facilitates the acquisition of the most elusive types of health data by dynamically allocating extra financial incentives depending on data scarcity. We also provide a novel framework for medical data cross-validation in which the system checks the volunteer reviewer count: the review score depends on this count, and the more reviewers there are, the higher the final score. We also explain how differential privacy is used to protect the privacy of individual medical data while enabling data monetization.
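The differential-privacy idea mentioned above can be illustrated with the classic Laplace mechanism on a count query. This is a generic sketch, not the platform's actual protocol; the function name and the example count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(true_count, epsilon):
    # Laplace mechanism: a counting query has sensitivity 1 (one
    # person joining or leaving changes the count by at most 1), so
    # adding Laplace noise with scale 1/epsilon gives
    # epsilon-differential privacy for that query.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: releasing the number of patients with a given rare
# condition without exposing any individual's participation.
noisy = laplace_count(42, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy but noisier answers; the released value is unbiased, so aggregate statistics over many queries remain useful.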
Despite the importance of quantifying how the spatial patterns of heavy precipitation will change with warming, we lack tools to objectively analyze the storm-scale outputs of modern climate models. To address this gap, we develop an unsupervised, spatial machine-learning framework to quantify how storm dynamics affect changes in heavy precipitation. We find that changes in heavy precipitation (above the 80th percentile) are predominantly explained by changes in the frequency of these events, rather than by changes in how these storm regimes produce precipitation. Our study shows how unsupervised machine learning, paired with domain knowledge, may allow us to better understand the physics of the atmosphere and anticipate the changes associated with a warming world.
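The frequency-versus-intensity decomposition behind the finding above can be illustrated on synthetic data. The distributions, sample sizes, and the "warmer" scenario below are invented stand-ins, not the study's model output; only the 80th-percentile definition of heavy precipitation follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily precipitation for a control and a "warmer" climate;
# gamma distributions are purely illustrative.
control = rng.gamma(shape=2.0, scale=3.0, size=5000)
warmer = rng.gamma(shape=2.0, scale=3.3, size=5000)

# Heavy precipitation defined, as in the abstract, as events above
# the 80th percentile of the control climate.
threshold = np.percentile(control, 80)

# Frequency change: how often heavy events occur in each climate.
freq_control = np.mean(control > threshold)
freq_warmer = np.mean(warmer > threshold)
```

Comparing the exceedance frequencies (while holding the threshold fixed) isolates the frequency contribution; comparing mean precipitation conditional on exceedance would isolate the intensity contribution.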
The use of computer technology to automate the enforcement of law is a promising way to simplify bureaucratic procedures. However, careless automation might result in an inflexible and dehumanized law enforcement system driven by algorithms that do not account for the particularities of individuals or minorities. In this article, we argue that hybrid smart contracts deployed to monitor, rather than blindly enforce, regulations can be used to add flexibility. Enforcement is a suitable alternative only when prevention is strictly necessary; in many situations, we argue, a corrective approach based on monitoring is more flexible and suitable. To add further flexibility, the hybrid smart contract can be programmed to pause and request the intervention of a human, or a group of humans, when human judgment is needed.
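The corrective, monitor-rather-than-enforce pattern can be sketched as a small state machine. This is a plain-Python illustration of the idea, not the article's contract code; the rule (a spending limit), the class, and the escalation hook are all invented for the example.

```python
class MonitoringContract:
    """Sketch of a contract that monitors a rule instead of enforcing it."""

    def __init__(self, limit):
        self.limit = limit
        self.violations = []

    def record(self, actor, amount):
        # Corrective, not preventive: the action always goes through,
        # but rule violations are logged for later review rather than
        # being blocked by the algorithm.
        if amount > self.limit:
            self.violations.append((actor, amount))
        return True

    def needs_human_review(self):
        # Escalation hook: a human (or group) is asked to intervene
        # only when monitoring has actually flagged something.
        return len(self.violations) > 0

c = MonitoringContract(limit=100)
c.record("alice", 50)
c.record("bob", 150)
```

A preventive contract would instead reject `bob`'s transaction inside `record`; the monitoring variant preserves flexibility by deferring that judgment to humans.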
Snake robots can move flexibly due to their special bodies and gaits. However, it is difficult to plan their motion in multi-obstacle environments because of their complex models. To solve this problem, this work investigates a reinforcement learning-based motion planning method. To plan feasible paths, a Floyd-moving average algorithm is proposed alongside a modified deep Q-learning algorithm to ensure that paths are smooth and adaptable enough for snake robots to pass. An improved path integral algorithm is used to compute gait parameters that control snake robots to move along the planned paths. To speed up the training of parameters, a strategy combining serial training, parallel training, and experience replaying modules is designed. Moreover, we have designed a motion planning framework consisting of path planning, path smoothing, and motion planning. Various simulations are conducted to validate the effectiveness of the proposed algorithms.
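The path-smoothing step can be illustrated with a plain moving-average filter over waypoints. This is a generic sketch, not the paper's Floyd-moving average algorithm; the window size and the zigzag test path are illustrative assumptions.

```python
def smooth_path(path, window=3):
    # Moving-average smoothing of a list of 2-D waypoints. Endpoints
    # are kept fixed so the start and goal are unchanged; interior
    # points are replaced by the mean of their neighborhood.
    half = window // 2
    out = [path[0]]
    for i in range(1, len(path) - 1):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        xs = [p[0] for p in path[lo:hi]]
        ys = [p[1] for p in path[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    out.append(path[-1])
    return out

# A sharply zigzagging path that a snake robot could not follow well.
zigzag = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
smoothed = smooth_path(zigzag)
```

Averaging pulls the interior waypoints toward the path's local mean, reducing curvature at each turn; a real planner would additionally re-check the smoothed path against obstacles.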
Assistive technology (AT) is any artefact that enables participation in activities usually limited by disability. Frequently, AT suffers from poor design engagement and utilisation. Moreover, up to 30% of all AT is abandoned within a year, negatively impacting users. This presents an ongoing challenge for occupational therapists (OTs) who work with assistive technologies. A literature review was conducted using a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol to understand this issue and its implications for the design community. This study explores current themes of AT abandonment and the role of OT through the lens of design thinking. Studies that included design interventions in AT were subsequently highlighted. This literature was then compared with more traditional health literature to explore the potential enablers of, and barriers to, design engaging with AT. This evidenced the benefits of collaboration between the design and OT disciplines to improve the product and reduce abandonment issues.
Discrete structures model a vast array of objects ranging from DNA sequences to internet networks. The theory of generating functions provides an algebraic framework for discrete structures to be enumerated using mathematical tools. This book is the result of 25 years of work developing analytic machinery to recover asymptotics of multivariate sequences from their generating functions, using multivariate methods that rely on a combination of analytic, algebraic, and topological tools. The resulting theory of analytic combinatorics in several variables is put to use in diverse applications from mathematics, combinatorics, computer science, and the natural sciences. This new edition is even more accessible to graduate students, with many more exercises, computational examples with Sage worksheets to illustrate the main results, updated background material, additional illustrations, and a new chapter providing a conceptual overview.
Complex networks are key to describing the connected nature of the society that we live in. This book, the second of two volumes, describes the local structure of random graph models for real-world networks and determines when these models have a giant component and when they are small- and ultra-small worlds. This is the first book to cover the theory and implications of local convergence, a crucial technique in the analysis of sparse random graphs. Suitable as a resource for researchers and PhD-level courses, it uses examples of real-world networks, such as the Internet and citation networks, as motivation for the models that are discussed, and includes exercises at the end of each chapter to develop intuition. The book closes with an extensive discussion of related models and problems that demonstrate modern approaches to network theory, such as community structure and directed models.
Recognizing underwater targets is a crucial component of autonomous underwater vehicle patrol and detection efforts. In visual image recognition in real underwater environments, the spatial and semantic features of targets are often lost to varying degrees, and the scarcity of certain types of underwater samples leads to data that are unbalanced across categories. These problems weaken target features and seriously affect the accuracy of underwater target recognition. Traditional deep learning methods based on data and feature enhancement cannot achieve the desired recognition performance. To address these difficulties, this paper proposes an improved feature enhancement network for weak-feature target recognition. First, a multi-scale spatial and semantic feature enhancement module is constructed to extract the target's feature information accurately. Second, the effect of target feature distortion on classification is mitigated through multi-scale feature comparison of positive and negative samples. Finally, the Rank & Sort Loss function is used to train the deep target detector, addressing recognition accuracy under highly unbalanced sample data. Experimental results show that the recognition accuracy of the proposed method is 2.28% and 3.84% higher than that of existing algorithms on underwater fuzzy and distorted target images, demonstrating the method's effectiveness and superiority.
In the previous chapter, we introduced word embeddings, which are real-valued vectors that encode semantic representation of words. We discussed how to learn them and how they capture semantic information that makes them useful for downstream tasks. In this chapter, we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task.
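Loading pretrained embeddings and probing their semantics can be sketched as follows. The three-word vocabulary and its vectors below are toy values invented for the example; real pretrained files (e.g. GloVe-style text files) use the same one-word-per-line format, just with far more words and dimensions.

```python
import io

import numpy as np

# A tiny GloVe-style embeddings file: each line is a word followed by
# its vector components. Values here are made up for illustration.
raw = """\
king 0.8 0.6 0.1
queen 0.7 0.7 0.1
apple 0.1 0.1 0.9
"""

def load_embeddings(fh):
    # Parse "word v1 v2 ... vd" lines into a {word: vector} dict.
    vocab = {}
    for line in fh:
        word, *vals = line.split()
        vocab[word] = np.array(vals, dtype=float)
    return vocab

def cosine(u, v):
    # Cosine similarity, the standard way to compare embeddings.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

emb = load_embeddings(io.StringIO(raw))
sim_kq = cosine(emb["king"], emb["queen"])
sim_ka = cosine(emb["king"], emb["apple"])
```

With real pretrained vectors, `io.StringIO(raw)` would be replaced by an open file handle; semantically related words such as "king" and "queen" score higher than unrelated pairs, which is exactly the property that makes these vectors useful for downstream tasks.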
Ever since we began to build software systems that interacted with humans, there have been ethical concerns about the ways in which we interact with them. In [830], for example, Weizenbaum observes of the world’s first chatterbot that “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.” Fast forward more than 60 years, and this observation that a “certain danger lurks there” has emerged as a range of different concerns about the ways in which software (and hardware) systems are developed and deployed, and the range of data that modern data-driven systems rely upon. The space of machine ethics is vast, and a large number of texts, papers, and policy documents now exist on the subject.
In this chapter, we describe several common applications (including the ones we touched on before) and multiple possible neural approaches for each. We focus on simple neural approaches that work well and should be familiar to anybody beginning research in natural language processing or interested in deploying robust strategies in industry. In particular, we describe the implementation of the following applications: text classification, part-of-speech tagging, named entity recognition, syntactic dependency parsing, relation extraction, question answering, and machine translation.
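As a taste of the first application listed, text classification can be implemented in a few lines with a bag-of-words perceptron. The training sentences, labels, and hyperparameters below are toy inventions for illustration; the chapter's own implementations use neural approaches.

```python
from collections import Counter

# Toy binary sentiment data: 1 = positive, 0 = negative.
train = [
    ("good great fun", 1), ("great movie", 1), ("fun and good", 1),
    ("bad boring", 0), ("boring plot", 0), ("bad and dull", 0),
]

# Perceptron over bag-of-words counts: on each mistake, move the
# weights of the words present toward the correct label.
weights = Counter()
for _ in range(10):  # training epochs
    for text, label in train:
        feats = Counter(text.split())
        score = sum(weights[w] * c for w, c in feats.items())
        pred = 1 if score > 0 else 0
        if pred != label:
            for w, c in feats.items():
                weights[w] += (label - pred) * c

def predict(text):
    # Classify by the sign of the summed word weights.
    return 1 if sum(weights[w] for w in text.split()) > 0 else 0
```

The same features-then-linear-scorer structure carries over to the neural versions: embeddings replace the one-hot bag of words, and gradient descent on a differentiable loss replaces the perceptron update.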
The previous chapter introduced feed-forward neural networks and demonstrated that, theoretically, implementing the training procedure for an arbitrary feed-forward neural network is relatively simple. Unfortunately, neural networks trained this way will suffer from several problems, such as instability of the training process – that is, slow convergence due to parameters jumping around a good minimum – and overfitting. In this chapter, we will describe several practical solutions that mitigate these problems. In particular, we discuss minibatching, multiple optimization algorithms, other activation and cost functions, regularization, dropout, temporal averaging, and parameter initialization and normalization.
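One of the listed techniques, dropout, is simple enough to sketch directly. This is the standard "inverted dropout" formulation in NumPy, independent of any particular framework; the drop probability and array below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, train=True):
    # Inverted dropout: during training, zero each activation with
    # probability p and rescale the survivors by 1/(1-p), so the
    # expected activation is unchanged and inference needs no rescaling.
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

acts = np.ones(10_000)
dropped = dropout(acts, p=0.5)
```

Because the expected value is preserved, the same forward pass can be used at test time with `train=False`; the random masking is what provides the regularizing effect during training.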
In this chapter we investigate graph distances in preferential attachment models. We focus on typical distances as well as the diameter of preferential attachment models. We again rely on path-counting techniques, as well as local limit results. Since the local limit is a rather involved quantity, some parts of our analysis are considerably harder than those in Chapters 6 and 7.