The use of natural pozzolans in concrete applications is gaining attention because of the associated environmental, economic, and technical benefits. In this study, reference cemented mine backfill samples were prepared using Portland cement, and experimental samples were prepared by partially replacing Portland cement with 10 or 20 wt.% fly ash as a byproduct (artificial) pozzolan or pumice as a natural pozzolan. Samples were cured for 7, 14, and 28 days to investigate uniaxial compressive strength development. Backfill samples containing 10 wt.% pumice reached compressive strengths nearly equal to those of the reference samples. There is strong potential for pumice to be used in cemented backfill to minimize costs, improve backfill properties, and promote the sustainability of the mining industry.
With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for upper-undergraduate and graduate students.
This article considers what types of strategic communication messaging regarding migration policy are likely to be more or less effective. To do so, the article summarizes the literature to, first, note the broadly postulated effectiveness of value-based messaging and, second, note how underdefined this concept remains. To overcome this shortcoming, I introduce Schwartz’s psychological theory of “basic human values” and use European Social Survey data to visualize the relationship between these values and attitudes to immigration. I argue that messaging with a value basis that is concordant with that of its audience is more likely to elicit sympathy, whereas messaging that is discordant with the values of its audience is more likely to elicit antipathy. Given the value-balanced orientations of those with moderate attitudes to immigration, persuasive migration messaging should attempt to mobilize the values of its opposition; that is, pro-migration messaging should mobilize Schwartz’s values of conformity, tradition, security, and power, whereas anti-migration messaging should mobilize values of universalism, benevolence, self-direction, and stimulation. I then turn to an inventory of 135 migration communication campaigns provided by the International Centre for Migration Policy Development. I show that few pro-migration campaigns contained value-based messaging, whereas all anti-migration campaigns did. Similarly, very few pro-migration campaigns included values besides “universalism” and “benevolence,” whereas anti-migration campaigns included values associated with both pro- and anti-migration attitudes. I visually demonstrate examples of each case before discussing ramifications for policy communication.
High-voltage power cables are important channels in power transmission systems. Their remote geographical settings and harsh natural environments can lead to many different faults. At present, such special operations in dangerous and harsh environments are performed manually, which not only involves high labor intensity and low work efficiency but also poses serious risks to personal safety. To address these problems, this paper studies power maintenance robots for insulator string replacement, spacer replacement, and damper and drainage plate maintenance. The basic configuration and the operation motion planning are proposed, a virtual prototype of the inspection and maintenance robots is developed, the mechanical structure of the robots is optimized through kinematic modeling, and the workspace is analyzed using the Monte Carlo method. The system platform, operation functions, structural characteristics, and key technologies involved in the robot system development are systematically summarized, and the deep integration of robot technology with big data, cloud computing, artificial intelligence, and ubiquitous power Internet-of-Things technologies is also discussed. Finally, physical prototypes of the insulator replacement, drainage plate tightening, and damper replacement robots have been developed, and several experimental tests on a 220 kV live line have been conducted to verify their engineering practicality; the main developments and directions for future research are also pointed out.
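The workspace analysis mentioned above relies on the Monte Carlo method. As a rough, hypothetical illustration of that idea (not the authors' implementation), the sketch below samples joint angles of a planar three-link arm within assumed limits and plots the resulting end-effector positions; the link lengths, joint limits, and sample count are all illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical link lengths (m) and joint limits (rad) for a planar 3-link arm.
lengths = np.array([0.4, 0.3, 0.2])
limits = np.array([[-np.pi, np.pi], [-2.0, 2.0], [-1.5, 1.5]])

# Monte Carlo workspace estimation: sample joint angles uniformly within the
# limits and collect the end-effector positions given by forward kinematics.
n_samples = 50_000
q = rng.uniform(limits[:, 0], limits[:, 1], size=(n_samples, 3))
abs_angles = np.cumsum(q, axis=1)          # absolute orientation of each link
x = (lengths * np.cos(abs_angles)).sum(axis=1)
y = (lengths * np.sin(abs_angles)).sum(axis=1)

plt.scatter(x, y, s=0.2, alpha=0.3)
plt.gca().set_aspect("equal")
plt.title("Monte Carlo estimate of the reachable workspace")
plt.xlabel("x [m]")
plt.ylabel("y [m]")
plt.show()
```

The density of the resulting scatter also hints at which regions are reachable by many joint configurations, the kind of information used when optimizing a manipulator's structure for on-line maintenance tasks.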
GPT-3 made the mainstream media headlines this year, generating far more interest than we’d normally expect of a technical advance in NLP. People are fascinated by its ability to produce apparently novel text that reads as if it was written by a human. But what kind of practical applications can we expect to see, and can they be trusted?
The manipulation of articulated objects is of primary importance in Robotics and can be considered one of the most complex manipulation tasks. Traditionally, this problem has been tackled by developing ad hoc approaches, which lack flexibility and portability. In this paper, we present a framework based on answer set programming (ASP) for the automated manipulation of articulated objects in a robot control architecture. In particular, ASP is employed for representing the configuration of the articulated object, for checking the consistency of such a representation in the knowledge base, and for generating the sequence of manipulation actions. The framework is exemplified and validated on the Baxter dual-arm manipulator in a first, simple scenario. We then extend this scenario to improve the overall setup accuracy and to introduce a few constraints on the execution of robot actions to enforce their feasibility. The extended scenario entails a large number of possible actions that can be fruitfully combined. Therefore, we exploit macro actions from automated planning in order to produce more effective plans. We validate the overall framework in the extended scenario, thereby confirming the applicability of ASP in more realistic Robotics settings and showing the usefulness of macro actions for the robot-based manipulation of articulated objects.
Cost-effective sampling design is a problem of major concern in some experiments, especially when measuring the characteristic of interest is costly, painful, or time-consuming. In this article, we investigate ratio-type estimators of the population mean of the study variable, involving either the first or the third quartile of the auxiliary variable, using ranked set sampling (RSS) and extreme ranked set sampling (ERSS) schemes. The properties of the estimators are obtained. The estimators under RSS and ERSS are compared to their counterparts under simple random sampling (SRS) for normal data. The numerical results show that the estimators under RSS and ERSS are significantly more efficient than their counterparts under SRS.
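To make the comparison concrete, here is a small simulation sketch, not taken from the article, of a quartile-adjusted ratio-type estimator of the population mean under SRS and under RSS with perfect ranking on the auxiliary variable. The population model, set size, number of cycles, and the exact estimator form are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def srs_sample(y, x, n):
    """Simple random sample of size n without replacement."""
    idx = rng.choice(len(y), size=n, replace=False)
    return y[idx], x[idx]

def rss_sample(y, x, m, r):
    """Ranked set sample of size m*r: in each of r cycles, draw m sets of m
    units, rank each set on the auxiliary variable x, and keep the i-th
    ranked unit from the i-th set (perfect ranking assumed)."""
    ys, xs = [], []
    for _ in range(r):
        for i in range(m):
            idx = rng.choice(len(y), size=m, replace=False)
            pick = idx[np.argsort(x[idx])[i]]
            ys.append(y[pick])
            xs.append(x[pick])
    return np.array(ys), np.array(xs)

def ratio_estimator(y_s, x_s, x_mean, x_q1):
    """Quartile-adjusted ratio-type estimator of the population mean of y."""
    return np.mean(y_s) * (x_mean + x_q1) / (np.mean(x_s) + x_q1)

# Synthetic population with a positively correlated auxiliary variable.
N = 10_000
x = rng.normal(50.0, 10.0, N)
y = 2.0 * x + rng.normal(0.0, 5.0, N)
mu_x, q1_x = x.mean(), np.quantile(x, 0.25)

reps, m, r = 2_000, 4, 5          # total sample size n = m * r = 20
est_srs = [ratio_estimator(*srs_sample(y, x, m * r), mu_x, q1_x) for _ in range(reps)]
est_rss = [ratio_estimator(*rss_sample(y, x, m, r), mu_x, q1_x) for _ in range(reps)]

mse_srs = np.mean((np.array(est_srs) - y.mean()) ** 2)
mse_rss = np.mean((np.array(est_rss) - y.mean()) ** 2)
print(f"MSE under SRS: {mse_srs:.4f}   MSE under RSS: {mse_rss:.4f}")
```

With a strongly correlated auxiliary variable, the RSS-based estimator typically shows a smaller simulated MSE than its SRS counterpart, mirroring the kind of efficiency gains the article reports for RSS and ERSS.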
Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. Due to their first-order semantics, these languages (in their classical form) are not suitable for representing and handling uncertainty. A probabilistic extension of a lightweight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension of the propositionally closed DL ALC. We present a tableau-based procedure for deciding consistency and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.
We prove that, for any $t \ge 3$, there exists a constant c = c(t) > 0 such that any d-regular n-vertex graph with second largest eigenvalue in absolute value λ satisfying $\lambda \le c{d^{t - 1}}/{n^{t - 2}}$ contains vertex-disjoint copies of $K_t$ covering all but at most ${n^{1 - 1/(8{t^4})}}$ vertices. This provides further support for the conjecture of Krivelevich, Sudakov and Szabó (Combinatorica 24 (2004), pp. 403–426) that (n, d, λ)-graphs with n ∈ 3ℕ and $\lambda \le c{d^2}/n$ for a suitably small absolute constant c > 0 contain triangle-factors. Our arguments combine tools from linear programming with probabilistic techniques, and apply them in a certain weighted setting. We expect this method will be applicable to other problems in the field.
We build a computational framework to support the planning of development and the evaluation of budgetary strategies toward the 2030 Agenda. The methodology takes into account some of the complexities of the political economy underpinning the policymaking process: the multidimensionality of development, the interlinkages between these dimensions, and the inefficiencies of policy interventions, as well as the institutional factors that promote or discourage these inefficiencies. The framework is scalable and usable even with the limited information that is publicly available: development-indicator data. However, it can be further refined as more data become available, for example, on public expenditure. We demonstrate its use through an application to the Mexican federal government. For this, we infer historical policy priorities, that is, the non-observable allocations of transformative resources that generated past changes in development indicators. We also show how to use the tool to assess the feasibility of development goals, to measure policy coherence, and to identify accelerators. Overall, the framework and its computational tools allow policymakers and other stakeholders to adopt a complexity (and quantitative) view when tackling the challenges of the Sustainable Development Goals.
For research in the fields of engineering asset management (EAM) and system health, relevant data resides in the information systems of the asset owners, typically industrial corporations or government bodies. For academics, accessing EAM datasets for research purposes can be a difficult and time-consuming task. To facilitate a more consistent approach toward releasing asset-related data, we have developed a data risk assessment tool (DRAT). This tool evaluates, and suggests controls to manage, the risks associated with the release of EAM datasets to academic entities for research purposes. Factors considered in developing the tool include where accountability for approval sits within organizations, what affects an individual manager’s willingness to approve release, and how trust between universities and industry can be established and damaged. This paper describes the design of the DRAT tool and demonstrates its use on case studies provided by EAM owners for past research projects. The DRAT tool is currently being used to manage the data release process in a government-industry-university research partnership.
Inaccuracy and information measures based on the cumulative residual entropy are quite useful and have attracted considerable attention in many fields, including reliability theory. Using a point-process martingale approach and a compensator version of Kumar and Taneja's generalized inaccuracy measure of two nonnegative continuous random variables, we define here an inaccuracy measure between two coherent systems when the lifetimes of their common components are observed. We then extend the results to the situation in which the components of the systems are subject to failure according to a doubly stochastic Poisson process.
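For orientation, the display below recalls the standard unparameterized forms of the cumulative residual entropy of a nonnegative random variable X and of the cumulative residual inaccuracy between two survival functions; the article itself works with a compensator version of Kumar and Taneja's generalized measure, of which these are only the simplest special case.

$$\mathcal{E}(X) = -\int_0^{\infty} \bar F(x)\,\log \bar F(x)\,\mathrm{d}x, \qquad K(\bar F, \bar G) = -\int_0^{\infty} \bar F(x)\,\log \bar G(x)\,\mathrm{d}x,$$

where $\bar F(x)=\Pr(X>x)$ and $\bar G(x)=\Pr(Y>x)$ are the survival functions of the nonnegative random variables X and Y.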
This comprehensive account of the concept and practices of deduction is the first to bring together perspectives from philosophy, history, psychology and cognitive science, and mathematical practice. Catarina Dutilh Novaes draws on all of these perspectives to argue for an overarching conceptualization of deduction as a dialogical practice: deduction has dialogical roots, and these dialogical roots are still largely present both in theories and in practices of deduction. Dutilh Novaes' account also highlights the deeply human and in fact social nature of deduction, as embedded in actual human practices; as such, it presents a highly innovative account of deduction. The book will be of interest to a wide range of readers, from advanced students to senior scholars, and from philosophers to mathematicians and cognitive scientists.
This textbook introduces fundamental concepts, major models, and popular applications of pattern recognition for a one-semester undergraduate course. To ensure student understanding, the text focuses on a relatively small number of core concepts, with an abundance of illustrations and examples. Concepts are reinforced with hands-on exercises to nurture the student's problem-solving skills. New concepts and algorithms are framed by real-world context and established as part of the big picture introduced in an early chapter. A problem-solving strategy is employed in several chapters to equip students with an approach to tackling new problems in pattern recognition. The text also points out common errors that newcomers to pattern recognition may encounter, and fosters readers' ability to find useful resources and independently solve new pattern recognition tasks through various worked examples. Students with an undergraduate understanding of mathematical analysis, linear algebra, and probability will be well prepared to master the concepts and mathematical analysis presented here.
In this paper, we apply flexible data-driven analysis methods to large-scale mass transit data to identify areas for improvement in the engineering and operation of urban rail systems. Specifically, we use data from automated fare collection (AFC) and automated vehicle location (AVL) systems to obtain a more precise characterisation of the drivers of journey time variance on the London Underground, and thus an improved understanding of delay. Total journey times are decomposed via a probabilistic assignment algorithm, and semiparametric regression is undertaken to disentangle the effects of passenger-specific travel characteristics from network-related factors. For total journey times, we find that network characteristics, primarily train speeds and headways, account for the majority of journey time variance. However, within the typically twice-as-onerous access and egress time components, passenger-level heterogeneity is more influential. On average, we find that intra-passenger heterogeneity represents 6% and 19% of the variance in access and egress times, respectively, and that inter-passenger effects have a similar or greater degree of influence than static network characteristics. The analysis shows that while network-specific characteristics are the primary drivers of journey time variance in absolute terms, a nontrivial proportion of passenger-perceived variance is influenced by passenger-specific characteristics. The findings have potential applications related to improving the understanding of passenger movements within stations; for example, the analysis can be used to assess the relative way-finding complexity of stations, which can in turn guide transit operators in targeting potential interventions.
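As a toy illustration of the kind of variance decomposition described above (not the paper's probabilistic assignment or semiparametric regression), the sketch below generates synthetic egress-time records and splits their variance into between-passenger and within-passenger shares using the law of total variance; the data-generating model and all coefficients are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical synthetic records standing in for linked AFC/AVL journey data.
n_passengers, trips_each = 500, 20
passenger_effect = rng.normal(0.0, 0.5, n_passengers)   # inter-passenger heterogeneity
rows = []
for p in range(n_passengers):
    for _ in range(trips_each):
        headway = rng.uniform(2.0, 6.0)                  # network factor (minutes)
        noise = rng.normal(0.0, 0.6)                     # intra-passenger variation
        egress = 3.0 + 0.3 * headway + passenger_effect[p] + noise
        rows.append((p, headway, egress))
df = pd.DataFrame(rows, columns=["passenger", "headway", "egress"])

# Law of total variance:
#   Var(egress) = Var(E[egress | passenger]) + E[Var(egress | passenger)]
between = df.groupby("passenger")["egress"].mean().var(ddof=0)
within = df.groupby("passenger")["egress"].var(ddof=0).mean()
total = df["egress"].var(ddof=0)
print(f"between-passenger share of variance: {between / total:.1%}")
print(f"within-passenger share of variance:  {within / total:.1%}")
```

Note that the within-passenger share here still bundles the network (headway) effect with intra-passenger noise; regressing egress times on network covariates first, as the paper's semiparametric approach does far more carefully, would separate those two sources.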