Visual place recognition (VPR) in condition-varying environments is still an open problem. Popular solutions are convolutional neural network (CNN)-based image descriptors, which have been shown to outperform traditional descriptors based on hand-crafted visual features. However, current CNN-based descriptors have two drawbacks: (a) high dimensionality and (b) limited generalization, leading to low efficiency and poor performance in real robotic applications. In this paper, we propose to use a convolutional autoencoder (CAE) to tackle this problem. We employ a high-level layer of a pre-trained CNN to generate features and train a CAE to map these features to a low-dimensional space, improving the condition invariance of the descriptor and reducing its dimension at the same time. We verify our method on four challenging real-world datasets involving significant illumination changes, and it is shown to be superior to the state-of-the-art. The code of our work is publicly available at https://github.com/MedlarTea/CAE-VPR.
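A minimal sketch of the general idea, assuming a PyTorch setup; the class name FeatureCAE, the layer sizes, and the 16x16 feature-map shape are illustrative placeholders rather than the authors' architecture. A small convolutional autoencoder is trained to reconstruct high-level CNN feature maps, and its bottleneck activations serve as the compact descriptor:

```python
# Sketch only: compress pre-trained CNN feature maps with a convolutional
# autoencoder and use the bottleneck code as a low-dimensional descriptor.
import torch
import torch.nn as nn

class FeatureCAE(nn.Module):  # hypothetical name, illustrative layer sizes
    def __init__(self, in_ch=512, code_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, code_ch, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_ch, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, feat):
        code = self.encoder(feat)    # low-dimensional code
        recon = self.decoder(code)   # reconstruction of the input features
        return code, recon

cae = FeatureCAE()
feat = torch.randn(8, 512, 16, 16)                 # high-level CNN features (assumed 16x16)
code, recon = cae(feat)
loss = nn.functional.mse_loss(recon, feat)         # reconstruction training loss
descriptor = code.flatten(1)                       # flattened compact descriptor
```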
We investigate how four internet of things (IoT) companies perceive the large quantity of community-generated content as a significant source of innovation. We study the extent to which these companies are willing to align their internal organisation with the external community dynamics and to define modes of collaboration that benefit all involved stakeholders. Four IoT companies adopting open-source hardware principles were selected as case studies. The data collection was based on 18 interviews capturing both the perspectives of the companies and their corresponding communities and the opinions of key experts in the domain. In our findings, we illustrate the different manifestations of open business models and the companies’ concrete approaches to working with external stakeholders. It is shown that companies with a business history more clearly claim sovereignty over their strategic decisions in a community-infused model, whereas the community-based companies pursue a community-led strategy.
Biobased composites, which are considered a sustainable alternative to plastics, have yet to make a significant impact on product design and manufacturing. A key reason for this is the perceptual handicaps associated with biobased composites, and this study aimed to understand the mechanisms behind biocomposite perception in the context of digital visuals. The study of digital biocomposite visuals demonstrated that material perception is influenced by the visual characteristics of the material. Analysis of the perceptual attributes of the materials pointed towards a clear ‘clustering’ of the materials against these attributes. The analysis shows that visual features such as fibres and surface appearance may affect aesthetic and functional evaluation, whereas age, gender, and polymer type have no effect. We also propose a reference framework to categorise biobased composites based on visual order.
The alternating direction method of multipliers (ADMM) has received much attention in fields such as optimization and computer science. The generalized ADMM (G-ADMM) proposed by Eckstein and Bertsekas incorporates an acceleration factor and is more efficient than the original ADMM. However, G-ADMM is not applicable to models in which the objective function value (or its gradient) is computationally costly or even impossible to compute. In this paper, we consider the two-block separable convex optimization problem with linear constraints, where only noisy estimates of the gradient of the objective function are accessible. Under this setting, we propose a stochastic linearized generalized ADMM (called SLG-ADMM) in which the two subproblems are approximated by linearization strategies, and we analyze its expected convergence rates and large-deviation properties. In particular, we show that the worst-case expected convergence rates of SLG-ADMM are $\mathcal{O}\left(N^{-1/2}\right)$ and $\mathcal{O}\left(\ln N \cdot N^{-1}\right)$ for general convex and strongly convex problems, respectively, where $N$ is the iteration number, and that, with high probability, SLG-ADMM has $\mathcal{O}\left(\ln N \cdot N^{-1/2}\right)$ and $\mathcal{O}\left(\left(\ln N\right)^{2} \cdot N^{-1}\right)$ constraint-violation and objective-error bounds for general convex and strongly convex problems, respectively.
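For orientation, a sketch of the problem class and of a generic linearized stochastic x-update of the kind described above; the symbols $G(x^k,\xi^k)$, $\beta$, $\lambda^k$, and $\eta_k$ are illustrative notation, and the paper's exact SLG-ADMM scheme (including how the Eckstein and Bertsekas acceleration factor enters the remaining updates) may differ:

```latex
% Problem class (standard two-block separable form):
\min_{x \in \mathcal{X},\, y \in \mathcal{Y}} \; \theta_1(x) + \theta_2(y)
\quad \text{s.t.} \quad Ax + By = b .

% Illustrative linearized stochastic x-update with noisy gradient G(x^k,\xi^k),
% penalty \beta, multiplier \lambda^k, and step size \eta_k (a sketch only):
x^{k+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \langle G(x^k,\xi^k),\, x \rangle
  + \tfrac{\beta}{2} \big\| Ax + By^k - b + \lambda^k/\beta \big\|^2
  + \tfrac{1}{2\eta_k} \| x - x^k \|^2 \Big\} .
```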
In this paper, I develop an algorithmic impossible-worlds model of belief and knowledge that provides a middle ground between models that entail that everyone is logically omniscient and those that are compatible with even the most egregious kinds of logical incompetence. In outline, the model entails that an agent believes (knows) $\phi $ just in case she can easily (and correctly) compute that $\phi $ is true and thus has the capacity to make her actions depend on whether $\phi $. The model thereby captures the standard view that belief and knowledge are constitutively connected to dispositions to act. As I explain, the model improves upon standard algorithmic models developed by Parikh, Halpern, Moses, Vardi, and Duc, among other ways, by integrating them into an impossible-worlds framework. The model also avoids some important disadvantages of recent candidate middle-ground models based on dynamic epistemic logic or step logic, and it can subsume their most important advantages.
The time-optimal path following (OPF) problem is to find the time evolution along a prescribed path in task space with the shortest duration. Numerical solution algorithms rely on an algorithm-specific (usually equidistant) sampling of the path parameter. However, this does not account for the dynamics in joint space, that is, the actual motion of the robot. Moreover, a well-known problem is that large joint velocities are obtained when approaching singularities, even for slow task-space motions. This can be avoided by a sampling in joint space, where the path parameter is replaced by the arc length. Such a discretization leads to an adaptive refinement of the task-space sampling according to the nonlinear forward kinematics and guarantees bounded joint velocities. The adaptive refinement is also beneficial for the numerical solution of the problem. It is shown that this yields trajectories with improved continuity compared to an equidistant sampling. The OPF problem is reformulated as a second-order cone program and solved numerically. The approach is demonstrated for a 6-DOF industrial robot following various paths in task space.
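As background, a commonly used convex reformulation underlying this class of methods (a sketch under the usual substitutions $b(s)=\dot{s}^2$ and $a(s)=\ddot{s}$, not necessarily the paper's exact formulation):

```latex
% Travel time along the path as a convex functional of b(s) = \dot{s}^2:
T = \int_{0}^{T} dt
  = \int_{s_0}^{s_1} \frac{ds}{\dot{s}}
  = \int_{s_0}^{s_1} \frac{ds}{\sqrt{b(s)}},
\qquad b'(s) = 2\,a(s), \quad b(s) \ge 0 .
% With velocity/torque limits that are linear in (a(s), b(s)), minimizing T
% over (a, b) is convex and can be cast as a second-order cone program.
```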
With the dramatic improvement in quality, machine translation has emerged as a tool widely adopted by language learners. Its use, however, has been a divisive issue in language education. We conducted an approximate replication of Lee (2020) on the impact of machine translation on EFL writing. This study used a mixed-methods approach combining the automatic text analyzer Coh-Metrix and human ratings, supplemented with questionnaires, interviews, and screen recordings. The findings support most of the original work, suggesting that machine translation can help language learners improve their EFL writing proficiency, specifically by strengthening lexical expression. Students generally hold positive attitudes towards machine translation, despite some skeptical views regarding its value. Most students express a strong wish to learn how to use machine translation effectively. We therefore suggest incorporating machine translation literacy instruction into the curriculum for language students.
The goal of few-shot semantic segmentation is to learn a segmentation model that can segment novel classes in query images when only a few annotated support examples are available. Due to large intra-class variations, building accurate semantic correlations remains challenging. Current methods typically use 4D kernels to learn the semantic correlation between feature maps. However, they still face the challenge of reducing computation and memory consumption while preserving the usefulness of the correlations they mine. In this paper, we propose the adaptively mining correlation network (AMCNet) to alleviate these issues. The key components of AMCNet are the proposed adaptive separable 4D kernel and the learnable pyramid correlation module, which form the basic block of the correlation encoder and provide a learnable concatenation operation over pyramid correlation tensors, respectively. Experiments on the PASCAL VOC 2012 dataset show that AMCNet surpasses the state-of-the-art method by $0.7\%$ and $2.2\%$ in 1-shot and 5-shot segmentation scenarios, respectively.
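As a rough illustration of what a separable 4D kernel can look like (assuming PyTorch; the class name and the plain two-factor construction are generic placeholders, and the paper's adaptive kernel and learnable pyramid concatenation add components beyond this sketch), a 4D convolution over a query-support correlation tensor can be factored into two 2D convolutions:

```python
import torch
import torch.nn as nn

class Separable4DConv(nn.Module):  # hypothetical name, illustrative only
    """Apply a '4D' convolution to a correlation tensor of shape
    (B, C, Hq, Wq, Hs, Ws) as two 2D convolutions: one over the query
    spatial dims (Hq, Wq) and one over the support spatial dims (Hs, Ws)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_q = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.conv_s = nn.Conv2d(out_ch, out_ch, k, padding=k // 2)

    def forward(self, corr):
        B, C, Hq, Wq, Hs, Ws = corr.shape
        # convolve over (Hq, Wq), folding the support dims into the batch
        x = corr.permute(0, 4, 5, 1, 2, 3).reshape(B * Hs * Ws, C, Hq, Wq)
        x = self.conv_q(x)
        Co = x.shape[1]
        # convolve over (Hs, Ws), folding the query dims into the batch
        x = x.reshape(B, Hs, Ws, Co, Hq, Wq).permute(0, 4, 5, 3, 1, 2)
        x = x.reshape(B * Hq * Wq, Co, Hs, Ws)
        x = self.conv_s(x)
        # restore the (B, C', Hq, Wq, Hs, Ws) layout
        return x.reshape(B, Hq, Wq, Co, Hs, Ws).permute(0, 3, 1, 2, 4, 5)

corr = torch.randn(2, 1, 13, 13, 13, 13)    # toy query-support correlation tensor
out = Separable4DConv(1, 16)(corr)          # -> (2, 16, 13, 13, 13, 13)
```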
The grey wolf optimizer (GWO), a relatively new intelligent optimization algorithm, has been successfully applied in many fields because of its simple structure, few adjustment parameters, and easy implementation. This paper addresses the shortcomings of GWO in path-planning applications, such as a tendency to fall into local optima, poor convergence, and poor accuracy, and proposes the turn point grey wolf optimization (TPGWO) algorithm. First, cross-mutation and roulette selection are used to enlarge the initial population of GWO and widen the search range. The convergence factor function is also made nonlinear: it expands the search range in the early stage and accelerates convergence in the later stage, and its parameters can be adjusted according to the number of obstacles and the map area to shift the turning point of the function, improving the convergence speed and accuracy of the algorithm. The number of turns and the turning angles of the resulting path are added to the fitness function as penalty terms to improve path accuracy. Optimization tests on 16 benchmark functions confirm the convergence and robustness of the TPGWO algorithm. Finally, TPGWO is applied to the path planning of a patrol robot in simulation experiments. The simulation results show that, compared with GWO and particle swarm optimization, TPGWO achieves better convergence, stability, and accuracy in patrol-robot path planning.
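For context, the sketch below shows the canonical GWO position update together with an illustrative nonlinear convergence factor; the specific turning-point function, the population initialization via cross-mutation and roulette, and the turn-count/turn-angle penalties proposed in the paper are not reproduced here:

```python
# Canonical GWO position update with an illustrative nonlinear decay of the
# convergence factor 'a' (sketch only, not the paper's TPGWO variant).
import numpy as np

def gwo_step(wolves, alpha, beta, delta, t, T, rng):
    """One GWO iteration: move each wolf toward the three best wolves."""
    a = 2.0 * (1.0 - (t / T) ** 2)            # nonlinear decay from 2 to 0 (illustrative)
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        cand = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            cand.append(leader - A * np.abs(C * leader - x))
        new[i] = np.mean(cand, axis=0)        # average of the three guided moves
    return new

rng = np.random.default_rng(0)
wolves = rng.uniform(-10, 10, size=(20, 2))   # 20 wolves in a 2-D search space
alpha, beta, delta = wolves[0], wolves[1], wolves[2]  # placeholder leaders
wolves = gwo_step(wolves, alpha, beta, delta, t=1, T=100, rng=rng)
```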
We introduce a new measure of inaccuracy based on extropy between the distribution of the nth upper (lower) record value and the parent random variable, and we discuss some of its properties. A characterization problem for the proposed extropy inaccuracy measure is studied. It is also shown that the defined measure of inaccuracy is invariant under scale but not under location transformations. We characterize certain specific lifetime distribution functions. Nonparametric estimators of the proposed measures, based on empirical and kernel methods, are also obtained. The performance of the estimators is discussed using a real dataset.
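For orientation, the extropy definition the abstract builds on, together with a commonly used Kerridge-type inaccuracy analogue; the paper's exact measure, formed with the record-value density, is defined there:

```latex
% Extropy of a continuous random variable X with density f:
J(X) = -\tfrac{1}{2} \int f^{2}(x)\, dx .
% A Kerridge-type extropy inaccuracy between densities f and g is commonly
% taken to be
J(f, g) = -\tfrac{1}{2} \int f(x)\, g(x)\, dx ,
% with g here playing the role of the nth upper (lower) record-value density.
```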
We consider a model of binary opinion dynamics where one opinion is inherently “superior” to the other, and social agents exhibit a “bias” toward the superior alternative. Specifically, it is assumed that an agent updates its choice to the superior alternative with probability $\alpha > 0$ irrespective of its current opinion and the opinions of other agents. With probability $1-\alpha$, it adopts the majority opinion among two randomly sampled neighbors and itself. We are interested in the time it takes for the network to converge to a consensus on the superior alternative. In a complete graph of size n, we show that, irrespective of the initial configuration of the network, the average time to reach consensus scales as $\Theta(n\,\log n)$ when the bias parameter $\alpha$ is sufficiently high, that is, $\alpha \gt \alpha_c$, where $\alpha_c$ is a threshold parameter that is uniquely characterized. When the bias is low, that is, when $\alpha \in (0,\alpha_c]$, we show that the same rate of convergence can only be achieved if the initial proportion of agents with the superior opinion is above a certain threshold $p_c(\alpha)$. If this is not the case, then we show that the network takes $\Omega(\exp(\Theta(n)))$ time on average to reach consensus.
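The update rule described above is simple enough to simulate directly; the following sketch (a toy Monte Carlo illustration with hypothetical parameter values) counts single-agent updates on a complete graph until consensus on the superior opinion is reached:

```python
# Toy simulation of the biased dynamics: with probability alpha an agent
# switches to the superior opinion (1); otherwise it adopts the majority
# among itself and two randomly sampled other agents.
import numpy as np

def time_to_consensus(n, alpha, p0, rng):
    opinions = (rng.random(n) < p0).astype(int)   # p0: initial fraction holding opinion 1
    steps = 0
    while opinions.sum() < n:
        i = rng.integers(n)
        if rng.random() < alpha:
            opinions[i] = 1                        # biased switch to the superior opinion
        else:
            j, k = rng.choice(np.delete(np.arange(n), i), size=2, replace=False)
            opinions[i] = int(opinions[i] + opinions[j] + opinions[k] >= 2)
        steps += 1
    return steps

rng = np.random.default_rng(0)
print(time_to_consensus(n=200, alpha=0.3, p0=0.1, rng=rng))
```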
The paper discusses the analytical expressions of a motion profile characterized by an elliptic jerk. This motion profile is obtained through a kinematic approach, defining the jerk profile and then obtaining the acceleration, velocity, and displacement laws by successive integrations. A dimensionless formulation is adopted for the sake of generality. The main characteristics of the profile are analyzed, outlining the relationships between the profile parameters. A kinematic comparison with other motion laws is carried out: trapezoidal velocity, trapezoidal acceleration, cycloidal, sinusoidal jerk, and modified sinusoidal jerk. Then, the features of these motion profiles are evaluated in a dynamic case study, assessing the vibrations induced in a second-order linear system with different levels of damping. The results show that the proposed motion law provides a good compromise between different performance indexes (settling time, maximum absolute values of velocity and acceleration).
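The kinematic construction described above (define the jerk, then integrate three times) can be illustrated numerically; the sketch below assumes a rest-to-rest move whose jerk consists of four half-ellipse lobes, an illustrative choice rather than the paper's exact dimensionless profile:

```python
# Numerical illustration (not the paper's closed-form laws): build a jerk
# profile from half-ellipse lobes, then recover acceleration, velocity,
# and displacement by successive numerical integration.
import numpy as np

T = 1.0                                   # dimensionless total time (assumed)
t = np.linspace(0.0, T, 4001)
dt = t[1] - t[0]

def half_ellipse(t, t0, t1):
    """Half-ellipse lobe on [t0, t1], zero elsewhere, unit peak."""
    m, r = 0.5 * (t0 + t1), 0.5 * (t1 - t0)
    inside = (t >= t0) & (t <= t1)
    return np.where(inside, np.sqrt(np.clip(1 - ((t - m) / r) ** 2, 0, 1)), 0.0)

# +jerk, -jerk, -jerk, +jerk lobes give a rest-to-rest move (illustrative split).
jerk = (half_ellipse(t, 0.00, 0.25) - half_ellipse(t, 0.25, 0.50)
        - half_ellipse(t, 0.50, 0.75) + half_ellipse(t, 0.75, 1.00))

acc = np.cumsum(jerk) * dt                # successive integrations
vel = np.cumsum(acc) * dt
pos = np.cumsum(vel) * dt
print(f"final velocity ~ {vel[-1]:.3e}, final acceleration ~ {acc[-1]:.3e}")
```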
Before reading and studying the results on random graphs included in the text, one should become familiar with the basic rules of asymptotic computation, learn how to find leading terms in combinatorial expressions and to choose suitable bounds for the binomials, and get acquainted with the probabilistic tools needed to study tail bounds, i.e., the probability that a random variable exceeds (or falls below) some real value. This chapter offers the reader a short description of these important technical tools used throughout the text.
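A representative example of the tail bounds this chapter has in mind (standard Chernoff bounds for the binomial distribution, stated here for orientation):

```latex
% Chernoff bounds for X ~ Bin(n, p) with mean \mu = np and 0 < \varepsilon \le 1:
\Pr\big( X \ge (1+\varepsilon)\mu \big) \le e^{-\varepsilon^{2}\mu/3},
\qquad
\Pr\big( X \le (1-\varepsilon)\mu \big) \le e^{-\varepsilon^{2}\mu/2}.
```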
Several studies have found that virtual exchange (VE) has a positive impact on intercultural effectiveness (IE) development. However, few VE studies have measured and unpacked perceived learning gains from VE in this area using data from multiple VEs and mixed-methods approaches. In this study, we explored the impact of VE on perceived IE development among pre-service teachers in two exchanges. Using k-means cluster analysis of reported gains in IE, we identified three groups of students who reported high, medium, and low IE gains respectively. The cluster analysis informed our qualitative analysis of students’ reflections on VE. Having analysed data from 486 diary entries at four successive time points, we identified three factors critical to students’ perceived IE development: the students’ ability to overcome challenges during VE, their level of engagement with their partners, and their engagement with cultural difference. These findings shed light on which experiences in VE influence participants’ perceptions of their intercultural learning. The study provides recommendations for the design of online collaborative learning programmes, such as VE, that might help address students’ diverse needs.
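As a brief illustration of the clustering step (hypothetical data and scikit-learn, not the study's actual analysis), k-means with three clusters can be used to split students into high-, medium-, and low-gain groups:

```python
# Illustrative sketch with fake data: k-means (k=3) applied to per-student
# IE gain scores to separate high/medium/low-gain groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
gains = np.concatenate(
    [rng.normal(m, 0.15, size=40) for m in (0.2, 0.8, 1.5)]   # hypothetical gain scores
).reshape(-1, 1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(gains)
for k in range(3):
    print(f"cluster {k}: n={np.sum(labels == k)}, mean gain={gains[labels == k].mean():.2f}")
```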
In this chapter, we see how many random edges are required for a particular fixed-size subgraph to appear w.h.p. In addition, we consider the distribution of the number of copies of strictly balanced subgraphs. From these general results, one can deduce thresholds for small trees, stars, cliques, bipartite cliques, and many other small subgraphs, which play an important role not only in the analysis of classic random graphs but also in the interpretation of characteristic features of real-world networks. Computing the frequency of small subgraphs is a fundamental problem in network analysis, used across diverse domains: bioinformatics, the social sciences, and studies of infrastructure networks.
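For orientation, the classical results this chapter builds toward can be summarized as follows (standard statements, with $v_H$, $e_H$ the numbers of vertices and edges of $H$ and $\mathrm{aut}(H)$ its number of automorphisms):

```latex
% Threshold for containment of a fixed graph H with at least one edge:
p^{*}(H) = n^{-1/m(H)},
\qquad
m(H) = \max_{H' \subseteq H,\; v_{H'} \ge 1} \frac{e_{H'}}{v_{H'}} .
% For strictly balanced H and p = c\, n^{-v_H/e_H}, the number of copies of H
% in G(n,p) converges in distribution to \mathrm{Po}\big(c^{e_H}/\mathrm{aut}(H)\big).
```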
In this chapter, we study some typical properties of the degree sequence of a random graph. We begin by discussing the typical degrees in a sparse random graph, i.e., one with cn/2 edges for some positive constant c. We prove some results on the asymptotic distribution of degrees. We continue by looking at the typical values of the minimum and maximum degrees in dense random graphs, i.e., when edge probability p is constant. Given these properties of the degree sequence of dense graphs, we can then describe a simple canonical labeling algorithm that enables one to solve the graph isomorphism problem on a dense random graph.
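For orientation, the sparse-case behaviour described above can be made concrete with the following standard facts, stated in the essentially equivalent binomial model:

```latex
% In G(n, c/n), for a fixed vertex v and fixed k:
\deg(v) \sim \mathrm{Bin}\!\big(n-1, \tfrac{c}{n}\big)
\;\Rightarrow\;
\Pr\big(\deg(v) = k\big) \to e^{-c}\, \frac{c^{k}}{k!},
% so the proportion of vertices of degree k concentrates around the
% Poisson(c) probability mass function.
```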