Radiofrequency ablation (RFA) is a minimally invasive, image-guided procedure in which tumors are heated inside the body with electrical current. RFA is commonly indicated for patients with limited local disease or who are not surgical candidates. Current RFA methods use multiple cords and wires that ergonomically complicate the procedure and risk cutting or shorting the circuit if damaged. This paper presents a wireless RFA technique based on electromagnetic induction. The transmitting and receiving coils were coupled to resonate at the same frequency to maximize power transfer. The receiving coil was connected to two insulated electrodes on a catheter, which allowed the current to flow into the targeted tissue. The prototype system was tested with ex-vivo bovine tissue, whose thermal and electrical properties are similar to those of human tissue. The setup can monitor the received power, efficiency, temperature, and ablation zone during ablation procedures. The maximum received power was 15 W, and the average maximum efficiency was 63.27%. The novel system was able to produce an ablation zone of up to 2 cm in non-perfused tissue. This proof of concept for performing RFA wirelessly via electromagnetic induction may merit further optimization.
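As a rough illustration of the resonant-coupling principle described above, the following Python sketch computes the resonant frequency of an LC tank and the standard maximum link efficiency of two coupled resonant coils; the component values, coupling coefficient, and quality factors are illustrative assumptions, not the paper's measured parameters:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def max_link_efficiency(k, Q1, Q2):
    """Maximum efficiency of an inductive link between two resonant coils.

    Uses the standard figure of merit U^2 = k^2 * Q1 * Q2:
        eta_max = U^2 / (1 + sqrt(1 + U^2))^2
    """
    u2 = k * k * Q1 * Q2
    return u2 / (1.0 + math.sqrt(1.0 + u2)) ** 2

# Illustrative values (not the paper's): 10 uH coils, 100 nF capacitors.
L, C = 10e-6, 100e-9
f0 = resonant_frequency(L, C)   # both coils tuned to this frequency, ~159 kHz
eta = max_link_efficiency(k=0.2, Q1=100, Q2=100)
print(f"f0 = {f0 / 1e3:.1f} kHz, eta_max = {eta:.2%}")
```

Tuning both coils to the same `f0` is what the abstract refers to as coupling the coils "to resonate at the same frequency": off resonance, the reactances no longer cancel and the achievable efficiency drops sharply.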
The purpose of this paper is to design a stabilizing controller for a car with n connected trailers. The proposed control algorithm is based on Lyapunov theory. The aim is to navigate the system toward a desired point while treating the wheel-slip phenomenon as the main source of uncertainty. First, mathematical models are presented. Then, a stabilizing control approach based on Lyapunov theory is derived. Subsequently, an uncertainty estimator is introduced to compensate for the wheel-slip effects. The obtained results demonstrate the convergence properties of the proposed control algorithm in the presence of slip.
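The Lyapunov-based stabilization idea can be sketched for the simplest case, a single kinematic unicycle with no trailers and no slip; this is a deliberate simplification of the paper's car-with-trailers system, used only to show the mechanics of the approach, and the gains and initial pose below are arbitrary:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def simulate(x, y, theta, k_rho=1.0, k_alpha=2.0, dt=0.01, steps=3000):
    """Lyapunov point stabilization of a unicycle toward the origin.

    In polar coordinates (rho = distance to goal, alpha = bearing error),
    the control law
        v = k_rho * rho * cos(alpha)
        w = k_alpha * alpha + k_rho * sin(alpha) * cos(alpha)
    makes V = 0.5 * (rho**2 + alpha**2) a Lyapunov function:
        Vdot = -k_rho * rho^2 * cos^2(alpha) - k_alpha * alpha^2 <= 0.
    """
    history = []
    for _ in range(steps):
        rho = math.hypot(x, y)                      # distance to the goal
        alpha = wrap(math.atan2(-y, -x) - theta)    # bearing error to the goal
        history.append(0.5 * (rho ** 2 + alpha ** 2))
        v = k_rho * rho * math.cos(alpha)
        w = k_alpha * alpha + k_rho * math.sin(alpha) * math.cos(alpha)
        x += v * math.cos(theta) * dt               # unicycle kinematics
        y += v * math.sin(theta) * dt
        theta = wrap(theta + w * dt)
    return math.hypot(x, y), history

rho_final, V = simulate(x=-2.0, y=1.0, theta=0.0)
print(f"final distance to goal: {rho_final:.6f}")   # converges toward 0
```

In the paper's setting, the slip enters the kinematics as an uncertainty term, and an estimator of it is added to the control law so that the same Lyapunov decrease argument still goes through.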
The variable stiffness actuator (VSA) is helpful in realizing post-collision safety strategies for safe human–robot interaction. The stiffness of the robot is reduced to protect the user from injury when a collision between the robot and a human occurs. However, the VSA has a mechanism limit constraint that can cause harm to users even if the stiffness is minimized. Accordingly, in this article, a concept combining a danger index with robust fault detection and isolation is presented and applied to an active–passive variable stiffness elastic actuator (APVSEA). The APVSEA can actively change joint stiffness as the danger index changes. Experimental results show that this concept can effectively identify the fault mode and provide additional protection measures to ensure the safety of users once the joint stiffness has been adjusted to its minimum.
In this work we consider three well-studied broadcast protocols: push, pull and push&pull. A key property of all these models, and an important reason for their popularity, is that they are presumed to be very robust, since they are simple, randomized and, crucially, do not explicitly utilize the global structure of the underlying graph. While sporadic results exist, there has been no systematic theoretical treatment quantifying the robustness of these models. Here we investigate this question with respect to two orthogonal aspects: (adversarial) modifications of the underlying graph and message transmission failures.
We explore in particular the following notion of local resilience: beginning with a graph, we investigate up to which fraction of its edges an adversary may delete at each vertex before the protocols need significantly more rounds to broadcast the information. Our main findings establish a separation among the three models. On the one hand, pull is robust with respect to all parameters we consider. On the other hand, push may slow down significantly even if the adversary modifies the degrees of the vertices by an arbitrarily small positive fraction. Finally, push&pull is robust when no message transmission failures are considered; otherwise it may be slowed down.
On the technical side, we develop two novel methods for the analysis of randomized rumour-spreading protocols. First, we exploit the notion of self-bounding functions to significantly simplify the round-based analysis: we show that for any graph the variance of the growth of the set of informed vertices is bounded by its expectation, so that concentration results follow immediately. Second, in order to control adversarial modifications of the graph, we make use of a powerful tool from extremal graph theory, namely Szemerédi’s Regularity Lemma.
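For intuition, the three protocols can be simulated directly. The Python sketch below runs them on the complete graph K_n with an optional per-message failure probability `fail_p` modeling the transmission failures discussed above; the paper's analysis concerns general graphs, and the complete graph is used here only as a convenient test case:

```python
import random

def broadcast_rounds(n, protocol, fail_p=0.0, seed=0):
    """Rounds until all n vertices of the complete graph K_n are informed.

    protocol: 'push'      (informed vertices send to a random neighbour),
              'pull'      (uninformed vertices ask a random neighbour),
              'push&pull' (both).
    Each transmission independently fails with probability fail_p.
    """
    rng = random.Random(seed)
    informed = {0}                              # vertex 0 starts with the rumour
    rounds = 0
    while len(informed) < n:
        rounds += 1
        new = set()
        for v in range(n):
            u = rng.randrange(n - 1)            # uniform random neighbour of v in K_n
            u = u if u < v else u + 1
            if rng.random() < fail_p:           # the message is lost this round
                continue
            if protocol in ('push', 'push&pull') and v in informed:
                new.add(u)                      # v pushes the rumour to u
            if protocol in ('pull', 'push&pull') and v not in informed and u in informed:
                new.add(v)                      # v pulls the rumour from u
        informed |= new
    return rounds

for proto in ('push', 'pull', 'push&pull'):
    print(proto, broadcast_rounds(500, proto))  # roughly logarithmically many rounds
```

Under push the informed set can at most double per round, so at least log2(n) rounds are always needed; the classical result is that on K_n all three protocols finish in O(log n) rounds with high probability.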
Considering the Panigrahi and Chatterjee (2017) model for a variable generalised Chaplygin gas, in this paper we find an analytic expression for the adiabatic compressibility β_S of this kind of exotic matter. The behaviour of the adiabatic compressibility is analyzed in the limits of high and low pressure. The derived equation for β_S is then used to deduce the heat capacity at constant pressure C_p of the variable generalised Chaplygin gas.
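For orientation, a back-of-envelope version of this computation for the simpler constant-B Chaplygin gas (fixed total mass M, adiabatic conditions) runs as follows; the paper's variable generalised case, in which B depends on the scale factor, modifies this result:

```latex
p = -\frac{B}{\rho^{\alpha}}, \qquad \rho = \frac{M}{V}
\;\Longrightarrow\; p = -\frac{B}{M^{\alpha}}\,V^{\alpha},
\qquad
\left(\frac{\partial p}{\partial V}\right)_{\!S} = \frac{\alpha p}{V},
\qquad
\beta_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_{\!S}
        = -\frac{1}{\alpha p} = \frac{1}{\alpha\,|p|} > 0 \quad (p < 0).
```

In this toy case β_S vanishes in the high-pressure limit and diverges as p approaches zero from below, matching the qualitative limiting behaviour analyzed in the paper.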
Benchmarks can be a useful step toward the goals of the field (when the benchmark is on the critical path), as demonstrated by the GLUE benchmark, and deep nets such as BERT and ERNIE. The case for other benchmarks such as MUSE and WN18RR is less well established. Hopefully, these benchmarks are on a critical path toward progress on bilingual lexicon induction (BLI) and knowledge graph completion (KGC). Many KGC algorithms have been proposed such as Trans[DEHRM], but it remains to be seen how this work improves WordNet coverage. Given how much work is based on these benchmarks, the literature should have more to say than it does about the connection between benchmarks and goals. Is optimizing P@10 on WN18RR likely to produce more complete knowledge graphs? Is MUSE likely to improve Machine Translation?
A well-known observation of Lovász is that if a hypergraph is not 2-colourable, then at least one pair of its edges intersects in a single vertex. In this short paper we consider the quantitative version of Lovász’s criterion: we ask how many pairs of edges intersecting in a single vertex a non-2-colourable n-uniform hypergraph must contain. Our main result is an exact answer to this question, which further characterizes all the extremal hypergraphs. The proof combines Bollobás’s two-families theorem with Pluhár’s randomized colouring algorithm.
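Both quantities in question are easy to compute by brute force on small examples. The sketch below checks Lovász's criterion on the Fano plane, the classical non-2-colourable 3-uniform hypergraph, in which every pair of edges meets in exactly one vertex:

```python
from itertools import combinations, product

def is_2_colourable(vertices, edges):
    """Brute force: is there a red/blue colouring with no monochromatic edge?"""
    vertices = list(vertices)
    for colouring in product((0, 1), repeat=len(vertices)):
        col = dict(zip(vertices, colouring))
        if all(len({col[v] for v in e}) == 2 for e in edges):
            return True
    return False

def singleton_intersections(edges):
    """Number of pairs of edges meeting in exactly one vertex."""
    return sum(1 for e, f in combinations(edges, 2)
               if len(set(e) & set(f)) == 1)

# The Fano plane: 7 points, 7 lines, the smallest non-2-colourable
# 3-uniform hypergraph.
fano = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
        (2, 5, 7), (3, 4, 7), (3, 5, 6)]
print(is_2_colourable(range(1, 8), fano))  # False: not 2-colourable
print(singleton_intersections(fano))       # 21: all C(7,2) pairs meet in one vertex
```

Consistently with Lovász's observation, the non-2-colourable Fano plane has (many) pairs of edges meeting in a single vertex; the paper determines the exact minimum number of such pairs.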
In ML-style module type theory, sealing often leads to situations in which type variables must leave scope, and this creates a need for signatures that avoid such variables. Unfortunately, in general, there is no best signature that avoids a variable, so modules do not always enjoy principal signatures. This observation is called the avoidance problem. In the past, the problem has been circumvented using a variety of devices for moving variables so they can remain in scope. These devices work, but have heretofore lacked a logical foundation. They have also lacked a presentation in which the dynamic semantics is given on the same phrases as the static semantics, which limits their applications. We can provide a best supersignature avoiding a variable by fiat, by adding an existential signature that is the least upper bound of its instances. This idea is old, but a workable metatheory has not previously been worked out. This work resolves the metatheoretic issues using ideas borrowed from focused logic. We show that the new theory results in a type discipline very similar to the aforementioned devices used in prior work. In passing, this gives a type-theoretic justification for the generative stamps used in the early days of the static semantics of ML modules. All the proofs are formalized in Coq.
This study examines changes in individual social capital during adult life over a 19-year period. Social capital theory and life course theory are combined, and it is argued that changes in social networks do not necessarily go together with changes in social capital: while personal networks are known to decline in size with age, social capital can be expected to accumulate, in particular for those who had a better starting position and therefore more resources to share. Panel data from the Survey of the Social Networks of the Dutch (SSND, 1999–2018) at four measurement points are employed to examine this argument. Social capital is measured with the position generator instrument, and three indicators, namely resource extensity, mean prestige access, and resource range, are analyzed. Results of fixed-effects models show that, on average, people maintain access to social capital, and that men and the higher educated gain social capital over the life course, as opposed to women and the lower educated. Implications for the understanding of the reproduction of social inequality are discussed. The paper concludes with a reflection on the value of ego-centered network analysis in the era of big data and data science.
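A minimal sketch of how the three position-generator indicators can be computed from one respondent's answers; the occupations and prestige scores below are illustrative placeholders, not the SSND items, and resource range is taken here in its usual sense as the spread between the highest and lowest accessed prestige:

```python
def position_generator(responses, prestige):
    """Position-generator indicators for a single respondent.

    responses: {occupation: bool} -- does the respondent know someone
    in that occupation?  prestige: {occupation: prestige score}.
    Returns (resource extensity, mean prestige access, resource range).
    """
    accessed = [prestige[o] for o, known in responses.items() if known]
    extensity = len(accessed)                                  # positions accessed
    mean_prestige = sum(accessed) / extensity if accessed else 0.0
    res_range = max(accessed) - min(accessed) if accessed else 0.0
    return extensity, mean_prestige, res_range

# Illustrative items, not the SSND occupation list.
prestige = {'lawyer': 85, 'engineer': 70, 'nurse': 55, 'cleaner': 25}
answers = {'lawyer': True, 'engineer': False, 'nurse': True, 'cleaner': True}
print(position_generator(answers, prestige))  # (3, 55.0, 60)
```

Fixed-effects panel models, as used in the study, then track within-person change in these indicators across the four measurement waves.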
Community-based physical activity programs, such as the Recreovía, are effective in promoting healthy behaviors in Latin America. To understand the Recreovía’s challenges and scalability, we characterized its social network longitudinally while studying its participants’ social cohesion and interactions. First, we constructed the Main network of the program’s Facebook profile in 2013 to determine the main stakeholders and communities of participants. Second, we studied the Temporal network growth of the Facebook profiles of three Recreovía locations from 2008 to 2016. We implemented a Time Windows in Networks algorithm to determine observation periods, and a scaling model of city growth to measure social cohesion over time. Our results show physical activity instructors as the main stakeholders (20.84% of the network’s nodes). As evidence of emerging cohesion, we found: (1) incremental growth in Facebook users (43–272 nodes), friendships (55–2565 edges), clustering coefficient (0.19–0.21), and density (0.04–0.07); (2) no preferential-attachment behavior; and (3) super-linear growth of social cohesion, with 1.73 new friendships per joining user. Our results underscore the physical activity instructors’ influence and the cohesion that emerges in innovation periods as a co-benefit of the program. This analysis relates the social and healthy-behavior dimensions of a program occurring in natural environments under a systemic approach.
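The super-linearity check behind finding (3) can be sketched as a log-log regression of friendships against users (edges ~ nodes**beta, with beta > 1 indicating super-linear growth). The intermediate snapshots below are invented for illustration; only the endpoints (43–272 nodes, 55–2565 edges) come from the abstract, so the fitted exponent will not match the reported figure exactly:

```python
import math

def scaling_exponent(nodes, edges):
    """Least-squares slope of log(edges) vs log(nodes): edges ~ nodes**beta."""
    xs = [math.log(n) for n in nodes]
    ys = [math.log(e) for e in edges]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Endpoints from the abstract; the two middle snapshots are hypothetical.
nodes = [43, 90, 150, 272]
edges = [55, 300, 900, 2565]
beta = scaling_exponent(nodes, edges)
print(f"beta = {beta:.2f}")  # beta > 1: super-linear growth of friendships
```

This mirrors the urban-scaling methodology the study borrows: fit a power law between system size and the cohesion measure and read super-linearity off the exponent.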
We give a fully polynomial-time randomized approximation scheme (FPRAS) for the number of bases in bicircular matroids. This is a natural class of matroids for which counting bases exactly is #P-hard and yet approximate counting can be done efficiently.
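A base of the bicircular matroid of a graph G is a maximum-size edge set in which every connected component contains at most one cycle (equivalently, no more edges than vertices). For tiny graphs, the count that the FPRAS approximates can be obtained by brute force, as in this illustrative (exponential-time) sketch:

```python
from itertools import combinations

def components(edges):
    """Connected components (vertex sets) of the graph spanned by the edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x])
        seen |= comp
        comps.append(comp)
    return comps

def is_independent(edges):
    """Independent in the bicircular matroid: each component has at most
    one cycle, i.e. no more edges than vertices."""
    for comp in components(edges):
        m = sum(1 for u, v in edges if u in comp and v in comp)
        if m > len(comp):
            return False
    return True

def count_bases(edges):
    """Count bases by exhaustive search (fine only for tiny graphs)."""
    rank = max(len(s) for r in range(len(edges) + 1)
               for s in combinations(edges, r) if is_independent(s))
    return sum(1 for s in combinations(edges, rank) if is_independent(s))

k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(count_bases(k4))  # 15: every 4-edge subset of K4 is a base here
```

Note that, unlike in the graphic matroid, a single cycle is independent here; the #P-hardness of the exact count is what makes the FPRAS of the paper the right goal.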