This paper introduces a non-linear, continuous-time opinion dynamics model with additive noise and state-dependent interaction rates between agents. The model features interaction rates that are proportional to a negative power of the opinion distance. We establish a non-local partial differential equation for the distribution of opinion distances and use Mellin transforms to provide an explicit formula for its stationary solution, when it exists. Our approach leads to new qualitative and quantitative results on this type of dynamics. To the best of our knowledge, these Mellin transform results are the first quantitative results on the equilibria of opinion dynamics with distance-dependent interaction rates. The closed-form expressions for this class of dynamics are obtained in the two-agent case. However, the results can be used in mean-field models featuring several agents whose interaction rates depend on the empirical average of their opinions. The technique also applies to linear dynamics, namely with a constant interaction rate, on an interaction graph.
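The two-agent setting described above can be illustrated with a minimal Euler–Maruyama simulation. The drift form and all parameter values below (kappa, a, sigma) are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-agent instance of noisy opinion dynamics with a
# distance-dependent interaction rate kappa*|x_i - x_j|^(-a):
#   dx_i = -kappa * |x_i - x_j|^(-a) * (x_i - x_j) dt + sigma dW_i,
# so the opinion distance d = x_1 - x_2 satisfies
#   dd = -2*kappa*sign(d)*|d|^(1-a) dt + sigma*sqrt(2) dW,
# whose drift stays bounded near d = 0 when a < 1.
kappa, a, sigma = 1.0, 0.5, 0.5
dt, n_steps = 1e-3, 100_000
noise = sigma * np.sqrt(2.0 * dt) * rng.standard_normal(n_steps)

d, samples = 1.0, []
for t in range(n_steps):
    drift = -2.0 * kappa * np.sign(d) * abs(d) ** (1.0 - a)
    d += drift * dt + noise[t]
    if t >= n_steps // 2:          # discard burn-in
        samples.append(abs(d))

mean_distance = np.mean(samples)   # empirical stationary mean opinion distance
```

The empirical distribution of `abs(d)` after burn-in approximates the stationary distance distribution that the paper characterizes in closed form via Mellin transforms.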
This paper considers a variant of the classical Cramér–Lundberg model that is particularly appropriate in the credit context, with the distinguishing feature that it corresponds to a finite number of obligors. The focus is on computing the ruin probability, i.e. the probability that the initial reserve, increased by the interest received from the obligors and decreased by the losses due to defaults, drops below zero. As well as an exact analysis (in terms of transforms) of this ruin probability, an asymptotic analysis is performed, including an efficient importance-sampling-based simulation approach.
The base model is extended in multiple dimensions: (i) we consider a model in which there may, in addition, be losses that do not correspond to defaults; (ii) we analyze a model in which the individual obligors are coupled via a regime-switching mechanism; (iii) we extend the model so that between losses the reserve process behaves as a Brownian motion rather than a deterministic drift; and (iv) we consider a set-up with multiple groups of statistically identical obligors.
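For intuition, the ruin probability of the classical Cramér–Lundberg model (not the paper's finite-obligor variant) can be estimated by crude Monte Carlo; all parameter values below are illustrative:

```python
import numpy as np

# Monte Carlo sketch of a finite-horizon ruin probability for a classical
# Cramér–Lundberg reserve: U(t) = u + c*t - (sum of claims), where claims
# arrive as a Poisson(lam) process with Exp(mean mu) sizes. Ruin can only
# occur at claim epochs, so it suffices to check the reserve after each claim.
def ruin_probability(u, c, lam, mu, horizon, n_paths=5_000, seed=1):
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, reserve = 0.0, u
        while True:
            dt = rng.exponential(1.0 / lam)    # inter-claim time
            if t + dt > horizon:
                break                          # survived the horizon
            t += dt
            reserve += c * dt                  # premium income accrued
            reserve -= rng.exponential(mu)     # claim payout
            if reserve < 0:
                ruined += 1
                break
    return ruined / n_paths

p = ruin_probability(u=5.0, c=1.2, lam=1.0, mu=1.0, horizon=50.0)
```

For exponential claims the infinite-horizon ruin probability has the classical closed form psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c)*u); with the parameters above psi(5) is roughly 0.36, and the finite-horizon estimate lies slightly below it. The paper's interest is in exact transforms, asymptotics, and importance sampling rather than crude simulation of this kind.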
We study, under mild conditions, the weak approximation constructed from a standard Poisson process for a class of Gaussian processes, and establish its sample path moderate deviations. The techniques consist of a good asymptotic exponential approximation in moderate deviations, the Besov–Lévy modulus embedding, and an exponential martingale technique. Moreover, our results are applied to the weak approximations associated with the moving average of Brownian motion, fractional Brownian motion, and an Ornstein–Uhlenbeck process.
Coupling-from-the-past (CFTP) methods have been used to generate perfect samples from finite Gibbs hard-sphere models, an important class of spatial point processes consisting of spheres whose centers lie in a bounded region and are distributed as a homogeneous Poisson point process (PPP) conditioned so that the spheres do not overlap. We propose an alternative importance-sampling-based rejection methodology for perfect sampling of these models. We analyze the asymptotic expected running-time complexity of the proposed method when the intensity of the reference PPP increases to infinity while the (expected) sphere radius decreases to zero at varying rates. We further compare the performance of the proposed method, analytically and numerically, with that of a naive rejection algorithm and of popular dominated CFTP algorithms. Our analysis relies upon identifying large-deviations decay rates of the non-overlapping probability of spheres whose centers are distributed as a homogeneous PPP.
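The naive rejection algorithm used as a baseline above is simple to state: repeatedly draw a full PPP configuration and accept it only if no two spheres overlap. A minimal sketch on the unit square, with illustrative intensity and radius:

```python
import numpy as np

rng = np.random.default_rng(2)

# Naive rejection sampler for a hard-sphere model on the unit square:
# draw sphere centers from a homogeneous PPP(intensity), then accept the
# whole configuration only if every pair of centers is at distance >= 2r.
def sample_hard_spheres(intensity, r, max_tries=100_000):
    for _ in range(max_tries):
        n = rng.poisson(intensity)
        centers = rng.random((n, 2))
        if n < 2:
            return centers                   # trivially non-overlapping
        diff = centers[:, None, :] - centers[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))  # pairwise center distances
        np.fill_diagonal(dist, np.inf)
        if dist.min() >= 2 * r:              # no overlap: perfect sample
            return centers
    raise RuntimeError("acceptance probability too low for naive rejection")

pts = sample_hard_spheres(intensity=20, r=0.02)
```

The acceptance probability is exactly the non-overlapping probability whose large-deviations decay rate drives the complexity analysis in the paper; it degenerates quickly as the intensity grows, which is what motivates the importance-sampling-based alternative.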
In the classical simple random walk the steps are independent, that is, the walker has no memory. In contrast, in the elephant random walk, which was introduced by Schütz and Trimper [19] in 2004, the next step always depends on the whole path so far. Our main aim is to prove analogous results when the elephant has only a restricted memory, for example remembering only the most remote step(s), the most recent step(s), or both. We also extend the models to cover more general step sizes.
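The full-memory elephant random walk of Schütz and Trimper admits a very short simulation; the memory parameter value below is illustrative:

```python
import numpy as np

# Elephant random walk (Schütz & Trimper, 2004): at step n+1 the elephant
# recalls a uniformly random earlier step and repeats it with probability p,
# or takes the opposite step with probability 1 - p. p = 1/2 recovers the
# memoryless simple random walk.
def elephant_walk(n_steps, p, seed=3):
    rng = np.random.default_rng(seed)
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1 if rng.random() < 0.5 else -1
    for n in range(1, n_steps):
        remembered = steps[rng.integers(n)]   # uniformly chosen past step
        steps[n] = remembered if rng.random() < p else -remembered
    return steps.cumsum()                     # walker positions

path = elephant_walk(10_000, p=0.7)
```

The restricted-memory variants studied here change only the line selecting `remembered`: for example `steps[0]` when the elephant remembers only the most remote step, or `steps[n - 1]` when it remembers only the most recent one.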
A class of controlled branching processes with continuous time is introduced, and some limiting distributions are obtained in the critical case. An extension of this class to regenerative controlled branching processes with continuous time is proposed, and some asymptotic properties are considered.
Over the past 25 years, there has been an explosion of interest in the area of random tilings. The first book devoted to the topic, this timely text describes the mathematical theory of tilings. It starts from the most basic questions (which planar domains are tileable?), before discussing advanced topics about the local structure of very large random tessellations. The author explains each feature of random tilings of large domains, discussing several different points of view and leading on to open problems in the field. The book is based on upper-division courses taught to a variety of students but it also serves as a self-contained introduction to the subject. Test your understanding with the exercises provided and discover connections to a wide variety of research areas in mathematics, theoretical physics, and computer science, such as conformal invariance, determinantal point processes, Gibbs measures, high-dimensional random sampling, symmetric functions, and variational problems.
Tuberculosis (TB) is the leading cause of death attributable to a single pathogenic microorganism, Mycobacterium tuberculosis (MTB). This study aims to explore the associations of microRNA (miRNA) single-nucleotide polymorphisms (SNPs) with pulmonary TB (PTB) risk. A population-based case-control study was conducted, recruiting 168 newly diagnosed smear-positive PTB cases and 251 non-TB controls. SNPs located within miR-27a (rs895819), miR-423 (rs6505162), miR-196a-2 (rs11614913), miR-146a (rs2910164) and miR-618 (rs2682818) were selected, and the MassARRAY® MALDI-TOF System was employed for genotyping. SPSS 19.0 was adopted for statistical analysis, and non-conditional logistic regression was performed. Odds ratios (ORs) and 95% confidence intervals (95% CIs) were computed to estimate the associations. Associations of haplotypes with PTB risk were assessed with an online tool. The rs895819 CT/CC genotype was associated with reduced PTB risk in the female population (OR = 0.45, 95% CI: 0.23–0.98, P = 0.045). The haplotypes (combining rs895819, rs2682818, rs2910164, rs6505162 and rs11614913) TCCCT, TAGCC, CCCCC, CCGCT and TCGAT were associated with reduced PTB risk, with ORs of 0.67 (95% CI: 0.45–0.99), 0.49 (95% CI: 0.25–0.94), 0.34 (95% CI: 0.14–0.81), 0.22 (95% CI: 0.06–0.84) and 0.24 (95% CI: 0.07–0.79), respectively; the haplotypes TAGCT, CCCCT, CACCT and TCCAT were associated with increased PTB risk, with ORs of 3.63 (95% CI: 1.54–8.55), 2.20 (95% CI: 1.00–4.86), 3.90 (95% CI: 1.47–10.36) and 2.95 (95% CI: 1.09–7.99), respectively. In summary, the rs895819 CT/CC genotype was associated with reduced PTB risk in females; the haplotypes TCCCT, TAGCC, CCCCC, CCGCT and TCGAT were associated with reduced PTB risk, while TAGCT, CCCCT, CACCT and TCCAT were associated with increased risk.
A framework is proposed for generative models as a basis for digital twins or mirrors of structures. The proposal is based on the premise that deterministic models cannot account for the uncertainty present in most structural modeling applications. Two different types of generative models are considered here. The first is a physics-based model based on the stochastic finite element (SFE) method, which is widely used when modeling structures that have material and loading uncertainties imposed. Such models can be calibrated according to data from the structure and would be expected to outperform any other model if the modeling accurately captures the true underlying physics of the structure. The potential use of SFE models as digital mirrors is illustrated via application to a linear structure with stochastic material properties. For situations where the physical formulation of such models does not suffice, a data-driven framework is proposed, using machine learning and conditional generative adversarial networks (cGANs). The latter algorithm is used to learn the distribution of the quantity of interest in a structure with material nonlinearities and uncertainties. For the examples considered in this work, the data-driven cGANs model outperforms the physics-based approach. Finally, an example is shown where the two methods are coupled such that a hybrid model approach is demonstrated.
Using telematics technology, insurers are able to capture a wide range of data to better decode driver behavior, such as distance traveled and how drivers brake, accelerate, or make turns. Such additional information also helps insurers improve risk assessments for usage-based insurance, a recent industry innovation. In this article, we explore the integration of telematics information into a classification model to determine driver heterogeneity. For motor insurance during a policy year, we typically observe a large proportion of drivers with zero accidents, a lower proportion with exactly one accident, and a far lower proportion with two or more accidents. We here introduce a cost-sensitive multi-class adaptive boosting (AdaBoost) algorithm we call SAMME.C2 to handle such class imbalances. We calibrate the algorithm using empirical data collected from a telematics program in Canada and demonstrate an improved assessment of driving behavior using telematics compared with traditional risk variables. Using suitable performance metrics, we show that our algorithm outperforms other learning models designed to handle class imbalances.
Almost all hospitals are equipped with air-conditioning systems to provide a comfortable environment for patients and staff. However, the accumulation of dust and moisture within these systems increases the risk of microbial transmission and has on occasion been associated with outbreaks of infection. Nevertheless, the impact of air-conditioning on the transmission of microorganisms leading to infection remains largely uncertain. We conducted a scoping review to screen systematically the evidence for such an association in the face of the coronavirus disease 2019 epidemic. The PubMed, Embase and Web of Science databases were searched for relevant studies addressing microbial contamination of the air, its transmission and its association with infectious diseases. The review process yielded 21 publications, of which 17 were cross-sectional studies, three were cohort studies and one was a case-control study. Our analysis showed that, compared with naturally ventilated areas, microbial loads were significantly lower in air-conditioned areas, but the incidence of infections increased if the systems were not properly managed. The use of high-efficiency particulate air (HEPA) filtration not only decreased transmission of airborne bioaerosols and various microorganisms but also reduced the risk of infections. By contrast, contaminated air-conditioning systems in hospital rooms were associated with a higher risk of patient infection. Cleaning and maintenance of such systems to recommended standards should be performed regularly and, where appropriate, the installation of HEPA filters can effectively mitigate microbial contamination in the public areas of hospitals.
The concept of the “hybrid twin” (HT) has recently received growing interest thanks to the availability of powerful machine learning techniques. This twin concept combines physics-based models, within a model order reduction framework to obtain real-time feedback rates, with data science. Thus, the main idea of the HT is to develop on-the-fly data-driven models that correct possible deviations between measurements and physics-based model predictions. This paper focuses on the computation of stable, fast, and accurate corrections in the HT framework. Furthermore, regarding the delicate and important problem of stability, a new approach is proposed, introducing several subvariants and guaranteeing a low computational cost as well as stable time integration.
On 16–17 January 2020, four suspected mumps cases were reported to the local Public Health Authorities with an epidemiological link to a local school and football club. Of 18 suspected cases identified, 14 were included in this study. Laboratory results confirmed mumps virus as the cause and further sequencing identified genotype G. Our findings highlight that even with a high MMR vaccine coverage, mumps outbreaks in children and young adults can occur. Since most of the cases had documented immunity for mumps, we hypothesise that waning immunity or discordant mumps virus strains are likely explanations for this outbreak.
Concentrated random variables are frequently used to represent deterministic delays in stochastic models. The squared coefficient of variation ($\mathrm {SCV}$) of the most concentrated phase-type distribution of order $N$ is $1/N$. To further reduce the $\mathrm {SCV}$, concentrated matrix exponential (CME) distributions with complex eigenvalues were investigated recently. It was shown that the $\mathrm {SCV}$ of an order-$N$ CME distribution can be less than $n^{-2.1}$ for odd orders $N=2n+1$, and that a matrix exponential distribution exhibiting such a low $\mathrm {SCV}$ has complex eigenvalues. In this paper, we consider CME distributions with real eigenvalues (CME-R). We present efficient numerical methods for identifying the CME-R distribution with the smallest $\mathrm {SCV}$ for a given order $N$. Our investigations show that the $\mathrm {SCV}$ of the most concentrated CME-R of order $N=2n+1$ is less than $n^{-1.85}$. We also discuss how CME-R distributions can be used for numerical inverse Laplace transformation, which is beneficial when the Laplace transform function is impossible to evaluate at complex points.
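The $1/N$ bound quoted above is attained by the Erlang($N$) distribution (a sum of $N$ i.i.d. exponentials), which is a standard fact and easy to verify from its first two moments:

```python
# SCV of the Erlang(N) distribution with rate lam: as a sum of N i.i.d.
# Exp(lam) variables it has mean = N/lam and variance = N/lam^2, so
# SCV = variance / mean^2 = (N/lam^2) / (N/lam)^2 = 1/N, independent of lam.
def erlang_scv(N, lam=1.0):
    mean = N / lam
    variance = N / lam ** 2
    return variance / mean ** 2

scvs = {N: erlang_scv(N) for N in (1, 2, 5, 10)}
```

CME distributions trade the real (Erlang-like) eigenvalue structure for complex eigenvalues to push the SCV below $1/N$; the CME-R class studied in this paper asks how concentrated one can get while keeping the eigenvalues real.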
We report a familial cluster of 24 individuals infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The index case had a travel history, spent 24 days in the house before being tested, and was asymptomatic. Physical overcrowding in the house provided a favourable environment for intra-cluster transmission of infection. Restriction of movement of family members due to the countrywide lockdown limited spread in the community. Among the infected, only four individuals developed symptoms. Complete genome sequences of SARS-CoV-2 were retrieved by next-generation sequencing from eight clinical samples; these demonstrated 99.99% similarity with the Wuhan reference strain, and phylogenetic analysis revealed a distinct cluster lying in the B.6.6 Pangolin lineage.
We obtain a polynomial upper bound on the mixing time $T_{CHR}(\epsilon)$ of the coordinate Hit-and-Run (CHR) random walk on an $n$-dimensional convex body, where $T_{CHR}(\epsilon)$ is the number of steps needed to come within $\epsilon$ of the uniform distribution with respect to the total variation distance, starting from a warm start (i.e. a distribution whose density with respect to the uniform distribution on the convex body is bounded above by a constant). Our upper bound is polynomial in $n$, $R$, and $\frac{1}{\epsilon}$, where we assume that the convex body contains the $\Vert\cdot\Vert_\infty$ unit ball $B_\infty$ and is contained in its $R$-dilation $R\cdot B_\infty$. Whether CHR has a polynomial mixing time has been an open question.
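A single CHR step is easy to implement given only a membership oracle: pick a random coordinate direction, find the chord of the body through the current point along that axis, and jump to a uniform point on the chord. The bisection tolerance, bounding box, and example body below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# One coordinate Hit-and-Run step. `inside` is a membership oracle for the
# convex body; `t_bound` must bound the body's extent along each axis from
# the current point. The chord endpoints are located by bisection, which is
# valid because the intersection of a line with a convex body is an interval.
def chr_step(x, inside, rng, t_bound=20.0, tol=1e-7):
    i = rng.integers(len(x))
    e = np.zeros_like(x)
    e[i] = 1.0

    def extent(direction):         # max t >= 0 with x + t*direction inside
        a, b = 0.0, t_bound
        while b - a > tol:
            m = 0.5 * (a + b)
            if inside(x + m * direction):
                a = m
            else:
                b = m
        return a

    t = rng.uniform(-extent(-e), extent(e))
    return x + t * e               # uniform point on the axis-aligned chord

# Example body: the unit Euclidean ball in 3 dimensions, started at its center.
inside_ball = lambda y: np.dot(y, y) <= 1.0
x = np.zeros(3)
for _ in range(1000):
    x = chr_step(x, inside_ball, rng)
```

Sampling the chord uniformly makes the uniform distribution on the body stationary for this chain; the paper's contribution is the polynomial bound on how fast the chain approaches it from a warm start.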
Although the interferon-γ release assay (IGRA) has become a common diagnostic method for tuberculosis, its value in diagnosing tuberculosis in human immunodeficiency virus (HIV)-seropositive patients remains controversial. Therefore, this study systematically reviews the data to explore the diagnostic value of IGRA in HIV-infected individuals with active tuberculosis, aiming to provide a clinical basis for future clinical diagnosis of the disease.
Methods
Relevant studies on IGRA for diagnosing tuberculosis in HIV-infected patients were comprehensively collected from the Excerpta Medica Database (EMBASE), Medline, Cochrane Library, Chinese Sci-tech Periodical Full-text Database, Chinese Periodical Full-text Database, China National Knowledge Infrastructure (CNKI) and China Wanfang Data up to July 2020. Subsequently, Stata 15.0, an integrated statistical software package, was used to pool the sensitivity, specificity, diagnostic odds ratio (DOR), positive likelihood ratio (PLR) and negative likelihood ratio (NLR), and to construct receiver operating characteristic (ROC) curves.
Results
A total of 18 high-quality articles were selected, including 20 studies, 11 of which were related to QuantiFERON-TB Gold In-Tube (QFT-GIT) and nine to T-SPOT.TB. The meta-analysis indicated that the pooled sensitivity = 0.75 (95% CI 0.63–0.85), the pooled specificity = 0.82 (95% CI 0.66–0.92), PLR = 4.25 (95% CI 1.97–9.18), NLR = 0.30 (95% CI 0.18–0.50), DOR = 14.21 (95% CI 4.38–46.09) and the area under summary ROC curve was 0.85 (95% CI 0.81–0.88).
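The pooled summary measures are linked by standard identities; the quick check below derives PLR, NLR and DOR from the pooled sensitivity and specificity alone. Note that the meta-analysis pools each measure jointly across studies, so these derived point estimates are close to, but not identical with, the reported ones:

```python
# Standard relations between diagnostic accuracy measures, applied to the
# pooled sensitivity and specificity reported above.
sens, spec = 0.75, 0.82

plr = sens / (1 - spec)     # positive likelihood ratio
nlr = (1 - sens) / spec     # negative likelihood ratio
dor = plr / nlr             # diagnostic odds ratio

print(round(plr, 2), round(nlr, 2), round(dor, 2))  # prints: 4.17 0.3 13.67
```

These compare with the reported pooled PLR = 4.25, NLR = 0.30 and DOR = 14.21; the small discrepancies reflect the separate pooling of each measure.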
Conclusion
IGRA has a good diagnostic value and therefore can aid in the preliminary screening of active tuberculosis in HIV-infected individuals. Its diagnostic effectiveness can be improved by modifying and optimizing the assay design.
Herein, we report the synthesis and characterization of a novel class of polymer composites based on onion-like carbons (OLCs) and silicon diimide, prepared by a salt-free polycondensation reaction. The pyridine-catalyzed polymerization was carried out in the presence of various contents (0.1, 0.5, 1, and 2 wt%) of carboxyl-functionalized OLCs in an argon atmosphere to provide composites with 0D nanocarbons well dispersed in, and covalently incorporated throughout, the 3D matrix of the silicon diimide polymer. A strong dependence of the optical properties (UV absorbance and photoluminescence spectra) on the content of functionalized OLCs incorporated within the polymer matrix was observed. The novel polymer composites are suitable precursors for the design of advanced and multifunctional 0D-nanocarbon-containing Si3N4-based ceramic nanocomposites.
The COVID-19 pandemic confronts society with a dilemma between (in)visibility, security, and care. While invisibility might be sought by unregistered and undocumented people, being counted and thus visible during a pandemic is a precondition of existence and care. This article asks whether and how unregistered populations like undocumented migrants should be included in statistics and other “counting” exercises devised to track virus diffusion and its impact. In particular, the paper explores how such inclusion can be just, given that for unregistered people visibility is often associated with surveillance. It also reflects on how policymaking can act upon the relationship between data, visibility, and populations in pragmatic terms. Conversing with science and technology studies and critical data studies, the paper frames the dilemma between (in)visibility and care as an issue of sociotechnical nature and identifies four criteria linked to the sociotechnical characteristics of the data infrastructure enabling visibility. It surveys “counting” initiatives targeting unregistered and undocumented populations undertaken by European countries in the aftermath of the pandemic, and illustrates the medical, economic, and social consequences of invisibility. On the basis of our analysis, we outline four scenarios that articulate the visibility/invisibility binary in novel, nuanced terms, and identify in the “de facto inclusion” scenario the best option for both migrants and the surrounding communities. Finally, we offer policy recommendations to avoid surveillance and overreach and promote instead a more just “de facto” civil inclusion of undocumented populations.
Tuberculosis (TB) in immigrants is becoming a challenge for TB elimination in Japan. We investigated the epidemiology of TB among foreign students in Japan in 2015–2019. A total of 2007 foreign students with TB were registered; their median age was 22.5 years, and 1243 (61.9%) were male. The notification rate peaked in 2016 at 164.0 per 100 000 population and decreased towards 2019. Of the 2007 students, 535 were from Vietnam, 444 from China and 395 from Nepal. The notification rates were 596.6 per 100 000 person-years (PYs) for Myanmar, 595.4 for the Philippines and 438.6 for Cambodia. These rates were much higher than those of the general populations in the countries of origin for Myanmar, the Philippines, Cambodia, Indonesia, Nepal, Mongolia, Vietnam and China. In comparison with the years 2010–2014, the notification rates decreased for students from Nepal, Vietnam and China. The TB notification rate of foreign students in Japan can be a good surrogate indicator of the risk of TB among the immigrant subpopulation in Japan and should be monitored continuously. Those at higher risk of TB may be screened annually to prevent TB outbreaks.