In this study, we present and assess data-driven approaches for modeling contact line dynamics, using droplet transport on chemically heterogeneous surfaces as a model system. Ground-truth data for training and validation are generated using long-wave models that are applicable to slow droplet motion with small contact angles and are known to reproduce the dynamics accurately at minimal computational cost compared to high-fidelity direct numerical simulations. The data-driven models are based on the Fourier neural operator (FNO) and are developed following two different approaches. The first deploys the data-driven method as an iterative neural network architecture that predicts the future state of the contact line from a number of previous states. The second corrects the time derivative of the contact line by augmenting its low-order asymptotic approximation with a data-driven counterpart and evolving the resulting system using standard time integration methods. The performance of each approach is evaluated in terms of accuracy and generalizability, and we conclude that the latter, although not explored in the original FNO contribution, outperforms the former.
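The two strategies contrasted in this abstract can be sketched schematically as follows. This is a minimal illustration under assumed names and toy dynamics: `surrogate` is a hypothetical stand-in for a trained neural operator, not the paper's FNO implementation.

```python
import numpy as np

# Hypothetical stand-in for a trained neural operator acting on the
# contact-line state; here just a fixed linear map for illustration.
def surrogate(state):
    return 0.9 * state

# Approach 1: iterative (autoregressive) rollout — the network's output
# is fed back in as the input for the next step.
def rollout(state0, n_steps):
    states = [state0]
    for _ in range(n_steps):
        states.append(surrogate(states[-1]))
    return np.stack(states)

# Approach 2: hybrid time derivative — a low-order analytic rate is
# augmented by a learned correction, and the resulting system is evolved
# with a standard integrator (explicit Euler here for brevity).
def hybrid_integrate(state0, n_steps, dt, analytic_rate, correction):
    state = state0.copy()
    for _ in range(n_steps):
        state = state + dt * (analytic_rate(state) + correction(state))
    return state
```

In the first approach any per-step error is fed back into the network, whereas in the second the learned term only perturbs an analytic rate, which is one intuition for why the hybrid route can generalize better.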
Secondary pneumonia occurs in 8–24% of patients with coronavirus disease 2019 (COVID-19) and is associated with increased morbidity and mortality. Diagnosis of secondary pneumonia can be challenging. The purpose of this study was to evaluate the use of plasma microbial cell-free DNA sequencing (mcfNGS) in the evaluation of secondary pneumonia after COVID-19. We performed a single-center case series of patients with COVID-19 who underwent mcfNGS to evaluate secondary pneumonia and reported the organisms identified, concordance with available tests, clinical utility, and outcomes. In 8/13 (61%) cases, mcfNGS detected 1–6 organisms, with clinically significant organisms identified in 4 cases, including Pneumocystis jirovecii and Legionella spp. Management was changed in 85% (11/13) of patients based on results, including initiation of targeted therapy, de-escalation of empiric antimicrobials, and avoidance of contingent escalation of antifungals. mcfNGS may be helpful to identify pathogens causing secondary pneumonia, including opportunistic pathogens in immunocompromised patients with COVID-19. However, providers need to interpret this test carefully within the clinical context.
In this paper, we time-change the generalized counting process (GCP) by an independent inverse mixed stable subordinator to obtain a fractional version of the GCP. We call it the mixed fractional counting process (MFCP). The system of fractional differential equations that governs its state probabilities is obtained using the Z transform method. Its one-dimensional distribution, mean, variance, covariance, probability generating function, and factorial moments are obtained. It is shown that the MFCP exhibits the long-range dependence property, whereas its increment process has the short-range dependence property. As an application, we consider a risk process in which the claims are modelled using the MFCP. For this risk process, we obtain the asymptotic behaviour of its finite-time ruin probability when the claim sizes are subexponentially distributed and the initial capital is arbitrarily large. Later, we discuss some distributional properties of a compound version of the GCP.
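The time-change construction can be written compactly. The symbols below are assumed notation (the paper's may differ): $M(t)$ denotes the GCP and $Y(t)$ the independent inverse mixed stable subordinator.

```latex
% M(t): generalized counting process (GCP)
% Y(t): independent inverse mixed stable subordinator
% The mixed fractional counting process (MFCP) is the composition
\[
  M_f(t) \;:=\; M\bigl(Y(t)\bigr), \qquad t \ge 0 .
\]
```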
Many studies have investigated the positivity rate of hepatitis B surface antibody (HBsAb) after hepatitis B vaccine (HepB) immunization. However, the antibody level, assessed monthly or at more frequent intervals after each of the three doses, particularly within the first year after birth, has not been previously reported. To elucidate the level of antibody formation at various times after vaccination, the current study used the available detection data of HBsAb in hospitalized children to analyze the HBsAb level after immunization combined with their vaccination history. Both the positivity rate and geometric mean concentration (GMC) increased sequentially with immunization doses, reaching their peaks earlier after the third dose than after the first two doses, and the rate of HBsAb positivity reached 100% between 11 and 90 days after completion of the three doses of HepB. Within one year after receiving the three doses, the antibody positivity rate and GMC were maintained above 90% and 100 mIU/mL, respectively, and subsequently steadily declined, reaching the lowest value in the 9th and 10th years. The current findings reveal, in more detail, the level of antibody formation at different times following each dose of HepB in hospitalized children, particularly in the age group up to one year after vaccination. For the subjects of this study, we consider it likely that the proportion of HBsAb non-response is less than 5% after full immunization with HepB, provided that an appropriate time for blood collection is chosen.
Residents of long-term care facilities (LTCFs) were disproportionately affected by the COVID-19 pandemic. We assessed the extent to which hospital-associated infections contributed to COVID-19 LTCF outbreaks in England. We matched addresses of cases between March 2020 and June 2021 to reference databases to identify LTCF residents. Linkage to health service records identified hospital-associated infections, with the number of days spent in hospital before positive specimen date used to classify these as definite or probable. Of 149,129 cases in LTCF residents during the study period, 3,748 (2.5%) were definite or probable hospital-associated and discharged to an LTCF. Overall, 431 (0.3%) were identified as index cases of potentially nosocomial-seeded outbreaks (2.7% (431/15,797) of all identified LTCF outbreaks). These outbreaks involved 4,521 resident cases and 1,335 deaths, representing 3.0% and 3.6% of all cases and deaths in LTCF residents, respectively. The proportion of outbreaks that were potentially nosocomial-seeded peaked in late June 2020, early December 2020, mid-January 2021, and mid-April 2021. Nosocomial seeding contributed to COVID-19 LTCF outbreaks but is unlikely to have accounted for a substantial proportion. The continued identification of such outbreaks after the implementation of preventative policies highlights the challenges of preventing their occurrence.
SNP addressing is a pathogen typing method based on whole-genome sequences (WGSs), assigning groups at seven different levels of genetic similarity. Public health surveillance uses it for several gastrointestinal infections; this work trialled its use in veterinary surveillance for salmonella outbreak detection. Comparisons were made between temporal and spatio-temporal cluster detection models that defined cases either by their SNP address or by phage type, using historical data sets. Clusters of SNP incidents were effectively detected by both methods, but spatio-temporal models consistently detected these clusters earlier than the corresponding temporal models. Unlike phage type, SNP addresses appeared spatially and temporally limited, which facilitated the differentiation of novel, stable, or expanding clusters in spatio-temporal models. Furthermore, these models flagged spatio-temporal clusters containing only two to three cases at first detection, compared with a median of seven cases in phage-type models. The large number of SNP addresses will require automated methods to implement these detection models routinely. Further work is required to explore how temporal changes and different host species may impact the sensitivity and specificity of cluster detection. In conclusion, given validation with more sequencing data, SNP addresses are likely to be a valuable addition to early warning systems in veterinary surveillance.
We study in a general graph-theoretic formulation a long-range percolation model introduced by Lamperti [27]. For various underlying digraphs, we discuss connections between this model and random exchange processes. We clarify, for all $n \in \mathbb{N}$, under which conditions the lattices $\mathbb{N}_0^n$ and $\mathbb{Z}^n$ are essentially covered in this model. Moreover, for all $n \geq 2$, we establish that it is impossible to cover the directed n-ary tree in our model.
Inaccuracy and information measures based on cumulative residual entropy are quite useful and have received considerable attention in many fields, such as statistics, probability, and reliability theory. In particular, many authors have studied cumulative residual inaccuracy between coherent systems based on system lifetimes. In a previous paper (Bueno and Balakrishnan, Prob. Eng. Inf. Sci. 36, 2022), we discussed a cumulative residual inaccuracy measure for coherent systems at component level, that is, based on the common, stochastically dependent component lifetimes observed under a non-homogeneous Poisson process. In this paper, using a point process martingale approach, we extend this concept to a cumulative residual inaccuracy measure between non-explosive point processes and then specialize the results to Markov occurrence times. If the processes satisfy the proportional risk hazard process property, then the measure determines the Markov chain uniquely. Several examples are presented, including birth-and-death processes and the pure birth process, and then the results are applied to coherent systems at component level subject to Markov failure and repair processes.
Decreasing costs and improved sensor and monitoring technology (e.g., fiber optics and strain gauges) have led to measurements being taken in ever closer proximity to one another. When such spatially dense measurement data are used in Bayesian system identification strategies, the correlation in the model prediction error can become significant. The widely adopted assumption of uncorrelated Gaussian error may lead to inaccurate parameter estimation and overconfident predictions, which in turn may lead to suboptimal decisions. This article addresses the challenges of performing Bayesian system identification for structures when large datasets are used, considering both spatial and temporal dependencies in the model uncertainty. We present an approach to efficiently evaluate the log-likelihood function, and we utilize nested sampling to compute the evidence for Bayesian model selection. The approach is first demonstrated on a synthetic case and then applied to measurement data from a real-world steel bridge. The results show that the assumption of dependence in the model prediction uncertainties is decisively supported by the data. The proposed developments enable the use of large datasets and accounting for the dependency when performing Bayesian system identification, even when a relatively large number of uncertain parameters is inferred.
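As an illustration of the kind of likelihood evaluation involved, the following minimal sketch computes a Gaussian log-likelihood with spatially correlated prediction errors via a Cholesky factorisation. The exponential kernel, function names, and parameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def correlated_loglike(residuals, coords, sigma, length_scale):
    """Gaussian log-likelihood of prediction residuals at 1-D sensor
    locations `coords`, with an (assumed) exponential correlation kernel."""
    # Pairwise distances between sensor locations.
    d = np.abs(coords[:, None] - coords[None, :])
    cov = sigma**2 * np.exp(-d / length_scale)
    # Cholesky factorisation for a numerically stable evaluation:
    # log det(cov) = 2 * sum(log diag(L)), and the quadratic form is
    # obtained from a triangular solve rather than a matrix inverse.
    L = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(L, residuals)
    n = residuals.size
    return -0.5 * (alpha @ alpha) - np.log(np.diag(L)).sum() \
           - 0.5 * n * np.log(2 * np.pi)
```

When the length scale shrinks toward zero the covariance approaches $\sigma^2 I$ and the expression reduces to the familiar independent-error log-likelihood, which makes the correlated model a strict generalisation of the usual assumption.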
The finite element method (FEM) is widely used to simulate a variety of physics phenomena. Approaches that integrate FEM with neural networks (NNs) are typically leveraged as an alternative to conducting expensive FEM simulations in order to reduce the computational cost without significantly sacrificing accuracy. However, these methods can produce biased predictions that deviate from those obtained with FEM, since these hybrid FEM-NN approaches rely on approximations trained using physically relevant quantities. In this work, an uncertainty estimation framework is introduced that leverages ensembles of Bayesian neural networks to produce diverse sets of predictions using a hybrid FEM-NN approach that approximates internal forces on a deforming solid body. The uncertainty estimator developed herein reliably infers upper bounds of bias/variance in the predictions for a wide range of interpolation and extrapolation cases using a three-element FEM-NN model of a bar undergoing plastic deformation. This proposed framework offers a powerful tool for assessing the reliability of physics-based surrogate models by establishing uncertainty estimates for predictions spanning a wide range of possible load cases.
Since Bai (2009, Econometrica 77, 1229–1279), considerable extensions have been made to panel data models with interactive fixed effects (IFEs). However, little work has been conducted to understand the associated iterative algorithm, which, to the best of our knowledge, is the most commonly adopted approach in this line of research. In this paper, we refine the algorithm of panel data models with IFEs using the nuclear-norm penalization method and duple least-squares (DLS) iterations. Meanwhile, we allow the regression coefficients to be individual-specific and evolve over time. Accordingly, asymptotic properties are established to demonstrate the theoretical validity of the proposed approach. Furthermore, we show that the proposed methodology exhibits good finite-sample performance using simulation and real data examples.
During October 2021, the County of San Diego Health and Human Services Agency identified five cases of shigellosis among persons experiencing homelessness (PEH). We conducted an outbreak investigation and developed interventions to respond to shigellosis outbreaks among PEH. Confirmed cases occurred among PEH with stool-cultured Shigella sonnei; probable cases were among PEH with Shigella-positive culture-independent diagnostic testing. Patients were interviewed to determine infectious sources and risk factors. Fifty-three patients were identified (47 confirmed, 6 probable); 34 (64%) were hospitalised. None died. No point source was identified. Patients reported inadequate access to clean water and sanitation facilities, including public restrooms closed because of the COVID-19 pandemic. After implementing interventions, including handwashing stations, more frequent public restroom cleaning, sanitation kit distribution, and isolation housing for ill persons, S. sonnei cases decreased to preoutbreak frequencies. Improving public sanitation access was associated with decreased cases and should be considered to prevent outbreaks among PEH.
Adolescent men who have sex with men (AMSM) and transgender women (ATGW) enrolled as part of the PrEP1519 study between April 2019 and February 2021 in Salvador were tested for Neisseria gonorrhoeae (NG) and Chlamydia trachomatis (CT) infections. We performed real-time polymerase chain reaction using oropharyngeal, anal, and urethral swabs; assessed factors associated with NG and CT infections using multivariable Poisson regression analysis with robust variance; and estimated the prevalence ratios (PRs) and 95% confidence intervals (95% CIs). In total, 246 participants were included in the analyses (median age: 18.8; IQR: 18.2–19.4 years). The overall, oropharyngeal, anal, and urethral prevalence rates of NG were 17.9%, 9.4%, 7.6%, and 1.9%, respectively. For CT, the overall, oropharyngeal, anal, and urethral prevalence rates were 5.9%, 1.2%, 2.4%, and 1.9%, respectively. A low level of education, clinical suspicion of STI, and coinfection with Mycoplasma hominis were associated with NG infection. The prevalence of NG and CT, especially extragenital infections, was high in AMSM and ATGW. These findings highlight the need for testing samples from multiple anatomical sites among adolescents at a higher risk of STI acquisition, implementation of school-based strategies, provision of sexual health education, and reduction in barriers to care.
Measures of uncertainty are a topic of considerable and growing interest. Recently, the introduction of extropy as a measure of uncertainty, dual to Shannon entropy, has opened up interest in new aspects of the subject. Since there are many versions of entropy, a unified formulation has been introduced to work with all of them in an easy way. Here we consider the possibility of defining a unified formulation for extropy by introducing a measure depending on two parameters. For particular choices of parameters, this measure provides the well-known formulations of extropy. Moreover, the unified formulation of extropy is also analyzed in the context of the Dempster–Shafer theory of evidence, and an application to classification problems is given.
This paper studies an M/M/1 retrial queue with negative customers, passive breakdowns, and delayed repairs. We assume that the breakdown behavior of the server during idle periods differs from that during busy periods. Passive breakdowns may occur when the server is idle, owing to the lack of monitoring of the server during idle periods. When a passive breakdown occurs, the server is not repaired immediately but enters a delayed repair phase. Negative customers arrive during busy periods, causing the server to break down and removing the customer in service. Under steady-state conditions, we obtain explicit expressions for the probability generating functions of the steady-state distribution, together with some important performance measures for the system. In addition, we present some numerical examples to illustrate the effects of some system parameters on important performance measures and the cost function. Finally, based on the reward-cost structure, we discuss the Nash equilibrium and socially optimal strategy and numerically analyze the influence of system parameters on optimal strategies and optimal social benefits.
We show that for every $n\in \mathbb N$ and $\log n\le d< n$, if a graph $G$ has $N=\Theta (dn)$ vertices and minimum degree $(1+o(1))\frac{N}{2}$, then it contains a spanning subdivision of every $n$-vertex $d$-regular graph.
The ratemaking process is a key issue in insurance pricing. It consists of pooling together policyholders with similar risk profiles into rating classes and assigning the same premium to policyholders in the same class. In actuarial practice, rating systems are typically not based on all risk factors; rather, only some of the factors are selected to construct the rating classes. The objective of this study is to investigate the selection of risk factors in order to construct rating classes that exhibit maximum internal homogeneity. For this selection, we adopt the Shapley effects from global sensitivity analysis. While these sensitivity indices are usually used for model interpretability, we apply them to construct rating classes. We provide a new strategy to estimate them, and we connect them to the intra-class variability and heterogeneity of the rating classes. To verify the appropriateness of our procedure, we introduce a measure of heterogeneity specifically designed to compare rating systems with different numbers of classes. Using a well-known car insurance dataset, we show that the rating system constructed with the Shapley effects is the one minimizing this heterogeneity measure.