We investigated the drug resistance of Mycobacterium tuberculosis isolates from patients with tuberculosis (TB) and HIV, and from those diagnosed with TB alone, in Sichuan, China. TB isolates were obtained from January 2018 to December 2020 and subjected to drug susceptibility testing (DST) against 11 anti-TB drugs and to GeneXpert MTB/RIF testing. The overall proportion of drug-resistant TB (DR-TB) isolates was 32.1% (n = 10 946). HIV testing was not universally available for outpatient TB cases; only 29.5% (3227/10 946) of cases had HIV testing results. The observed proportion of multidrug-resistant TB (MDR-TB) isolates was almost double that at the national level, with approximately 1.5% and 0.1% of the isolates being extensively drug resistant and universally drug resistant, respectively. The proportions of resistant isolates were generally higher in 2018 and 2019 than in 2020. Furthermore, the sensitivity of GeneXpert during 2018–2020 showed a downward trend (80.9%, 95% confidence interval (CI) 76.8–85.0; 80.2%, 95% CI 76.4–84.1; and 75.4%, 95% CI 70.7–80.2, respectively). Approximately 69.0% (7557/10 946) of the TB cases with DST results were also subjected to GeneXpert detection. Overall, the DR-TB situation and the use of GeneXpert in Sichuan have improved, but DR-TB challenges remain. HIV testing for all TB cases is recommended.
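As a reading aid, the yearly sensitivities quoted above are the usual proportion of phenotypically resistant isolates that GeneXpert also flags, with a Wald-type confidence interval. A minimal sketch in Python follows; the counts are illustrative, not the study's data:

```python
# Hedged sketch: sensitivity of a test against a DST reference standard,
# with a Wald 95% CI. The counts below are illustrative only.
import math

def sensitivity_ci(true_pos: int, false_neg: int, z: float = 1.96):
    """Return (sensitivity %, lower %, upper %) using a Wald interval."""
    n = true_pos + false_neg
    p = true_pos / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 100 * p, 100 * (p - half), 100 * (p + half)

# e.g. 304 resistant isolates detected out of 376 (hypothetical counts)
print(sensitivity_ci(304, 72))  # ~ (80.9, 76.9, 84.8)
```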
In Ethiopia, the magnitude of violence against girls during COVID-19 in the study area was not known. This study therefore aimed to assess violence and its associated factors during the COVID-19 pandemic among secondary school girls in Gondar city, North West Ethiopia. An institution-based cross-sectional study was conducted from January to February 2021. Data were collected from four public and two private secondary schools in Gondar city. The investigators selected participants by stratified simple random sampling from the student rosters of the selected schools, and collected data as self-reported histories of experiencing violence (victimisation). Data were analysed using descriptive statistics and multivariable logistic regression. A total of 371 sampled female students were invited to complete self-administered questionnaires. The proportion of girls who had experienced violence was 42.05%, with psychological violence the most common form. Having a father who attended informal education (AOR = 1.95, 95% CI 1.08–3.51), ever use of social media (AOR = 1.65, 95% CI 1.02–2.69), ever watching sexually explicit material (AOR = 2.04, 95% CI 1.24–3.36) and substance use (AOR = 1.92, 95% CI 1.17–3.15) were significantly associated with violence. More than two in every five girls experienced violence during the COVID-19 lockdown. The prevalence of violence may be under-reported owing to desirability bias. Awareness-raising about violence should therefore target substance users, fathers with informal education and girls who use social media.
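The AORs and CIs above come from a multivariable logistic regression. A minimal sketch of that step follows; the data frame, column names and values are hypothetical stand-ins, not the study data:

```python
# Hedged sketch: adjusted odds ratios (AOR) with 95% CIs from a
# multivariable logistic regression. Synthetic data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 371
df = pd.DataFrame({
    "violence": rng.integers(0, 2, n),        # outcome: victimisation (0/1)
    "father_informal_edu": rng.integers(0, 2, n),
    "social_media_use": rng.integers(0, 2, n),
    "explicit_material": rng.integers(0, 2, n),
    "substance_use": rng.integers(0, 2, n),
})
X = sm.add_constant(df.drop(columns="violence"))
fit = sm.Logit(df["violence"], X).fit(disp=0)
aor = np.exp(fit.params)                      # adjusted odds ratios
ci = np.exp(fit.conf_int())                   # 95% CI on the OR scale
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```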
This paper proposes a novel stochastic volatility model with a flexible jump structure. The model allows both contemporaneous and independent arrival of jumps in return and volatility. Moreover, time-varying jump intensities are used to capture jump clustering. In the proposed framework, we provide a semi-analytical solution to the pricing problem for VIX futures and options. Through numerical experiments, we verify the accuracy of our pricing formula and explore the impact of the jump structure on the pricing of VIX derivatives. We find that correct identification of the market jump structure is crucial for pricing VIX derivatives, and that a misspecified model can yield large pricing errors.
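For intuition, here is a hedged Euler-discretisation sketch of a stochastic-volatility model with jumps in both return and variance and a variance-dependent jump intensity, in the spirit of the structure described; all symbols and parameter values are illustrative, not the paper's specification:

```python
# Hedged sketch: Euler simulation of an SV model with jumps in return and
# variance, and intensity lambda_t = lam0 + lam1 * v_t. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 252
dt = T / n
kappa, theta, sigma_v = 3.0, 0.04, 0.3   # variance mean reversion, vol-of-vol
rho = -0.7                               # leverage correlation
lam0, lam1 = 1.0, 20.0                   # time-varying jump intensity
mu_j, sig_j, mu_vj = -0.03, 0.05, 0.02   # return-jump mean/sd, variance-jump mean

v, x = 0.04, 0.0                         # variance and log-price
for _ in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    jump = rng.random() < (lam0 + lam1 * v) * dt   # state-dependent arrival
    jx = rng.normal(mu_j, sig_j) if jump else 0.0  # contemporaneous jumps
    jv = rng.exponential(mu_vj) if jump else 0.0
    x += -0.5 * v * dt + np.sqrt(max(v, 0) * dt) * z1 + jx
    v = max(v + kappa * (theta - v) * dt
            + sigma_v * np.sqrt(max(v, 0) * dt) * z2 + jv, 0.0)
print(x, v)
```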
We obtain here sufficient conditions for the increasing concave order and the location-independent riskier order of lower record values, based on stochastic comparisons of minimum order statistics. We further discuss stochastic orderings of lower record spacings. In particular, we show that the increasing convex order of adjacent spacings between minimum order statistics is a sufficient condition for the increasing convex order of adjacent spacings of the corresponding lower records.
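For reference, the standard definitions of the orders invoked here: for random variables $X$ and $Y$,
\[
X \le_{\mathrm{icv}} Y \iff \mathbb{E}[\phi(X)] \le \mathbb{E}[\phi(Y)] \quad \text{for all increasing concave } \phi,
\]
\[
X \le_{\mathrm{icx}} Y \iff \mathbb{E}[\phi(X)] \le \mathbb{E}[\phi(Y)] \quad \text{for all increasing convex } \phi,
\]
whenever the expectations exist.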
The main aim of this paper is to develop an optimal partial hedging strategy that minimises an investor's shortfall subject to an initial wealth constraint. The risk criterion we employ is a robust tail risk measure called Range Value-at-Risk (RVaR), which belongs to the wider class of distortion risk measures and contains the well-known measures VaR and CVaR as important limiting cases. Explicit forms of such RVaR-based optimal hedging strategies are derived. In addition, we provide a numerical example demonstrating how to apply this more comprehensive partial hedging methodology to mixed finance/insurance contracts in a market with long-range dependence.
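For reference, one common parametrisation of RVaR (notation ours, not necessarily the paper's): for confidence levels $0 \le \alpha < \beta \le 1$,
\[
\mathrm{RVaR}_{\alpha,\beta}(X) \;=\; \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta}\mathrm{VaR}_u(X)\,\mathrm{d}u,
\]
which recovers $\mathrm{VaR}_{\alpha}$ as $\beta \downarrow \alpha$ and CVaR (expected shortfall) as $\beta \uparrow 1$.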
The presence of unobserved node-specific heterogeneity in exponential random graph models (ERGMs) is a general concern, both for model validity and for the stability of estimation. We therefore include node-specific random effects in the ERGM to account for unobserved heterogeneity in the network. This leads to a mixed model with both parametric and random coefficients, which we label the mixed ERGM. Estimation is carried out by iterating between approximate pseudolikelihood estimation for the random effects and maximum likelihood estimation for the remaining parameters in the model. This approach provides a stable algorithm that makes it possible to fit nodal heterogeneity effects even for large-scale networks. We also propose model selection based on the Akaike Information Criterion to check for node-specific heterogeneity.
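To make the estimation idea concrete, a toy sketch of the pseudolikelihood building block follows: dyad indicators regressed on an edge term plus node-specific effects. This stand-in fits the node effects as plain coefficients and omits the random-effects shrinkage and the iteration with maximum likelihood described above; everything here is illustrative:

```python
# Hedged sketch: logistic pseudolikelihood with node effects on a toy
# undirected network. Not the authors' implementation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40
A = (rng.random((n, n)) < 0.2).astype(int)
A = np.triu(A, 1); A = A + A.T                 # symmetric toy adjacency

rows, cols = np.triu_indices(n, 1)
y = A[rows, cols]                              # dyad indicators (0/1)
Z = np.zeros((len(y), n))                      # node-effect design matrix
Z[np.arange(len(y)), rows] = 1
Z[np.arange(len(y)), cols] = 1
X = sm.add_constant(Z[:, 1:])                  # drop one node for identifiability
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params[:5])                          # edge parameter + node effects
```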
Antisocial behavior can be contagious, spreading from individual to individual and rippling through social networks. Moreover, it can spread not only through third-party influence from observation, just like innovations or individual behavior do, but also through direct experience, via “pay-it-forward” retaliation. Here, we distinguish between the effects of observation and victimization for the contagion of antisocial behavior by analyzing large-scale digital trace data. We study the spread of cheating in more than a million matches of an online multiplayer first-person shooter game, in which up to 100 players compete individually or in teams against strangers. We identify event sequences in which a player who observes or is killed by a certain number of cheaters starts cheating and evaluate the extent to which these sequences would appear if we preserve the team and interaction structure but assume alternative gameplay scenarios. The results reveal that social contagion is only likely to exist for those who both observe and experience cheating, suggesting that third-party influence and “pay-it-forward” reciprocity interact positively. In addition, the effect is present only for those who both observe and experience more than once, suggesting that cheating is more likely to spread after repeated or multi-source exposure. Approaching online games as models of social systems, we use the findings to discuss strategies for targeted interventions to stem the spread of cheating and antisocial behavior more generally in online communities, schools, organizations, and sports.
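A hedged sketch of the comparison logic on synthetic data: count players who start cheating after at least k exposures to cheaters, then compare against shuffled scenarios that break the exposure-outcome link. The actual analysis preserves team and interaction structure, which this toy does not:

```python
# Hedged sketch: permutation-style comparison on toy data, illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def count_converts(exposures, converted, k=2):
    """Players with >= k exposures who later start cheating."""
    return int(np.sum((exposures >= k) & converted))

n_players = 1000
exposures = rng.poisson(1.0, n_players)                       # toy exposure counts
converted = rng.random(n_players) < 0.02 + 0.01 * exposures   # toy outcome

observed = count_converts(exposures, converted)
null = [count_converts(rng.permutation(exposures), converted)
        for _ in range(2000)]                                 # shuffled scenarios
p = (1 + sum(v >= observed for v in null)) / (1 + len(null))
print(observed, p)
```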
Many real-world networks, such as social networks and computer networks, are temporal networks: their vertices and edges change over time. However, most approaches for modeling and analyzing temporal networks do not explicitly discuss the underlying notion of time. In this paper, we therefore introduce a generalized notion of discrete time for modeling temporal networks. Our approach also allows for nondeterministic time and incomplete data, two issues often encountered when analyzing datasets extracted from, for example, online social networks. To demonstrate the consequences of our generalized notion of time, we also discuss the implications for the computation of (shortest) temporal paths in temporal networks. In addition, we implemented an R package that provides programming support for all concepts discussed in this paper; it is publicly available for download.
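As an illustration of one notion the paper generalises, here is a minimal sketch of earliest-arrival temporal paths under plain discrete time (edges as (u, v, t) triples, traversal in non-decreasing time, instantaneous hops); this is a textbook single-scan algorithm, not the package's implementation:

```python
# Hedged sketch: earliest-arrival times in a temporal graph, single pass
# over edges sorted by timestamp. Assumes instantaneous edge traversal.
import math

def earliest_arrival(edges, source):
    """edges: iterable of (u, v, t); returns earliest arrival time per vertex."""
    arrival = {source: -math.inf}
    for u, v, t in sorted(edges, key=lambda e: e[2]):  # scan in time order
        if arrival.get(u, math.inf) <= t and t < arrival.get(v, math.inf):
            arrival[v] = t
    arrival[source] = 0
    return arrival

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 3)]
print(earliest_arrival(edges, "a"))  # {'a': 0, 'b': 1, 'c': 2, 'd': 3}
```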
We study the detection and the reconstruction of a large very dense subgraph in a social graph with n nodes and m edges given as a stream of edges, when the graph follows a power law degree distribution, in the regime when $m=O(n \log n)$. A subgraph S is very dense if it has $\Omega(|S|^2)$ edges. We uniformly sample the edges with a Reservoir of size $k=O(\sqrt{n} \log n)$. Our detection algorithm checks whether the Reservoir has a giant component. We show that if the graph contains a very dense subgraph of size $\Omega(\sqrt{n})$, then the detection algorithm is almost surely correct. On the other hand, a random graph that follows a power law degree distribution almost surely has no large very dense subgraph, and the detection algorithm is almost surely correct. We define a new model of random graphs which follow a power law degree distribution and have large very dense subgraphs. We then show that on this class of random graphs we can reconstruct a good approximation of the very dense subgraph with high probability. We generalize these results to dynamic graphs defined by sliding windows in a stream of edges.
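A hedged sketch of the two-stage pipeline on toy input: uniform reservoir sampling of k edges from the stream (Algorithm R), then a largest-component check on the sampled edges; sizes and thresholds below are illustrative, not the paper's $O(\sqrt{n} \log n)$ tuning:

```python
# Hedged sketch: reservoir sampling of a stream of edges, then a giant
# component check via union-find. Toy sizes, illustrative only.
import random
from collections import Counter

def reservoir(stream, k, seed=0):
    rnd, sample = random.Random(seed), []
    for i, e in enumerate(stream):
        if i < k:
            sample.append(e)
        else:
            j = rnd.randint(0, i)
            if j < k:
                sample[j] = e          # keep each edge with prob k/(i+1)
    return sample

def largest_component(edges):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return max(Counter(find(x) for x in parent).values(), default=0)

stream = [(random.randrange(50), random.randrange(50)) for _ in range(500)]
sample = reservoir(stream, k=100)
print(largest_component(sample))       # a giant component flags density
```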
We study quantitative relationships between the triangle removal lemma and several of its variants. One such variant, which we call the triangle-free lemma, states that for each $\epsilon>0$ there exists M such that every triangle-free graph G has an $\epsilon$-approximate homomorphism to a triangle-free graph F on at most M vertices (here an $\epsilon$-approximate homomorphism is a map $V(G) \to V(F)$ where all but at most $\epsilon \left\lvert{V(G)}\right\rvert^2$ edges of G are mapped to edges of F). One consequence of our results is that the least possible M in the triangle-free lemma grows faster than exponential in any polynomial in $\epsilon^{-1}$. We also prove more general results for arbitrary graphs, as well as arithmetic analogues over finite fields, where the bounds are close to optimal.
As a result of the COVID-19 pandemic, whether and when the world can reach herd immunity and return to normal life, and how vaccination programmes can be accelerated, are major concerns. We employed Metropolis–Hastings sampling and an epidemic model to design experiments based on the vaccinations administered to date and on a more equitable vaccine allocation scenario. The results show that most high-income countries can reach herd immunity in less than 1 year, whereas low-income countries would need more than 3 years to reach this state. With a more equitable vaccine allocation strategy, global herd immunity could be reached in 2021. However, the spread of SARS-CoV-2 variants means that an additional 83 days would be needed to reach global herd immunity and that the number of cumulative cases would increase by 113.37% in 2021. Under the more equitable vaccine allocation scenario, the number of cumulative cases would increase by only 5.70% without additional vaccine doses. As SARS-CoV-2 variants arise, herd immunity could be delayed to the point that a return to normal life is theoretically impossible in 2021. Nevertheless, a more equitable global vaccine allocation strategy, such as providing rapid vaccine assistance to low-income countries/regions, can improve the prevention of COVID-19 infection even as the virus mutates.
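For orientation, the back-of-the-envelope relation underlying such projections (a standard simplification, not the paper's full epidemic model): with basic reproduction number $R_0$ and vaccine effectiveness $e$, the vaccinated share $p$ needed for herd immunity satisfies
\[
p \;\ge\; \frac{1}{e}\left(1 - \frac{1}{R_0}\right),
\]
so a more transmissible variant (larger $R_0$) raises the required coverage, consistent with the reported delays.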
We study an open discrete-time queueing network. We assume data is generated at nodes of the network as a discrete-time Bernoulli process. All nodes in the network maintain a queue and relay data, which is to be finally collected by a designated sink. We prove that the resulting multidimensional Markov chain representing the queue size of nodes has two behavior regimes depending on the value of the rate of data generation. In particular, we show that there is a nontrivial critical value of the data rate below which the chain is ergodic and converges to a stationary distribution and above which it is non-ergodic, i.e., the queues at the nodes grow in an unbounded manner. We show that the rate of convergence to stationarity is geometric in the subcritical regime.
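A hedged toy illustration of the two regimes: a line of relay nodes with Bernoulli arrivals and unit service toward a sink. The critical rate of this toy (roughly one over the number of nodes) is specific to the example, not the nontrivial critical value established in the paper:

```python
# Hedged sketch: Bernoulli arrivals at every node of a line network, one
# packet relayed per node per slot toward a sink. Illustrative only.
import random

def simulate(p, nodes=10, slots=20000, seed=0):
    rnd = random.Random(seed)
    q = [0] * nodes                     # node 0 is farthest from the sink
    for _ in range(slots):
        for i in range(nodes):
            q[i] += rnd.random() < p    # Bernoulli(p) arrivals at each node
        for i in range(nodes - 1):      # each node relays one packet downstream
            if q[i] > 0:
                q[i] -= 1
                q[i + 1] += 1
        if q[-1] > 0:
            q[-1] -= 1                  # the sink drains the last node
    return sum(q)

print(simulate(0.04), simulate(0.3))    # small vs. exploding total backlog
```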
In this study, we analysed the relationship between meteorological factors and the number of patients with coronavirus disease 2019 (COVID-19). The study period was from 12 April 2020 to 13 October 2020, and daily meteorological data and the daily number of patients with COVID-19 in each state of the United States were collected. Based on the number of COVID-19 patients in each state, we selected four states (California, Florida, New York, Texas) for analysis. One-way analysis of variance (ANOVA), scatter plot analysis, correlation analysis and distributed lag nonlinear model (DLNM) analysis were used to analyse the relationship between meteorological factors and the number of patients with COVID-19. We found that the significant influencing factors differed among the four states. Specifically, the number of confirmed COVID-19 cases in California and New York was negatively correlated with AWND (P < 0.01) and positively correlated with AQI, PM2.5 and TAVG (P < 0.01), but not significantly correlated with other factors. The number of cases in Florida was significantly positively correlated with TAVG (P < 0.01) but not significantly correlated with other factors. The number of COVID-19 cases in Texas was significantly negatively associated only with AWND (P < 0.01). The influence of temperature and PM2.5 on the spread of COVID-19 is not clear-cut. This study also shows that wind speeds around 2 m/s had a significant positive correlation with COVID-19 cases. The impact of meteorological factors on COVID-19 may be very complicated, and it is necessary to explore their relationship further. By exploring the influence of meteorological factors on COVID-19, we can help establish a more accurate early warning system.
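A minimal sketch of the correlation step on synthetic data; the column names mirror the abstract's variables (AWND = average wind speed, TAVG = average temperature), but the values are fabricated for illustration and the DLNM stage is not reproduced here:

```python
# Hedged sketch: rank correlation between daily case counts and
# meteorological variables. Synthetic data, illustrative only.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "cases": rng.poisson(500, 180),
    "AWND": rng.normal(3, 1, 180),
    "TAVG": rng.normal(20, 5, 180),
    "PM2.5": rng.normal(10, 3, 180),
})
for col in ["AWND", "TAVG", "PM2.5"]:
    rho, p = spearmanr(df["cases"], df[col])
    print(f"{col}: rho={rho:.2f}, p={p:.3f}")
```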
This paper outlines frameworks for reserving validation and gives the reader an overview of the techniques currently employed. In the authors' experience, many companies lack an embedded reserve validation framework, and reserve validation can appear piecemeal and unstructured. The paper outlines a case study demonstrating how successful machine learning techniques can be, and then discusses the implications of machine learning for the future of reserving departments, processes, data and validation techniques. Reserving validation can take many forms, from simple checks to full independent reviews, and serves to add value to the reserving process, enhance governance and increase confidence in, and the reliability of, results. The paper covers common weaknesses and their solutions, and suggests a framework in which to apply validation tools. The impacts of the COVID-19 pandemic on reserving validation are also covered, as are early warning indicators and the topic of IFRS 17 from the standpoint of reserving validation. Finally, the paper looks at the future of reserving validation and discusses the data challenges that must be overcome on the path to embedded reserving process validation.
While many of the prevalent stochastic mortality models provide adequate short- to medium-term forecasts, only few provide biologically plausible descriptions of mortality on longer horizons and are sufficiently stable to be of practical use in smaller populations. Among the very first to address the issue of modelling adult mortality in small populations was the SAINT model, which has been used for pricing, reserving and longevity risk management by the Danish Labour Market Supplementary Pension Fund (ATP) for more than a decade. The lessons learned have broadened our understanding of desirable model properties from the practitioner’s point of view and have led to a revision of model components to address accuracy, stability, flexibility, explainability and credibility concerns. This paper serves as an update to the original version published 10 years ago and presents the SAINT model with its modifications and the rationale behind them. The main improvement is the generalization of frailty models from deterministic structures to a flexible class of stochastic models. We show by example how the SAINT framework is used for modelling mortality at ATP and make comparisons to the Lee-Carter model.
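For reference, the Lee-Carter benchmark mentioned above models the log death rate $m_{x,t}$ at age $x$ in year $t$ as
\[
\log m_{x,t} = a_x + b_x\,k_t + \varepsilon_{x,t},
\]
with a single period index $k_t$, typically projected as a random walk with drift; this is the model against which the SAINT framework is compared.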
This paper considers a generalized panel data transformation model with fixed effects where the structural function is assumed to be additive. In our model, no parametric assumptions are imposed on the transformation function, the structural function, or the distribution of the idiosyncratic error term. The model is widely applicable and includes many popular panel data models as special cases. We propose a kernel-based nonparametric estimator for the structural function. The estimator has a closed-form solution and is easy to implement. We study the asymptotic properties of our estimator and show that it is asymptotically normally distributed. The Monte Carlo simulations demonstrate that our new estimator performs well in finite samples.
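One way to write the model described (notation ours, a hedged reading of the abstract): for individual $i$ at time $t$,
\[
\Lambda(Y_{it}) \;=\; \sum_{j=1}^{d} g_j(X_{it,j}) \;+\; \alpha_i \;+\; \varepsilon_{it},
\]
with an unknown monotone transformation $\Lambda$, unknown additive components $g_j$, fixed effect $\alpha_i$, and an idiosyncratic error $\varepsilon_{it}$ whose distribution is left unspecified.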
AI has had many summers and winters. Proponents have overpromised, and there has been hype and disappointment. In recent years, however, we have watched the successes with awe, surprise, and hope: better-than-human image recognition; winning at Go; useful chatbots that seem to understand your needs; recommendation algorithms harvesting the wisdom of crowds. And with this success comes the spectre of danger: machine behaviours that embed the worst of human prejudice and biases; techniques that exploit human weaknesses to skew elections or prompt self-harming behaviours. Are we seeing a perfect storm of social media, sensor technologies, new algorithms and edge computing? Against this backdrop: is AI coming of age?