Several discrete geometry problems are equivalent to estimating the size of the largest homogeneous sets in graphs that happen to be the union of few comparability graphs. An important observation for such results is that if $G$ is an $n$-vertex graph that is the union of $r$ comparability (or, more generally, perfect) graphs, then either $G$ or its complement contains a clique of size $n^{1/(r+1)}$.
This bound is known to be tight for $r=1$. The question of whether it is optimal for $r\ge 2$ was studied by Dumitrescu and Tóth. We prove that it is essentially best possible for $r=2$ as well: we introduce a probabilistic construction of two comparability graphs on $n$ vertices whose union contains no clique or independent set of size $n^{1/3+o(1)}$.
Using similar ideas, we can also construct a graph $G$ that is the union of $r$ comparability graphs such that neither $G$ nor its complement contains a complete bipartite graph with parts of size $cn/(\log n)^r$. With this, we improve a result of Fox and Pach.
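The $n^{1/(r+1)}$ bound above follows from an iterated application of Mirsky's theorem; the sketch below (with our own notation $P_i$, $A_i$, and with floors and constants suppressed) is offered only as a reminder of the standard argument, not as the construction of this paper.

```latex
% Sketch of the n^{1/(r+1)} bound (floors and constants suppressed).
% Write G = G_1 \cup \dots \cup G_r, with G_i the comparability graph of a partial
% order P_i on the common n-element vertex set, and set s = n^{1/(r+1)}.
\[
\begin{aligned}
&A_0 := V(G), \qquad |A_0| = n.\\
&\text{Step } i\ (1 \le i \le r):\ \text{if } P_i|_{A_{i-1}} \text{ has a chain of size } s,
  \text{ that chain is a clique of size } s \text{ in } G;\\
&\qquad\text{otherwise, by Mirsky's theorem, there is an antichain } A_i \subseteq A_{i-1}
  \text{ of } P_i \text{ with } |A_i| \ge |A_{i-1}|/s.\\
&\text{After } r \text{ steps: } |A_r| \ge n/s^{r} = n^{1/(r+1)}, \text{ and } A_r
  \text{ is an antichain in every } P_i,\\
&\qquad\text{hence an independent set in } G, \text{ i.e.\ a clique of size } n^{1/(r+1)}
  \text{ in the complement } \overline{G}.
\end{aligned}
\]
```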
Klebsiella pneumoniae is a common pathogen associated with nosocomial infections and is characterised serologically by capsular polysaccharide (K) and lipopolysaccharide (O) antigens. We surveyed a total of 348 non-duplicate K. pneumoniae clinical isolates collected over a 1-year period in a tertiary care hospital, and determined their O and K serotypes by sequencing of the wbbY and wzi gene loci, respectively. Isolates were also screened for antimicrobial resistance and hypervirulent phenotypes; 94 (27.0%) were identified as carbapenem-resistant (CRKP) and 110 (31.6%) as hypervirulent (hvKP). Isolates fell into 58 K types and six O types, with 92.0% and 94.2% typeability, respectively. The predominant K types were K14K64 (16.38%), K1 (14.66%), K2 (8.05%) and K57 (5.46%), while O1 (46%), O2a (27.9%) and O3 (11.8%) were the most common O types. CRKP and hvKP strains had different serotype distributions, with O2a:K14K64 (41.0%) being the most frequent among CRKP strains, and O1:K1 (26.4%) and O1:K2 (17.3%) among hvKP strains. Serotyping by gene sequencing proved to be a useful tool to inform the clinical epidemiology of K. pneumoniae infections and provides valuable data relevant to vaccine design.
In this work we consider three well-studied broadcast protocols: push, pull and push&pull. A key property of all these models, which is also an important reason for their popularity, is that they are presumed to be very robust, since they are simple, randomized and, crucially, do not explicitly utilize the global structure of the underlying graph. While sporadic results exist, there has been no systematic theoretical treatment quantifying the robustness of these models. Here we investigate this question with respect to two orthogonal aspects: (adversarial) modifications of the underlying graph and message transmission failures.
In particular, we explore the following notion of local resilience: beginning with a graph, we investigate up to which fraction of the edges at each vertex an adversary may delete before the protocols need significantly more rounds to broadcast the information. Our main findings establish a separation among the three models. On the one hand, pull is robust with respect to all parameters that we consider. On the other hand, push may slow down significantly, even if the adversary may modify the degrees of the vertices only by an arbitrarily small positive fraction. Finally, push&pull is robust when no message transmission failures are considered; otherwise it may be slowed down.
On the technical side, we develop two novel methods for the analysis of randomized rumour-spreading protocols. First, we exploit the notion of self-bounding functions to significantly simplify the round-based analysis: we show that for any graph the variance of the growth of informed vertices is bounded by its expectation, so that concentration results follow immediately. Second, in order to control adversarial modifications of the graph, we make use of a powerful tool from extremal graph theory, namely Szemerédi’s Regularity Lemma.
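For readers unfamiliar with the mechanics of the three protocols, the following round-based simulation is a minimal sketch of push, pull and push&pull; the function names and the networkx example graph are our own illustrative choices, not the analysis framework of the paper.

```python
import random
import networkx as nx

def broadcast_rounds(G, source, mode="push&pull", seed=None):
    """Simulate rounds of push / pull / push&pull rumour spreading on a connected
    graph G until every vertex is informed, and return the number of rounds.
    Illustrative sketch only (assumes G is connected)."""
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < G.number_of_nodes():
        newly_informed = set()
        for v in G.nodes:
            neighbours = list(G.neighbors(v))
            if not neighbours:
                continue
            u = rng.choice(neighbours)  # each vertex contacts one uniformly random neighbour
            if v in informed and mode in ("push", "push&pull"):
                newly_informed.add(u)   # push: an informed vertex sends the rumour
            if v not in informed and u in informed and mode in ("pull", "push&pull"):
                newly_informed.add(v)   # pull: an uninformed vertex asks and receives
        informed |= newly_informed
        rounds += 1
    return rounds

# Example: broadcast on a random 3-regular graph with 1000 vertices.
G = nx.random_regular_graph(3, 1000, seed=1)
print(broadcast_rounds(G, source=0, mode="pull", seed=1))
```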
Our study aimed to systematically analyse the risk factors of coronavirus disease 2019 (COVID-19) patients with severe disease. An electronic search was conducted in eight databases to identify studies describing severe or critically ill COVID-19 patients from 1 January 2020 to 3 April 2020. In the end, we meta-analysed 40 studies involving 5872 COVID-19 patients. The average age was higher in severe COVID-19 patients (weighted mean difference (WMD) = 10.69, 95% CI 7.83–13.54). Patients with severe disease showed significantly lower platelet count (WMD = −18.63, 95% CI −30.86 to −6.40) and lymphocyte count (WMD = −0.35, 95% CI −0.41 to −0.30) but higher C-reactive protein (CRP; WMD = 42.7, 95% CI 31.12–54.28), lactate dehydrogenase (LDH; WMD = 137.4, 95% CI 105.5–169.3), white blood cell count (WBC), procalcitonin (PCT), D-dimer, alanine aminotransferase (ALT), aspartate aminotransferase (AST) and creatinine (Cr). Similarly, patients who died showed significantly higher WBC, D-dimer, ALT, AST and Cr but similar platelet count and LDH compared with patients who survived. These results indicate that older age, low platelet count, lymphopenia and elevated levels of LDH, ALT, AST, PCT, Cr and D-dimer are associated with severity of COVID-19 and thus could be used for early identification, or even prediction, of disease progression.
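As background for the weighted mean differences and 95% confidence intervals reported above, the sketch below pools per-study mean differences with fixed-effect inverse-variance weights; the function and the input numbers are hypothetical illustrations, not the study's actual model or data (the paper may well use a random-effects model).

```python
import math

def pooled_wmd(study_estimates):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    study_estimates: list of (mean_difference, standard_error) tuples.
    Returns (pooled WMD, lower 95% CI bound, upper 95% CI bound)."""
    weights = [1.0 / se ** 2 for _, se in study_estimates]
    pooled = sum(w * md for (md, _), w in zip(study_estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Hypothetical example: three studies reporting age differences (severe minus non-severe).
print(pooled_wmd([(11.2, 2.1), (9.8, 1.7), (12.5, 3.0)]))
```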
The spread of COVID-19 is recent in India, which within 4 months has seen over 190 000 infections, as of 1 June 2020, despite four stringent lockdowns. With the current rate of disease transmission in India, which is home to over 1.35 billion people, the infection spread is predicted to be worse than in the USA in the upcoming months. To date, there is a major lack of understanding of the transmission dynamics and epidemiological characteristics of the disease in India, inhibiting effective measures to control the pandemic. We collected all the available data on individual patients and cases, as well as a range of parameters such as population distribution, testing and healthcare facilities, and weather, across all Indian states up to May 2020. Numerical analysis was conducted to determine the effect of each parameter on the COVID-19 situation in India. A significant amount of the local transmission in India was initiated by travellers returning from abroad. Maharashtra, Tamil Nadu and Delhi are currently the top three infected states in India, with a doubling time of 14.5 days. The average recovery rate across Indian states is 42%, with a mortality rate below 3%; the remaining 55% are currently active cases. In total, 88% of the patients experienced high fever, 68% suffered from dry cough and 7.1% of patients were asymptomatic. In total, 66.8% of patients were males, 73% were in the age group of 20–59 years and over 83% recovered in 11–25 days. Approximately 3.4 million people were tested between 1 April and 25 May 2020, of which 4% tested positive for COVID-19. Given the current doubling time of infections, several states may soon face a major shortage of public beds and healthcare facilities. Weather has a minimal effect on the infection spread in most Indian states. The study results will help policymakers to predict the trends of the disease spread in the upcoming months and devise better control measures.
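As a reminder of how a doubling time such as the 14.5 days above can be read from case counts, a minimal sketch under an exponential-growth assumption is shown below; the numbers in the example are made up and are not the study's data.

```python
import math

def doubling_time(cases_start, cases_end, days_elapsed):
    """Doubling time (in days) implied by exponential growth between two case counts.
    Purely illustrative; the study's estimates are based on the full case series."""
    growth_rate = math.log(cases_end / cases_start) / days_elapsed  # per-day exponential rate
    return math.log(2) / growth_rate

# Hypothetical counts: cases grow from 50,000 to 100,000 in 14.5 days -> doubling time 14.5.
print(round(doubling_time(50_000, 100_000, 14.5), 1))
```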
This paper studies asymptotic properties of a posterior probability density and Bayesian estimators of spatial econometric models in the classical statistical framework. We focus on the high-order spatial autoregressive model with spatial autoregressive disturbance terms, due to a computational advantage of Bayesian estimation. We also study the asymptotic properties of Bayesian estimation of the spatial autoregressive Tobit model, as an example of nonlinear spatial models. Simulation studies show that even when the sample size is small or moderate, the posterior distribution of parameters is well approximated by a normal distribution, and Bayesian estimators have satisfactory performance, as classical large sample theory predicts.
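For concreteness, a commonly used form of the high-order spatial autoregressive model with spatial autoregressive disturbances is sketched below; the notation ($\lambda_j$, $\rho_k$, weight matrices $W_{nj}$, $M_{nk}$) is ours and need not match the paper's exact specification.

```latex
% A standard SARAR(p, q) formulation; the paper's notation and normalisations may differ.
\[
\begin{aligned}
Y_n &= \sum_{j=1}^{p} \lambda_j W_{nj} Y_n + X_n \beta + U_n,\\
U_n &= \sum_{k=1}^{q} \rho_k M_{nk} U_n + V_n, \qquad V_n \sim (0, \sigma^2 I_n),
\end{aligned}
\]
% where W_{nj} and M_{nk} are n x n spatial weight matrices and X_n collects the regressors.
```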
A variable annuity (VA) is an equity-linked annuity that provides investment guarantees to its policyholder, and its contributions are normally invested in multiple underlying assets (e.g., mutual funds), which exposes the VA liability to significant market risks. Hedging the market risks is therefore crucial in the risk management of a VA portfolio, as the VA guarantees are long-dated liabilities that may span decades. In order to hedge the VA liability, the issuing insurance company would need to construct a hedging portfolio consisting of the underlying assets, whose positions are often determined by liability Greeks such as partial dollar Deltas. Usually, these quantities are calculated via a nested simulation approach. For insurance companies that manage large VA portfolios (e.g., 100k+ policies), calculating those quantities is extremely time-consuming or even prohibitive due to the complexity of the guarantee payoffs and the stochastic-on-stochastic nature of the nested simulation algorithm. In this paper, we extend the surrogate model-assisted nested simulation approach in Lin and Yang [(2020) Insurance: Mathematics and Economics, 91, 85–103] to efficiently calculate the total VA liability and the partial dollar Deltas for large VA portfolios with multiple underlying assets. In our proposed algorithm, the nested simulation is run using small sets of selected representative policies and representative outer loops. As a result, the computing time is substantially reduced. The computational advantage of the proposed algorithm and the importance of dynamic hedging are further illustrated through a profit and loss (P&L) analysis for a large synthetic VA portfolio. Moreover, the robustness of the performance of the proposed algorithm is tested with multiple simulation runs. Numerical results show that the proposed algorithm is able to accurately approximate different quantities of interest and that its performance is robust with respect to different sets of parameter inputs. Finally, we show how our approach could be extended to potentially incorporate stochastic interest rates and estimate other Greeks such as Rho.
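To make the partial dollar Delta concrete, the sketch below estimates it by central finite differences around each underlying fund value, revaluing the liability with common random numbers; `liability_fn` and the toy example are stand-ins of our own, not the surrogate model-assisted algorithm of Lin and Yang (2020).

```python
import numpy as np

def partial_dollar_deltas(liability_fn, fund_values, rel_shock=0.01, seed=0):
    """Estimate partial dollar Deltas of a VA liability by central finite differences.
    liability_fn(fund_values, rng) is a stand-in for an inner Monte Carlo liability
    valuation; each fund value is shocked up and down by rel_shock and revalued
    with identical random draws (common random numbers).  Illustrative only."""
    fund_values = np.asarray(fund_values, dtype=float)
    deltas = np.zeros(len(fund_values))
    for i, f_i in enumerate(fund_values):
        up, down = fund_values.copy(), fund_values.copy()
        up[i] = f_i * (1 + rel_shock)
        down[i] = f_i * (1 - rel_shock)
        # Same seed for both revaluations reduces the noise of the finite difference.
        L_up = liability_fn(up, np.random.default_rng(seed))
        L_down = liability_fn(down, np.random.default_rng(seed))
        # Dollar Delta = (dL / dF_i) * F_i, with shock size rel_shock * F_i.
        deltas[i] = (L_up - L_down) / (2 * rel_shock * f_i) * f_i
    return deltas

# Toy liability: a put-like guarantee on the total account value (hypothetical payoff).
toy_liability = lambda funds, rng: np.mean(
    np.maximum(100.0 - funds.sum() * rng.lognormal(0.0, 0.2, 10_000), 0.0))
print(partial_dollar_deltas(toy_liability, [60.0, 40.0]))
```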
In this paper, we complement the joint time-series and cross-section convergence results derived in a companion paper, Hahn, Kuersteiner, and Mazzocco (2016, Central Limit Theory for Combined Cross-Section and Time Series), by allowing for serial correlation in the time-series sample. The implication of our analysis is that the limiting distributions involve the familiar long-run variance form for the time-series component. We obtain these results at the cost of imposing strict stationarity for the time-series model and conditional independence between the time-series and cross-section samples. Our results can be applied to estimators that combine time-series and cross-section data in the presence of aggregate uncertainty in models with rationally forward-looking agents.
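For reference, the long-run variance alluded to above takes the standard form for a strictly stationary, mean-zero, serially correlated sequence $g_t$; the notation is ours and not necessarily the paper's.

```latex
% Long-run variance of a strictly stationary, mean-zero sequence g_t (our notation):
\[
\Omega \;=\; \sum_{j=-\infty}^{\infty} \mathrm{E}\!\left[ g_t\, g_{t-j}' \right]
\;=\; \Gamma(0) + \sum_{j=1}^{\infty} \bigl( \Gamma(j) + \Gamma(j)' \bigr),
\qquad \Gamma(j) = \mathrm{E}\!\left[ g_t\, g_{t-j}' \right].
\]
```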