Guidance is provided on how subscores should be reported, as well as on what might be done when subscores ought not to be reported. Advice is also given to help practitioners respond to pressure from various stakeholders when reporting subscores would be misleading.
The asymptotic behavior of the Jaccard index in G(n, p), the classical Erdős–Rényi random graph model, is studied as n goes to infinity. We first derive the asymptotic distribution of the Jaccard index of any pair of distinct vertices, as well as the first two moments of this index. Then the average of the Jaccard indices over all vertex pairs in G(n, p) is shown to be asymptotically normal under an additional mild condition that $np\to\infty$ and $n^2(1-p)\to\infty$.
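As a concrete illustration of the quantity being studied, the sketch below samples G(n, p) and averages the pairwise Jaccard index $|N(u)\cap N(v)|/|N(u)\cup N(v)|$ over all vertex pairs. This is a minimal toy of ours, not the paper's method; the function names are illustrative, and the value $p/(2-p)$ in the comment is only the first-moment heuristic for the pairwise limit.

```python
import random
from itertools import combinations

def erdos_renyi(n, p, seed=0):
    """Sample a G(n, p) graph as a list of neighbour sets."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def jaccard(adj, u, v):
    """Jaccard index of the neighbourhoods of u and v."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

n, p = 200, 0.3
adj = erdos_renyi(n, p)
avg = sum(jaccard(adj, u, v) for u, v in combinations(range(n), 2)) / (n * (n - 1) // 2)
# Heuristically, each pairwise index concentrates near p / (2 - p) as n grows,
# since |N(u) & N(v)| ~ (n-2)p^2 and |N(u) | N(v)| ~ (n-2)(2p - p^2).
print(avg)
```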
We calculate the mean throughput, number of collisions, successes, and idle slots for random tree algorithms with successive interference cancellation. Except for the case of the throughput for the binary tree, all the results are new. We furthermore disprove the claim that only the binary tree maximizes throughput. Our method works with many observables and can be used as a blueprint for further analysis.
Foodborne infections with antimicrobial-resistant Campylobacter spp. remain an important public health concern. Publicly available data collected by the National Antimicrobial Resistance Monitoring System for Enteric Bacteria related to antimicrobial resistance (AMR) in Campylobacter spp. isolated from broiler chickens and turkeys at the slaughterhouse level across the United States between 2013 and 2021 were analysed. A total of 1,899 chicken-origin (1,031 Campylobacter coli (C. coli) and 868 Campylobacter jejuni (C. jejuni)) and 798 turkey-origin (673 C. coli and 123 C. jejuni) isolates were assessed. Chicken isolates exhibited high resistance to tetracycline (43.65%), moderate resistance to ciprofloxacin (19.5%), and low resistance to clindamycin (4.32%) and azithromycin (3.84%). Turkey isolates exhibited very high resistance to tetracycline (69%) and high resistance to ciprofloxacin (39%). The probability of resistance to all tested antimicrobials, except for tetracycline, significantly decreased during the latter part of the study period. Turkey-origin Campylobacter isolates had higher odds of resistance to all antimicrobials than isolates from chickens. Compared to C. jejuni isolates, C. coli isolates had higher odds of resistance to all antimicrobials, except for ciprofloxacin. The study findings emphasize the need for poultry-type-specific strategies to address differences in AMR among Campylobacter isolates.
This paper investigates the role of motorized three-wheelers (MTW) in urban mobility within popular transport, a demand-responsive and unscheduled mode of transportation provided by self-organized small operators frequently operating in grey areas of regulation. Although popular transport is the primary mobility option for millions worldwide, knowledge about its users, operation, and environmental and social impacts remains scarce. This paper sheds light on some of the features and impacts of popular MTW, focusing on two case studies in the Caribbean with different scales and urban trajectories: Puerto Viejo, Costa Rica, and Soledad in Colombia. We explored the relationship between MTW and fragmentation–(in)accessibility–exclusion in these cities, drawing on a framework connecting these concepts in the Latin American and Caribbean context. Using primary data from qualitative and quantitative methods, the paper examines the distribution of inhibitors or enablers of accessibility within the context of unequal, splintered, and fragmented transport and communication infrastructures. Additionally, the environmental impact of MTW in terms of CO2 and PM2.5 emissions is assessed using field data from low-cost sensors. The paper argues that planning for just urban mobility necessitates considering the ecological consequences of various transportation modes and their social consequences and potential for participation and inclusion. The applied methodology introduces low-cost, replicable, and scalable data production and analysis techniques, contributing to future research on sustainable and just mobility in resource-limited urban areas.
Understanding historical environmental determinants associated with the risk of elevated marine water contamination could enhance monitoring of marine beaches in a Canadian setting and inform predictive marine water quality models and ongoing climate change preparedness efforts. This study aimed to assess the combination of environmental factors that best predicts Escherichia coli (E. coli) concentration at public beaches in Metro Vancouver, British Columbia, by combining the region’s microbial water quality data and publicly available environmental data from 2013 to 2021. We developed a Bayesian log-normal mixed-effects regression model to evaluate predictors of geometric mean E. coli concentrations at 15 beaches in the Metro Vancouver Region. We found that higher geometric mean E. coli concentrations were predicted by higher E. coli concentrations on the previous sample day, higher rainfall in the preceding 48 h, and higher 24-h average air temperature when the 24-h mean ultraviolet (UV) index was at or above its median. In contrast, higher mean salinity predicted lower E. coli concentrations. Finally, we determined that the average effects of the predictors varied considerably by beach. Our findings could form the basis for building real-time predictive marine water quality models to enable more timely beach management decision-making.
Excluding children with Shiga toxin-producing Escherichia coli (STEC) from childcare until they are microbiologically clear of the pathogen disrupts families, education, and earnings. Since the introduction of PCR, detections of non-O157 STEC serotypes in England have increased. We examined shedding duration by serotype and transmission risk to guide exclusion advice. We investigated STEC cases aged <6 years, residing in England and attending childcare, with diarrhoea onset or sample date from 31 March 2018 to 30 March 2022. Duration of shedding was the interval between the date of onset, or the date of the first positive specimen, and the earliest available negative specimen date. Transmission risk was estimated from the proportion of settings attended by infectious cases that had secondary cases. There were 367 cases (STEC O157 n = 243, 66.2%; STEC non-O157 n = 124, 33.8%). Median shedding duration was 32 days (IQR 20–44), with no significant difference between O157 and non-O157; 2% (n = 6) of cases shed for ≥100 days. Duration of shedding was reduced by 17% (95% CI 4–29) among cases reporting bloody diarrhoea. Sixteen settings underwent screening; four had secondary cases (secondary transmission rate among close contacts = 13%). Shedding duration estimates were consistent with previous studies (median 31 days, IQR 17–41). Despite the change in serotypes, the findings do not warrant changes to guidance on exclusion and supervised return of prolonged shedders.
Centrality measures aim to indicate who is important in a network. Various notions of ‘being important’ give rise to different centrality measures. In this paper, we study how important the central vertices are for the connectivity structure of the network, by investigating how the removal of the most central vertices affects the number of connected components and the size of the giant component. We use local convergence techniques to identify the limiting number of connected components for locally converging graphs and centrality measures that depend on the vertex’s neighbourhood. For the size of the giant, we prove a general upper bound. For the matching lower bound, we specialise to the case of degree centrality on one of the most popular models in network science, the configuration model, for which we show that removal of the highest-degree vertices destroys the giant most.
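A quick way to see the effect described here is to remove the top-k vertices by degree and track the largest surviving component. The sketch below is a toy experiment of ours, not the paper's analysis: it uses G(n, p) as a stand-in (the paper's matching lower bound concerns the configuration model) and degree centrality only.

```python
import random
from collections import deque
from itertools import combinations

def gnp(n, p, seed=1):
    """Sample G(n, p) as a list of neighbour sets."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def giant_size(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for s in range(len(adj)):
        if s in seen:
            continue
        seen.add(s)
        comp, queue = 0, deque([s])
        while queue:                       # BFS over surviving vertices
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

adj = gnp(500, 0.01)                       # mean degree ~ 5, so a giant exists
by_degree = sorted(range(len(adj)), key=lambda u: len(adj[u]), reverse=True)
sizes = {k: giant_size(adj, set(by_degree[:k])) for k in (0, 25, 50)}
print(sizes)                               # the giant shrinks as central vertices go
```

Removing a superset of vertices can only shrink the largest component, so the sizes are monotone in k; how fast they fall is what distinguishes degree-centrality removal from random removal.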
This authoritative guide directs consumers and users of test scores on when and how to provide subscores and how to make informed decisions based on them. The book is designed to be accessible to practitioners and score users with varying levels of technical expertise, from executives of testing organizations and students who take tests to graduate students in educational measurement, psychometricians, and test developers. The theoretical background required to evaluate subscores and improve them is provided alongside examples of tests with subscores to illustrate their use and misuse. The first chapter covers the history of tests, subtests, scores, and subscores. Later chapters go into subscore reporting, evaluating and improving the quality of subscores, and alternatives to subscores when they are not appropriate. This thorough introduction to the existing research and best practices will be useful to graduate students, researchers, and practitioners.
An emerging field in statistics, distributional regression facilitates the modelling of the complete conditional distribution, rather than just the mean. This book introduces generalized additive models for location, scale and shape (GAMLSS) – one of the most important classes of distributional regression. Taking a broad perspective, the authors consider penalized likelihood inference, Bayesian inference, and boosting as potential ways of estimating models and illustrate their usage in complex applications. Written by the international team who developed GAMLSS, the text's focus on practical questions and problems sets it apart. Case studies demonstrate how researchers in statistics and other data-rich disciplines can use the model in their work, exploring examples ranging from fetal ultrasounds to social media performance metrics. The R code and data sets for the case studies are available on the book's companion website, allowing for replication and further study.
Australia’s mandatory vaccination policies have historically allowed for non-medical exemptions (NMEs), but this changed in 2016 when the Federal Government discontinued NMEs for childhood vaccination requirements. Australian states introduced further mandatory vaccination policies during the COVID-19 pandemic for a range of occupations including healthcare workers (HCWs). There is global evidence to suggest that medical exemptions (MEs) increase following the discontinuation of NMEs; the new swathe of COVID-19 mandatory vaccination policies likely also placed further pressure on ME systems in many jurisdictions. This paper examines the state of play of mandatory vaccination and ME policies in Australia by outlining the structure and operation of these policies for childhood vaccines, then for COVID-19, with a case study of HCW mandates. Next, the paper explores HCWs’ experiences in providing vaccine exemptions to patients (and MEs in particular). Finally, the paper synthesizes existing literature and reflects on the challenges of MEs as a pressure point for people who do not want to vaccinate and for the clinicians who care for them, proposing areas for future research and action.
This research studies the robustness of permanence and the continuous dependence of the stationary distribution on the parameters for a stochastic predator–prey model with Beddington–DeAngelis functional response. We show that if the model is extinct (resp. permanent) for a parameter, it is still extinct (resp. permanent) in a neighbourhood of this parameter. In the case of extinction, the Lyapunov exponent of the predator quantity is negative and the prey quantity converges almost surely, at an exponential rate, to the saturated state in which the predator is absent. Under the condition of permanence, the unique stationary distribution converges weakly to the degenerate measure concentrated on the unique limit cycle or at the globally asymptotically stable equilibrium as the diffusion term tends to 0.
For a subset $A$ of an abelian group $G$, given its size $|A|$, its doubling $\kappa =|A+A|/|A|$, and a parameter $s$ which is small compared to $|A|$, we study the size of the largest sumset $A+A'$ that can be guaranteed for a subset $A'$ of $A$ of size at most $s$. We show that a subset $A'\subseteq A$ of size at most $s$ can be found so that $|A+A'| = \Omega (\!\min\! (\kappa ^{1/3},s)|A|)$. Thus, a sumset significantly larger than the Cauchy–Davenport bound can be guaranteed by a bounded size subset assuming that the doubling $\kappa$ is large. Building on the same ideas, we resolve a conjecture of Bollobás, Leader and Tiba that for subsets $A,B$ of $\mathbb{F}_p$ of size at most $\alpha p$ for an appropriate constant $\alpha \gt 0$, one only needs three elements $b_1,b_2,b_3\in B$ to guarantee $|A+\{b_1,b_2,b_3\}|\ge |A|+|B|-1$. Allowing the use of larger subsets $A'$, we show that for sets $A$ of bounded doubling, one only needs a subset $A'$ with $o(|A|)$ elements to guarantee that $A+A'=A+A$. We also address another conjecture and a question raised by Bollobás, Leader and Tiba on high-dimensional analogues and sets whose sumset cannot be saturated by a bounded size subset.
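To make the sumset notation concrete, here is a toy computation; the sets and modulus are illustrative choices of ours, not from the paper. An arithmetic progression $A$ has doubling just under 2, and already a three-element set of shifts achieves $|A+B|\ge |A|+|B|-1$.

```python
def sumset(A, B, p=None):
    """A + B = {a + b : a in A, b in B}, reduced mod p when p is given."""
    if p is None:
        return {a + b for a in A for b in B}
    return {(a + b) % p for a in A for b in B}

p = 101                          # toy prime, small enough to inspect by hand
A = set(range(10, 30))           # progression of size 20: |A + A| = 39, doubling 1.95
B = {0, 1, 50}                   # three shifts
print(len(sumset(A, A, p)))      # 39
print(len(sumset(A, B, p)))      # 41 >= |A| + |B| - 1 = 22
```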
Solvency II requires that firms with Internal Models derive the Solvency Capital Requirement directly from the probability distribution forecast generated by the Internal Model. A number of UK insurance undertakings do this via an aggregation model consisting of proxy models and a copula. Since 2016 there have been a number of industry surveys on the application of these models, with the 2019 Prudential Regulation Authority (“PRA”) led industry wide thematic review identifying a number of areas of enhancement. This concluded that there was currently no uniform best practice. While there have been many competing priorities for insurers since 2019, the Working Party expects that firms will have either already made changes to their proxy modelling approach in light of the PRA survey, or will have plans to do so in the coming years. This paper takes the PRA feedback into account and explores potential approaches to calibration and validation, taking into consideration the different heavy models used within the industry and relative materiality of business lines.
For graphs $G$ and $H$, the Ramsey number $r(G,H)$ is the smallest positive integer $N$ such that any red/blue edge colouring of the complete graph $K_N$ contains either a red $G$ or a blue $H$. A book $B_n$ is a graph consisting of $n$ triangles all sharing a common edge.
Recently, Conlon, Fox, and Wigderson conjectured that for any $0\lt \alpha \lt 1$, the random lower bound $r(B_{\lceil \alpha n\rceil },B_n)\ge (\sqrt{\alpha }+1)^2n+o(n)$ is not tight. In other words, there exists some constant $\beta \gt (\sqrt{\alpha }+1)^2$ such that $r(B_{\lceil \alpha n\rceil },B_n)\ge \beta n$ for all sufficiently large $n$. This conjecture holds for every $\alpha \lt 1/6$ by a result of Nikiforov and Rousseau from 2005, which says that in this range $r(B_{\lceil \alpha n\rceil },B_n)=2n+3$ for all sufficiently large $n$.
We disprove the conjecture of Conlon, Fox, and Wigderson. Indeed, we show that the random lower bound is asymptotically tight for every $1/4\leq \alpha \leq 1$. Moreover, we show that for any $1/6\leq \alpha \le 1/4$ and large $n$, $r(B_{\lceil \alpha n\rceil }, B_n)\le \left (\frac 32+3\alpha \right ) n+o(n)$, where the inequality is asymptotically tight when $\alpha =1/6$ or $1/4$. We also give a lower bound of $r(B_{\lceil \alpha n\rceil }, B_n)$ for $1/6\le \alpha \lt \frac{52-16\sqrt{3}}{121}\approx 0.2007$, showing that the random lower bound is not tight, i.e., the conjecture of Conlon, Fox, and Wigderson holds in this interval.
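For reference, the book graph $B_n$ from the definition above is easy to construct explicitly; a minimal sketch (the function name is ours):

```python
def book(n):
    """Edge list of the book graph B_n: n triangles sharing the edge (0, 1).
    Vertices 2..n+1 are the 'pages', each joined to both ends of the spine."""
    edges = [(0, 1)]                  # the common (spine) edge
    for page in range(2, n + 2):
        edges.append((0, page))
        edges.append((1, page))
    return edges

print(len(book(4)))                   # B_n has 2n + 1 edges -> 9
```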
A system experiences random shocks over time, with two critical levels, $d_1$ and $d_2$, where $d_{1} \lt d_{2}$. $k$ consecutive shocks with magnitudes between $d_1$ and $d_2$ partially damage the system, causing it to transition to a lower, partially working state. Shocks with magnitudes above $d_2$ have a catastrophic effect, resulting in complete failure. This framework gives rise to a multi-state system with an indeterminate number of states. When the time between successive shocks follows a phase-type distribution, we present a detailed analysis of the system’s dynamic reliability properties, such as the lifetime of the system, the time it spends in perfect functioning, and the total time it spends in partially working states.
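The shock mechanism described above can be sketched in a few lines. The simulation below is an illustrative special case of ours, not the paper's analysis: inter-shock times are exponential (the simplest phase-type distribution), magnitudes are uniform on [0, 1), and the thresholds and rate are arbitrary choices.

```python
import random

def simulate_shocks(k, d1, d2, rate, horizon=1000.0, seed=0):
    """Run the multi-state shock model once.
    k consecutive shocks with magnitude in [d1, d2) move the system one
    state down; any shock >= d2 is catastrophic.
    Returns (stop_time, number_of_partial_degradations)."""
    rng = random.Random(seed)
    t, run, degradations = 0.0, 0, 0
    while t < horizon:
        t += rng.expovariate(rate)    # exponential inter-shock time
        m = rng.random()              # shock magnitude, uniform on [0, 1)
        if m >= d2:
            return t, degradations    # catastrophic shock: complete failure
        if m >= d1:
            run += 1
            if run == k:              # k-th consecutive moderate shock
                degradations += 1     # drop to the next partially working state
                run = 0
        else:
            run = 0                   # a harmless shock resets the run

    return horizon, degradations      # survived the whole observation window

t_fail, deg = simulate_shocks(k=3, d1=0.5, d2=0.95, rate=1.0)
print(t_fail, deg)
```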