Chapter 1 provides an introduction to the study of statistics and covers, among others, the following specific topics: statistical software in data analysis, descriptive and inferential statistics, the measurement of variables, and an introduction to the Stata software package.
In July 2021, Public Health Wales received two notifications of salmonella gastroenteritis. Both cases had attended the same barbecue, held two days earlier to celebrate Eid al-Adha. Additional cases who had attended the same barbecue were found and an outbreak investigation was initiated. The barbecue was attended by a North African community’s social network. On the same day, smaller lunches were held in three homes in the social network. Many people attended both a lunch and the barbecue. A case was defined as someone with an epidemiological link to the barbecue and/or lunches, with diarrhoea and/or vomiting with date of onset following these events. We undertook a cohort study of 36 people attending the barbecue and/or a lunch, and a nested case-control study using Firth logistic regression. A communication campaign, sensitive towards different cultural practices, was developed in collaboration with the affected community. Consumption of a traditional raw liver dish, ‘marrara’, at the barbecue was the likely vehicle for infection (Firth logistic regression, aOR: 49.99, 95% CI 1.71–1461.54, p = 0.02). Meat and offal came from two local butchers (same supplier), and samples yielded whole genome sequences identical to those of the cases. Future outbreak investigations should be made relevant to the affected community by considering dishes beyond those found in routine questionnaires.
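As an illustration of the kind of analysis described in this abstract, here is a minimal sketch of estimating an adjusted odds ratio for a suspected food vehicle; the data are hypothetical, and the sketch uses ordinary maximum-likelihood logistic regression from statsmodels rather than the Firth-penalized regression the investigators used.

```python
# Minimal sketch of an odds-ratio estimate for a suspected food vehicle.
# Hypothetical data; uses ordinary (non-penalized) logistic regression from
# statsmodels as a stand-in for the Firth-penalized model used in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical cohort: 1 = became a case / ate the item, 0 otherwise.
df = pd.DataFrame({
    "case":    [1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0],
    "marrara": [1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0],
    "salad":   [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0],
})

X = sm.add_constant(df[["marrara", "salad"]])
model = sm.Logit(df["case"], X).fit(disp=False)

print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```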
Migration is an important risk factor for malaria transmission, creating networks that connect Plasmodium between communities. This study aims to understand when and why people in the Peruvian Amazon migrated and how characteristics of these migrants are associated with malaria risk. A cohort of 2,202 participants was followed for three years (July 2006 to October 2009), with thrice-weekly active surveillance to record infection and recent travel, including travel destination(s) and duration away. Migration occurred more frequently in the dry season, but the 7-day rolling mean (7DRM) streamflow was positively correlated with migration events (OR 1.25 (95% CI: 1.138, 1.368)). High-frequency and low-frequency migrant populations reported 9.7 (IRR 7.59 (95% CI: .381, 13.160)) and 4.1 (IRR 2.89 (95% CI: 1.636, 5.099)) times more P. vivax cases than those considered non-migrants, and 30.7 (IRR 32.42 (95% CI: 7.977, 131.765)) and 7.4 (IRR 7.44 (95% CI: 1.783, 31.066)) times more P. falciparum cases, respectively. High-frequency migrants employed in manual labour within their community were at 2.45 (95% CI: 1.113, 5.416) times higher risk than non-employed low-frequency migrants. This study confirms the importance of migration for malaria risk, as well as factors increasing risk among the migratory community, including sex, occupation, and educational status.
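As a rough illustration of how incidence rate ratios of this kind can be obtained, here is a minimal sketch using a Poisson regression with a person-time offset; the counts, follow-up times, and grouping are hypothetical and do not reproduce the study’s model.

```python
# Sketch of estimating incidence rate ratios (IRRs) from follow-up data.
# Hypothetical numbers; a Poisson GLM with a log person-time offset is one
# standard way to compare migrant groups with non-migrants.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "cases":        [3, 12, 25],          # malaria episodes observed per group
    "person_years": [40.0, 35.0, 30.0],   # follow-up time in each group
    "migrant_freq": [0, 1, 2],            # 0 = non-, 1 = low-, 2 = high-frequency
})

# Dummy-code migration frequency, with non-migrants as the reference group.
X = pd.get_dummies(df["migrant_freq"], prefix="freq", drop_first=True).astype(float)
X = sm.add_constant(X)

model = sm.GLM(
    df["cases"], X,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

print(np.exp(model.params))      # IRRs relative to non-migrants
print(np.exp(model.conf_int()))  # 95% confidence intervals
```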
The increase in global population and urbanization is presenting significant challenges to society: space is becoming increasingly scarce, demand is exceeding capacity for deteriorating infrastructure, transportation is fraught with congestion, and environmental impacts are accelerating. Underground space, and particularly tunnels, has a key role to play in tackling these challenges. However, the cost, risk, uncertainty, and complexity of the tunneling process have impeded its growth. In this paper, we envision several technological advancements that can potentially innovate and transform the mechanized tunneling industry, including artificial intelligence (AI), autonomous, and bio-inspired systems. The proliferation of AI may assist human engineers and operators in making informed decisions systematically and quantitatively based on massive real-time data during tunneling. Autonomous tunneling systems may enable precise and predictable tunneling operations with minimal human intervention and facilitate the construction of massive and large-scale underground infrastructure projects that were previously challenging or unfeasible using conventional methods. Bio-inspired systems may provide valuable references and strategies for more efficient tunneling design and construction concepts. While these technological advancements can offer great promise, they also face considerable challenges, such as improving accessibility to and shareability of tunneling data, developing robust, reliable, and explainable machine learning systems, as well as scaling the mechanics and ensuring the applicability of bio-inspired systems from the prototype level to real-world applications. Addressing these challenges is imperative to ensure the successful implementation of these innovations for future tunneling.
Given a fixed graph $H$ and a constant $c \in [0,1]$, we can ask what graphs $G$ with edge density $c$ asymptotically maximise the homomorphism density of $H$ in $G$. For all $H$ for which this problem has been solved, the maximum is always asymptotically attained on one of two kinds of graphs: the quasi-star or the quasi-clique. We show that for any $H$ the maximising $G$ is asymptotically a threshold graph, while the quasi-clique and the quasi-star are the simplest threshold graphs, having only two parts. This result gives us a unified framework to derive a number of results on graph homomorphism maximisation, some of which were also found quite recently and independently using several different approaches. We show that there exist graphs $H$ and densities $c$ such that the optimising graph $G$ is neither the quasi-star nor the quasi-clique (Day and Sarkar, SIAM J. Discrete Math. 35(1), 294–306, 2021). We also show that for $c$ large enough all graphs $H$ maximise on the quasi-clique (Gerbner et al., J. Graph Theory 96(1), 34–43, 2021), and for any $c \in [0,1]$ the density of $K_{1,2}$ is always maximised on either the quasi-star or the quasi-clique (Ahlswede and Katona, Acta Math. Hung. 32(1–2), 97–120, 1978). Finally, we extend our results to uniform hypergraphs.
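For readers unfamiliar with the notation, the quantity being maximised is the standard homomorphism density (general definition, not specific to this paper):

```latex
% Homomorphism density of a fixed graph H in a host graph G: the probability
% that a uniformly random map from V(H) to V(G) is a graph homomorphism.
\[
  t(H, G) \;=\; \frac{\hom(H, G)}{|V(G)|^{|V(H)|}},
\]
% so the problem is: for a fixed edge density c, determine which graphs G of
% edge density c asymptotically maximise t(H, G) as |V(G)| grows.
```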
Calculation of loss scenarios is a fundamental requirement of simulation-based capital models, and these scenarios are commonly approximated. Within a life insurance setting, a loss scenario may involve an asset-liability optimization. When cashflows and asset values depend on only a small number of risk factor components, low-dimensional approximations may be used as inputs into the optimization, resulting in an approximation of the loss. By considering these loss approximations as perturbations of linear optimization problems, approximation errors in loss scenarios can be bounded to first order and attributed to specific proxies. This attribution creates a mechanism for improving the approximations and for eventually eliminating approximation errors in capital estimates through targeted exact computation. The results are demonstrated through a stylized worked example and a corresponding numerical study. Advances in the error analysis of proxy models enhance confidence in capital estimates. Beyond error analysis, the presented methods can be applied to general sensitivity analysis and to the calculation of risk.
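A minimal sketch of the underlying idea, using a toy linear program and the generic first-order (envelope-style) estimate of how a proxy error in the objective propagates into the optimal value; the numbers and constraints are illustrative, not the paper’s asset-liability model.

```python
# Sketch: a loss scenario posed as a linear program, plus a first-order
# estimate of how an approximation error in the objective coefficients
# (a "proxy" error) propagates into the optimal value.  Toy numbers only.
import numpy as np
from scipy.optimize import linprog

# Exact objective (e.g. asset yields) and a proxy approximation of it.
c_exact = np.array([-1.00, -1.20, -0.80])   # maximise yield -> minimise -yield
c_proxy = np.array([-1.02, -1.15, -0.80])

# Simple constraints: budget of 1, each asset weight between 0 and 0.6.
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([1.0])
bounds = [(0.0, 0.6)] * 3

res_proxy = linprog(c_proxy, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
res_exact = linprog(c_exact, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")

# First-order estimate of the value error from using the proxy objective:
# delta_value ~ (c_exact - c_proxy) . x_proxy, evaluated at the proxy optimum.
first_order = (c_exact - c_proxy) @ res_proxy.x
actual = res_exact.fun - res_proxy.fun
print(f"first-order estimate: {first_order:.4f}, actual difference: {actual:.4f}")
```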
Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) are widely recognised as two prevalent sexually transmitted infections that can have detrimental effects on women’s reproductive health. Previous research has concentrated on high-risk populations, resulting in limited epidemiological data on the general population. The objective of this study was therefore to estimate the prevalence of CT and NG among women attending public primary health care in Tlaxcala, Mexico. The study sample included 2,396 women already participating in the cervical cancer screening programme, from July to November 2014. After obtaining informed consent, CT and NG tests were conducted on cervical samples using a nucleic acid amplification test. We estimated the prevalences with 95% confidence intervals (CIs). Women who tested positive were promptly notified and provided with appropriate treatment. In our study population, CT and NG prevalences were 3.2 (95% CI: 2.6–4.0) and 0.01 (95% CI: 0.01–0.03), respectively. Although the overall prevalence was low, CT prevalence was higher in younger women (age < 40). Given the potentially significant impact of CT and NG on women’s health, adequate surveillance and guaranteed rapid referral to appropriate treatment are priorities for the control of these diseases.
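As a simple illustration of the prevalence calculation, here is a minimal sketch using statsmodels’ proportion_confint; the positive count shown is hypothetical, chosen only to land near the reported 3.2% figure.

```python
# Sketch: prevalence with a 95% confidence interval from screening counts.
# Hypothetical positive count on the scale of the study (2,396 women tested).
from statsmodels.stats.proportion import proportion_confint

n_tested = 2396
n_ct_positive = 77   # hypothetical count of C. trachomatis positives

prevalence = n_ct_positive / n_tested
low, high = proportion_confint(n_ct_positive, n_tested, alpha=0.05, method="wilson")

print(f"CT prevalence: {100 * prevalence:.1f}% "
      f"(95% CI: {100 * low:.1f}-{100 * high:.1f}%)")
```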
We determine almost sure limits of rescaled intrinsic volumes of the construction steps of fractal percolation in ${\mathbb R}^d$ for any dimension $d\geq 1$. We observe a factorization of these limit variables which allows one, in particular, to determine their expectations and covariance structure. We also show the convergence of the rescaled expectations and variances of the intrinsic volumes of the construction steps to the expectations and variances of the limit variables, and we give rates for this convergence in some cases. These results significantly extend our previous work, which addressed only limits of expectations of intrinsic volumes.
Several governmental organizations around the world aim for algorithmic accountability of artificial intelligence systems. However, there are few specific proposals on how exactly to achieve it. This article provides an extensive overview of possible transparency and inspectability mechanisms that contribute to accountability for the technical components of an algorithmic decision-making system. Following the different phases of a generic software development process, we identify and discuss several such mechanisms. For each mechanism, we estimate the cost, in time and money, that might be associated with it.
This study aimed to assess the ability of a real-time reverse transcription polymerase chain reaction (RT-PCR) assay with multiple targets to detect SARS-CoV-2 and its variants in a single test. Nasopharyngeal specimens were collected from patients in Granada, Spain, between January 2021 and December 2022. Five allele-specific RT-PCR kits were used sequentially, each designed to detect the predominant variant at the time. When the Alpha variant was dominant, the kit targeted the HV69/70 deletion and the E and N genes. When Delta replaced Alpha, the kit incorporated the L452R mutation in addition to the E and N genes. When Omicron became dominant, L452R was replaced with the N679K mutation. Before each variant kit was introduced, a comparative analysis was carried out against SARS-CoV-2 whole genome sequencing (WGS). The results demonstrated that RT-PCR with multiple targets can provide rapid and effective detection of SARS-CoV-2 and its variants in a single test. A very high degree of agreement (96.2%) was obtained between RT-PCR and WGS. Allele-specific RT-PCR assays make it easier to implement epidemiological surveillance systems for effective public health decision making.
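As a small illustration of the RT-PCR versus WGS comparison, here is a minimal sketch of a percent-agreement and concordance-table calculation; the variant calls are hypothetical.

```python
# Sketch: percent agreement between two variant-typing methods on the same
# specimens.  Hypothetical labels standing in for allele-specific RT-PCR
# calls and whole genome sequencing (WGS) results.
import pandas as pd

calls = pd.DataFrame({
    "rt_pcr": ["Alpha", "Delta", "Delta", "Omicron", "Omicron", "Omicron"],
    "wgs":    ["Alpha", "Delta", "Omicron", "Omicron", "Omicron", "Omicron"],
})

agreement = (calls["rt_pcr"] == calls["wgs"]).mean()
print(f"Percent agreement: {100 * agreement:.1f}%")
print(pd.crosstab(calls["rt_pcr"], calls["wgs"]))  # concordance table
```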
This systematic review and meta-analysis aims to estimate the prevalence of coronavirus disease 2019 (COVID-19) vaccine hesitancy in Turkey, which can inform future health policies and strategies. A comprehensive search of various databases was conducted using keywords related to COVID-19 vaccine hesitancy in Turkey. Quality assessment was performed using the Joanna Briggs Institute (JBI) checklist for prevalence studies, and data extraction was conducted. The random-effects model (DerSimonian and Laird method) was used for the pooled prevalence analysis (95% confidence interval [CI]). A total of 1,072 articles were identified. After removing duplicates and excluding articles, 61 articles remained for bias assessment. Among these, 19 articles with low risk of bias were included in the review and meta-analysis. The total population included in the analysis was 15,164, and the pooled prevalence of vaccine hesitancy was 30.5% (95% CI: 24.3–36.8%). The prevalence of vaccine hesitancy was 39.8% (95% CI: 31.4–48.2%) in studies conducted before the initiation of vaccination, while in studies conducted after the commencement of vaccination it was 20.4% (95% CI: 12.9–28%). We suggest conducting high-quality studies in different populations to understand the level of vaccine hesitancy, as many previous studies have focused mainly on healthcare workers and students, and the rest were community-based studies, which have generally shown high bias. We also suggest that early vaccination can reduce vaccine hesitancy.
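For readers interested in the pooling step, here is a minimal sketch of DerSimonian and Laird random-effects pooling of study-level prevalences; the study counts are hypothetical, and the pooling is done on the raw proportion scale for simplicity, whereas published analyses often transform proportions first.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study-level prevalences.
# Hypothetical inputs; raw proportion scale used for simplicity.
import numpy as np

# hesitant respondents and sample sizes from hypothetical studies
events = np.array([120, 310, 95, 240, 180])
n      = np.array([400, 900, 500, 600, 700])

p = events / n
var = p * (1 - p) / n            # within-study variance of each proportion

# Fixed-effect weights and heterogeneity statistic Q
w = 1.0 / var
p_fixed = np.sum(w * p) / np.sum(w)
Q = np.sum(w * (p - p_fixed) ** 2)

# DerSimonian-Laird between-study variance tau^2
k = len(p)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights, pooled estimate, and 95% CI
w_star = 1.0 / (var + tau2)
p_pooled = np.sum(w_star * p) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled prevalence: {100 * p_pooled:.1f}% "
      f"(95% CI: {100 * (p_pooled - 1.96 * se):.1f}-{100 * (p_pooled + 1.96 * se):.1f}%)")
```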
Chapter 5 gives an extended empirical example of the Benford agreement procedure for assessing the validity of social science data. The example uses country-level data collected and estimated by the Sea Around Us organization on the dollar values of reported and unreported fish landings from 2010 to 2016. We report Benford agreement analyses for the Sea Around Us data (1) by reporting status, (2) by decade, (3) for a large fishing region of 22 West African countries, and (4) for each of the 22 individual countries in West Africa.
Chapter 4 begins with a discussion of the types and kinds of data most suitable for an analysis that uses the Benford probability distribution. Next we describe an R computer program – program Benford – designed to evaluate observed data for agreement with the Benford probability distribution, and we give an example of output from the program using a typical dataset. We then give an overview of our workflow for Benford agreement analyses, outlining our process for assessing the validity of data. We end the chapter with a discussion of the concept of Benford validity, which we will employ in subsequent chapters.
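As an illustration of the kind of first-digit check such a program performs, here is a minimal generic sketch comparing a dataset’s leading-digit distribution with the Benford distribution via a chi-square test; this is not the authors’ program Benford, and the data are simulated.

```python
# Sketch: compare the leading-digit distribution of a dataset against the
# Benford distribution with a chi-square goodness-of-fit test.
import numpy as np
from scipy.stats import chisquare

def leading_digit(x):
    """First significant digit of a positive number."""
    s = f"{abs(x):e}"          # scientific notation, e.g. '3.742000e+05'
    return int(s[0])

rng = np.random.default_rng(0)
data = rng.lognormal(mean=10.0, sigma=2.0, size=5000)   # simulated stand-in data

digits = np.array([leading_digit(v) for v in data])
observed = np.array([(digits == d).sum() for d in range(1, 10)])

# Benford's expected first-digit probabilities: log10(1 + 1/d)
benford = np.log10(1 + 1 / np.arange(1, 10))
expected = benford * len(digits)

stat, pvalue = chisquare(observed, f_exp=expected)
print("observed proportions:", np.round(observed / len(digits), 3))
print("Benford proportions: ", np.round(benford, 3))
print(f"chi-square = {stat:.1f}, p = {pvalue:.3f}")
```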
Chapter 7 takes a closer look at some of the Sea Around Us fish-landings data that we assessed for Benford agreement in Chapter 5. We chose these data because of the mixed agreement findings among them: while the full dataset and several sets of subgroups indicated that the data exhibited Benford validity, when we analyzed West African countries individually, a number of them were found to have unacceptable Benford agreement and therefore problematic Benford validity. We present ways in which researchers can assess the impact of unacceptable Benford agreement on their analyses.