Large-scale coordinated efforts have been dedicated to understanding the global health and economic implications of the COVID-19 pandemic. Yet, the rapid spread of discrimination and xenophobia against specific populations has largely been neglected. Understanding public attitudes toward migration is essential to counter discrimination against immigrants and promote social cohesion. Traditional data sources for monitoring public opinion are often limited, notably by slow collection and release activities. New forms of data, particularly from social media, can help overcome these limitations. While some bias exists, social media data are produced at an unprecedented temporal frequency and geographical granularity, are collected globally, and are accessible in real time. Drawing on a data set of 30.39 million tweets and natural language processing, this article aims to measure shifts in public opinion about migration during the early stages of the COVID-19 pandemic in Germany, Italy, Spain, the United Kingdom, and the United States. Results show an increase in migration-related tweets along with COVID-19 cases during national lockdowns in all five countries. Yet, we found no evidence of a significant increase in anti-immigration sentiment, as rises in the volume of negative messages were offset by comparable increases in positive messages. Additionally, we found evidence of growing social polarization concerning migration, with high concentrations of both strongly positive and strongly negative sentiments.
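The two quantities the abstract discusses, net sentiment and polarization, can be sketched with a toy lexicon-based scorer. The lexicon, threshold, and example tweets below are entirely hypothetical; the study itself relies on large-scale natural language processing, not this caricature.

```python
# Hypothetical sentiment lexicon; the real study uses trained NLP models.
POSITIVE = {"welcome", "support", "help", "solidarity"}
NEGATIVE = {"ban", "threat", "invasion", "crime"}

def sentiment_score(text: str) -> int:
    """Return (#positive - #negative) lexicon hits; the sign gives polarity."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def polarization(scores, threshold=2):
    """Share of messages with strongly positive or strongly negative scores."""
    strong = [s for s in scores if abs(s) >= threshold]
    return len(strong) / len(scores) if scores else 0.0

tweets = [
    "we welcome refugees and support integration",   # strongly positive
    "migration is a threat and an invasion",         # strongly negative
    "new report on migration published today",       # neutral
]
scores = [sentiment_score(t) for t in tweets]
print(scores)                 # [2, -2, 0]
print(polarization(scores))   # 2 of 3 messages are strongly polarized
```

Under this toy scorer, a rise in message volume with balanced positive and negative counts leaves the mean score flat while the polarization share grows, which is the pattern the abstract reports.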
We estimate the density and its derivatives using a local polynomial approximation to the logarithm of an unknown density function f. The estimator is guaranteed to be non-negative and achieves the same optimal rate of convergence in the interior as on the boundary of the support of f. The estimator is therefore well suited to applications in which non-negative density estimates are required, such as in semiparametric maximum likelihood estimation. In addition, we show that our estimator compares favorably with other kernel-based methods, both in terms of asymptotic performance and computational ease. Simulation results confirm that our method can perform similarly to, or better than, these alternative methods in finite samples when they are used with optimal inputs, that is, an Epanechnikov kernel and an optimally chosen bandwidth sequence. We provide code in several languages.
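The key structural point, that exponentiating a polynomial fit to the log-density yields a non-negative estimate everywhere, can be illustrated with a deliberately crude caricature: interpolate a quadratic through a few (grid point, log-height) pairs and exponentiate. The grid points and heights below are made up, and this is not the paper's local likelihood machinery.

```python
import math

def parabola_through(p0, p1, p2):
    """Lagrange interpolation: the quadratic through three (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

# Hypothetical local log-density heights at three grid points.
pts = [(0.0, math.log(0.2)), (0.5, math.log(0.5)), (1.0, math.log(0.3))]
logf = parabola_through(*pts)
fhat = lambda x: math.exp(logf(x))   # exponentiation guarantees fhat(x) > 0
assert all(fhat(x) > 0 for x in [-1.0, 0.25, 2.0])
```

A polynomial fitted directly to density heights could dip below zero between grid points; fitting on the log scale and exponentiating rules this out by construction, which is the property the abstract emphasises.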
Novel navigation applications promote safe driving by providing a driving behavior score, mainly based on experts’ domain knowledge, for each finished trip. In this paper, using automobile insurance claims data and associated telematics car driving data, we propose a supervised driving risk scoring neural network model. This one-dimensional convolutional neural network takes time series of individual car driving trips as input and returns a risk score in the unit interval (0,1). By incorporating the credibility average risk score of each driver, the classical Poisson generalized linear model for automobile insurance claims frequency prediction can be improved significantly. Hence, compared with non-telematics-based insurers, telematics-based insurers can discover more heterogeneity in their portfolios and attract safer drivers with premium discounts.
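The pipeline described, a one-dimensional convolution over a trip's time series followed by pooling and a squashing activation into (0,1), can be sketched in a few lines. The kernel weights, pooling choice, and example trips below are hypothetical stand-ins, not the paper's trained network.

```python
import math

def conv1d(series, kernel):
    """Valid-mode 1-D convolution (really cross-correlation) over a series."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def risk_score(series, kernel=(0.5, -1.0, 0.5), bias=0.0):
    """Map a trip's speed series to a score in (0, 1)."""
    feats = conv1d(series, kernel)                    # acceleration-like features
    pooled = sum(abs(f) for f in feats) / len(feats)  # global average pooling
    return 1.0 / (1.0 + math.exp(-(pooled + bias)))   # sigmoid -> (0, 1)

smooth_trip = [30.0] * 10                    # constant speed
erratic_trip = [30.0, 50.0, 20.0, 55.0, 15.0, 60.0, 10.0, 50.0, 25.0, 45.0]
print(risk_score(smooth_trip), risk_score(erratic_trip))  # erratic scores higher
```

The sigmoid is what confines the output to the unit interval; with these illustrative weights the discrete-Laplacian kernel responds to speed changes, so the erratic trip receives the higher score.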
This paper highlights a tension between semiparametric efficiency and bootstrap consistency in the context of a canonical semiparametric estimation problem, namely the problem of estimating the average density. It is shown that although simple plug-in estimators suffer from bias problems preventing them from achieving semiparametric efficiency under minimal smoothness conditions, the nonparametric bootstrap automatically corrects for this bias and that, as a result, these seemingly inferior estimators achieve bootstrap consistency under minimal smoothness conditions. In contrast, several “debiased” estimators that achieve semiparametric efficiency under minimal smoothness conditions do not achieve bootstrap consistency under those same conditions.
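For concreteness, the "simple plug-in estimator" of the average density θ = ∫f² = E[f(X)] and a nonparametric-bootstrap percentile interval can be sketched as follows. The Gaussian kernel, bandwidth, and simulated sample are illustrative choices, not the paper's setup.

```python
import math
import random

def gauss_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def avg_density_plugin(xs, h):
    """Plug-in estimate (1/n^2) * sum_ij K_h(x_i - x_j) of theta = E[f(X)]."""
    n = len(xs)
    return sum(gauss_kernel((a - b) / h) for a in xs for b in xs) / (n * n * h)

def bootstrap_interval(xs, h, n_boot=100, alpha=0.05, seed=0):
    """Percentile interval from resampling the data with replacement."""
    rng = random.Random(seed)
    stats = sorted(avg_density_plugin(rng.choices(xs, k=len(xs)), h)
                   for _ in range(n_boot))
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(100)]
theta_hat = avg_density_plugin(xs, h=0.5)
lo, hi = bootstrap_interval(xs, h=0.5)
# For N(0,1) the true value is 1/(2*sqrt(pi)) ≈ 0.282.
```

The point of the paper is precisely that this naive estimator is biased (note the i = j diagonal terms), yet the bootstrap distribution inherits and cancels that bias, which this sketch does not attempt to demonstrate formally.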
Digital identity (eID) systems are a crucial piece in the digital services ecosystem. They connect individuals to a variety of socioeconomic opportunities but can also reinforce power asymmetries between organizations and individuals. Data collection practices can negatively impact an individual’s right to privacy, autonomy, and self-determination. Protecting individual rights, however, may be at odds with imperatives of profit maximization or national security. The use of eID technologies is hence highly contested. Current approaches to governing eID systems have been unable to fully address the trade-offs between the opportunities and risks associated with these systems. The responsible innovation (RI) literature provides a set of principles to govern disruptive innovations, such as eID systems, toward societally desirable outcomes. This article uses RI principles to develop a framework to govern eID systems in a more inclusive, responsible, and user-centered manner. The proposed framework seeks to complement existing practices for eID system governance by bringing forth principles of deliberation and democratic engagement to build trust amongst stakeholders of the eID system and deliver shared socioeconomic benefits.
This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework in which material parameters are assigned distributions that are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek to capture both the inherent coupon-to-coupon variability of the material and uncertainties around the repeatability of the test. We address this problem using a hierarchical Bayesian model; however, a vanilla computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, reducing the dimension of the inference problem to the hyperparameters alone: those parameters describing the population statistics of the material model. This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in this numerical approximation. Crucially, our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested on two examples: the first is a compression test of a simple spring model using synthetic data; the second, a more complex example, uses real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
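The central trick, integrating out each experiment's own parameter so that inference runs over the population hyperparameters only, can be sketched for a one-dimensional Gaussian population. Grid (trapezoid) quadrature stands in for the paper's emulator and Bayesian quadrature, and the coupon measurements, noise level, and hyperparameter grid are all hypothetical.

```python
import math

def norm_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def log_marginal(y, mu, sigma, noise=0.1, grid_n=201):
    """log p(y | mu, sigma) with each experiment's theta integrated out
    by the trapezoid rule over mu +/- 5*sigma."""
    total = 0.0
    for yi in y:
        lo, hi = mu - 5 * sigma, mu + 5 * sigma
        step = (hi - lo) / (grid_n - 1)
        vals = [norm_pdf(yi, th, noise) * norm_pdf(th, mu, sigma)
                for th in (lo + k * step for k in range(grid_n))]
        integral = step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        total += math.log(integral)
    return total

# Hypothetical coupon-level measurements (e.g. spring stiffness per test).
y = [1.02, 0.95, 1.10, 0.99, 1.05]
# Coarse grid search over the two hyperparameters only.
best = max(((mu, sig) for mu in [0.9, 1.0, 1.1] for sig in [0.05, 0.1, 0.2]),
           key=lambda ms: log_marginal(y, *ms))
print(best)
```

Note the inference target is the pair (mu, sigma) describing the coupon population, never the five per-coupon parameters themselves; that dimension reduction is what makes the full hierarchical problem tractable in the paper.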
Hand hygiene (HH) performance on entering intensive care units (ICUs) is commonly accepted but often inadequately performed. We developed a simple, inexpensive module that connects touchless dispensers of alcohol sanitiser (TDAS) to the automatic doors of a paediatric ICU, and assessed the impact of this intervention on the HH compliance of hospital staff and visitors. A prospective observational study was conducted over a 3-week period prior to the intervention, followed by a 4-week period post intervention. HH performance was monitored by a research assistant whose office location enabled direct and video-assisted observation of the ICU entrance. A total of 609 entries to the ICU were recorded. Overall HH performance was 46.9% (92/196) before and 98.3% (406/413) after the intervention. Our findings suggest that HH performance on entering an ICU can be improved via a mechanism that makes operation of an automatic door dependent on use of a TDAS system, and can thus contribute to infection control.
This paper proposes and analyses a stochastic model for the spread of an infectious disease transmitted between clients and care workers in the UK domiciliary (home) care setting. Interactions between clients and care workers are modelled using specially generated networks, with network parameters reflecting realistic patterns of care needs and visit allocation. These networks are then used to simulate susceptible-exposed-infected-recovered/dead (SEIR/D)-type epidemic dynamics with different numbers of infectious and recovery stages. The results indicate that, with the same overall capacity provided by care workers, the minimum peak proportion of infection and the smallest overall size of infection are achieved for the highest proportion of overlap in visit allocation, i.e. when care workers have the highest chance of being allocated a visit to the same client they have visited before. An intuitive explanation is that, while providing the required care coverage, maximising overlap in visit allocation reduces the possibility of an infectious care worker inadvertently spreading the infection to other clients. The model is generic and can be adapted to any directly transmitted infectious disease, such as, most recently, coronavirus disease 2019, provided accurate estimates of disease parameters can be obtained from real data.
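A minimal discrete-time SEIR sketch on a client/care-worker visit network shows how the overlap parameter enters. All numbers and the visit-allocation rule are illustrative stand-ins for the paper's calibrated networks, and a single toy run will not necessarily reproduce the reported ordering of outcomes by overlap.

```python
import random

def simulate(n_clients=50, n_workers=10, days=120, overlap=1.0,
             p_transmit=0.1, incubation=3, infectious=7, seed=0):
    rng = random.Random(seed)
    S, E, I, R = "S", "E", "I", "R"
    n = n_clients + n_workers                 # clients first, then workers
    state, timer = [S] * n, [0] * n
    state[0], timer[0] = I, infectious        # seed one infectious client
    regular = [rng.randrange(n_workers) for _ in range(n_clients)]
    peak = 0
    for _ in range(days):
        for c in range(n_clients):
            # With probability `overlap` the client's regular worker visits;
            # otherwise the visit is allocated to a random worker.
            wid = regular[c] if rng.random() < overlap else rng.randrange(n_workers)
            w = n_clients + wid
            for a, b in ((c, w), (w, c)):     # transmission in either direction
                if state[a] == I and state[b] == S and rng.random() < p_transmit:
                    state[b], timer[b] = E, incubation
        for v in range(n):                    # progress E -> I -> R
            if state[v] in (E, I):
                timer[v] -= 1
                if timer[v] <= 0:
                    if state[v] == E:
                        state[v], timer[v] = I, infectious
                    else:
                        state[v] = R
        peak = max(peak, state.count(I))
    total = sum(s != S for s in state)        # ever exposed or infected
    return peak, total

peak_hi, total_hi = simulate(overlap=1.0)     # workers revisit the same clients
peak_lo, total_lo = simulate(overlap=0.0)     # visits fully randomised
```

The intuition in the abstract corresponds to `overlap` close to 1: each infectious worker then contacts few distinct clients, limiting the network paths available to the epidemic.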
The dengue virus type 3 (DENV-3) homotypic outbreak cycles reported in Klang Valley, Malaysia in 1992–1995 and 2002 demonstrated different epidemic magnitudes and durations. These outbreak cycles were caused by two closely related strains of viruses within the DENV-3 genotype II (DENV-3/II). The role of viral genotypic diversity and factors that could have influenced this phenomenon were investigated. The serum neutralisation sensitivity of DENV-3/II strains responsible for the DENV-3 outbreak cycles in 1992–1995 and 2002 was examined. Representative virus isolates from the respective outbreaks were subjected to a virus neutralisation assay using identified sera of patients with homotypic (DENV-3) or heterotypic dengue infections (DENV-1 and DENV-2). Results from the study suggested that isolates representing DENV-3/II group E (DENV-3/II-E) from the 1992–1995 outbreak and DENV-3/II group F (DENV-3/II-F) from the 2002 outbreak were neutralised at similar capacity (intergenotypic differences <2-fold) by sera of patients infected with DENV-3, DENV-1 and DENV-2/Asian genotypes. Sera of the DENV-2/Cosmopolitan infection efficiently neutralised DENV-3/II-F (FRNT50 = 508.0) at a neutralisation capacity similar to that against its own homotypic serotype, DENV-2 (FRNT50 = 452.5), but not against DENV-3/II-E (FRNT50 = 100.8). The different neutralisation sensitivities of DENV-3/II strains towards cross-reacting DENV-2 heterotypic immunity could play a role in shaping the recurring DENV-3 outbreak pattern in Malaysia. Two genetic variations, E-132 (H/Y) and E-479 (A/V), were identified on the envelope proteins of DENV-3/II-E and DENV-3/II-F, respectively. The E-132 variation was predicted to affect protein stability.
A more extensive study on the implications of naturally occurring genetic variations within closely related DENV genotypes for the neutralisation profile and protective immunity would, however, be needed for a better understanding of the DENV spread pattern in a hyperendemic setting.
Phenomenological models are popular for describing the epidemic curve. We present how they can be used at different phases in the epidemic, by modelling the daily number of new hospitalisations (or cases). As real-time prediction of the hospital capacity is important, a joint model of the new hospitalisations, number of patients in hospital and in intensive care unit (ICU) is proposed. This model allows estimation of the length of stay in hospital and ICU, even if no (or limited) individual level information on length of stay is available. Estimation is done in a Bayesian framework. In this framework, real-time alarms, defined as the probability of exceeding hospital capacity, can be easily derived. The methods are illustrated using data from the COVID-19 pandemic in March–June 2020 in Belgium, but are widely applicable.
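The occupancy logic behind the capacity alarm can be sketched with a forward simulation: given daily new hospitalisations and an assumed geometric length of stay, simulate bed occupancy and estimate the probability of ever exceeding capacity. The admissions curve, discharge probability, and capacity below are illustrative; the paper instead estimates length of stay jointly with the epidemic curve in a Bayesian framework.

```python
import random

def prob_exceed_capacity(admissions, p_discharge=0.125, capacity=100,
                         n_sim=500, seed=0):
    """Monte Carlo probability that bed occupancy ever exceeds capacity,
    assuming a geometric length of stay with mean 1/p_discharge days."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sim):
        occupied, breached = 0, False
        for new in admissions:
            # Each occupied bed is vacated independently with prob p_discharge.
            discharges = sum(rng.random() < p_discharge for _ in range(occupied))
            occupied += new - discharges
            breached = breached or occupied > capacity
        exceed += breached
    return exceed / n_sim

# Exponential-phase admissions curve (illustrative phenomenological growth).
admissions = [int(2 * 1.1 ** t) for t in range(30)]
alarm = prob_exceed_capacity(admissions)
print(f"P(exceed capacity) = {alarm:.2f}")
```

In the paper's Bayesian setting the analogous alarm is a posterior probability; here it is a plain Monte Carlo frequency under fixed, assumed parameters.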
Vulnerability to coronavirus disease (COVID)-19 varies due to differences in interferon gamma (IFNγ) immunity. We investigated whether a key modifiable interferon precursor, interleukin-18, was related to COVID-19, overall and by severity, using Mendelian randomisation. We used four established genome-wide significant genetic predictors of interleukin-18 applied to the most recent genome-wide association study of COVID-19 (June 2021) to obtain Mendelian randomisation inverse variance weighted estimates by severity, i.e. any (cases = 112 612, non-cases = 2 474 079), hospitalised (cases = 24 274, non-cases = 2 061 529) and very severe (cases = 8779, non-cases = 1 001 875) COVID-19. To be comprehensive, we also conducted an exploratory analysis for IFNγ and two related cytokines with less well-established genetic predictors, i.e. interleukin-12 and interleukin-23. Genetically predicted interleukin-18 was associated with lower risk of any COVID-19 (odds ratio (OR) 0.96 per standard deviation, 95% confidence interval (CI) 0.94–0.99, P-value 0.004) and of very severe COVID-19 (OR 0.88, 95% CI 0.78–0.999, P-value 0.048). Sensitivity analysis and a more liberal genetic instrument selection gave largely similar results. Few genome-wide significant genetic predictors were available for IFNγ, interleukin-12 or interleukin-23, and no associations with COVID-19 were evident. Interleukin-18 could be a modifiable target to prevent COVID-19 and should be further explored in an experimental design.
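The inverse-variance-weighted (IVW) estimator the abstract refers to is a weighted combination of per-variant Wald ratios. The sketch below uses the standard fixed-effect IVW formula; the per-SNP association statistics are hypothetical, not the study's instruments.

```python
import math

def ivw(bx, by, se_y):
    """Fixed-effect IVW estimate: Wald ratios by/bx combined with
    weights bx^2 / se_y^2 (exposure betas bx, outcome betas by)."""
    num = sum(x * y / s ** 2 for x, y, s in zip(bx, by, se_y))
    den = sum(x * x / s ** 2 for x, s in zip(bx, se_y))
    beta = num / den
    se = math.sqrt(1.0 / den)
    return beta, se

# Hypothetical SNP-exposure (interleukin-18) and SNP-outcome (COVID-19) betas.
bx = [0.20, 0.15, 0.30, 0.25]
by = [-0.010, -0.006, -0.012, -0.011]
se_y = [0.004, 0.005, 0.003, 0.004]
beta, se = ivw(bx, by, se_y)
odds_ratio = math.exp(beta)   # outcome betas on the log-odds scale -> OR per SD
```

With outcome betas on the log-odds scale, exponentiating the IVW estimate yields an odds ratio per standard deviation of the exposure, the scale on which the abstract reports OR 0.96.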
Combinatorial samplers are algorithmic schemes devised for the approximate- and exact-size generation of large random combinatorial structures, such as context-free words, various tree-like data structures, maps, tilings, and RNA molecules. They can be adapted to combinatorial specifications with additional parameters, allowing more flexible control over the output profile of parametrised combinatorial patterns. One can control, for instance, the number of leaves or the profile of node degrees in trees, or the number of certain sub-patterns in generated strings. However, such flexible control requires an additional and nontrivial tuning procedure. Using techniques of convex optimisation, we present an efficient tuning algorithm for multi-parametric combinatorial specifications. Our algorithm works in time polynomial in the system description length, the number of tuning parameters, the number of combinatorial classes in the specification, and the logarithm of the total target size. We demonstrate the effectiveness of our method on a series of practical examples, including rational, algebraic, and so-called Pólya specifications. We show how our method can be adapted to a broad range of less typical combinatorial constructions, including symmetric polynomials, labelled sets and cycles with cardinality lower bounds, simple increasing trees, and substitutions. Finally, we discuss some practical aspects of our prototype tuner implementation and provide its benchmark results.
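The one-parameter special case of the tuning problem can be sketched for Boltzmann samplers of binary trees, with generating function T(z) = 1 + z·T(z)², i.e. T(z) = (1 − √(1 − 4z))/(2z). Tuning means choosing z so that the expected output size E_z[size] = z·T′(z)/T(z) hits a target; here bisection suffices, whereas the paper's tuner handles multi-parameter systems via convex optimisation.

```python
import math

def T(z):
    """Generating function of binary trees, size = number of internal nodes."""
    return (1 - math.sqrt(1 - 4 * z)) / (2 * z)

def expected_size(z, eps=1e-9):
    """E_z[size] = z * T'(z) / T(z), with T' by central difference."""
    dT = (T(z + eps) - T(z - eps)) / (2 * eps)
    return z * dT / T(z)

def tune(target, lo=1e-9, hi=0.25 - 1e-6, iters=200):
    """Find z in (0, 1/4) with E_z[size] = target; E is increasing in z."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if expected_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z = tune(100.0)
assert abs(expected_size(z) - 100.0) < 1e-3
```

As the target size grows, the tuned z approaches the dominant singularity at 1/4, which is why naive root-finding degrades and the paper's polynomial-time guarantee (in the logarithm of the target size) is nontrivial.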
We consider computation of market values of bonus payments in multi-state with-profit life insurance. The bonus scheme consists of additional benefits bought according to a dividend strategy that depends on the past realization of financial risk, the current individual insurance risk, the number of additional benefits currently held, and so-called portfolio-wide means describing the shape of the insurance business. We formulate numerical procedures that efficiently combine simulation of financial risk with classic methods for the outstanding insurance risk. Special attention is given to the case where the number of additional benefits bought only depends on the financial risk. Methods and results are illustrated via a numerical example.
This study aims to carry out and document a systematic analysis of the literature on the importance of actuarial management in insurance managerial decision-making in the twenty-first century. After a short introduction to the business context, the paper presents an analysis of a rigorous review of the literature published between the years 2000 and 2020, which highlights the benefits and challenges of the formal application of various risk management tools. The following topics are emphasized: (i) actuarial cycle control and uncertainty management, (ii) quantitative strategic risk framework and financial modelling, and (iii) enterprise risk management with a value-based approach. This work will help future researchers to gain a better understanding of, and explicitly account for, the different contributions and benefits of actuarial management in the context of managerial decision-making in an organisation.
In this paper, we present a new nonparametric method for estimating a conditional quantile function and develop its weak convergence theory. The proposed estimator is computationally easy to implement and automatically ensures quantile monotonicity by construction. For inference, we propose to use a residual bootstrap method. Our Monte Carlo simulations show that this new estimator compares well with the check-function-based estimator in terms of estimation mean squared error. The bootstrap confidence bands yield adequate coverage probabilities. An empirical example uses a dataset of Canadian high school graduate earnings, illustrating the usefulness of the proposed method in applications.
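The monotonicity-by-construction idea can be illustrated with a k-nearest-neighbour conditional quantile estimator: because every quantile level tau is read off the same sorted local sample, the estimated quantile curves cannot cross. This toy is an illustrative stand-in, not the paper's estimator, and the simulated data are hypothetical.

```python
import random

def knn_quantile(xs, ys, x0, tau, k=25):
    """Empirical tau-quantile of y among the k nearest neighbours of x0."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    local = sorted(ys[i] for i in nearest)
    return local[min(int(tau * k), k - 1)]

rng = random.Random(0)
xs = [rng.uniform(0.0, 1.0) for _ in range(500)]
ys = [2.0 * x + rng.gauss(0.0, 0.1) for x in xs]   # true median at x0 is 2*x0
q25 = knn_quantile(xs, ys, 0.5, 0.25)
q50 = knn_quantile(xs, ys, 0.5, 0.50)
q75 = knn_quantile(xs, ys, 0.5, 0.75)
assert q25 <= q50 <= q75    # no quantile crossing, by construction
```

Check-function (pinball-loss) estimators fit each tau separately and can produce crossing quantile curves in finite samples; estimators that read all levels off one local distribution avoid that pathology automatically, which is the property highlighted in the abstract.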
This paper studies optimal insurance design from the perspective of an insured when there is a possibility that the insurer will default on its promised indemnity. Default of the insurer leads to limited liability, and the promised indemnity is only partially recovered in case of default. To alleviate potential ex post moral hazard, an incentive compatibility condition is added to restrict the permissible indemnity functions. Assuming that the premium is determined as a function of the expected coverage, and under the mean–variance preference of the insured, we derive the explicit structure of the optimal indemnity function through the marginal indemnity function formulation of the problem. It is shown that the optimal indemnity function depends on the first- and second-order expectations of the random recovery rate conditioned on the realized insurable loss. The methodology and results in this article complement the literature on optimal insurance subject to default risk and provide new insights into problems of similar types.
The bandwidth theorem of Böttcher, Schacht and Taraz states that any n-vertex graph G with minimum degree $\big(\tfrac{k-1}{k}+o(1)\big)n$ contains all n-vertex k-colourable graphs H with bounded maximum degree and bandwidth o(n). Recently, a subset of the authors proved a random graph analogue of this statement: for $p\gg \big(\tfrac{\log n}{n}\big)^{1/\Delta}$ a.a.s. each spanning subgraph G of G(n,p) with minimum degree $\big(\tfrac{k-1}{k}+o(1)\big)pn$ contains all n-vertex k-colourable graphs H with maximum degree $\Delta$, bandwidth o(n), and at least $C p^{-2}$ vertices not contained in any triangle. This restriction on vertices in triangles is necessary, but limiting. In this paper, we consider how it can be avoided. A special case of our main result is that, under the same conditions, if additionally all vertex neighbourhoods in G contain many copies of $K_\Delta$ then we can drop the restriction on H that $Cp^{-2}$ vertices should not be in triangles.
Enterotoxigenic Escherichia coli (ETEC) strains that express various fimbrial or nonfimbrial colonisation factors (CFs) and enterotoxins are critical causes of diarrhoeal diseases. Human ETEC serotype O169:H41 (O169) has been a representative of epidemic ETEC worldwide; the organism shows massive adherence to HEp-2 cells similar to that of enteroaggregative E. coli. Previously, we determined the complete sequence of the unstable virulence plasmid, pEntYN10. The plasmid included a unique set of genes encoding a novel CF resembling K88 (F4) of porcine ETEC, in addition to CS6, a well-known representative CF of human ETEC, and another novel CF similar to CS8 (CFA/III) of human ETEC. In the present study, we focused on the K88-like CF (hereafter, K88O169), which may allow the organisms to infect domestic livestock like the original K88-harbouring strains that cause diarrhoea in piglets. Samples were tested for antibodies against recombinant proteins of the possible paralogous adhesins FaeG1 and FaeG2 from K88O169 and the FaeG of typical K88 (F4). The seroepidemiological study using the recombinant antigens (the two paralogs FaeG1 and FaeG2 from K88O169) showed reactivity of porcine (18.0%) and bovine (17.1%) sera to K88O169 FaeG1 and/or FaeG2 antigens in indirect ELISA tests. These results suggest that E. coli with the K88O169 adhesin can infect various hosts, including pigs and cattle. This is the first report of domestic livestock having antibodies to the K88O169 of human ETEC. Although human ETEC had been thought to be distinct from those of domestic animals based on CFs, zoonotic strains may conceal themselves among human ETEC organisms. The concept of One Health should be adopted to intervene in ETEC infections among animals and humans.