This paper discusses the challenges and opportunities in accessing data to improve workplace relations law enforcement, with reference to minimum employment standards such as wages and working hours regulation. Our paper highlights innovative examples of government and trade union efforts to collect and use data to improve the detection of noncompliance. These examples reveal the potential of data science as a compliance tool but also underline the importance of building a data ecosystem that machine learning applications can actually draw on. The effectiveness of using data and data science tools to improve workplace law enforcement depends on the ability of regulatory actors to access useful data they do not collect or hold themselves. Under “open data” principles, government data is increasingly made available to the public so that it can be combined with nongovernment data to generate value. Through mapping and analysis of the Australian workplace relations data ecosystem, we show that the availability of data relevant to workplace law compliance falls well short of open data principles. However, we argue that with the right protocols in place, improved data collection and sharing will assist regulatory actors in the effective enforcement of workplace laws.
We investigate the behaviour of a typical large meandric system, proving a central limit theorem for the number of components of a given shape. Our main tool is a theorem of Gao and Wormald that allows us to deduce a central limit theorem from the asymptotics of large moments of our quantities of interest.
When people are asked to recall their social networks, theoretical and empirical work tells us that they rely on shortcuts, or heuristics. Cognitive social structures (CSSs) are multilayer social networks where each layer corresponds to an individual’s perception of the network. With multiple perceptions of the same network, CSSs contain rich information about how these heuristics manifest, motivating the question, Can we identify people who share the same heuristics? In this work, we propose a method for identifying cognitive structure across multiple network perceptions, analogous to how community detection aims to identify social structure in a network. To simultaneously model the joint latent social and cognitive structure, we study CSSs as three-dimensional tensors, employing low-rank nonnegative Tucker decompositions (NNTuck) to approximate the CSS—a procedure closely related to estimating a multilayer stochastic block model (SBM) from such data. We propose the resulting latent cognitive space as an operationalization of the sociological theory of social cognition by identifying individuals who share relational schema. In addition to modeling cognitively independent, dependent, and redundant networks, we propose a specific model instance and related statistical test for testing when there is social-cognitive agreement in a network: when the social and cognitive structures are equivalent. We use our approach to analyze four different CSSs and give insights into the latent cognitive structures of those networks.
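As a rough illustration of the tensor representation described above (and not the authors' code), the sketch below applies a nonnegative Tucker decomposition to a CSS stored as an $n \times n \times n$ binary tensor, using the tensorly package; the ranks, variable names, and the reading of the perceiver factor are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' implementation): decompose a CSS tensor
# with a low-rank nonnegative Tucker model using tensorly. Ranks are assumed values.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

n = 20                                    # number of actors / perceivers
rng = np.random.default_rng(0)
css = rng.integers(0, 2, size=(n, n, n))  # css[i, j, k]: does perceiver k see a tie i -> j?

tensor = tl.tensor(css, dtype=tl.float64)
# Social rank (sender/receiver modes) and cognitive rank (perceiver mode) are
# modelling choices; 4 and 2 here are arbitrary values for the sketch.
core, factors = non_negative_tucker(tensor, rank=[4, 4, 2], n_iter_max=200, random_state=0)

U_send, U_recv, U_perc = factors          # nonnegative factor matrices
# Rows of U_perc give each perceiver's loading on the latent "cognitive" dimensions;
# perceivers with similar rows can be read as sharing a relational schema.
print(U_perc.shape)                       # (n, 2)
```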
Eaton (1992) considered a general parametric statistical model paired with an improper prior distribution for the parameter and proved that if a certain Markov chain, constructed using the model and the prior, is recurrent, then the improper prior is strongly admissible, which (roughly speaking) means that the generalized Bayes estimators derived from the corresponding posterior distribution are admissible. Hobert and Robert (1999) proved that Eaton’s Markov chain is recurrent if and only if its so-called conjugate Markov chain is recurrent. The focus of this paper is a family of Markov chains that contains all of the conjugate chains that arise in the context of a Poisson model paired with an arbitrary improper prior for the mean parameter. Sufficient conditions for recurrence and transience are developed and these are used to establish new results concerning the strong admissibility of non-conjugate improper priors for the Poisson mean.
The Institute and Faculty of Actuaries UK Asbestos Working Party’s 2020 update sets out the methodology and assumptions used to estimate the potential cost of asbestos-related claims to the UK Employers’ Liability (EL) Insurance Market. The Working Party has estimated the UK EL Insurance Market cost for the following asbestos-related disease types: mesothelioma, lung cancer, asbestosis and pleural thickening, and pleural plaques. For each disease type the Working Party has constructed a range of scenarios to highlight the uncertainty of these estimates. The Working Party reminds practitioners who use these scenarios that they should always consider the experience and trends that have emerged since the scenarios were published, adjusting the scenarios to take new information into account.
We establish here an integral inequality for real log-concave functions, which can be viewed as an average monotone likelihood property. This inequality is then applied to examine the monotonicity of failure rates.
The calculation of life and health insurance liabilities is based on assumptions about mortality and disability rates, and insurance companies face systematic insurance risks if assumptions about these rates change. In this paper, we study how to manage systematic insurance risks in a multi-state setup by considering securities linked to the transition intensities of the model. We assume there exists a market for trading two securities linked to, for instance, mortality and disability rates: the de-risking option and the de-risking swap. We then describe the optimization problem of finding the de-risking strategy that minimizes systematic insurance risks in the multi-state setup. We develop a numerical example based on the disability model, and the results imply that systematic insurance risks decrease significantly when de-risking strategies are implemented.
The $d$-process generates a graph at random by starting with an empty graph on $n$ vertices, then adding edges one at a time uniformly at random among all pairs of vertices which both have degree at most $d-1$ and are not already adjacent. We show that, in the evolution of a random graph with $n$ vertices under the $d$-process with $d$ fixed, with high probability, for each $j \in \{0,1,\dots,d-2\}$, the minimum degree jumps from $j$ to $j+1$ when the number of steps left is on the order of $(\ln n)^{d-j-1}$. This answers a question of Ruciński and Wormald. More specifically, we show that, when the last vertex of degree $j$ disappears, the number of steps left divided by $(\ln n)^{d-j-1}$ converges in distribution to an exponential random variable with mean $\frac{j!}{2(d-1)!}$; furthermore, these $d-1$ distributions are independent.
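A direct way to get a feel for these jump times is to simulate the $d$-process and record how many steps remain when the minimum degree increases. The sketch below does exactly that; it only illustrates the process defined above and is not the paper's proof technique.

```python
# Simulate the d-process: repeatedly add a uniformly random edge between two
# non-adjacent vertices that both still have degree at most d-1.
import random

def d_process_min_degree_jumps(n, d, seed=0):
    rng = random.Random(seed)
    deg = [0] * n
    adj = [set() for _ in range(n)]
    min_deg_trace = []                       # minimum degree in the graph after each step
    while True:
        # all pairs of non-adjacent vertices whose degrees are both at most d-1
        candidates = [(u, v) for u in range(n) for v in range(u + 1, n)
                      if deg[u] < d and deg[v] < d and v not in adj[u]]
        if not candidates:
            break
        u, v = rng.choice(candidates)
        adj[u].add(v); adj[v].add(u)
        deg[u] += 1; deg[v] += 1
        min_deg_trace.append(min(deg))
    total_steps = len(min_deg_trace)
    # number of steps remaining at the moment the minimum degree first reaches j+1
    jumps = {}
    for j in range(d - 1):
        first = next((t for t, m in enumerate(min_deg_trace) if m >= j + 1), None)
        if first is not None:
            jumps[j] = total_steps - (first + 1)
    return jumps

print(d_process_min_degree_jumps(n=200, d=3))  # e.g. {0: ..., 1: ...}
```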
The European public sector has for a long time tried to change its activities and its relation to the public through the production and provision of data and data-based technologies. Recent debates have drawn attention to data uses, through which societal value may be realized. However, often absent from these discussions is a conceptual and methodological debate on how to grasp and study such uses. This collection proposes a turn toward data practices, intended here as the analysis of data uses and policies as they are articulated, understood, or turned into situated activities by different actors in specific contexts, involving organizational rules, socioeconomic factors, discourses, and artifacts. Through a mix of conceptual and methodological studies, the contributions explore how data-driven innovation within public institutions is understood, imagined, planned for, conducted, or assessed. The situations examined in this special issue show, for instance, that data initiatives carried out by different actors lack institutional rules to align data use with the actual needs of citizens; that data scientists are important moral actors whose ethical reasoning should be fostered; and that the materiality of data practices, such as databases, enables and constrains opportunities for public engagement. Collectively, the contributions offer new insights into what constitutes “data-driven innovation practices,” how different practices are assembled, and what their different political, moral, economic, and organizational implications are. The contributions focus on three particular topics of concern: the making of ethical and normative values in practice; organizational collaborations with and around data; and methodological innovations in studying data practices.
This study aimed to investigate the diverse clinical manifestations and simple early biomarkers predicting mortality of COVID-19 patients admitted to the emergency department (ED). A total of 710 patients with COVID-19 were enrolled from 6,896 patients presenting to the ED between January 2022 and March 2022. During the study period, a total of 478 patients tested positive for COVID-19, among whom 222 (46.4%) presented with extrapulmonary manifestations of COVID-19; 49 (10.3%) patients displayed gastrointestinal manifestations, followed by neurological (n = 41; 8.6%) and cardiac manifestations (n = 31; 6.5%). In total, 54 (11.3%) patients died. A Cox proportional hazards model revealed that old age, acute kidney injury at presentation, increased total leukocyte counts, low platelet counts, decreased albumin levels, and increased LDH levels were the independent predictors of mortality. The albumin levels exhibited the highest area under the curve in receiver operating characteristic analysis, with a value of 0.860 (95% confidence interval, 0.796–0.875). The study showed the diverse clinical presentations and simple-to-measure prognostic markers in COVID-19 patients presenting to the ED. Serum albumin levels can serve as a novel and simple early biomarker to identify COVID-19 patients at high risk of death.
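The statistical workflow described above (a Cox proportional hazards model for predictors of mortality, plus ROC analysis for albumin) could be reproduced along the following lines. This is only a sketch with hypothetical column names, assuming the lifelines and scikit-learn packages rather than anything used in the study.

```python
# Minimal sketch of the analyses described above: Cox model + ROC AUC.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ed_covid_cohort.csv")   # hypothetical file, one row per patient

# Cox proportional hazards model for in-hospital mortality.
covariates = ["age", "aki_at_presentation", "wbc", "platelets", "albumin", "ldh"]
cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_days", "died"]],
        duration_col="followup_days", event_col="died")
print(cph.summary[["exp(coef)", "p"]])    # hazard ratios and p-values

# Discrimination of albumin for mortality: lower albumin means higher risk,
# so use the negated value as the risk score.
auc = roc_auc_score(df["died"], -df["albumin"])
print(f"AUC for albumin: {auc:.3f}")
```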
The Risk Margin under Solvency II is determined as the cost of holding capital over the lifetime of liabilities, whereby future costs are discounted to the valuation date at risk-free rates. An implicit assumption of the current method is that the Risk Margin should allow for new capital to be raised after the occurrence of losses no larger than required capital. Using “Cost of Capital” as a general valuation method, various approaches are discussed, giving rise to several alternative calculation methods for the Risk Margin. A comparison is made with the adjustment proposed by EIOPA in 2020, and an approach is also explored in which future capital raisings are treated as contingent commitments. Each of the approaches discussed can be justified on its own merits in the context of Solvency II legislation, yet they lead to substantially different results for liabilities with long durations. Therefore, a more precise specification of the function of the Risk Margin and its underlying assumptions is desirable.
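For orientation, and not taken from the paper itself, the current cost-of-capital calculation referred to above is usually written as
$$\mathrm{RM} = \mathrm{CoC} \sum_{t \geq 0} \frac{\mathrm{SCR}(t)}{(1+r_{t+1})^{t+1}},$$
where $\mathrm{CoC}$ is the prescribed cost-of-capital rate (6% under current rules), $\mathrm{SCR}(t)$ the projected Solvency Capital Requirement at time $t$, and $r_{t+1}$ the risk-free rate for maturity $t+1$. The alternative approaches discussed above can be read as changing the assumptions behind this formula.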
The U.S. federal government annually awards billions of dollars as contracts to procure different products and services from external businesses. Although the federal government’s immense purchasing power provides a unique opportunity to invest in the nation’s women-owned businesses (WOBs) and minority-owned businesses (MOBs) and advance the entrepreneurial dreams of many more Americans, gender and racial disparities in federal procurement are pervasive. In this study, we undertake a granular examination of these disparities by analyzing the data on 1,551,610 contracts awarded by 58 different federal government agencies. Specifically, we examine the representation of WOBs and MOBs in contracts with varying levels of STEM intensity and across 19 different contract categories, which capture the wide array of products and services purchased by the federal government. We show that contracts with higher levels of STEM intensity are associated with a lower likelihood of being awarded to WOBs and MOBs. Interestingly, the negative association between a contract’s STEM intensity and its likelihood of being awarded to MOBs is particularly salient for Black- and Hispanic-owned businesses. Among the 19 categories of contracts, Black-owned businesses are more likely to receive contracts that are characterized by lower median pay levels. Collectively, these results provide data-driven evidence demonstrating the need to make a distinction between the different categories of MOBs and to consider the type of products and services being procured when examining racial disparities in federal procurement.
This paper is motivated by the findings of a 2020 review by the Institute and Faculty of Actuaries, which found that commutation factors differed widely between schemes, that they were typically significantly below transfer value factors, and that in nearly 30% of cases, trustees did not act fully on the actuary’s advice. The author suggests that regulation of commutation factors, instead of factors being decided at trustees’ discretion, could be a suitable way forward. The focus is commutation factors for UK defined benefit pension schemes, having regard to the law that governs the discretion available to trustees. Relevant legal principles are explained, including the requirement for trustees’ decisions to be made for a proper purpose and to be made with due care and skill, taking into account relevant considerations. These principles are applied to the setting of commutation factors. The author describes four methods trustees may use to assess the actuarial equivalence of the pension being exchanged for cash, which is ordinarily part of the process of setting commutation factors. None of the four is entirely satisfactory, although it is suggested that there are some advantages in viewing commutation as a transaction between trustees and members. The possible use of market-consistent factors is one of the topics discussed. Commutation factors can also incorporate guarantee charges and/or deductions for underfunding, although the author explains the argument that the latter should not be commonly applied. The role of employers’ and members’ expectations is discussed and can explain why commutation factors can reasonably be less than 100% of actuarial equivalence. It is argued that the impact of commutation on employers’ contributions can in some circumstances justify adjusting commutation factors. The paper also considers other reasons sometimes put forward for reducing factors: tax, utility and optionality. The author also argues that reviewing commutation factors only every three years sits uneasily with legal principles. Further enquiry is suggested as to the responsibilities in law of actuaries when certifying that factors are reasonable. The author suggests that trust law permits trustees to use their discretion in a way that can produce a wide range of outcomes, which may be regarded as unsatisfactory for determining what may be an important part of a member’s reward package, and that a better approach may be for the government to introduce regulations on commutation factors, including a form of disclosure to help inform members’ choice on exercising the option.
We focus on exponential semi-Markov decision processes with unbounded transition rates. We first provide several sufficient conditions under which the value iteration procedure converges to the optimal value function and optimal deterministic stationary policies exist. These conditions are also valid for general semi-Markov decision processes possibly with accumulation points. Then, we apply our results to a service rate control problem with impatient customers. The resulting exponential semi-Markov decision process has unbounded transition rates, which makes the well-known uniformization technique inapplicable. We analyze the structure of the optimal policy and the monotonicity of the optimal value function by using the customization technique that was introduced by the author in prior work.
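For readers unfamiliar with the value iteration procedure referred to above, the sketch below shows the generic scheme for a finite discounted decision model. It deliberately ignores the paper's semi-Markov dynamics and unbounded transition rates, which are exactly what calls for the more careful conditions developed there; all names are illustrative.

```python
# Generic value-iteration sketch for a finite discounted decision model.
import numpy as np

def value_iteration(rewards, transitions, discount=0.95, tol=1e-8):
    """rewards[s, a]: one-step reward; transitions[a][s, s']: transition probabilities."""
    n_states, n_actions = rewards.shape
    v = np.zeros(n_states)
    while True:
        # Bellman operator: expected one-step reward plus discounted continuation value.
        q = np.stack([rewards[:, a] + discount * transitions[a] @ v
                      for a in range(n_actions)], axis=1)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)   # value function, deterministic stationary policy
        v = v_new
```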
This paper proposes a theoretical insurance model to explain well-documented loss underreporting and to study how strategic underreporting affects insurance demand. We consider a utility-maximizing insured who purchases a deductible insurance contract and follows a barrier strategy to decide whether she should report a loss. The insurer adopts a bonus-malus system with two rate classes, and the insured will move to or stay in the more expensive class if she reports a loss. First, we fix the insurance contract (deductibles) and obtain the equilibrium reporting strategy in semi-closed form. A key result is that the equilibrium barriers in both rate classes are strictly greater than the corresponding deductibles, provided that the insured economically prefers the less expensive rate class, thereby offering a theoretical explanation for underreporting. Second, we study an optimal deductible insurance problem in which the insured strategically underreports losses to maximize her utility. We find that the equilibrium deductibles are strictly positive, suggesting that full insurance, often assumed in the related literature, is not optimal. Moreover, in equilibrium, the insured underreports a positive amount of her loss. Finally, we examine how underreporting affects the insurer’s expected profit.
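Schematically, and with notation that is only illustrative (the abstract does not fix symbols), the barrier reporting strategy in rate class $i$ with deductible $d_i$ and barrier $b_i$ reads
$$\text{report the loss } X \iff X > b_i, \qquad \text{with } b_i > d_i \text{ in equilibrium},$$
so losses falling in $(d_i, b_i]$ are covered by the contract but deliberately go unreported.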
This paper retrospectively analysed the prevalence of macrolide-resistant Mycoplasma pneumoniae (MRMP) in some parts of China. Between January 2013 and December 2019, we collected 4,145 respiratory samples, including pharyngeal swabs and alveolar lavage fluid. The highest PCR-positive rate of M. pneumoniae was 74.5% in Beijing; the highest resistance rate was 100% in Shanghai, and the lowest was 20% in Gansu. The highest PCR-positive rate of M. pneumoniae was 74.5% in 2013, and the highest MRMP rate was 97.4% in 2019; the PCR-positive rate of M. pneumoniae for adults in Beijing was 17.9% and the MRMP rate was 10.48%. Among the children diagnosed with community-acquired pneumonia (CAP), the PCR-positive and macrolide-resistant rates of M. pneumoniae were both higher in those with severe disease. A2063G in domain V of 23S rRNA was the major macrolide-resistance mutation, accounting for more than 90%. For all MRMP isolates, the minimum inhibitory concentrations (MICs) of erythromycin and azithromycin were ≥ 64 μg/ml, and the MICs of tetracycline and levofloxacin were ≤ 0.5 μg/ml and ≤ 1 μg/ml, respectively. Macrolide resistance varied across regions and years. Among inpatients, the macrolide-resistance rate was higher in severe pneumonia. A2063G was the most common mutation, and we found no resistance to tetracycline or levofloxacin.
This study aimed to understand the population and contact tracer uptake of the quick response (QR)-code-based function of the New Zealand COVID Tracer App (NZCTA) used for digital contact tracing (DCT). We used a retrospective cohort of all COVID-19 cases between August 2020 and February 2022. Cases of Asian and other ethnicities were 2.6 times (adjusted relative risk (aRR) 2.58, 95% confidence interval (CI) 2.18, 3.05) and 1.8 times (aRR 1.81, 95% CI 1.58, 2.06) more likely than Māori cases to generate a token during the Delta period, and this persisted during the Omicron period. Contact tracing organization also influenced location token generation, with cases handled by National Case Investigation Service (NCIS) staff being 2.03 (95% CI 1.79, 2.30) times more likely to generate a token than cases managed by clinical staff at local Public Health Units (PHUs). Public uptake and participation in the location-based system, independent of contact tracer uptake, were estimated at 45%. The positive predictive value (PPV) of the QR code system was estimated to be close to nil for detecting close contacts but close to 100% for detecting casual contacts. Our paper shows that the QR-code-based function of the NZCTA likely made a negligible impact on the COVID-19 response in New Zealand (NZ) in relation to isolating potential close contacts of cases but was likely effective at identifying and notifying casual contacts.
We set up a formal framework to characterize encompassing of nonparametric models through the $L^2$ distance. We contrast it with previous literature on the comparison of nonparametric regression models. We then develop testing procedures for the encompassing hypothesis that are fully nonparametric. Our test statistics depend on kernel regression, raising the issue of the choice of bandwidth. We investigate two alternative approaches to obtain a “small bias property” for our test statistics. We show the validity of a wild bootstrap method. We empirically study the use of a data-driven bandwidth and illustrate the attractive features of our tests for small and moderate samples.
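The two computational ingredients mentioned above, kernel regression and the wild bootstrap, can be sketched as follows. The statistic used here is a generic placeholder rather than the paper's encompassing statistic, and the bandwidths are arbitrary.

```python
# Nadaraya-Watson kernel regression with a Rademacher wild bootstrap for a
# generic distance-type statistic (illustration only).
import numpy as np

def nw_fit(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimator with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def wild_bootstrap_pvalue(x, y, statistic, h, n_boot=499, seed=0):
    rng = np.random.default_rng(seed)
    m_hat = nw_fit(x, y, x, h)
    resid = y - m_hat
    t_obs = statistic(x, y, h)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        eps = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher multipliers
        y_star = m_hat + eps * resid                 # wild bootstrap sample
        t_boot[b] = statistic(x, y_star, h)
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)

# Placeholder statistic: squared L2 distance between fits at two bandwidths.
stat = lambda x, y, h: np.mean((nw_fit(x, y, x, h) - nw_fit(x, y, x, 2 * h)) ** 2)
x = np.random.default_rng(1).uniform(size=200)
y = np.sin(4 * x) + 0.3 * np.random.default_rng(2).standard_normal(200)
print(wild_bootstrap_pvalue(x, y, stat, h=0.1))
```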
The bipartite independence number of a graph $G$, denoted $\tilde \alpha (G)$, is the minimal number $k$ for which there exist positive integers $a$ and $b$ with $a+b=k+1$ such that for any two disjoint sets $A,B\subseteq V(G)$ with $|A|=a$ and $|B|=b$, there is an edge between $A$ and $B$. McDiarmid and Yolov showed that if $\delta (G)\geq \tilde \alpha (G)$ then $G$ is Hamiltonian, extending the famous theorem of Dirac which states that if $\delta (G)\geq |G|/2$ then $G$ is Hamiltonian. In 1973, Bondy showed that, unless $G$ is a complete bipartite graph, Dirac’s Hamiltonicity condition also implies pancyclicity, i.e., the existence of cycles of all lengths from $3$ up to $n=|G|$. In this paper, we show that $\delta (G)\geq \tilde \alpha (G)$ implies that $G$ is pancyclic or that $G=K_{\frac{n}{2},\frac{n}{2}}$, thus extending the result of McDiarmid and Yolov, and generalizing the classic theorem of Bondy.