Ideological and relational polarization are two increasingly salient political divisions in Western societies. We integrate the study of these phenomena by describing society as a multilevel network of social ties between people and attitudinal ties between people and political topics. We then define and propose a set of metrics to measure ‘network polarization’: the extent to which a community is ideologically and socially divided. Using longitudinal network modelling, we examine whether observed levels of network polarization can be explained by three processes: social selection, social influence, and latent-cause reinforcement. Applied to new longitudinal friendship and political attitude network data from two Swiss university cohorts, our metrics show mild polarization. The models explain this outcome and suggest that friendships and political attitudes are reciprocally formed and sustained. We find robust evidence for friend selection based on attitude similarity and weaker evidence for social influence. The results further point to latent-cause reinforcement processes: (dis)similar attitudes are more likely to be formed or maintained between individuals whose attitudes are already (dis)similar on a range of political issues. Applied across different cultural and political contexts, our approach may help to understand the degree and mechanisms of divisions in society.
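The metrics themselves are not given in this abstract, but the underlying idea of measuring how strongly friendship ties align with attitude similarity can be sketched with hypothetical data (the network, attitudes, and the score below are our illustrative assumptions, not the authors' definitions):

```python
from itertools import combinations

# Hypothetical toy data: attitudes of 6 people on 3 political topics (+1/-1),
# and an undirected friendship network given as a set of sorted pairs.
attitudes = {
    "a": [1, 1, 1], "b": [1, 1, -1], "c": [1, -1, 1],
    "d": [-1, -1, -1], "e": [-1, -1, 1], "f": [-1, 1, -1],
}
friends = {("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("d", "f"), ("e", "f")}

def agreement(u, v):
    """Fraction of topics on which u and v hold the same attitude."""
    au, av = attitudes[u], attitudes[v]
    return sum(x == y for x, y in zip(au, av)) / len(au)

def alignment_score():
    """Mean agreement among friend pairs minus mean agreement among
    non-friend pairs; positive values indicate attitudes clustered
    along social ties, one crude symptom of network polarization."""
    pairs = list(combinations(sorted(attitudes), 2))
    f = [agreement(u, v) for u, v in pairs if (u, v) in friends]
    nf = [agreement(u, v) for u, v in pairs if (u, v) not in friends]
    return sum(f) / len(f) - sum(nf) / len(nf)
```

On this toy network the score is positive, since the two friendship clusters also hold more similar attitudes internally than across clusters.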
Covering formulation, algorithms and structural results and linking theory to real-world applications in controlled sensing (including social learning, adaptive radars and sequential detection), this book focuses on the conceptual foundations of partially observed Markov decision processes (POMDPs). It emphasizes structural results in stochastic dynamic programming, enabling graduate students and researchers in engineering, operations research, and economics to understand the underlying unifying themes without getting weighed down by mathematical technicalities. In light of major advances in machine learning over the past decade, this edition includes a new Part V on inverse reinforcement learning as well as a new chapter on non-parametric Bayesian inference (for Dirichlet processes and Gaussian processes), variational Bayes and conformal prediction.
A graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. The topics covered have emerged as recently as the last decade and include stochastic gradient MCMC, non-reversible MCMC, continuous time MCMC, and new techniques for convergence assessment. A particular focus is on cutting-edge methods that are scalable with respect to either the amount of data, or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI. Examples are woven throughout the text to demonstrate how scalable Bayesian learning methods can be implemented. This text could form the basis for a course and is sure to be an invaluable resource for researchers in the field.
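As a flavour of the stochastic gradient MCMC methods covered, here is a minimal stochastic gradient Langevin dynamics (SGLD) sketch for the posterior mean of a Gaussian model; the data, prior, step size, and minibatch size are our illustrative choices, not taken from the text:

```python
import math
import random

random.seed(0)

# Synthetic data: y_i ~ N(2, 1); we sample the posterior of the mean theta
N = 200
data = [2.0 + random.gauss(0.0, 1.0) for _ in range(N)]

def sgld(data, n_iter=5000, batch=20, eps=1e-3, prior_var=10.0):
    """Stochastic gradient Langevin dynamics: each step takes a noisy
    gradient of the log posterior, estimated from a minibatch, plus
    injected Gaussian noise of variance eps, so the chain approximately
    targets the posterior without touching the full data set per step."""
    theta, samples = 0.0, []
    for _ in range(n_iter):
        mb = random.sample(data, batch)
        # Unbiased minibatch estimate of the log-posterior gradient
        grad = -theta / prior_var + (len(data) / batch) * sum(y - theta for y in mb)
        theta += 0.5 * eps * grad + random.gauss(0.0, math.sqrt(eps))
        samples.append(theta)
    return samples

samples = sgld(data)
```

Averaging the second half of the chain gives a posterior-mean estimate close to the data mean, which is the scalability point: only 20 of 200 observations are visited per iteration.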
The payoff in the Chow–Robbins coin-tossing game is the proportion of heads when you stop. Stopping to maximize expectation was addressed by Chow and Robbins (1965), who proved there exist integers $k_n$ such that it is optimal to stop at $n$ tosses when heads minus tails is $k_n$. Finding $k_n$ was unsolved except for finitely many cases by computer. We prove an $o(n^{-1/4})$ estimate of the stopping boundary of Dvoretzky (1967), which then proves $k_n = \left\lceil \alpha\sqrt{n} - \tfrac{1}{2} + \tfrac{(-2\zeta(-1/2))\sqrt{\alpha}}{\sqrt{\pi}}\, n^{-1/4} \right\rceil$ except for $n$ in a set of density asymptotic to 0 at a power-law rate. Here, $\alpha$ is the Shepp–Walker constant from the Brownian motion analog, and $\zeta$ is Riemann’s zeta function. An $n^{-1/4}$ dependence was conjectured by Christensen and Fischer (2022). Our proof uses moments involving Catalan and Shapiro Catalan triangle numbers, which appear in a tree resulting from backward induction, together with a generalized backward induction principle. It was motivated by an idea of Häggström and Wästlund (2013) to use backward induction of upper and lower value bounds from a horizon, which they used numerically to settle a few cases. Christensen and Fischer, with much better bounds, settled many more cases. We use Skorohod’s embedding to obtain simple upper and lower bounds from the Brownian analog; our upper bound is the one found by Christensen and Fischer in another way. We use these bounds first for many more examples and a conjecture, then algebraically in the tree, with feedback, to obtain much sharper value bounds near the boundary and analytic results. We also give a formula yielding the exact optimal stopping rule for all $n$ up to about a third of a billion; it uses the analytic result plus terms arrived at empirically.
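The closed-form boundary is easy to evaluate numerically. A minimal sketch (the numerical values of $\alpha$ and $\zeta(-1/2)$ are standard approximations; recall the formula may fail on an exceptional set of $n$ of asymptotic density 0):

```python
import math

# Approximate constants: Shepp–Walker constant and Riemann zeta at -1/2
ALPHA = 0.8399236757             # Shepp–Walker constant (approximate)
ZETA_MINUS_HALF = -0.2078862250  # zeta(-1/2) (approximate)

def k_n(n: int) -> int:
    """Evaluate k_n = ceil(alpha*sqrt(n) - 1/2
    + (-2*zeta(-1/2))*sqrt(alpha)/sqrt(pi) * n**(-1/4));
    this matches the optimal boundary except on an exceptional set of n."""
    correction = (-2.0 * ZETA_MINUS_HALF) * math.sqrt(ALPHA) / math.sqrt(math.pi)
    return math.ceil(ALPHA * math.sqrt(n) - 0.5 + correction * n ** -0.25)

# e.g. k_n(100) evaluates to 8 under these constants
boundary = [k_n(n) for n in (10, 100, 1000, 10000)]
```

The $n^{-1/4}$ correction term is tiny even for moderate $n$, which is why the leading $\alpha\sqrt{n}$ term from the Brownian analog already predicts the boundary well.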
We present a short and simple proof of the celebrated hypergraph container theorem of Balogh–Morris–Samotij and Saxton–Thomason. On a high level, our argument utilises the idea of iteratively taking vertices of largest degree from an independent set and constructing a hypergraph of lower uniformity which preserves independent sets and inherits edge distribution. The original algorithms for constructing containers also remove in each step vertices of high degree, which are not in the independent set. Our modified algorithm postpones this until the end, which surprisingly results in a significantly simplified analysis.
Healthcare costs tend to increase with age. In particular, in the case of illness, the last year before death can be an exceptionally costly period as the need for healthcare increases. Using a novel private insurance dataset containing over one million records of claims submitted by individuals to their health insurance providers during the last year of life, our research seeks to shed light on the costs before death in Switzerland. Our work documents how spending patterns change with proximity to dying. We use machine learning algorithms to identify and quantify the key effects that drive a person’s spending during this critical period. Our findings provide a more profound understanding of the costs associated with hospitalization before death, the role of age, and the variation in costs based on the services, including care services, which individuals require.
This paper focuses on the comparison of networks on the basis of statistical inference. For that purpose, we rely on smooth graphon models as a nonparametric modeling strategy that is able to capture complex structural patterns. The graphon itself can be viewed more broadly as a local density or intensity function on networks, making the model a natural choice for comparison purposes. More precisely, to gain information about the (dis-)similarity between networks, we extend graphon estimation towards modeling multiple networks simultaneously. In particular, fitting a single model implies aligning different networks with respect to the same graphon estimate. To do so, we employ an EM-type algorithm. Drawing on this network alignment consequently allows a comparison of the edge density at the local level. Based on that, we construct a chi-squared-type test of the equivalence of network structures. Simulation studies and real-world examples support the applicability of our network comparison strategy.
Structural health monitoring (SHM) is increasingly applied in civil engineering. One of its primary purposes is detecting and assessing changes in structure conditions to increase safety and reduce potential maintenance downtime. Recent advancements, especially in sensor technology, facilitate data measurement, collection, and process automation, leading to large data streams. We propose a function-on-function regression framework for (nonlinear) modeling of the sensor data and for adjusting for covariate-induced variation. Our approach is particularly suited for long-term monitoring, when several months or years of training data are available. It combines highly flexible yet interpretable semi-parametric modeling with functional principal component analysis and uses the corresponding out-of-sample Phase-II scores for monitoring. The proposed method can also be described as a combination of an “input–output” and an “output-only” method.
We interrogate efforts to legislate artificial intelligence (AI) through Canada’s Artificial Intelligence and Data Act (AIDA) and argue it represents a series of missed opportunities that so delayed the Act that it died. We note how much of this bill was explicitly tied to economic development and implicitly tied to a narrow jurisdictional form of shared prosperity. Instead, we contend that the benefits of AI are not shared but disproportionately favour specific groups, in this case the AI industry. This trend appears typical of many countries’ AI and data regulations, which tend to privilege the few, despite promises to favour the many. We discuss the origins of AIDA, drafted by Canada’s federal Department for Innovation, Science and Economic Development (ISED). We then consider four problems: (1) AIDA relied on public trust in a digital and data economy; (2) ISED tried both to regulate and to promote AI and data; (3) public consultation for AIDA was insufficient; and (4) workers’ rights in Canada and worldwide were excluded from AIDA. Without strong checks and balances built into regulation like AIDA, innovation will fail to deliver on its claims. We recommend that the Canadian government and, by extension, other governments invest in an AI act that prioritises: (1) accountability mechanisms and tools for the public and private sectors; (2) robust workers’ rights in terms of data handling; and (3) meaningful public participation in all stages of legislation. These policies are essential to countering wealth concentration in the industry, which would otherwise stifle progress and widespread economic growth.
We study continuous-time Markov chains on the nonnegative integers under mild regularity conditions (in particular, the set of jump vectors is finite and both forward and backward jumps are possible). Based on the so-called flux balance equation, we derive an iterative formula for calculating stationary measures. Specifically, a stationary measure $\pi(x)$ evaluated at $x\in\mathbb{N}_0$ is represented as a linear combination of a few generating terms, similarly to the characterization of a stationary measure of a birth–death process, where there is only one generating term, $\pi(0)$. The coefficients of the linear combination are recursively determined in terms of the transition rates of the Markov chain. For the class of Markov chains we consider, there is always at least one stationary measure (up to a scaling constant). We give various results pertaining to uniqueness and nonuniqueness of stationary measures, and show that the dimension of the linear space of signed invariant measures is at most the number of generating terms. A minimization problem is constructed in order to compute stationary measures numerically. Moreover, a heuristic linear approximation scheme is suggested for the same purpose by first approximating the generating terms. The correctness of the linear approximation scheme is justified in some special cases. Furthermore, a decomposition of the state space into different types of states (open and closed irreducible classes, and trapping, escaping and neutral states) is presented. The results are illustrated with applications to stochastic reaction networks.
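In the birth–death special case mentioned above, flux balance determines the whole measure from the single generating term $\pi(0)$; a minimal sketch (the constant M/M/1-style rates are an illustrative choice, not from the paper):

```python
def bd_stationary(birth, death, n_states):
    """Stationary measure of a birth-death chain from flux balance,
    pi(x+1) * death(x+1) = pi(x) * birth(x), built up from the single
    generating term pi(0) = 1 and then normalised to a distribution."""
    pi = [1.0]
    for x in range(n_states - 1):
        pi.append(pi[-1] * birth(x) / death(x + 1))
    total = sum(pi)
    return [p / total for p in pi]

# Illustrative rates: constant birth rate 0.5, unit death rate
# (an M/M/1-style chain truncated to 20 states) -> geometric decay
pi = bd_stationary(lambda x: 0.5, lambda x: 1.0, 20)
```

With these rates the recursion yields $\pi(x) \propto (1/2)^x$; the paper's contribution is the analogous recursion when several generating terms are needed.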
The problem of reconstructing a distribution with bounded support from its moments is practically relevant in many fields, such as chemical engineering, electrical engineering, and image analysis. The problem is closely related to a classical moment problem, called the truncated Hausdorff moment problem (THMP). We call a method that finds or approximates a solution to the THMP a Hausdorff moment transform (HMT). In practice, selecting the right HMT for specific objectives remains a challenge. This study introduces a systematic and comprehensive method for comparing HMTs based on accuracy, computational complexity, and precision requirements. To enable fair comparisons, we present approaches for generating representative moment sequences. The study also enhances existing HMTs by reducing their computational complexity. Our findings show that the approximations differ significantly in convergence, accuracy, and numerical complexity, and that the decay order of the moment sequence strongly affects the accuracy requirements.
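For background, a classical necessary condition for a sequence to be a moment sequence on $[0,1]$ is nonnegativity of the iterated differences $(-1)^j \Delta^j m_k$ (this characterises the infinite Hausdorff problem). A small feasibility check in that spirit, which is not one of the HMTs compared in the study:

```python
def passes_hausdorff_difference_test(m, tol=1e-12):
    """Necessary condition for (m_0, ..., m_{n-1}) to be moments of a
    distribution on [0, 1]: every iterated difference (-1)^j Delta^j m_k
    must be nonnegative (complete monotonicity in the truncated sense)."""
    diffs = list(m)
    for _ in range(len(m)):
        if any(d < -tol for d in diffs):
            return False
        # apply -Delta: next row of differences m_k - m_{k+1}
        diffs = [diffs[i] - diffs[i + 1] for i in range(len(diffs) - 1)]
    return True

# Moments of Uniform[0, 1]: m_k = 1/(k+1), a valid moment sequence
uniform_moments = [1.0 / (k + 1) for k in range(8)]
```

The uniform moments pass, while a non-monotone sequence such as `[1.0, 0.2, 0.9]` is rejected immediately; full solvability of the THMP additionally involves Hankel-matrix positivity conditions.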
This commentary examines the dual role of artificial intelligence (AI) in shaping electoral integrity and combating misinformation, with a focus on the 2025 Philippine elections. It investigates how AI has been weaponised to manipulate narratives and suggests strategies to counteract disinformation. Drawing on case studies from the Philippines, Taiwan, and India—regions in the Indo-Pacific with vibrant democracies, high digital engagement, and recent experiences with election-related misinformation—it highlights the risks of AI-driven content and the innovative measures used to address its spread. The commentary advocates for a balanced approach that incorporates technological solutions, regulatory frameworks, and digital literacy to safeguard democratic processes and promote informed public participation. The rise of generative AI tools has significantly amplified the risks of disinformation, such as deepfakes, and of algorithmic bias. These technologies have been exploited to influence voter perceptions and undermine democratic systems, creating a pressing need for protective measures. In the Philippines, social media platforms have been used to spread revisionist narratives, while Taiwan employs AI for real-time fact-checking. India’s proactive approach, including a public misinformation tipline, showcases effective countermeasures. These examples highlight the complex challenges and opportunities presented by AI in different electoral contexts. The commentary stresses the need for regulatory frameworks designed to address AI’s dual-use nature, advocating for transparency, real-time monitoring, and collaboration between governments, civil society, and the private sector. It also explores the criteria for effective AI solutions, including scalability, adaptability, and ethical considerations, to guide future interventions. Ultimately, it underscores the importance of digital literacy and resilient information ecosystems in supporting informed democratic participation.
This paper develops a theoretical framework to examine the technology adoption decisions of insurers and their impact on market share, considering heterogeneous customers and two representative insurers. Intuitively, when technology accessibility is observable, an insurer’s access to a new technology increases its market share, whether or not it adopts the technology. However, when technology accessibility is unobservable, the insurer’s access to the new technology has additional side effects on its market share. First, the insurer may apply the available technology even if it increases costs and premiums, thereby decreasing market share. Second, unobservable technology accessibility leads customers to expect that all insurers might have access to the new technology and to underestimate the premiums of those without access. This also decreases the market share of an insurer with access to the new technology. Our findings help explain the unclear relationship between technology adoption and the market share of insurance companies in practice.
We consider a new approach to the definition of two-dimensional heavy-tailed distributions. Specifically, we introduce the classes of two-dimensional long-tailed, two-dimensional dominatedly varying, and two-dimensional consistently varying distributions. Next, we define the closure property with respect to two-dimensional convolution and to joint max-sum equivalence, in order to study whether these properties are satisfied by the new classes. Further, we examine the joint-tail behavior of two random sums under generalized tail asymptotic independence. Afterward, we study the closure property under scalar product and two-dimensional product convolution, and by these results we extend our main result to the case of jointly randomly weighted sums. Our results include applications in which we establish the asymptotic expression of the ruin probability in a two-dimensional discrete-time risk model.
This paper utilizes neural networks (NNs) for cycle detection in the insurance industry. The efficacy of NNs is compared on simulated data to the standard methods used in the underwriting cycles literature. The results show that NN models perform well in detecting cycles even in the presence of outliers and structural breaks. The methodology is applied to a granular data set of prices per risk profile from the Brazilian insurance industry.
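One classical method from the underwriting cycles literature fits an AR(2) model and infers a cycle when the characteristic roots are complex; a minimal sketch on simulated data (our own noiseless toy series, not the Brazilian data or the paper's NN models):

```python
import math

# Simulated premium series with an 8-period cycle (illustrative, noiseless)
omega = 2 * math.pi / 8
y = [math.sin(omega * t) for t in range(200)]

def ar2_cycle(y):
    """Least-squares fit of y_t = phi1*y_{t-1} + phi2*y_{t-2}. Complex roots
    of the characteristic polynomial (phi1^2 + 4*phi2 < 0) indicate a cycle
    of period 2*pi / arccos(phi1 / (2*sqrt(-phi2)))."""
    a11 = sum(v * v for v in y[1:-1])
    a22 = sum(v * v for v in y[:-2])
    a12 = sum(u * v for u, v in zip(y[1:-1], y[:-2]))
    b1 = sum(u * v for u, v in zip(y[2:], y[1:-1]))
    b2 = sum(u * v for u, v in zip(y[2:], y[:-2]))
    det = a11 * a22 - a12 * a12
    phi1 = (b1 * a22 - b2 * a12) / det
    phi2 = (a11 * b2 - a12 * b1) / det
    disc = phi1 * phi1 + 4 * phi2
    period = 2 * math.pi / math.acos(phi1 / (2 * math.sqrt(-phi2))) if disc < 0 else None
    return phi1, phi2, period

phi1, phi2, period = ar2_cycle(y)  # complex roots -> cycle; period ~ 8 here
```

Outliers and structural breaks degrade this least-squares fit badly, which is exactly the regime where the paper reports NN detectors remaining reliable.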
In Chapter 6 we present a general approach, relying on the diffusion approximation, to proving renewal theorems for Markov chains; accordingly, we consider Markov chains that may be approximated by a diffusion process. For a transient Markov chain with asymptotically zero drift, the average time spent by the chain in a unit interval is, roughly speaking, the reciprocal of the drift.
We apply a martingale-type technique and show that the asymptotic behaviour of the renewal measure depends heavily on the rate at which the drift vanishes. As in the last two chapters, two main cases are distinguished: the drift of the chain decreases either as 1/x or much more slowly than that. In contrast with the case of an asymptotically positive drift considered in Chapter 10, the case of vanishing drift is quite tricky to analyse, since the Markov chain tends to infinity rather slowly.
In Chapter 3 we consider (right) transient Markov chains taking values in R. We are interested in down-crossing probabilities for them. These clearly depend on the asymptotic properties of the chain drift at infinity.
In Chapter 9 we consider a recurrent Markov chain possessing an invariant measure which is either a probability measure, in the case of positive recurrence, or σ-finite, in the case of null recurrence. Our main aim here is to describe the asymptotic behaviour of the invariant distribution tail for a class of Markov chains with asymptotically zero drift going to zero more slowly than 1/x. We start with a result stating that a typical stationary Markov chain with asymptotically zero drift always generates a heavy-tailed invariant distribution, in stark contrast to Markov chains with asymptotically negative drift bounded away from zero. Then we develop the techniques needed for deriving precise tail asymptotics of Weibullian type.