Political polarization is a group phenomenon in which opposing factions, often of unequal size, exhibit asymmetrical influence and behavioral patterns. Within these groups, elites and masses operate under different motivations and levels of influence, challenging simplistic views of polarization. Yet, existing methods for measuring polarization in social networks typically reduce it to a single value, assuming homogeneity in polarization across the entire system. While such approaches confirm the rise of political polarization in many social contexts, they overlook structural complexities that could explain its underlying mechanisms. We propose a method that decomposes existing polarization and alignment measures into distinct components. These components separately capture polarization processes involving elites and masses from opposing groups. Applying this method to Twitter discussions surrounding the 2019 and 2023 Finnish parliamentary elections, we find that (1) opposing groups rarely contribute equally to observed polarization, and (2) while elites strongly contribute to structural polarization and consistently display greater alignment across various topics, the masses, too, have recently experienced a surge in alignment. Our method provides an improved analytical lens through which to view polarization, explicitly recognizing the complexity of elite–mass dynamics and the need to account for them in polarized environments.
We show that the Potts model on a graph can be approximated by a sequence of independent and identically distributed spins in terms of Wasserstein distance at high temperatures. We prove a similar result for the Curie–Weiss–Potts model on the complete graph, conditioned on being close enough to any of its equilibrium macrostates, in the low-temperature regime. Our proof technique is based on Stein’s method for comparing the stationary distributions of two Glauber dynamics with similar updates, one of which is rapid mixing and contracting on a subset of the state space. Along the way, we prove a new upper bound on the mixing time of the Glauber dynamics for the conditional measure of the Curie–Weiss–Potts model near an equilibrium macrostate.
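The heat-bath Glauber dynamics at the core of the comparison argument can be sketched in a few lines. This is a generic illustration of the update rule only (the graph, inverse temperature, and step count are arbitrary placeholders, not the paper's construction):

```python
import math
import random

def glauber_potts(adj, q, beta, steps, seed=0):
    """Heat-bath Glauber dynamics for the q-state Potts model on a graph.

    adj maps each vertex to the set of its neighbours; beta is the inverse
    temperature. One step resamples a uniformly chosen vertex from its
    conditional distribution given the neighbouring spins.
    """
    rng = random.Random(seed)
    spins = {v: rng.randrange(q) for v in adj}
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        # P(sigma_v = s | rest) is proportional to
        # exp(beta * #{neighbours of v currently in state s}).
        weights = [math.exp(beta * sum(spins[u] == s for u in adj[v]))
                   for s in range(q)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for s, w in enumerate(weights):
            acc += w
            if r < acc:
                spins[v] = s
                break
    return spins
```

At high temperature (small `beta`) the conditional weights are nearly uniform, which is the regime in which the stationary measure is close to i.i.d. spins in the sense described above.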
In this paper we investigate large-scale linear systems driven by a fractional Brownian motion (fBm) with Hurst parameter $H\in [1/2, 1)$. We interpret these equations either in the sense of Young ($H>1/2$) or Stratonovich ($H=1/2$). In particular, fractional Young differential equations are well suited to modeling real-world phenomena, as they capture memory effects that other frameworks miss. Although solving such systems in high dimensions is computationally demanding, model reduction schemes for the Young and Stratonovich settings have so far received little attention. To address this gap, we analyze important features of the fundamental solutions associated with the underlying systems. We prove a weak type of semigroup property, which is the foundation for studying system Gramians. From the Gramians introduced, a dominant subspace can be identified, as we also show in this paper. The difficulty for fractional drivers with $H>1/2$ is that there is no link between the corresponding Gramians and algebraic equations, which makes their computation challenging. We therefore propose empirical Gramians that can be learned from simulation data. Subsequently, we introduce projection-based reduced-order models using the dominant subspace information. We point out that such projections are not always optimal for Stratonovich equations, as stability might not be preserved and the error might be larger than expected. Therefore, an improved reduced-order model is proposed for $H=1/2$. We validate our techniques by conducting numerical experiments on large-scale stochastic differential equations driven by fBm, arising from spatial discretizations of fractional stochastic PDEs. Overall, our study provides useful insights into the applicability and effectiveness of reduced-order methods for stochastic systems with fractional noise, which can potentially aid the development of more efficient computational strategies for practical applications.
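The empirical-Gramian idea, learning a Gramian from simulated trajectories and extracting a dominant subspace from it, can be sketched as follows. The function names and the simple snapshot scaling are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def empirical_gramian(snapshots):
    """Empirical (snapshot-based) Gramian from simulation data.

    snapshots: n x m matrix whose columns are sampled states of the
    n-dimensional system; returns the n x n matrix X X^T / m.
    """
    n, m = snapshots.shape
    return snapshots @ snapshots.T / m

def dominant_subspace(gramian, r):
    """Orthonormal basis V (n x r) of the dominant subspace: the leading-r
    eigenvectors of the symmetric positive semidefinite Gramian."""
    eigvals, eigvecs = np.linalg.eigh(gramian)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:r]]
```

A projection-based reduced-order model would then compress the system matrices through this basis, e.g. replacing a drift matrix `A` by `V.T @ A @ V`.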
This article introduces a blockchain-based insurance scheme that integrates parametric and collaborative elements. A pool of investors, referred to as surplus providers, locks funds in a smart contract, enabling blockchain users to underwrite parametric insurance contracts. These contracts automatically trigger compensation when predefined conditions are met. The collaborative aspect is embodied in the generation of tokens, which are distributed to surplus providers. These tokens represent each participant’s share of the surplus and grant voting rights for management decisions. The smart contract is developed in Solidity, a high-level programming language for the Ethereum blockchain, and deployed on the Sepolia testnet, with data processing and analysis conducted using Python. In addition, open-source code is provided and main research challenges are identified, so that further research can be carried out to overcome limitations of this first proof of concept.
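The pooled mechanics described above can be mimicked off-chain in a few lines of Python. This is a toy sketch of the logic the paper implements in Solidity; the class and method names are invented for illustration, and token-based voting is omitted:

```python
class ParametricPool:
    """Toy model of a pooled parametric insurance contract.

    Surplus providers lock funds and receive tokens pro rata; policies pay
    out automatically when a predefined trigger fires on an observed index.
    """

    def __init__(self):
        self.tokens = {}        # surplus provider -> token balance (surplus share)
        self.total_tokens = 0.0
        self.funds = 0.0
        self.policies = []      # (policyholder, payout, trigger predicate)

    def provide(self, provider, amount):
        # Tokens are minted in proportion to the funds contributed.
        self.tokens[provider] = self.tokens.get(provider, 0.0) + amount
        self.total_tokens += amount
        self.funds += amount

    def underwrite(self, holder, premium, payout, trigger):
        # Premiums flow into the pool; the payout is owed if the trigger fires.
        self.funds += premium
        self.policies.append((holder, payout, trigger))

    def settle(self, observation):
        """Pay every policy whose parametric trigger fires on the observation."""
        paid = {}
        for holder, payout, trigger in self.policies:
            if trigger(observation):
                self.funds -= payout
                paid[holder] = paid.get(holder, 0.0) + payout
        self.policies = [p for p in self.policies if not p[2](observation)]
        return paid
```

On-chain, `settle` would be driven by an oracle feeding the observed index to the smart contract, and the token balances would additionally carry the voting rights described above.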
Detecting multiple structural breaks at unknown dates is a central challenge in time-series econometrics. Step-indicator saturation (SIS) addresses this challenge during model selection, and we develop its asymptotic theory for tuning-parameter choice. We study its frequency gauge (the false detection rate) and show that it is consistent and asymptotically normal. Simulations suggest that a smaller gauge minimizes bias in post-selection regression estimates. For the small-gauge setting, we develop a complementary Poisson theory. Comparing the local power of SIS to detect shifts with that of Andrews’ break test, we find that SIS excels when breaks are near the sample end or closely spaced. An application to U.K. labor productivity reveals a growth slowdown after the 2008 financial crisis.
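A deliberately simplified illustration of the step-indicator idea: test a step dummy at each candidate break date and retain dates whose t-statistic exceeds a critical value, which is what controls the gauge. Real SIS saturates the regression with all indicators and selects block-wise; this marginal-testing sketch only conveys the mechanics:

```python
import numpy as np

def sis_breaks(y, crit=2.58):
    """Simplified step-indicator sketch for a mean-only model.

    For each candidate break date t0, regress y on a constant and the step
    dummy 1{t >= t0}, and keep t0 if the dummy's |t-statistic| exceeds the
    critical value. A larger crit means a smaller gauge (fewer false retentions).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    kept = []
    for t0 in range(1, n):
        d = (np.arange(n) >= t0).astype(float)
        X = np.column_stack([np.ones(n), d])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 2)                 # residual variance
        xtx_inv = np.linalg.inv(X.T @ X)
        tstat = beta[1] / np.sqrt(s2 * xtx_inv[1, 1])
        if abs(tstat) > crit:
            kept.append(t0)
    return kept
```

Dates adjacent to a true break will typically also be flagged here; the block-wise split-half selection of actual SIS is what thins these down to a parsimonious set of indicators.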
The measure of uncertainty in the past lifetime distribution plays an important role in information theory, forensic science, and other related fields. In the present work, we propose a non-parametric kernel-type estimator for the generalized past entropy function, introduced by Gupta and Nanda [9], under an $\alpha$-mixing sample. The resulting estimator is shown to be weakly and strongly consistent and asymptotically normally distributed under certain regularity conditions. The performance of the estimator is validated through a simulation study and a real data set.
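A plug-in kernel estimator of the ordinary (Shannon) past entropy conveys the flavour of the approach. The paper's estimator targets the Gupta–Nanda generalized past entropy and is analyzed under $\alpha$-mixing; this i.i.d.-style sketch covers only the simpler special case:

```python
import numpy as np

def past_entropy_kde(sample, t, bandwidth=None, grid_size=512):
    """Plug-in kernel estimate of the Shannon past entropy at time t,
    H(t) = -int_{-inf}^{t} (f(x)/F(t)) log(f(x)/F(t)) dx,
    using a Gaussian kernel and a Riemann sum on a uniform grid."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    # Silverman's rule of thumb if no bandwidth is supplied
    h = bandwidth or 1.06 * x.std(ddof=1) * n ** (-0.2)
    grid = np.linspace(x.min() - 3.0 * h, t, grid_size)
    dx = grid[1] - grid[0]
    # Gaussian kernel density estimate evaluated on the grid
    f = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).mean(axis=1)
    f /= h * np.sqrt(2.0 * np.pi)
    F_t = f.sum() * dx                       # estimated F(t)
    g = f / F_t                              # density of X given X <= t
    return float(-(g * np.log(np.clip(g, 1e-300, None))).sum() * dx)
```

Consistency of such plug-in estimators reduces to consistency of the kernel density estimate; the dependence structure of the sample is exactly where the $\alpha$-mixing analysis enters.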
We study backward stochastic difference equations (BS$\Delta$Es) driven by a $d$-dimensional stochastic process on a lattice, whose increments take only $d+1$ possible values that generate the lattice. Interpreting the driving process as a $d$-dimensional asset price process, we provide applications to an optimal investment problem and to a market equilibrium analysis, where utility functionals are defined via BS$\Delta$Es.
We consider the problem of detecting whether a power-law inhomogeneous random graph contains a geometric community, and we frame this as a hypothesis-testing problem. More precisely, we assume that we are given a sample from an unknown distribution on the space of graphs on n vertices. Under the null hypothesis, the sample originates from the inhomogeneous random graph with a heavy-tailed degree sequence. Under the alternative hypothesis, $k=o(n)$ vertices are given spatial locations and connect following the geometric inhomogeneous random graph connection rule. The remaining $n-k$ vertices follow the inhomogeneous random graph connection rule. We propose a simple and efficient test based on counting normalized triangles to differentiate between the two hypotheses. We prove that our test correctly detects the presence of the community with high probability as $n\to\infty$, and identifies large-degree vertices of the community with high probability.
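The raw ingredient of the test, the triangle count, is cheap to compute from the adjacency matrix. The normalization and the calibrated detection threshold are the paper's contribution and are not reproduced here:

```python
import numpy as np

def triangle_count(A):
    """Number of triangles in a simple undirected graph.

    Each triangle contributes six closed walks of length 3, so the count
    equals trace(A^3) / 6 for the 0/1 adjacency matrix A.
    """
    A = np.asarray(A)
    return int(round(np.trace(A @ A @ A) / 6))
```

The test statistic then compares a suitably normalized version of this count to its expectation under the null (no geometric community), rejecting when it is too large.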
Course-prerequisite networks (CPNs) are directed acyclic graphs that model complex academic curricula by representing courses as nodes and dependencies between them as directed links. These networks are indispensable tools for visualizing, studying, and understanding curricula. For example, CPNs can be used to detect important courses, improve advising, guide curriculum design, analyze graduation time distributions, and quantify the strength of knowledge flow between different university departments. However, most CPN analyses to date have focused only on micro- and meso-scale properties. To fill this gap, we define and study three new global CPN measures: breadth, depth, and flux. All three measures are invariant under transitive reduction and are based on the concept of topological stratification, which generalizes topological ordering in directed acyclic graphs. These measures can be used for macro-scale comparison of different CPNs. We illustrate the new measures numerically by applying them to three real and synthetic CPNs from three universities: the Cyprus University of Technology, the California Institute of Technology, and Johns Hopkins University. The CPN data analyzed in this paper are publicly available in a GitHub repository.
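Topological stratification can be computed with a longest-path layering (a variant of Kahn's algorithm). Note that adding a transitive edge, such as A→C below alongside A→B→C, does not change the strata, consistent with invariance under transitive reduction; how breadth, depth, and flux are read off the strata follows the paper's definitions, which are not reproduced here:

```python
from collections import defaultdict

def strata(edges, nodes):
    """Topological stratification of a DAG.

    The stratum of a vertex v is the length of the longest directed path
    from any source (prerequisite-free course) to v. Returns a dict
    mapping stratum index -> list of vertices.
    """
    indeg = {v: 0 for v in nodes}
    out = defaultdict(list)
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    level = {v: 0 for v in nodes}
    queue = [v for v in nodes if indeg[v] == 0]   # Kahn's algorithm
    while queue:
        u = queue.pop()
        for v in out[u]:
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    layers = defaultdict(list)
    for v, l in level.items():
        layers[l].append(v)
    return dict(layers)
```

For example, a plausible reading is that the number of strata measures how deep a curriculum is and the largest stratum how broad it is, though the paper's exact measures may be defined differently.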
The main goal of this paper is to introduce a new model of the evolution of beliefs on networks. It generalizes the DeGroot model and describes the iterative process of establishing consensus in isolated social networks when the aggregation functions are nonlinear. Our main tools come from mean theory and graph theory. The case when the root set of the network (influencers, news agencies, etc.) is ergodic is fully discussed. The other possibility, when the root contains more than one component, is partially discussed and may motivate further research.
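A generalized DeGroot iteration is easy to state in code: each agent repeatedly replaces its belief with an aggregation of the beliefs of the agents it listens to, where the aggregator may be any (possibly nonlinear) mean. The aggregators below are generic examples, not the paper's specific class:

```python
import math

def iterate_beliefs(neighbors, beliefs, aggregate, steps):
    """Generalized DeGroot dynamics on a network.

    neighbors maps each agent to the list of agents it listens to (which
    may include itself); aggregate is any mean-type function applied to
    the listened-to beliefs at every step.
    """
    b = dict(beliefs)
    for _ in range(steps):
        b = {v: aggregate([b[u] for u in neighbors[v]]) for v in neighbors}
    return b

def arithmetic(xs):
    return sum(xs) / len(xs)          # the classical (linear) DeGroot case

def geometric(xs):
    return math.prod(xs) ** (1.0 / len(xs))   # a nonlinear aggregator
```

With the arithmetic mean this reduces to the classical DeGroot model; swapping in a nonlinear mean such as the geometric mean is exactly the kind of generalization the paper studies.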
Quick and accurate forecasts of incidence and mortality trends in the near future are particularly useful for the immediate allocation of available public health resources, as well as for understanding the long-term course of a pandemic. The surveillance data used for predictions, however, may come with reporting delays. Consequently, auxiliary data sources that are available immediately can provide valuable additional information for recent time periods for which surveillance data have not yet become fully available. In this work, a set of Google search queries by individual users related to COVID-19 incidence and mortality is collected and analyzed, with the aim of improving quick forecasts. Initially, the identified search query keywords were ranked according to their predictive ability for reported incidence and mortality. After that, ARIMA, Prophet, and XGBoost models were fitted to generate forecasts using either the available reported incidence and mortality alone (baseline model) or together with combinations of search keywords selected by their predictive ability (predictors model). In summary, the inclusion of top-ranked keywords as predictors significantly enhanced prediction accuracy, in the range from 50% to 90%, for the majority of scenarios across all considered models, and is recommended for future use. The inclusion of low-ranked keywords provided no such improvement. In general, the ranking of predictors and the corresponding forecast improvements were more pronounced for incidence than for mortality.
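The keyword-ranking step can be sketched with a lagged-correlation score. This is a simple stand-in for the paper's predictive-ability ranking, and the downstream ARIMA/Prophet/XGBoost fitting is omitted:

```python
import numpy as np

def rank_keywords(target, keyword_series, lag=1):
    """Rank search-query keywords by predictive ability for a reported series.

    Score each keyword by the absolute correlation between its query volume
    lagged by `lag` periods and the target (e.g. reported incidence);
    higher-ranked keywords are candidate predictors for the forecast models.
    """
    y = np.asarray(target, dtype=float)[lag:]
    scores = {}
    for kw, series in keyword_series.items():
        x = np.asarray(series, dtype=float)[:-lag]
        scores[kw] = abs(np.corrcoef(x, y)[0, 1])
    return sorted(scores, key=scores.get, reverse=True)
```

The top of this ranking would then be added as exogenous regressors to the baseline model, which is the comparison the abstract describes.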
A finite point set in $\mathbb{R}^d$ is in general position if no $d + 1$ points lie on a common hyperplane. Let $\alpha _d(N)$ be the largest integer such that any set of $N$ points in $\mathbb{R}^d$, with no $d + 2$ members on a common hyperplane, contains a subset of size $\alpha _d(N)$ in general position. Using the method of hypergraph containers, Balogh and Solymosi showed that $\alpha _2(N) \lt N^{5/6 + o(1)}$. In this paper, we also use the container method to obtain new upper bounds for $\alpha _d(N)$ when $d \geq 3$. More precisely, we show that if $d$ is odd, then $\alpha _d(N) \lt N^{\frac {1}{2} + \frac {1}{2d} + o(1)}$, and if $d$ is even, we have $\alpha _d(N) \lt N^{\frac {1}{2} + \frac {1}{d-1} + o(1)}$. We also study the classical problem of determining $a(d,k,n)$, the maximum number of points selected from the grid $[n]^d$ such that no $k + 2$ members lie on a $k$-flat, and improve the previously best known bound for $a(d,k,n)$, due to Lefmann in 2008, by a polynomial factor when $k \equiv 2$ or $3 \pmod{4}$.
This research presents the design, pricing, and consumer testing results of a potential private financial product that integrates retirement savings with social care funding through contributions to a supplemental defined contribution pension scheme. With this product, some contributions will be earmarked specifically to cover social care expenses if needed post-retirement. Our research indicates that offering benefits that address both retirement income supplementation and social care funding in a combined approach is appealing to consumers and could help overcome behavioural barriers to planning for social care. As with established defined contribution schemes, this product is designed for distribution in the workplace. Employees can contribute a portion of their earnings to their pension accounts. Employers may partially or fully match these contributions, further incentivising participation. In addition to financial support, participants will gain access to social care coordination services designed to facilitate ageing at home. These services will help retirees navigate care options, coordinate necessary support, and optimise the use of their allocated social care funds, ultimately promoting independence and well-being in later life.
Zinc supplementation is a critical adjunct therapy for managing acute childhood diarrhoea, particularly in low-income countries (LICs) and lower middle-income countries (LMICs). However, adherence to the recommended zinc regimen remains a major challenge, limiting its effectiveness in real-world settings. This systematic review and meta-analysis aimed to estimate pooled adherence rates to zinc supplementation for diarrhoea in children under 5 and to identify key determinants of adherence. A comprehensive search of PubMed, Embase, Scopus, Google Scholar, ProQuest, and CINAHL was conducted for studies published between 2000 and 2024. A total of 10 observational studies were included, with pooled adherence of 63.45% (95% CI: 51.62–75.28) for the 10-day regimen and 34.58% (95% CI: 7.08–62.09) for the 14-day regimen, along with high heterogeneity. Sensitivity analysis confirmed the robustness of these estimates. Key factors associated with adherence included caregiver education, provider counselling, medication acceptability, and economic constraints related to caregivers’ purchasing capacity. Doi plot asymmetry suggested possible publication bias for both the 10- and 14-day regimens. Overall, adherence to zinc therapy remains sub-optimal, particularly for the 14-day regimen compared with the 10-day regimen. Targeted interventions addressing behavioural, provider-related, and formulation-related barriers are urgently needed to optimize zinc adherence and improve diarrhoea outcomes globally.
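Pooling adherence proportions under a random-effects model can be illustrated with the standard DerSimonian–Laird estimator. This is a generic sketch of the technique, not necessarily the exact model used in the review:

```python
import math

def pool_proportions_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions.

    events[i] adherent children out of totals[i] in study i; returns the
    pooled proportion and a 95% confidence interval (normal approximation).
    """
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]   # within-study variances
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    Q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, p))
    k = len(p)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    # Re-weight with tau^2 added to each study's variance
    w_star = [1.0 / (vi + tau2) for vi in v]
    pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

The high heterogeneity reported in the abstract corresponds to a large between-study variance `tau2`, which widens the pooled confidence interval relative to a fixed-effect analysis.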
We study sequential optimal stopping with partial reversibility. The optimal stopping problem is subject to implementation delay, which is random and exponentially distributed. Once the stopping decision is made, the decision maker can, by incurring a cost, call the decision off and restart the stopping problem. The optimization criterion is to maximize the expected present value of the total payoff. We characterize the value function in terms of a Bellman principle for a wide class of payoff functions and potentially multidimensional strong Markov dynamics. We also analyse the case of linear diffusion dynamics and characterize the value function and the optimal decision rule for a wide class of payoff functions.
The generalised random graph contains n vertices with positive i.i.d. weights. The probability of adding an edge between two vertices is increasing in their weights. We require the weight distribution to have finite second moments, and study the point process $\mathcal{C}_n$ on $\{3,4,\ldots\}$, which counts how many cycles of the respective length are present in the graph. We establish convergence of $\mathcal{C}_n$ to a Poisson point process. Under the stronger assumption of the weights having finite fourth moments we provide the following results. When $\mathcal{C}_n$ is evaluated on a bounded set A, we provide a rate of convergence. If the graph is additionally subcritical, we extend this to unbounded sets A at the cost of a slower rate of convergence. From this we deduce the limiting distribution of the length of the shortest and longest cycles when the graph is subcritical, including rates of convergence. All mentioned results also apply to the Chung–Lu model and the Norros–Reittu model.
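A minimal simulation of the setting: generate a generalised random graph from i.i.d. weights, using the standard edge probability $w_iw_j/(L_n + w_iw_j)$ with $L_n$ the total weight, and count cycles of a given length by brute force (feasible only for small graphs; the brute-force counter is a stand-in for the point process $\mathcal{C}_n$ evaluated at one length):

```python
import random
from itertools import combinations, permutations

def generalized_random_graph(weights, seed=0):
    """Generalised random graph: edge {i, j} is present independently with
    probability w_i w_j / (L_n + w_i w_j), where L_n = sum of weights."""
    rng = random.Random(seed)
    n, L = len(weights), float(sum(weights))
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = weights[i] * weights[j] / (L + weights[i] * weights[j])
            if rng.random() < p:
                edges.add((i, j))
    return edges

def count_cycles(edges, n, k):
    """Brute-force count of k-cycles: enumerate k-subsets and check each
    cyclic ordering; dividing by 2 removes the two traversal directions."""
    adj = [[False] * n for _ in range(n)]
    for i, j in edges:
        adj[i][j] = adj[j][i] = True
    count = 0
    for subset in combinations(range(n), k):
        first, rest = subset[0], subset[1:]
        valid = 0
        for perm in permutations(rest):
            cyc = (first,) + perm
            if all(adj[cyc[i]][cyc[(i + 1) % k]] for i in range(k)):
                valid += 1
        count += valid // 2
    return count
```

Repeating the simulation and tabulating these counts over many samples is the empirical counterpart of the Poisson point process limit established above.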