This article introduces a blockchain-based insurance scheme that integrates parametric and collaborative elements. A pool of investors, referred to as surplus providers, locks funds in a smart contract, enabling blockchain users to underwrite parametric insurance contracts. These contracts automatically trigger compensation when predefined conditions are met. The collaborative aspect is embodied in the generation of tokens, which are distributed to surplus providers. These tokens represent each participant’s share of the surplus and grant voting rights for management decisions. The smart contract is developed in Solidity, a high-level programming language for the Ethereum blockchain, and deployed on the Sepolia testnet, with data processing and analysis conducted using Python. In addition, open-source code is provided and the main research challenges are identified, so that further research can be carried out to overcome the limitations of this first proof of concept.
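To make the mechanism concrete, here is a minimal Python sketch of the pool logic described above. The contract itself is written in Solidity; the class and method names below are illustrative, not the authors’ interface.

```python
# Hypothetical sketch of the parametric pool: surplus providers lock
# funds and receive token shares, policyholders buy parametric cover,
# and payouts fire automatically when the observed index crosses a
# predefined threshold. Illustrative only, not the deployed contract.
from dataclasses import dataclass, field


@dataclass
class SurplusPool:
    balance: float = 0.0
    tokens: dict = field(default_factory=dict)    # provider -> token share
    policies: list = field(default_factory=list)  # (holder, threshold, payout)

    def deposit(self, provider: str, amount: float) -> None:
        # Tokens are minted pro rata to the locked surplus; they also
        # carry the voting rights mentioned in the abstract (not modeled).
        self.balance += amount
        self.tokens[provider] = self.tokens.get(provider, 0.0) + amount

    def underwrite(self, holder: str, threshold: float, payout: float,
                   premium: float) -> None:
        self.balance += premium
        self.policies.append((holder, threshold, payout))

    def settle(self, index_value: float) -> list:
        # Parametric trigger: no claims handling, only an index check.
        paid = [(h, p) for (h, t, p) in self.policies if index_value >= t]
        self.balance -= sum(p for _, p in paid)
        return paid


pool = SurplusPool()
pool.deposit("alice", 100.0)
pool.underwrite("bob", threshold=50.0, payout=30.0, premium=5.0)
print(pool.settle(index_value=62.0))  # -> [('bob', 30.0)]
```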
Detecting multiple structural breaks at unknown dates is a central challenge in time-series econometrics. Step-indicator saturation (SIS) addresses this challenge during model selection, and we develop its asymptotic theory for tuning-parameter choice. We study its frequency gauge—the false detection rate—and show that it is consistent and asymptotically normal. Simulations suggest that a smaller gauge minimizes bias in post-selection regression estimates. For the small-gauge setting, we develop a complementary Poisson theory. Comparing the local power of SIS to detect shifts with that of Andrews’ break test, we find that SIS excels when breaks are near the sample end or closely spaced. An application to U.K. labor productivity reveals a growth slowdown after the 2008 financial crisis.
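A minimal Python sketch of the split-sample SIS idea may help fix notation: saturate the regression with a step dummy at every date, estimate in two blocks, and retain the significant dummies. This is an illustration of the idea only, not the gets/Autometrics implementation; `crit` plays the role of the tuning parameter that governs the gauge discussed above.

```python
# Split-half step-indicator saturation, illustrative version.
import numpy as np


def sis(y, crit=2.33):
    n = len(y)
    # Step dummy j equals 1 from observation j onwards (j = 0 is the
    # intercept and is therefore skipped).
    steps = {j: (np.arange(n) >= j).astype(float) for j in range(1, n)}
    retained = []
    for half in (range(1, n // 2), range(n // 2, n)):
        X = np.column_stack([np.ones(n)] + [steps[j] for j in half])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / max(n - X.shape[1], 1)
        se = np.sqrt(s2 * np.diag(np.linalg.pinv(X.T @ X)))
        tstat = beta / se
        retained += [j for j, t in zip(half, tstat[1:]) if abs(t) > crit]
    return sorted(retained)


rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print(sis(y))  # should flag step dates near the true break at t = 50
```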
Measures of uncertainty for the past lifetime distribution play an important role in information theory, forensic science, and related fields. In the present work, we propose a non-parametric kernel-type estimator for the generalized past entropy function introduced by Gupta and Nanda [9], based on $\alpha$-mixing samples. The resulting estimator is shown to be weakly and strongly consistent and asymptotically normally distributed under certain regularity conditions. The performance of the estimator is validated through a simulation study and a real data set.
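As a sketch of such an estimator, the following Python code plugs a Gaussian kernel density estimate into a Rényi-type past entropy, $\bar H_\alpha(t) = \frac{1}{1-\alpha}\log\int_0^t (f(x)/F(t))^\alpha\,dx$. This functional form, the default bandwidth, and the i.i.d. test sample are assumptions; the paper’s exact definition and $\alpha$-mixing conditions are not reproduced here.

```python
# Plug-in kernel estimator of a Renyi-type past entropy (sketch).
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid


def past_entropy_hat(sample, t, a=0.5, grid_size=512):
    kde = gaussian_kde(sample)                   # kernel density estimate
    xs = np.linspace(min(sample.min(), 0.0), t, grid_size)
    f = np.clip(kde(xs), 1e-12, None)
    F_t = trapezoid(f, xs)                       # estimated F(t)
    integrand = (f / F_t) ** a
    return np.log(trapezoid(integrand, xs)) / (1.0 - a)


rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=500)         # i.i.d. stand-in sample
print(past_entropy_hat(x, t=1.5, a=0.5))
```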
We study backward stochastic difference equations (BS$\Delta$Es) driven by a $d$-dimensional stochastic process on a lattice, whose increments take only $d+1$ possible values that generate the lattice. Interpreting the driving process as a $d$-dimensional asset price process, we provide applications to an optimal investment problem and to a market equilibrium analysis, where utility functionals are defined via BS$\Delta$Es.
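For intuition, the simplest instance is $d=1$, where the increments take $d+1=2$ values and the lattice is a binomial tree. The following Python sketch runs the corresponding backward recursion for an illustrative generator; it is a toy, not the paper’s utility functional.

```python
# Backward recursion on a binomial lattice (the d = 1 case): solves
# Y_t = E_t[Y_{t+1}] + f(t, E_t[Y_{t+1}]) from the terminal condition.
import numpy as np


def bsde_backward(terminal, f, p=0.5):
    """terminal: payoff at the N+1 nodes of step N; p: up probability."""
    y = np.asarray(terminal, dtype=float)
    t = len(y) - 2
    while len(y) > 1:
        cond_exp = p * y[1:] + (1 - p) * y[:-1]   # E_t[Y_{t+1}] per node
        y = cond_exp + f(t, cond_exp)
        t -= 1
    return y[0]


# Example: a zero generator recovers the expected terminal value.
payoff = np.array([0.0, 1.0, 4.0, 9.0])           # nodes of step N = 3
print(bsde_backward(payoff, f=lambda t, y: 0.0))  # -> 3.0
```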
We consider the problem of detecting whether a power-law inhomogeneous random graph contains a geometric community, and we frame this as a hypothesis-testing problem. More precisely, we assume that we are given a sample from an unknown distribution on the space of graphs on $n$ vertices. Under the null hypothesis, the sample originates from the inhomogeneous random graph with a heavy-tailed degree sequence. Under the alternative hypothesis, $k=o(n)$ vertices are given spatial locations and connect following the geometric inhomogeneous random graph connection rule. The remaining $n-k$ vertices follow the inhomogeneous random graph connection rule. We propose a simple and efficient test based on counting normalized triangles to differentiate between the two hypotheses. We prove that our test correctly detects the presence of the community with high probability as $n\to\infty$, and identifies large-degree vertices of the community with high probability.
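A degree-normalized triangle count is straightforward to compute; the Python sketch below uses the weighting $1/(d_u d_v d_w)$ and a Barabási–Albert graph as a stand-in for the heavy-tailed null, both of which are illustrative choices rather than the paper’s exact statistic and model.

```python
# Degree-normalized triangle statistic (illustrative normalization).
import networkx as nx


def normalized_triangles(G):
    deg = dict(G.degree())
    total = 0.0
    for u, v in G.edges():
        for w in set(G[u]) & set(G[v]):   # common neighbors close a triangle
            total += 1.0 / (deg[u] * deg[v] * deg[w])
    return total / 3.0                    # each triangle is hit via 3 edges


G = nx.barabasi_albert_graph(200, 2, seed=0)  # heavy-tailed stand-in null
print(normalized_triangles(G))
```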
Course-prerequisite networks (CPNs) are directed acyclic graphs that model complex academic curricula by representing courses as nodes and dependencies between them as directed links. These networks are indispensable tools for visualizing, studying, and understanding curricula. For example, CPNs can be used to detect important courses, improve advising, guide curriculum design, analyze graduation time distributions, and quantify the strength of knowledge flow between different university departments. However, most CPN analyses to date have focused only on micro- and meso-scale properties. To fill this gap, we define and study three new global CPN measures: breadth, depth, and flux. All three measures are invariant under transitive reduction and are based on the concept of topological stratification, which generalizes topological ordering in directed acyclic graphs. These measures can be used for macro-scale comparison of different CPNs. We illustrate the new measures numerically by applying them to three real and synthetic CPNs from three universities: the Cyprus University of Technology, the California Institute of Technology, and Johns Hopkins University. The CPN data analyzed in this paper are publicly available in a GitHub repository.
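As an illustration of topological stratification, the following Python sketch layers a toy CPN by longest prerequisite chain. Reading depth as the number of strata and breadth as the size of the widest stratum is a plausible simplification of the paper’s definitions, not its exact formulas, and flux is not modeled here.

```python
# Topological stratification of a DAG and two macro-scale measures.
import networkx as nx


def stratify(G):
    """Assign each course its stratum: the length of the longest
    prerequisite chain leading to it (sources sit in stratum 0)."""
    level = {}
    for v in nx.topological_sort(G):
        preds = list(G.predecessors(v))
        level[v] = 1 + max((level[u] for u in preds), default=-1)
    return level


# Toy curriculum: calc1 -> calc2 -> ode, calc1 -> linalg -> ode
G = nx.DiGraph([("calc1", "calc2"), ("calc2", "ode"),
                ("calc1", "linalg"), ("linalg", "ode")])
level = stratify(G)
strata = {}
for v, l in level.items():
    strata.setdefault(l, []).append(v)
depth = len(strata)                              # number of strata
breadth = max(len(s) for s in strata.values())   # widest stratum
print(level, depth, breadth)
```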
The main goal of this paper is to introduce a new model of belief evolution on networks. It generalizes the DeGroot model and describes the iterative process of reaching consensus in isolated social networks when the aggregation functions are nonlinear. Our main tools come from the theory of means and from graph theory. The case in which the root set of the network (influencers, news agencies, etc.) is ergodic is fully discussed. The other case, in which the root contains more than one component, is only partially discussed and offers motivation for further research.
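A minimal Python sketch of a DeGroot-type update with a nonlinear aggregation function follows; the weighted power mean used here is one illustrative choice, and the class of means admitted by the paper may differ.

```python
# DeGroot-type belief update with a nonlinear aggregation function.
import numpy as np


def update(beliefs, W, p=3.0):
    """One synchronous step: each agent aggregates its neighbors'
    beliefs with a weighted power mean instead of a weighted average."""
    return (W @ beliefs ** p) ** (1.0 / p)


# Row-stochastic influence matrix; p = 1 recovers the classical DeGroot model.
W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
x = np.array([0.1, 0.5, 0.9])
for _ in range(50):
    x = update(x, W)
print(x)   # beliefs contract toward a consensus value
```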
Quick and accurate forecasts of COVID-19 incidence and mortality trends for the near future are particularly useful for the immediate allocation of available public health resources, as well as for understanding the long-term course of the pandemic. The surveillance data used for predictions, however, may come with reporting delays. Consequently, auxiliary data sources that are available immediately can provide valuable additional information for recent time periods for which surveillance data have not yet become fully available. In this work, a set of Google search queries by individual users related to COVID-19 incidence and mortality is collected and analyzed, with the aim of improving quick forecasts. First, the identified search query keywords were ranked according to their predictive ability for reported incidence and mortality. Then, ARIMA, Prophet, and XGBoost models were fitted to generate forecasts using either the available reported incidence and mortality alone (baseline model) or together with combinations of search keywords selected for their predictive ability (predictors model). In summary, including top-ranked keywords as predictors significantly enhanced prediction accuracy in the majority of scenarios (from 50% to 90% of them, depending on the setting) across all considered models and is recommended for future use. Including low-ranked keywords provided no such improvement. In general, both the ranking of predictors and the corresponding forecast improvements were more pronounced for incidence than for mortality.
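For concreteness, the baseline-versus-predictors comparison can be sketched with statsmodels’ SARIMAX, an ARIMA-type model that accepts exogenous regressors. The series, the keyword, and the (1, 1, 1) order below are illustrative stand-ins, not the paper’s data or configuration.

```python
# Baseline ARIMA vs. ARIMA with an exogenous search-query feature.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
n = 120
incidence = pd.Series(np.cumsum(rng.normal(5, 2, n)))      # toy reported counts
queries = pd.DataFrame({"fever": incidence.shift(7).bfill()
                        + rng.normal(0, 1, n)})            # toy leading keyword

train, test = slice(0, 100), slice(100, 120)

baseline = SARIMAX(incidence[train], order=(1, 1, 1)).fit(disp=False)
with_exog = SARIMAX(incidence[train], exog=queries[train],
                    order=(1, 1, 1)).fit(disp=False)

fc_base = baseline.forecast(steps=20)
fc_exog = with_exog.forecast(steps=20, exog=queries[test])
for name, fc in [("baseline", fc_base), ("with queries", fc_exog)]:
    print(name, np.mean(np.abs(fc.values - incidence[test].values)))
```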
A finite point set in $\mathbb{R}^d$ is in general position if no $d + 1$ points lie on a common hyperplane. Let $\alpha _d(N)$ be the largest integer such that any set of $N$ points in $\mathbb{R}^d$, with no $d + 2$ members on a common hyperplane, contains a subset of size $\alpha _d(N)$ in general position. Using the method of hypergraph containers, Balogh and Solymosi showed that $\alpha _2(N) \lt N^{5/6 + o(1)}$. In this paper, we also use the container method to obtain new upper bounds for $\alpha _d(N)$ when $d \geq 3$. More precisely, we show that if $d$ is odd, then $\alpha _d(N) \lt N^{\frac {1}{2} + \frac {1}{2d} + o(1)}$, and if $d$ is even, we have $\alpha _d(N) \lt N^{\frac {1}{2} + \frac {1}{d-1} + o(1)}$. We also study the classical problem of determining $a(d,k,n)$, the maximum number of points selected from the grid $[n]^d$ such that no $k + 2$ members lie on a $k$-flat, and improve the previously best known bound for $a(d,k,n)$, due to Lefmann in 2008, by a polynomial factor when $k \equiv 2$ or $3 \pmod{4}$.
This research presents the design, pricing, and consumer testing results of a potential private financial product that integrates retirement savings with social care funding through contributions to a supplemental defined contribution pension scheme. With this product, some contributions will be earmarked specifically to cover social care expenses if needed post-retirement. Our research indicates that offering benefits that address both retirement income supplementation and social care funding in a combined approach is appealing to consumers and could help overcome behavioural barriers to planning for social care. As with established defined contribution schemes, this product is designed for distribution in the workplace. Employees can contribute a portion of their earnings to their pension accounts. Employers may partially or fully match these contributions, further incentivising participation. In addition to financial support, participants will gain access to social care coordination services designed to facilitate ageing at home. These services will help retirees navigate care options, coordinate necessary support, and optimise the use of their allocated social care funds, ultimately promoting independence and well-being in later life.
We study sequential optimal stopping with partial reversibility. The optimal stopping problem is subject to implementation delay, which is random and exponentially distributed. Once the stopping decision is made, the decision maker can, by incurring a cost, call the decision off and restart the stopping problem. The optimization criterion is to maximize the expected present value of the total payoff. We characterize the value function in terms of a Bellman principle for a wide class of payoff functions and potentially multidimensional strong Markov dynamics. We also analyze the case of linear diffusion dynamics and characterize the value function and the optimal decision rule for a wide class of payoff functions.
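A discrete toy version of the Bellman principle can be sketched as follows, with a geometric stand-in for the exponential implementation delay; the dynamics, payoff, and parameters are illustrative, not the paper’s setting.

```python
# Value iteration for stopping with geometric implementation delay and
# a costly call-off option, on a reflected random walk (toy model).
import numpy as np

S = np.arange(-10, 11)            # state space of a reflected +/-1 walk
g = np.maximum(S, 0.0)            # payoff when the stop order executes
beta, p, c = 0.95, 0.3, 0.5       # discount, execution prob., call-off cost


def step_avg(v):                  # E[v(X')] under the reflected walk
    up = np.r_[v[1:], v[-1]]
    down = np.r_[v[0], v[:-1]]
    return 0.5 * (up + down)


V_wait = np.zeros_like(g)         # no stop order pending
V_pend = np.zeros_like(g)         # stop order placed, awaiting execution
for _ in range(2000):
    # While pending: execute with prob. p, else keep waiting or call off.
    V_pend_new = p * g + (1 - p) * beta * step_avg(np.maximum(V_pend,
                                                              V_wait - c))
    V_wait_new = np.maximum(beta * step_avg(V_wait), V_pend_new)
    V_wait, V_pend = V_wait_new, V_pend_new

print(V_wait[S == 0], V_pend[S == 0])
```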
The generalised random graph contains n vertices with positive i.i.d. weights. The probability of adding an edge between two vertices is increasing in their weights. We require the weight distribution to have a finite second moment, and study the point process $\mathcal{C}_n$ on $\{3,4,\ldots\}$, which counts how many cycles of the respective length are present in the graph. We establish convergence of $\mathcal{C}_n$ to a Poisson point process. Under the stronger assumption that the weights have a finite fourth moment, we provide the following results. When $\mathcal{C}_n$ is evaluated on a bounded set A, we provide a rate of convergence. If the graph is additionally subcritical, we extend this to unbounded sets A at the cost of a slower rate of convergence. From this we deduce the limiting distribution of the length of the shortest and longest cycles when the graph is subcritical, including rates of convergence. All of these results also apply to the Chung–Lu model and the Norros–Reittu model.
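For illustration, the model can be simulated in Python with the connection probability $p_{ij} = w_i w_j/(L_n + w_i w_j)$, where $L_n$ is the total weight; this is the model’s usual definition, stated here as an assumption. The triangle count then gives the length-3 coordinate of $\mathcal{C}_n$.

```python
# Simulate a generalised random graph and count triangles (cycles of
# length 3); weights are Pareto-type with a finite fourth moment.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n = 400
w = rng.pareto(4.5, n) + 1.0          # i.i.d. weights, E[W^4] < infinity
L = w.sum()

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < w[i] * w[j] / (L + w[i] * w[j]):
            G.add_edge(i, j)

triangles = sum(nx.triangles(G).values()) // 3   # each seen at 3 vertices
print(triangles)   # Poisson-like fluctuations across repeated draws
```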
It is often desired to extract more information from a test score than is available in a single number. The almost universal response to such desires is to divide the overall test score into subcomponents, or subscores (e.g., math and verbal scores, reading fluency and reading comprehension, etc.). We summarize the rules governing the legitimate use of subscores and report on how often, in modern practice, this is done correctly. In short, dividing a test into subscores reduces its reliability and, consequently, its validity. Using the military’s ASVAB test as an example, we show that the overall score is the only good predictor of later performance and that the nine subtests are not effective in differentiating types of skills and knowledge.
Zombie ideas are awful ideas that ought to be dead, but which keep getting revivified and so are still walking among us. Three prominent zombies which we discuss are:
1. coaching for admissions tests gives a large unfair advantage;
2. admitting strivers (kids from lower SES who score higher than expected) makes things fairer;
3. making tests optional makes things fairer.
Test coaching companies like Princeton Review and Kaplan often claim that they can increase a person’s SAT score by over 100 points. The evidence used to support such claims typically involves a pre-post design in which the student takes the test, receives coaching, and then takes the test again. In rigorous studies that include a control group in which the student simply takes the test twice, gains of 80–90 points are typical. Thus, the gains from coaching are much smaller than claimed. Strivers are students who score higher than expected based on their family income. Some have claimed that a striver who scores 1,000 on the SAT is really more like a 1,050 student because they have overcome hardship. However, due to regression to the mean, such students typically perform in college more as an SAT score of 950 would predict. Finally, many colleges have chosen to give applicants the option of whether to include SAT or ACT scores in their materials. Unfortunately, the data suggest that this is a bad idea.
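The regression-to-the-mean argument about strivers is easy to reproduce in a toy Python simulation (all numbers illustrative): among a group whose expected score is low, those who reach 1,000 did so partly through luck, so their latent level sits near 950.

```python
# Regression to the mean for "strivers": select test takers from a
# lower-expectation group who scored ~1,000 and inspect their latent level.
import numpy as np

rng = np.random.default_rng(3)
ability = rng.normal(900, 100, 1_000_000)            # group's latent level
score = ability + rng.normal(0, 100, ability.size)   # one noisy test sitting
picked = np.abs(score - 1000) < 10                   # scored ~1,000
print(round(ability[picked].mean()))                 # ~950, not 1,050
```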
A continued example of testing misuse involves standardized tests administered in K-12 education. The results of such tests have been used to evaluate not only students but also schools, school personnel (e.g., teachers, principals, superintendents), and programs. We focus on value-added models (VAM), a highly touted methodology advocated as a rigorous scientific solution to what was previously an area rife with subjectivity. Proponents of VAM claim that a better measure of teacher performance is the amount of academic growth students experience after receiving instruction from that teacher. We discuss both the technical and logical flaws of these models. First, claims that changes in student test scores are caused by teachers, administrators, or schools are extremely weak given the complete absence of experimental control. Second, the assumption that achievement tests given at the end of successive grades can be placed on a common scale is nothing short of heroic – and very weak. Finally, missing data and small sample sizes make yearly growth estimates anything but reliable or valid. VAM is simply a well-intentioned, very bad idea.
When decisions are made, there is a cost to making a mistake. This cost is often different for an erroneous positive decision than for an erroneous negative one. Decisions based on test scores are no different. We discuss this issue and provide several evocative examples. For admissions testing, two kinds of errors can be made: accepting a student who should not have been accepted (i.e., one who won’t graduate) and rejecting a student who would have graduated. The latter type of error likely does little lasting harm, as the student can simply enroll elsewhere. But the former type of error can result in wasted time and money. The proper use of test scores reduces both types of errors. For licensing tests, passing someone who should not have been passed can have consequences more serious than lost time and money. An airline pilot’s lack of knowledge and skills can lead to a crash; a doctor’s inadequacies and incompetence can lead to deaths. Using test scores can save lives.