This paper studies a long-standing problem of risk exchange and optimal resource allocation among multiple entities in a continuous-time pure risk-exchange economy. We establish a novel risk exchange mechanism that allows entities to share and transfer risks dynamically over time. To achieve Pareto optimality, we formulate the problem as a stochastic control problem and derive explicit solutions for the optimal investment, consumption, and risk exchange strategies using a martingale method. To demonstrate the practical relevance of the solution, we apply our results to a target benefit pension plan, highlighting the potential benefits of risk sharing within this pension system. Numerical examples show the sensitivity of investment portfolios, the adjustment item, and allocation ratios to specific parameters. We observe that an increase in the aggregate endowment process leads to a rise in the adjustment item, and that the allocation ratios are positively correlated with the weights of the agents.
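The reported positive correlation between allocation ratios and agent weights is immediate in the simplest linear sharing rule. The sketch below is purely illustrative (the weights and aggregate endowment are invented values, and this is not the paper's mechanism): each agent receives a share of the aggregate endowment proportional to its weight.

```python
# Illustrative sketch, not the paper's model: a linear (quota-share)
# allocation in which each agent's ratio is proportional to its weight.

def allocation_ratios(weights):
    """Each agent's share of the aggregate endowment, proportional to its weight."""
    total = sum(weights)
    return [w / total for w in weights]

def allocate(aggregate, weights):
    """Split the aggregate endowment according to the allocation ratios."""
    return [aggregate * r for r in allocation_ratios(weights)]

weights = [1.0, 2.0, 3.0]          # hypothetical agent weights
shares = allocate(120.0, weights)  # hypothetical aggregate endowment
# A larger weight yields a larger share: ratios increase with the weights.
```

Under this rule a rise in the aggregate endowment raises every agent's allocation in proportion, consistent with the monotonicity the abstract reports.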
Robots need a sense of touch to handle objects effectively, and force sensors provide a straightforward way to measure touch or physical contact. However, contact force data are typically sparse and difficult to analyze, as they appear only during contact and are often affected by noise. Many researchers have consequently relied on vision-based methods for robotic manipulation. Vision has its own limitations, however, such as occlusions that block the camera’s view, making it ineffective or insufficient for dexterous tasks involving contact. This article presents a method for robotic systems operating under quasi-static conditions to perform contact-rich manipulation using only force/torque measurements. First, the interaction forces/torques between the manipulated object and its environment are collected in advance. A potential function is then constructed from the collected force/torque data using Gaussian process regression with derivatives. Next, we develop haptic dynamic movement primitives (Haptic DMPs) to generate robot trajectories. Unlike conventional DMPs, which focus primarily on kinematic aspects, our Haptic DMPs incorporate force-based interactions by integrating the constructed potential energy. The effectiveness of the proposed method is demonstrated through numerical tasks, including the classical peg-in-hole problem.
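Since Haptic DMPs extend conventional kinematic DMPs, a minimal sketch of the conventional baseline may help. The gains, time constants, and zero forcing term below are illustrative choices, and the force-based potential term that distinguishes the article's Haptic DMPs is deliberately omitted.

```python
# Minimal sketch of a conventional (kinematic) discrete DMP: a phase
# (canonical) system driving a critically damped spring toward the goal.
# Gains and the zero forcing term are illustrative, not from the article.

def dmp_rollout(y0, goal, tau=1.0, alpha=25.0, beta=25.0 / 4.0,
                alpha_x=1.0, dt=0.001, steps=4000, forcing=lambda x: 0.0):
    y, v, x = y0, 0.0, 1.0          # position, scaled velocity, phase
    traj = [y]
    for _ in range(steps):
        x += dt * (-alpha_x * x) / tau                      # canonical system
        dv = (alpha * (beta * (goal - y) - v) + forcing(x)) / tau
        v += dt * dv                                        # transformation system
        y += dt * v / tau
        traj.append(y)
    return traj

traj = dmp_rollout(y0=0.0, goal=1.0)
# With zero forcing the system behaves as a critically damped spring and
# converges to the goal.
```

A Haptic DMP would add a term derived from the learned potential function to the acceleration equation, coupling the trajectory to the sensed force field.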
Human interactions in the online world comprise a combination of positive and negative exchanges. These diverse interactions can be captured using signed network representations, where edges take positive or negative weights to indicate the sentiment of the interaction between individuals. Signed networks offer valuable insights into online political polarization by capturing antagonistic interactions and ideological divides on social media platforms. This study analyzes polarization on Menéame, a Spanish social media platform that facilitates engagement with news stories through comments and voting. Using a dual-method approach—Signed Hamiltonian Eigenvector Embedding for Proximity for signed networks and Correspondence Analysis for unsigned networks—we investigate how including negative ties enhances the understanding of structural polarization levels across different conversation topics on the platform. While the unsigned Menéame network effectively delineates ideological communities, only by incorporating negative ties can we identify ideologically extreme users who engage in antagonistic behaviors: without them, the most extreme users remain indistinguishable from their less confrontational ideological peers.
Disaster Risk Financing (DRF) presents a massive challenge to governments worldwide in protecting against catastrophic disaster losses. This study explores the development of a Disaster Fund that optimally integrates various DRF instruments, considering several real-world factors, including limited reserves, constrained risk horizons, risk aversion, risk tolerance, insurance structures, and premium pricing strategies. We demonstrate that the Value-at-Risk (VaR) and Tail VaR constraints are equivalent when the government has a limited risk horizon. Furthermore, we investigate the optimality of various insurance structures under different premium principles, conduct comparative statics on key parameters, and analyze the influence of a VaR constraint on the optimal mix of disaster financing instruments. Lastly, we apply our Disaster Fund model to the National Flood Insurance Program dataset to assess the optimal disaster financing strategy within the context of our framework.
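The two risk measures whose constraints are shown to be equivalent can be illustrated on a toy loss sample. The loss data below are invented, and the simple non-interpolating quantile is one of several empirical VaR conventions, not necessarily the paper's.

```python
# Hedged sketch (invented losses, not the paper's model): empirical
# Value-at-Risk and Tail VaR of a loss sample.

def var(losses, level):
    """Empirical VaR at `level`: a simple (non-interpolating) upper quantile."""
    s = sorted(losses)
    return s[min(int(level * len(s)), len(s) - 1)]

def tvar(losses, level):
    """Tail VaR: the average of losses at or above the VaR."""
    q = var(losses, level)
    tail = [x for x in losses if x >= q]
    return sum(tail) / len(tail)

losses = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
q, t = var(losses, 0.8), tvar(losses, 0.8)   # 90 and 95.0
# TVaR dominates VaR at the same level, since it averages the tail beyond VaR.
```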
Ponzi schemes are financial frauds that are pervasive throughout the world. Since they cause serious harm to society, it is of interest to study them so that they can be prevented. Typically, a Ponzi scheme is instigated by a promoter who promises above-average investment returns. The promoter uses funds from later investors to pay the earlier ones. These scams can occasionally last a long time, but they are ultimately unsustainable. This paper describes some well-known Ponzi schemes and identifies their common characteristics. We also review some of the approaches used to model Ponzi schemes.
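The unsustainability can be seen in a toy cash-flow model (invented parameters, not one of the models reviewed in the paper): obligations compound at the promised rate while new inflows grow more slowly, so the promoter's cash balance eventually turns negative.

```python
# Hedged toy model of Ponzi dynamics: promised returns compound obligations
# at 20% per period while new deposits grow only 5%; a fixed fraction of
# investors cash out each period.  All parameters are illustrative.

def ponzi(inflow=100.0, promised_rate=0.2, growth=0.05,
          cashout=0.1, periods=60):
    balance, owed = 0.0, 0.0
    history = []
    for t in range(periods):
        deposit = inflow * (1 + growth) ** t            # new money this period
        balance += deposit
        owed = (owed + deposit) * (1 + promised_rate)   # obligations compound
        payout = owed * cashout                         # some investors cash out
        balance -= payout
        owed -= payout
        history.append(balance)
        if balance < 0:                                 # the scheme collapses
            break
    return history

history = ponzi()
# The balance first grows (few redemptions), then collapses once compounding
# obligations outrun the inflow of new deposits.
```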
Leptospirosis remains a significant occupational zoonosis in New Zealand, and emerging serovar shifts warrant a closer examination of climate-related transmission pathways. This study aimed to examine whether total monthly rainfall is associated with reported leptospirosis in humans in New Zealand. Poisson and negative binomial models were developed to examine the relationship between rainfall at 0-, 1-, 2-, and 3-month lags and the incidence of leptospirosis during the month of the report. Total monthly rainfall was positively associated with the occurrence of human leptospirosis, with rate ratios per additional cm of rainfall of 1.017 at the 1-month lag (95% CI: 1.007–1.026), 1.023 at the 2-month lag (95% CI: 1.013–1.032), and 1.018 at the 3-month lag (95% CI: 1.009–1.028). Variation was present in the magnitude of association for each of the individual serovars considered, suggesting different exposure pathways. Assuming that the observed associations are causal, this study supports that additional human cases are likely to occur in association with increased levels of rainfall. This provides the first evidence for including rainfall in a leptospirosis early warning system, for designing targeted communication and prevention measures, and for guiding resource allocation, particularly after heavy rainfall in New Zealand.
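A Poisson rate ratio per cm of rainfall compounds multiplicatively with the amount of additional rainfall, which is how the reported coefficients translate into expected case counts. The 10 cm figure below is an arbitrary worked example, not a value from the study.

```python
# Worked example of interpreting a Poisson regression rate ratio: the
# reported 1.017 per additional cm of rainfall (1-month lag) compounds
# multiplicatively.  The 10 cm increment is illustrative.

def expected_multiplier(rate_ratio_per_cm, extra_rain_cm):
    return rate_ratio_per_cm ** extra_rain_cm

m = expected_multiplier(1.017, 10)
# 1.017 ** 10 is roughly 1.18, i.e. about an 18% increase in expected cases
# the following month after 10 cm of additional monthly rainfall.
```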
This study aims to estimate the prevalence of human papillomavirus (HPV) infection and describe its genotype distribution in MSM in Hong Kong. In this longitudinal study on Chinese MSM, multi-anatomic site self-sampling and testing for HPV, Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) were performed following survey completion at baseline and one-year follow-up. Overall, 41% (288/701) of MSM completed self-sampled HPV testing. HPV positivity was 29% (78/270) and 33% (42/127) at any anatomic site at baseline and follow-up timepoints, respectively. By anatomic site, HPV positivity was 26%-30%, 2%-4% and 0%-1% from rectal, penile, and pharyngeal specimens, respectively. The incidence of HPV infection was 21.2/100 and 18.9/100 person-years at any anatomic site and the rectal site, respectively. Among 109 successfully genotyped samples, the most prevalent genotypes were HPV 6 (17%) and HPV 11 (16%), and 60% of the genotyped samples carried vaccine-preventable genotypes. Group sex engagement and less frequent condom use were positively associated with HPV infection (P<0.05). The HPV prevalence and incidence in MSM in this study are lower than in Western countries, and low-risk HPV genotypes are more prevalent. The high proportion of vaccine-preventable HPV subtypes underscores the importance of HPV vaccination in preventing infections in MSM.
The development of intelligent control-oriented solutions for building energy systems is a promising research field. Developing effective systems relies on large data sets, which are seldom available, or on simulation environments, whether for the training or execution phase. Creating simulation environments based on thermal models is a challenging task that requires third-party solutions and a high level of expertise in energy engineering, which places significant restrictions on control-oriented research.
In this work, a training workbench is presented, integrating a lightweight lumped capacitance model of proven accuracy representing the thermal dynamics of buildings, engineering models for energy systems in buildings, and user behavior models into an overall building energy performance forecasting model. It is developed in such a way that it can be easily integrated into control-oriented applications, with no requirement to use complex, third-party tools.
Bloodstream infections (BSIs) caused by Candida are a significant cause of morbidity and mortality. Geographical variations exist in the epidemiology of candidemia, with a paucity of data in many low- and middle-income countries. We performed a retrospective study of candidemia from 2017 to 2022 at a 289-bed teaching hospital in the Dominican Republic (DR). A total of 197 cases were reviewed. The overall mortality rate was 49.2%. Age and vasopressor use were associated with mortality. The most prevalent Candida species were C. tropicalis and C. parapsilosis. C. albicans showed 12% resistance to amphotericin B. These findings underscore the importance of understanding local epidemiology and may help inform empiric therapy and the development of treatment guidelines in the DR.
Scrub typhus is a mite-borne infection, largely affecting rural populations in many parts of Asia. This cohort study explored socio-demographic, behavioural, and spatial risk factors at different levels of endemicity. 2206 rural residents from 37 villages in Tamil Nadu, South India, underwent a questionnaire survey and blood sampling at baseline and annually over 2 years to detect sero-conversion. Satellite images were used for visual land use classification. Local sero-prevalence was estimated using 5602 baseline blood samples.
Two hundred and seventy seroconversions occurred during 3629 person-years (incidence rate 78/1000 person-years, 95% CI 67–91). Older age was associated with scrub typhus in crude but not in multivariable analysis adjusting for socio-economic factors. By contrast, the increased risk in females compared to males (RR 1.4) was unaffected by adjustment for confounders. In multivariable analysis, agricultural and related outdoor activities were only weakly associated with scrub typhus. However, agricultural activities were strongly associated with scrub typhus where local sero-prevalence was low, but not where it was high. Females were at higher risk than males in high-prevalence areas but not in low-prevalence areas. To conclude, agricultural activities were not strongly associated with scrub typhus overall. Transmission within human settlements may predominate in highly endemic settings.
Tree-based methods are widely used in insurance pricing due to their simple and accurate splitting rules. However, there is no guarantee that the resulting premiums avoid indirect discrimination when features recorded in the database are correlated with the protected variable under consideration. This paper shows that splitting rules in regression trees and random forests can be adapted in order to avoid indirect discrimination related to a binary protected variable like gender. The new procedure is illustrated on motor third-party liability insurance claim data.
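One way such an adapted splitting rule could look (a hedged sketch, not the paper's exact procedure) is to penalize the usual variance-reduction score of a candidate split by how differently it routes the two levels of the binary protected variable, discouraging splits that proxy for it. All data and the penalty weight below are invented.

```python
# Hedged sketch of a discrimination-aware split score for a regression tree:
# usual variance reduction minus a penalty on the difference between the
# shares of each protected group sent to the left child.  Illustrative only.

def variance(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def split_score(xs, ys, prot, threshold, penalty=1.0):
    left = [i for i, x in enumerate(xs) if x <= threshold]
    right = [i for i, x in enumerate(xs) if x > threshold]
    if not left or not right:
        return float("-inf")
    gain = variance(ys) - (
        len(left) / len(ys) * variance([ys[i] for i in left])
        + len(right) / len(ys) * variance([ys[i] for i in right])
    )
    # Share of each protected level routed to the left child.
    share = lambda g: (sum(1 for i in left if prot[i] == g)
                       / max(1, sum(1 for p in prot if p == g)))
    return gain - penalty * abs(share(0) - share(1))

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
prot = [0, 1, 0, 1, 0, 1]                 # binary protected variable
score = split_score(xs, ys, prot, threshold=3)
```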
The rise of visually driven platforms like Instagram has reshaped how information is shared and understood. This study examines the role of social, cultural, and political (SCP) symbols in Instagram posts during Taiwan’s 2024 election, focusing on their influence in anti-misinformation efforts. Using large language models (LLMs)—GPT-4 Omni and Gemini Pro Vision—we analyzed thousands of posts to extract and classify symbolic elements, comparing model performance in consistency and interpretive depth. We evaluated how SCP symbols affect user engagement, perceptions of fairness, and content spread. Engagement was measured by likes, while diffusion patterns followed the SEIZ epidemiological model. Findings show that posts featuring SCP symbols consistently received more interaction, even when follower counts were equal. Although political content creators often had larger audiences, posts with cultural symbols drove the highest engagement, were perceived as more fair and trustworthy, and spread more rapidly across networks. Our results suggest that symbolic richness influences online interactions more than audience size. By integrating semiotic analysis, LLM-based interpretation, and diffusion modeling, this study offers a novel framework for understanding how symbolic communication shapes engagement on visual platforms. These insights can guide designers, policymakers, and strategists in developing culturally resonant, symbol-aware messaging to combat misinformation and promote credible narratives.
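The SEIZ diffusion model used to describe content spread partitions users into Susceptible, Exposed, Infected (spreaders), and Skeptic compartments. The sketch below follows a standard formulation of the SEIZ ordinary differential equations with forward-Euler integration; all parameter values are illustrative, not fitted to the Menéame data.

```python
# Hedged sketch of a standard SEIZ compartmental model (parameters invented).
# beta: S contacts a spreader; b: S contacts a skeptic; p: direct-adoption
# probability; l: skeptic-conversion probability; rho, eps: exposed-to-
# spreader transition rates.

def seiz(S, E, I, Z, beta=0.3, b=0.1, p=0.6, l=0.3, rho=0.2, eps=0.1,
         dt=0.1, steps=500):
    N = S + E + I + Z
    for _ in range(steps):
        s_meets_i = beta * S * I / N        # susceptible contacts spreader
        s_meets_z = b * S * Z / N           # susceptible contacts skeptic
        e_to_i = rho * E * I / N + eps * E  # exposed become spreaders
        dS = -(s_meets_i + s_meets_z)
        dE = (1 - p) * s_meets_i + (1 - l) * s_meets_z - e_to_i
        dI = p * s_meets_i + e_to_i
        dZ = l * s_meets_z
        S += dt * dS
        E += dt * dE
        I += dt * dI
        Z += dt * dZ
    return S, E, I, Z

S, E, I, Z = seiz(S=990.0, E=0.0, I=10.0, Z=0.0)
# The four compartments always sum to the initial population: the flows
# between compartments cancel exactly.
```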
The capabilities of large language models (LLMs) have advanced to the point where entire textbooks can be queried using retrieval-augmented generation (RAG), enabling AI to integrate external, up-to-date information into its responses. This study evaluates the ability of two OpenAI models, GPT-3.5 Turbo and GPT-4 Turbo, to create and answer exam questions based on an undergraduate textbook. Fourteen exams were created with four true-false, four multiple-choice, and two short-answer questions derived from an open-source Pacific Studies textbook. Model performance was evaluated with and without access to the source material using text-similarity metrics such as ROUGE-1, cosine similarity, and word embeddings. Fifty-six exam scores were analyzed, revealing that RAG-assisted models significantly outperformed those relying solely on pre-trained knowledge. GPT-4 Turbo also consistently outperformed GPT-3.5 Turbo in accuracy and coherence, especially in short-answer responses. These findings demonstrate the potential of LLMs in automating exam generation while maintaining assessment quality. However, they also underscore the need for policy frameworks that promote fairness, transparency, and accessibility. Given regulatory considerations outlined in the European Union AI Act and the NIST AI Risk Management Framework, institutions using AI in education must establish governance protocols, bias mitigation strategies, and human oversight measures. The results of this study contribute to ongoing discussions on responsibly integrating AI in education, advocating for institutional policies that support AI-assisted assessment while preserving academic integrity. The empirical results suggest not only performance benefits but also actionable governance mechanisms, such as verifiable retrieval pipelines and oversight protocols, that can guide institutional policies.
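Two of the text-similarity metrics mentioned can be sketched in a few lines: unigram ROUGE-1 recall and bag-of-words cosine similarity (the study's embedding-based similarity is omitted here). The example sentences are invented.

```python
from collections import Counter
import math

# Simplified versions of two metrics named in the study: ROUGE-1 recall
# (fraction of reference unigrams recovered) and bag-of-words cosine
# similarity.  Illustrative only; real ROUGE adds stemming/tokenization.

def rouge1_recall(reference, candidate):
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    return overlap / sum(ref.values())

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

ref = "the treaty was signed in 1947"
ans = "the treaty was signed in 1951"
r = rouge1_recall(ref, ans)   # 5 of 6 reference unigrams recovered
c = cosine(ref, ans)
```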
Dengue is an arboviral infection that poses a substantial public health concern, with early diagnosis being a critical factor in effective management. However, limited diagnostic expertise in developing countries contributes to the under-reporting of dengue cases. This review compares the accuracy of rapid diagnostic tests (RDTs) and the tourniquet test (TT) in diagnosing dengue fever (DF) in non-laboratory-based settings. Relevant original articles on the use of RDTs and TT for dengue diagnosis were retrieved from PubMed, Scopus, and ScienceDirect. The STARD and QUADAS-2 tools were employed to evaluate the methodological quality of the included studies. Search terms included combinations of ‘fever’, ‘dengue’, and ‘diagnosis’. In total, 23 articles were eligible for inclusion. The RDTs demonstrated mean sensitivities and specificities of 76.2% (SD = 13.8) and 91.5% (SD = 10.3), respectively, while the TT showed mean sensitivity and specificity values of 48.6% (SD = 24.9) and 79.5% (SD = 14.9), respectively. Overall, RDTs exhibited superior diagnostic performance compared to the TT. Our findings suggest that the TT is an inadequate stand-alone diagnostic tool for dengue. RDTs should be prioritized for dengue diagnosis in resource-limited settings. However, in situations where RDTs are unavailable, the TT may serve as a supplementary option.
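The two accuracy measures compared throughout the review come directly from a 2x2 confusion matrix. The counts below are invented for illustration (chosen so the sensitivity lands near the pooled RDT mean), not data from any included study.

```python
# Worked example: sensitivity and specificity from a 2x2 confusion matrix.
# Counts are hypothetical, not from the review.

def sensitivity(tp, fn):
    return tp / (tp + fn)    # true-positive rate among diseased patients

def specificity(tn, fp):
    return tn / (tn + fp)    # true-negative rate among non-diseased patients

# Hypothetical RDT results on 200 febrile patients, 100 with confirmed dengue:
tp, fn, tn, fp = 76, 24, 92, 8
se = sensitivity(tp, fn)     # 0.76, close to the pooled 76.2% RDT mean
sp = specificity(tn, fp)     # 0.92
```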
This study investigates the integration of advanced heating, ventilation, and air conditioning (HVAC) systems with reinforcement learning (RL) control to enhance energy efficiency in low-energy buildings amid the extreme seasonal temperatures of Tehran. We conducted comprehensive simulation assessments using the EnergyPlus and HoneybeeGym platforms to evaluate two distinct reinforcement learning models: traditional Q-learning (Model A) and deep reinforcement learning (DRL) with neural networks (Model B). Model B consisted of a deep convolutional network architecture with 256 neurons in each hidden layer, employing rectified linear units as activation functions and the Adam optimizer at a learning rate of 0.001. The RL-managed systems achieved a statistically significant 25 percent reduction in energy-use intensity (p < 0.001), decreasing from 250 to 200 kWh/m² annually in comparison to the baseline scenario. Thermal comfort also improved notably: the predicted mean vote adjusted to 0.25, within the ASHRAE Standard 55 comfort range, and the predicted percentage of dissatisfied occupants fell to 10%. Model B (DRL) demonstrated a 50 percent improvement in prediction accuracy over Model A, with a mean absolute error of 0.579366 compared to 1.140008 and a root mean square error of 0.689770 versus 1.408069, indicating enhanced adaptability to consistent daily trends and irregular periodicities, such as weather patterns. The proposed reinforcement learning method achieved energy savings of 10–15 percent compared with both rule-based and model predictive control, while employing fewer building features than existing state-of-the-art control systems.
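The two error metrics used to compare the models are standard and easy to sketch. The temperature predictions below are invented, not the study's data; they merely show how MAE and RMSE are computed and that RMSE always dominates MAE.

```python
import math

# Sketch of the two error metrics used to compare Model A and Model B.
# Example values are invented.

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

y_true = [20.0, 21.0, 22.5, 24.0]   # hypothetical observed temperatures
y_pred = [20.5, 20.0, 23.0, 25.0]   # hypothetical model predictions
m, r = mae(y_true, y_pred), rmse(y_true, y_pred)
# RMSE weights large errors more heavily, so r >= m always holds.
```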
Let $\Sigma$ be an alphabet and $\mu$ be a distribution on $\Sigma ^k$ for some $k \geqslant 2$. Let $\alpha \gt 0$ be the minimum probability of a tuple in the support of $\mu$ (denoted $\mathsf{supp}(\mu )$). We treat the parameters $\Sigma , k, \mu , \alpha$ as fixed and constant. We say that the distribution $\mu$ has a linear embedding if there exist an Abelian group $G$ (with the identity element $0_G$) and mappings $\sigma _i : \Sigma \rightarrow G$, $1 \leqslant i \leqslant k$, such that at least one of the mappings is non-constant and for every $(a_1, a_2, \ldots , a_k)\in \mathsf{supp}(\mu )$, $\sum _{i=1}^k \sigma _i(a_i) = 0_G$. In [Bhangale-Khot-Minzer, STOC 2022], the authors asked the following analytical question. Let $f_i: \Sigma ^n\rightarrow [\!-1,1]$ be bounded functions, such that at least one of the functions $f_i$ essentially has degree at least $d$, meaning that the Fourier mass of $f_i$ on terms of degree less than $d$ is at most $\delta$. If $\mu$ has no linear embedding (over any Abelian group), then is it necessarily the case that
$$\left | \mathbb{E}_{({\textbf {x}}_1, {\textbf {x}}_2, \ldots , {\textbf {x}}_k)\sim \mu ^{\otimes n}}\left [ \prod _{i=1}^{k} f_i({\textbf {x}}_i)\right ] \right | = o_{d,\delta }(1),$$
where the right-hand side $\to 0$ as the degree $d \to \infty$ and $\delta \to 0$?
In this paper, we answer this analytical question fully and in the affirmative for $k=3$. We also show the following two applications of the result.
1. The first application is related to hardness of approximation. Using the reduction from [5], we show that for every $3$-ary predicate $P:\Sigma ^3 \to \{0,1\}$ such that $P$ has no linear embedding, an SDP (semi-definite programming) integrality gap instance of a $P$-Constraint Satisfaction Problem (CSP) instance with gap $(1,s)$ can be translated into a dictatorship test with completeness $1$ and soundness $s+o(1)$, under certain additional conditions on the instance.
2. The second application is related to additive combinatorics. We show that if the distribution $\mu$ on $\Sigma ^3$ has no linear embedding, marginals of $\mu$ are uniform on $\Sigma$, and $(a,a,a)\in \mathsf{supp}(\mu )$ for every $a\in \Sigma$, then every large enough subset of $\Sigma ^n$ contains a triple $({\textbf {x}}_1, {\textbf {x}}_2,{\textbf {x}}_3)$ from $\mu ^{\otimes n}$ (and in fact a significant density of such triples).
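The linear-embedding condition can be checked concretely for small alphabets by brute force over cyclic groups. The sketch below only searches $\mathbb{Z}_m$ (the definition quantifies over all Abelian groups, so a negative answer here is not conclusive in general); the supports tested are illustrative examples, not from the paper.

```python
from itertools import product

# Brute-force check of the linear-embedding definition over Z_m: find maps
# sigma_i : Sigma -> Z_m, not all constant, with sum_i sigma_i(a_i) = 0
# (mod m) on every support tuple.  Feasible only for tiny alphabets/groups.

def has_linear_embedding_mod(support, alphabet, k, m):
    n = len(alphabet)
    index = {a: i for i, a in enumerate(alphabet)}
    for maps in product(product(range(m), repeat=n), repeat=k):
        if all(len(set(s)) == 1 for s in maps):
            continue                      # all maps constant: disallowed
        if all(sum(maps[i][index[t[i]]] for i in range(k)) % m == 0
               for t in support):
            return True
    return False

# {(a, a, a)} over Sigma = {0, 1} embeds into Z_2 via sigma_1 = sigma_2 = id,
# sigma_3 = 0, since a + a + 0 = 0 (mod 2).
emb = has_linear_embedding_mod([(0, 0, 0), (1, 1, 1)], [0, 1], 3, 2)

# Full support Sigma^3 admits no embedding: any non-constant map breaks the
# zero-sum condition on some tuple.
no_emb = has_linear_embedding_mod(list(product([0, 1], repeat=3)),
                                  [0, 1], 3, 2)
```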
Let $\pi$ be a probability distribution on $\mathbb{R}^d$ and $f$ a test function, and consider the problem of variance reduction in estimating $\mathbb{E}_\pi(f)$. We first construct a sequence of estimators for $\mathbb{E}_\pi (f)$, say $({1}/{k})\sum_{i=0}^{k-1} g_n(X_i)$, where the $X_i$ are samples from $\pi$ generated by the Metropolized Hamiltonian Monte Carlo algorithm and $g_n$ is the approximate solution of the Poisson equation obtained through the weak approximation scheme recently introduced by Mijatović and Vogrinc (2018). We then prove, under some regularity assumptions, that the estimation error variance $\sigma_\pi^2(g_n)$ can be made arbitrarily small as the approximation order parameter $n\rightarrow\infty$. To illustrate, we confirm that the assumptions are satisfied by two typical concrete models, a Bayesian linear inverse problem and a two-component mixture of Gaussian distributions.
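The variance-reduction principle behind such constructions can be illustrated with a much simpler device than the Poisson-equation scheme: subtracting a zero-mean control variate correlated with $f$ leaves the estimator unbiased but shrinks its variance. Everything below (the target, the control variate, and its coefficient) is a standard textbook toy, not the paper's method.

```python
import math
import random
import statistics

# Hedged toy illustration of variance reduction via a control variate, not
# the paper's Poisson-equation construction.  X ~ N(0, 1), f(x) = exp(x)
# with E[f] = exp(1/2); the control variate is c * x (known mean zero) with
# the optimal coefficient c = E[X exp(X)] = exp(1/2) by Stein's identity.

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]

f_vals = [math.exp(x) for x in xs]                       # plain estimator
g_vals = [math.exp(x) - math.e ** 0.5 * x for x in xs]   # with control variate

var_f = statistics.pvariance(f_vals)
var_g = statistics.pvariance(g_vals)
# Both sample means estimate E[exp(X)] = exp(1/2) ~ 1.649, but var_g < var_f.
```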
Asymptotic properties of random graph sequences, like the occurrence of a giant component or full connectivity in Erdős–Rényi graphs, are usually derived with very specific choices for the defining parameters. The question arises as to what extent those parameter choices may be perturbed without losing the asymptotic property. For two sequences of graph distributions, asymptotic equivalence (convergence in total variation) and contiguity have been considered by Janson (2010) and others; here we use so-called remote contiguity to show that connectivity properties are preserved in more heavily perturbed Erdős–Rényi graphs. The techniques we demonstrate here with random graphs also extend to general asymptotic properties, e.g. in more complex large-graph limits, scaling limits, large-sample limits, etc.
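The specific parameter choice being perturbed can be illustrated by the classical sharp connectivity result: for $p = (\log n + c)/n$, the probability that $G(n,p)$ is connected tends to $e^{-e^{-c}}$. The simulation below (union-find connectivity check, illustrative sizes, loose tolerance for finite-size bias) merely demonstrates that limit, not the paper's remote-contiguity technique.

```python
import math
import random

# Illustrative simulation of Erdos-Renyi connectivity at the sharp threshold
# p = (log n + c)/n, where P(connected) -> exp(-exp(-c)) as n -> infinity.
# Union-find with path halving; n and the trial count are small, so some
# finite-size bias remains.

def connected_gnp(n, p, rng):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

rng = random.Random(42)
n, c = 200, 1.0
p = (math.log(n) + c) / n
freq = sum(connected_gnp(n, p, rng) for _ in range(200)) / 200
theory = math.exp(-math.exp(-c))   # limit value, roughly 0.692
```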
This article studies the identification of complete economic models with testable assumptions. We start with a local average treatment effect ($LATE$) model where the “No Defiers,” the independent IV assumption, and the exclusion restrictions can be jointly refuted by some data distributions. We propose two relaxed assumptions that are not refutable, with one assumption focusing on relaxing the “No Defiers” assumption while the other relaxes the independent IV assumption. The identified set of $LATE$ under either of the two relaxed assumptions coincides with the classical $LATE$ Wald ratio expression whenever the original assumption is not refuted by the observed data distribution. We propose an estimator for the identified $LATE$ and derive the estimator’s limit distribution. We then develop a general method to relax a refutable assumption $A$. This relaxation method requires finding a function that measures the deviation of an econometric structure from the original assumption $A$, and a relaxed assumption $\tilde {A}$ is constructed using this measure of deviation. We characterize a condition to ensure the identified sets under $\tilde {A}$ and $A$ coincide whenever $A$ is not refuted by the observed data distribution and discuss the criteria to choose among different relaxed assumptions.
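The classical Wald ratio expression the identified set reduces to can be computed directly: the reduced-form effect of the instrument on the outcome divided by its effect on treatment take-up. The numbers below are invented for illustration.

```python
# Worked example of the classical LATE Wald ratio:
# (E[Y|Z=1] - E[Y|Z=0]) / (E[D|Z=1] - E[D|Z=0]).  Values are hypothetical.

def wald_ratio(ey_z1, ey_z0, ed_z1, ed_z0):
    return (ey_z1 - ey_z0) / (ed_z1 - ed_z0)

# The instrument raises treatment take-up from 20% to 60% and the outcome
# mean from 3.0 to 4.0, so the LATE for compliers is 1.0 / 0.4 = 2.5.
late = wald_ratio(ey_z1=4.0, ey_z0=3.0, ed_z1=0.6, ed_z0=0.2)
```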