The AIMTB rapid test assay is an emerging test that uses a fluorescence immunochromatographic assay to measure interferon-γ (IFN-γ) production following stimulation of effector memory T cells in whole blood by mycobacterial proteins. The aim of this article was to assess the ability of the AIMTB rapid test assay to detect Mycobacterium tuberculosis (MTB) infection, compared with the widely used QuantiFERON-TB Gold Plus (QFT-Plus) test, among rural doctors in China. In total, 511 participants were included in the survey. The concordance between the QFT-Plus test and the AIMTB rapid test assay was 94.47%, with a Cohen’s kappa coefficient (κ) of 0.84 (95% CI, 0.79–0.90). Concordance between the two tests was higher in males and in participants with 26 or more years of service as rural doctors. The quantitative values of the QFT-Plus test were higher in individuals with a QFT-Plus-/AIMTB+ result than in those with a QFT-Plus-/AIMTB- result (p < 0.001). Overall, our study found excellent agreement between the AIMTB rapid test assay and the QFT-Plus test in a Chinese population. As the AIMTB rapid test assay is fast and easy to operate, it has the potential to improve latent tuberculosis infection testing and treatment at the community level in resource-limited settings.
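Agreement statistics of the kind reported above follow directly from a 2×2 concordance table. The sketch below is a minimal Python illustration; the counts are hypothetical (chosen only to roughly resemble the reported figures) and are not the study's data:

```python
# Percent agreement and Cohen's kappa from a 2x2 concordance table.
# The counts below are HYPOTHETICAL, not the study's actual table.
def kappa_from_table(a, b, c, d):
    """a = both tests positive, b = QFT+/AIMTB-,
    c = QFT-/AIMTB+, d = both tests negative."""
    n = a + b + c + d
    p_o = (a + d) / n                        # observed agreement
    p_pos = ((a + b) / n) * ((a + c) / n)    # chance agreement on positives
    p_neg = ((c + d) / n) * ((b + d) / n)    # chance agreement on negatives
    p_e = p_pos + p_neg                      # expected agreement by chance
    return p_o, (p_o - p_e) / (1 - p_e)

agreement, kappa = kappa_from_table(a=120, b=14, c=14, d=363)
print(f"agreement = {agreement:.2%}, kappa = {kappa:.2f}")
```

With these toy counts the script returns agreement of roughly 94.5% and κ of roughly 0.86, in the same range as the values reported in the abstract.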
Recent developments have indicated a potential association between tinnitus and COVID-19. This study aimed to characterize tinnitus following COVID-19 by examining its severity, recovery prospects, and connection to other lasting COVID-19 effects. In an online survey of 1331 former COVID-19 patients, we assessed tinnitus severity, cognitive issues, and medical background. Of the participants, 27.9% reported tinnitus after infection. Findings showed that as tinnitus severity increased, the chances of natural recovery fell, with more individuals experiencing ongoing symptoms (p < 0.001). Those with Grade II (mild) tinnitus (OR = 3.68; CI = 1.89–7.32; p = 0.002), Grade III tinnitus (OR = 3.70; CI = 1.94–7.22; p < 0.001), Grade IV tinnitus (OR = 6.83; CI = 3.73–12.91; p < 0.001), and a history of tinnitus (OR = 1.96; CI = 1.08–3.64; p = 0.03) had poorer recovery outcomes. Grade IV cases were the most common (33.2%), and severe tinnitus was strongly associated with the risk of developing long-term hearing loss, anxiety, and emotional disorders (p < 0.001). The study concludes that severe post-COVID tinnitus correlates with a worse prognosis and potential hearing loss, suggesting the need for attentive treatment and management of severe cases.
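For readers less familiar with the odds-ratio statistics cited above, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2×2 table. The counts are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a, b: not-recovered / recovered counts in the exposed group;
    c, d: the same counts in the reference group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 60/40 not-recovered vs. recovered among severe
# cases, 30/70 among mild cases.
or_, lo, hi = odds_ratio_ci(60, 40, 30, 70)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # OR = 3.50
```

An OR above 1 with a CI excluding 1, as for each tinnitus grade in the abstract, indicates significantly higher odds of the outcome in the exposed group.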
We derive an asymptotic expansion for the critical percolation density of the random connection model as the dimension of the encapsulating space tends to infinity. We calculate rigorously the first expansion terms for the Gilbert disk model, the hyper-cubic model, the Gaussian connection kernel, and a coordinate-wise Cauchy kernel.
In this article, we develop a novel large volatility matrix estimation procedure for analyzing global financial markets. Practitioners often use lower-frequency data, such as weekly or monthly returns, to address the issue of different trading hours in international financial markets. However, this approach can lead to inefficiency due to information loss. To mitigate this problem, our proposed method, called Structured Principal Orthogonal complEment Thresholding (S-POET), incorporates structural information about the observations for both global and national factor models. We establish the asymptotic properties of the S-POET estimator and demonstrate the drawbacks of conventional covariance matrix estimation procedures when using lower-frequency data. Finally, we apply the S-POET estimator to an out-of-sample portfolio allocation study using international stock market data.
As machine learning gains widespread adoption and integration in a variety of systems, including safety- and mission-critical ones, the need for robust evaluation methods grows more urgent. This book compiles scattered information on the topic from research papers and blogs to provide a centralized resource that is accessible to students, practitioners, and researchers across the sciences. The book examines meaningful metrics for diverse types of learning paradigms and applications, unbiased estimation methods, rigorous statistical analysis, fair training sets, and meaningful explainability, all of which are essential to building robust and reliable machine learning products. In addition to standard classification, the book discusses unsupervised learning, regression, image segmentation, and anomaly detection. The book also covers topics such as industry-strength evaluation, fairness, and responsible AI. Implementations using Python and scikit-learn are available on the book's website.
The third edition of this highly regarded text provides a rigorous, yet entertaining, introduction to probability theory and the analytic ideas and tools on which the modern theory relies. The main changes are the inclusion of the Gaussian isoperimetric inequality plus many improvements and clarifications throughout the text. With more than 750 exercises, it is ideal for first-year graduate students with a good grasp of undergraduate probability theory and analysis. Starting with results about independent random variables, the author introduces weak convergence of measures and its application to the central limit theorem, and infinitely divisible laws and their associated stochastic processes. Conditional expectation and martingales follow before the context shifts to infinite dimensions, where Gaussian measures and weak convergence of measures are studied. The remainder is devoted to the mutually beneficial connection between probability theory and partial differential equations, culminating in an explanation of the relationship of Brownian motion to classical potential theory.
Functional linear regression has gained popularity as a statistical tool for studying the relationship between function-valued variables. In practice, however, the explanatory variables of interest can rarely be expected to be strictly exogenous, due to, for example, the presence of omitted variables and measurement error. This issue of endogeneity remains insufficiently explored, in spite of its empirical importance. To fill this gap, this article proposes new consistent FPCA-based instrumental variable estimators and develops their asymptotic properties in detail. Simulation experiments show that the proposed estimators perform well under a wide range of settings. We apply our methodology to estimate the impact of immigration on native labor market outcomes in the US.
Datafication—the increase in data generation and advancements in data analysis—offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing data in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates 10 core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for the public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues.
As the field of migration studies evolves in the digital age, big data analytics emerge as a potential game-changer, promising unprecedented granularity, timeliness, and dynamism in understanding migration patterns. However, the epistemic value added by this data explosion remains an open question. This paper critically appraises the claim, investigating the extent to which big data augments, rather than merely replicates, traditional data insights in migration studies. Through a rigorous literature review of empirical research, complemented by a conceptual analysis, we aim to map out the methodological shifts and intellectual advancements brought forth by big data. The potential scientific impact of this study extends into the heart of the discipline, providing critical illumination on the actual knowledge contribution of big data to migration studies. This, in turn, delivers a clarified roadmap for navigating the intersections of data science, migration research, and policymaking.
Objective: The study aims to build a comprehensive network structure of psychopathology based on patient narratives by combining the merits of both qualitative and quantitative research methodologies. Research methods: The study web-scraped data from 10,933 people who disclosed a prior DSM/ICD-11 diagnosed mental illness when discussing their lived experiences of mental ill health. The study then used Python 3 and its associated libraries to run network analyses and generate a network graph. Key findings: The results revealed 672 unique experiences or symptoms that generated 30,023 links or connections. Of these 672 reported experiences/symptoms, five were deemed the most influential: “anxiety,” “fear,” “auditory hallucinations,” “sadness,” and “depressed mood and loss of interest.” Additionally, the study uncovered some unusual connections between the reported experiences/symptoms. Discussion and recommendations: The study demonstrates that applying a quantitative analytical framework to qualitative data at scale is a useful approach for capturing nuances of psychopathological experience that may be missed in studies relying solely on either a qualitative or a quantitative survey-based approach. The study discusses the clinical implications of its results and makes recommendations for potential future directions.
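The network-analysis step described above can be sketched in a few lines of Python. The edge list below is toy data (not the study's 672 symptoms and 30,023 links), and weighted degree is used here only as a simple stand-in for whatever influence measure the study actually applied:

```python
from collections import defaultdict

# Toy weighted co-occurrence network of reported experiences/symptoms:
# (symptom, symptom, co-occurrence count). Invented for illustration.
edges = [
    ("anxiety", "fear", 42),
    ("anxiety", "depressed mood", 35),
    ("fear", "auditory hallucinations", 12),
    ("sadness", "depressed mood", 28),
    ("sadness", "anxiety", 19),
]

# Weighted degree ("strength") of each node: sum of incident edge weights.
strength = defaultdict(int)
for u, v, w in edges:
    strength[u] += w
    strength[v] += w

ranked = sorted(strength, key=strength.get, reverse=True)
print(ranked[:3])  # most "influential" symptoms in this toy graph
```

Real analyses of this kind typically use a graph library such as networkx and richer centrality measures (betweenness, eigenvector centrality), but the principle of ranking nodes by connectivity is the same.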
In this paper, we introduce the concept of a multilayer network game in a cooperative setup. We consider the notion of simultaneous contribution of individual players or links to two different networks (say, X and Z). Our model nests both classical network games and bi-cooperative network games. Calculating the utility of players within a specific network in the presence of an additional/alternative network captures a broader spectrum of real-world decision dynamics. The subsequent challenge involves achieving an optimal distribution of payoffs among the players forming the networks. The link-based rule best fits our model, as it captures the influence of the alternative links in the network. We design an extended Position value to address the complexities arising from scenarios where networks overlap. Further, we show that this Position value is uniquely characterized by the Efficiency and Balanced Link Contribution axioms.
This study aims to explore the dependencies on the cryptocurrency market using social network tools. We focus on the correlations observed in the cryptocurrency returns. Based on the sample of cryptocurrencies listed between January 2015 and December 2022, we examine which cryptos are central to the overall market and how often the major players change. Static network analysis based on the whole sample shows that the network consists of several strongly connected, central communities, as well as a few that are disconnected and peripheral. Such a network structure implies high systemic risk. Day-by-day snapshots show that the network evolves rapidly. We construct a ranking of major cryptos based on centrality measures, utilizing the TOPSIS method. We find that when single measures are considered, Bitcoin seems to have lost its first-mover advantage in late 2016. However, in the overall ranking, it still appears among the top positions. The collapse of any of the cryptocurrencies at the top of the rankings poses a serious threat to the entire market.
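TOPSIS ranks alternatives by their relative closeness to an ideal point across several criteria. The minimal sketch below applies it to a toy matrix of centrality scores with equal weights; the cryptocurrency names, scores, and weights are assumptions for illustration, not the study's estimates:

```python
import math

def topsis(names, X, weights):
    """Minimal TOPSIS: all criteria treated as benefit criteria.

    X is an m x n score matrix (rows = alternatives, cols = criteria).
    Returns (name, closeness score) pairs, best first.
    """
    m, n = len(X), len(X[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) for j in range(n)]  # best point
    anti = [min(V[i][j] for i in range(m)) for j in range(n)]   # worst point
    scores = []
    for i in range(m):
        d_pos = math.dist(V[i], ideal)   # distance to the ideal
        d_neg = math.dist(V[i], anti)    # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return sorted(zip(names, scores), key=lambda t: -t[1])

# Toy centrality scores: columns = degree, eigenvector, betweenness.
names = ["BTC", "ETH", "XRP"]
X = [[0.90, 0.80, 0.85],
     [0.70, 0.85, 0.60],
     [0.30, 0.25, 0.20]]
print(topsis(names, X, weights=[1 / 3] * 3))
```

Aggregating several centrality measures this way is what allows an asset to rank highly overall even when it is no longer first on any single measure, as the abstract describes for Bitcoin.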
We revisit processes generated by iterated random functions driven by a stationary and ergodic sequence. Such a process is called strongly stable if a random initialization exists for which the process is stationary and ergodic, and for any other initialization the difference of the two processes converges to zero almost surely. Under some mild conditions on the corresponding recursive map, and without any condition on the driving sequence, we show the strong stability of the iterations. Several applications are surveyed, such as generalized autoregression and queuing. Furthermore, new results are deduced for Langevin-type iterations with dependent noise and for multitype branching processes.
We consider a robust optimal investment–reinsurance problem to minimize the goal-reaching probability that the value of the wealth process reaches a low barrier before a high goal for an ambiguity-averse insurer. The insurer invests its surplus in a constrained financial market, where the proportion of borrowed amount to the current wealth level is no more than a given constant, and short-selling is prohibited. We assume that the insurer purchases per-claim reinsurance to transfer its risk exposure to a reinsurer whose premium is computed according to the mean–variance premium principle. Using the stochastic control approach based on the Hamilton–Jacobi–Bellman equation, we derive robust optimal investment–reinsurance strategies and the corresponding value functions. We conclude that the behavior of borrowing typically occurs with a lower wealth level. Finally, numerical examples are given to illustrate our results.
Leptospirosis is a widespread zoonosis caused by bacteria of the genus Leptospira. Although crucial to mitigating the disease risk, basic epidemiological information is lacking, such as the identities of Leptospira maintenance hosts. The raccoon (Procyon lotor), an invasive alien species in France, could pose a public health risk if it carries pathogenic Leptospira. We investigated the rate and type (selective vs. unselective) of Leptospira carriage in the two main raccoon populations in France. Out of the 141 raccoons collected, seven (5%) tested positive by quantitative PCR targeting the lfb1 gene in kidney, lung, and urine samples. Phylogenetic analysis revealed the presence of three different L. interrogans clusters. The results suggest that raccoons are more likely accidental hosts and make only a limited contribution to Leptospira maintenance.
To maximize its value, the design, development and implementation of structural health monitoring (SHM) should focus on its role in facilitating decision support. In this position paper, we offer perspectives on the synergy between SHM and decision-making. We propose a classification of SHM use cases aligning with various dimensions that are closely linked to the respective decision contexts. The types of decisions that have to be supported by the SHM system within these settings are discussed along with the corresponding challenges. We provide an overview of different classes of models that are required for integrating SHM in the decision-making process to support the operation and maintenance of structures and infrastructure systems. Fundamental decision-theoretic principles and state-of-the-art methods for optimizing maintenance and operational decision-making under uncertainty are briefly discussed. Finally, we offer a viewpoint on the appropriate course of action for quantifying, validating, and maximizing the added value generated by SHM. This work aspires to synthesize the different perspectives of the SHM, Prognostic Health Management, and reliability communities, and provide directions to researchers and practitioners working towards more pervasive monitoring-based decision-support.