Engineering machines are becoming increasingly complex and possess ever more control variables, increasing the complexity and versatility of their control systems. Different configurations of the control system, each termed a policy, can result in similar output behavior but with different resource or component life usage. There is therefore an opportunity to find policies that are optimal with respect to economic decisions. While many solutions have been proposed to find such economic policy decisions at the asset level, we consider this problem at the fleet level, where the optimal operation of each asset is affected by the state of all other assets in the fleet. Challenges introduced by considering multiple assets include the construction of economic multi-objective optimization criteria, the handling of rare events such as failures, the application of fleet-level constraints, and scalability. The proposed solution presents a framework for economic fleet optimization. The framework is demonstrated for economic criteria relating to resource usage, component lifing, and maintenance scheduling, but is generically extensible. Lifetime distributions are optimized directly in order to avoid the computational burden of discrete-event simulation of rare events. Results are provided for a real-world case study targeting the optimal economic operation of a fleet of aerospace gas turbine engines.
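The idea of optimizing over lifetime distributions directly, rather than simulating rare failure events, can be illustrated with a toy single-asset sketch. The Weibull lifetime model, the severity-to-scale mapping, and the cost weights below are illustrative assumptions, not the paper's formulation.

```python
from scipy.optimize import minimize_scalar
from scipy.stats import weibull_min

# Toy model (all values assumed): a policy is a scalar "severity" u.
HORIZON_H = 5_000.0   # operating hours over the planning horizon
C_FUEL = 1.0          # cost per unit of fuel
C_FAIL = 50_000.0     # cost of an unscheduled engine removal

def fuel_cost(u):
    # Assumed: running the engine harder (higher u) saves fuel overall.
    return C_FUEL * HORIZON_H * (2.0 - u)

def failure_prob(u):
    # Assumed Weibull component life; the scale shrinks as severity grows.
    return weibull_min.cdf(HORIZON_H, c=3.0, scale=20_000.0 / u**3)

def expected_cost(u):
    # The lifetime CDF enters the objective in closed form, so no
    # discrete-event simulation of rare failures is required.
    return fuel_cost(u) + C_FAIL * failure_prob(u)

res = minimize_scalar(expected_cost, bounds=(0.5, 1.5), method="bounded")
print(f"optimal severity u* = {res.x:.3f}, expected cost = {res.fun:,.0f}")
```

At the fleet level, one would sum such objectives over assets and add coupling constraints (e.g., total capability delivered), which is where the scalability challenges noted above arise.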
This paper proposes to solve the vortex gust mitigation problem on a 2D, thin flat plate using onboard measurements. The objective is to solve the discrete-time optimal control problem of finding the pitch rate sequence that minimizes the lift perturbation, expressed as a cost criterion on the lift coefficient obtained by the unsteady vortex lattice method. The controller is modeled as an artificial neural network, and it is trained to minimize this criterion using deep reinforcement learning (DRL). To be optimal, we show that the controller must take as inputs the locations and circulations of the gust vortices, but these quantities are not directly observable from the onboard sensors. We therefore propose to use a Kalman particle filter (KPF) to estimate the gust vortices online from the onboard measurements. The reconstructed input is then used by the controller to calculate the appropriate pitch rate. We evaluate the performance of this method for gusts composed of one to five vortices. Our results show that (i) controllers deployed with full knowledge of the vortices are able to efficiently mitigate the lift disturbance induced by the gusts, (ii) the KPF performs well in reconstructing gusts composed of fewer than three vortices, but shows more mixed results in the reconstruction of gusts composed of more vortices, and (iii) adding a KPF to the controller recovers a significant part of the performance loss due to the unobservable gust vortices.
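As a rough illustration of the estimation step, the following is a minimal bootstrap particle filter recovering a single vortex from noisy probe readings. The state parameterization, the two-probe observation model, and all noise levels are hypothetical stand-ins for the paper's KPF and sensor model.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(state):
    # Hypothetical sensor model: induced-velocity magnitude at two onboard
    # probes from a point vortex at (x, y) with circulation gamma.
    x, y, gamma = state
    probes = np.array([[0.0, 0.0], [0.5, 0.0]])
    r = np.hypot(probes[:, 0] - x, probes[:, 1] - y)
    return gamma / (2 * np.pi * r + 1e-9)

def particle_filter(measurements, n=2000, meas_sigma=0.05):
    # Particle columns: x, y, gamma; broad Gaussian prior around a guess.
    p = rng.normal([-2.0, 0.5, 1.0], [0.5, 0.3, 0.3], size=(n, 3))
    for z in measurements:
        p += rng.normal(0.0, [0.02, 0.02, 0.01], size=p.shape)  # process noise
        innov = z - np.array([observe(s) for s in p])           # residuals
        logw = -0.5 * np.sum(innov**2, axis=1) / meas_sigma**2  # log-likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        p = p[rng.choice(n, n, p=w)]                            # resample
    return p.mean(axis=0)  # posterior-mean estimate of the vortex state

true_state = np.array([-1.0, 0.4, 1.2])
zs = [observe(true_state) + rng.normal(0, 0.05, 2) for _ in range(40)]
print("estimated (x, y, gamma):", particle_filter(zs))
```

In the paper's setting the estimate would be refreshed at every time step and fed to the neural-network controller to compute the pitch rate.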
A recent outbreak of cryptosporidiosis (Cryptosporidium parvum, subtype IIdA23G1) among veterinary students, associated with extracurricular activities involving lambs, is described from Norway. Although cryptosporidiosis outbreaks among veterinary students have been frequently reported, this is among the first associated with lamb contact. Cryptosporidium oocysts were detected in samples from two students and three lambs. A questionnaire, distributed immediately after the outbreak was recognized, indicated an attack rate of 50% among exposed students (28 of 56 reported illness), despite most reporting good or very good hygiene measures. Laboratory diagnostics confirmed infection in two of these. The illness lasted over a week in most students (up to 15 days), but contact with health services was negligible. In addition to implementing measures to reduce the likelihood of further such outbreaks among veterinary students, it is recommended that future outbreaks of diarrhoea among ruminants on the farm be investigated for aetiological agents.
Recently, Kurisu and Otsu (2022b, Econometric Theory 38(1), 172–193) derived the uniform convergence rates for the nonparametric deconvolution estimators proposed by Li and Vuong (1998, Journal of Multivariate Analysis 65(2), 139–165). This article shows that faster uniform convergence rates can be established for their estimators under the same assumptions. In addition, a new class of deconvolution estimators based on a variant of Kotlarski’s identity is also proposed. It is shown that in some cases, these new estimators can have faster uniform convergence rates than the existing estimators.
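For reference, Kotlarski's identity, on which both the Li–Vuong estimators and the proposed variant rest, can be stated in its standard form (the paper's variant differs in detail). Let $Y_1 = X + \varepsilon_1$ and $Y_2 = X + \varepsilon_2$ with $X$, $\varepsilon_1$, $\varepsilon_2$ mutually independent, $\mathbb{E}[\varepsilon_1] = 0$, and joint characteristic function $\varphi(t_1, t_2) = \mathbb{E}[e^{\mathrm{i} t_1 Y_1 + \mathrm{i} t_2 Y_2}]$. Then the characteristic function of the latent variable $X$ is recovered as
\[
\varphi_X(t) = \exp\!\left( \int_0^t \frac{\partial \varphi/\partial t_1\,(0, s)}{\varphi(0, s)}\, ds \right),
\]
and the characteristic functions of $\varepsilon_1$ and $\varepsilon_2$ follow by division.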
We establish theoretical results about the low frequency contamination (i.e., long memory effects) induced by general nonstationarity for estimates such as the sample autocovariance and the periodogram, and deduce consequences for heteroskedasticity and autocorrelation robust (HAR) inference. We present explicit expressions for the asymptotic bias of these estimates. We show theoretically that nonparametric smoothing over time is robust to low frequency contamination. Nonstationarity can have consequences for both the size and power of HAR tests. Under the null hypothesis there are larger size distortions than when data are stationary. Under the alternative hypothesis, existing long-run variance (LRV) estimators tend to be inflated and HAR tests can exhibit dramatic power losses. Our theory indicates that long-bandwidth or fixed-b HAR tests suffer more from low frequency contamination than HAR tests based on heteroskedasticity and autocorrelation consistent (HAC) estimators, whereas recently introduced double kernel HAC (DK-HAC) estimators do not suffer from this problem. We present second-order Edgeworth expansions under nonstationarity about the distribution of HAC and DK-HAC estimators and about the corresponding t-test in the regression model. The results show that distortions in the rejection rates can be induced by time variation in the second moments even when there is no break in the mean.
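For concreteness, the kernel-based long-run variance estimators at issue take the standard form (textbook notation, not the paper's):
\[
\hat{J}_{\mathrm{HAC}} = \sum_{j=-T+1}^{T-1} K\!\left(\frac{j}{b_T}\right) \hat{\Gamma}(j),
\qquad
\hat{\Gamma}(j) = \frac{1}{T} \sum_{t=|j|+1}^{T} \hat{v}_t \hat{v}_{t-|j|}^{\top},
\]
where $K$ is a kernel and $b_T$ a bandwidth. Fixed-$b$ asymptotics keep $b_T$ proportional to $T$ (a long bandwidth), while DK-HAC estimators apply a second kernel that smooths the autocovariance estimates over time, which is the nonparametric smoothing shown above to be robust to low frequency contamination.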
In practice, nondestructive testing (NDT) procedures tend to treat experiments (and their respective models) as distinct, conducted in isolation, and associated with independent data. In contrast, this work looks to capture the interdependencies between acoustic emission (AE) experiments (as meta-models) and then use the resulting functions to predict the model hyperparameters for previously unobserved systems. We utilize a Bayesian multilevel approach (similar to deep Gaussian processes) in which a higher-level meta-model captures the inter-task relationships. Our key contribution is showing how knowledge of the experimental campaign can be encoded between tasks as well as within tasks. We present an example of AE time-of-arrival mapping for source localization to illustrate how multilevel models naturally lend themselves to representing aggregate systems in engineering. We constrain the meta-model based on domain knowledge, then use the inter-task functions for transfer learning, predicting hyperparameters for models of previously unobserved experiments (for a specific design).
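A minimal sketch of the transfer-learning step: fit per-task GPs, learn how their hyperparameters vary with a task descriptor, and predict hyperparameters for an unseen task. The synthetic data generator, the single scalar task descriptor, and the use of scikit-learn are illustrative assumptions, not the paper's multilevel model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Assumed setup: each task i has a descriptor d_i (e.g., a design parameter)
# that modulates the smoothness of its latent function.
descriptors = np.array([1.0, 2.0, 3.0, 4.0])
learned_ls = []
for d in descriptors:
    X = rng.uniform(0, 10, size=(40, 1))
    y = np.sin(X[:, 0] / d) + 0.05 * rng.normal(size=40)  # smoother as d grows
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=0.05**2).fit(X, y)
    learned_ls.append(gp.kernel_.length_scale)  # fitted hyperparameter

# Inter-task (meta) function: descriptor -> log length-scale.
meta = LinearRegression().fit(descriptors.reshape(-1, 1), np.log(learned_ls))

# Transfer learning: predict the hyperparameter for an unobserved task.
print("predicted length-scale:", np.exp(meta.predict([[5.0]]))[0])
```

A fully Bayesian multilevel treatment would place a prior over the inter-task function and propagate its uncertainty, rather than plugging in a point estimate as above.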
In this paper, we analyze a polling system on a circle. Random batches of customers arrive at a circle, where each customer, independently, obtains a location that is uniformly distributed on the circle. A single server cyclically traverses the circle to serve all customers. Using mean value analysis, we derive the expected number of waiting customers within a given distance of the server. We exploit this to obtain closed-form expressions for both the mean batch sojourn time and the mean time to delivery.
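While the paper derives closed-form expressions via mean value analysis, the quantities involved are easy to pin down with a toy discrete-event simulation; the Poisson batch arrivals, fixed batch size, unit-circumference circle, and exponential service times below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
LAM, BATCH, SPEED, MU, HORIZON = 0.5, 3, 1.0, 5.0, 50_000.0

t, s = 0.0, 0.0                 # current time, server position on [0, 1)
waiting = []                    # waiting customers: (location, arrival time)
next_arr = rng.exponential(1 / LAM)
delays = []

def add_batch(at):
    for _ in range(BATCH):
        waiting.append((rng.uniform(), at))

while t < HORIZON:
    if waiting:  # nearest waiting customer in the direction of travel
        i = min(range(len(waiting)), key=lambda j: (waiting[j][0] - s) % 1.0)
        t_reach = t + ((waiting[i][0] - s) % 1.0) / SPEED
    else:
        t_reach = np.inf
    if next_arr <= t_reach:     # a batch arrives before the next pickup
        s = (s + SPEED * (next_arr - t)) % 1.0
        t = next_arr
        add_batch(t)
        next_arr = t + rng.exponential(1 / LAM)
    else:                       # travel to the customer, then serve it
        loc, arrived = waiting.pop(i)
        s, t = loc, t_reach + rng.exponential(1 / MU)
        while next_arr <= t:    # batches arriving during the service
            add_batch(next_arr)
            next_arr += rng.exponential(1 / LAM)
        delays.append(t - arrived)   # time to delivery for this customer

print(f"estimated mean time to delivery: {np.mean(delays):.3f}")
```

The same bookkeeping also yields the number of waiting customers within a given distance of the server, the quantity around which the mean value analysis is built.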
In May 2017, whole-genome sequencing (WGS) became the primary subtyping method for Salmonella in Canada. As a result of the increased discriminatory power provided by WGS, 16 multi-jurisdictional outbreaks of Salmonella associated with frozen raw breaded chicken products were identified between 2017 and 2019. The majority (15/16) were associated with S. Enteritidis, while the remaining outbreak was associated with S. Heidelberg. The 16 outbreaks included a total of 487 cases with ages ranging from 0 to 98 years (median: 24 years); 79 hospitalizations and two deaths were reported. Over the course of the outbreak investigations, 14 frozen raw breaded chicken products were recalled, and one was voluntarily withdrawn from the market. After previous changes to labelling and the issuance of public communication for these products proved ineffective at reducing illnesses, new industry requirements were issued in 2019, requiring measures at the manufacturing/processing level to reduce Salmonella to below detectable amounts in frozen raw breaded chicken products. Since implementation, no further outbreaks of Salmonella associated with frozen breaded chicken have been identified in Canada, a testament to the effectiveness of these risk mitigation measures.
This commentary explores MENA’s AI governance, addressing gaps, showcasing successful strategies, and comparing national approaches. It emphasizes current deficiencies, highlights regional contributions to global AI governance, and offers insights into effective frameworks. The study reveals distinctions and trends in MENA’s national AI strategies, serving as a concise resource for policymakers and industry stakeholders.
Shiga toxin-producing Escherichia coli (STEC) is a group of bacteria that causes gastrointestinal illness and occasionally causes large foodborne outbreaks. It represents a major public health concern due to its ability to cause severe illness, which can sometimes be fatal. This study was undertaken as part of a rapid investigation into a national foodborne outbreak of STEC O145. On 22 May 2024, United Kingdom (UK) public health agencies and laboratories identified an increase in stool specimen submissions and in patients testing positive for STEC. Whole genome sequencing (WGS) identified the causative agent as serotype O145:H28 stx2a/eae, with isolates belonging to the same five single nucleotide polymorphism (SNP) single-linkage cluster. By 3 July 2024, 288 cases had been linked to the cluster. Most cases were adults (87%) and females (57%); 49% were hospitalized, with a further 10% attending emergency care. Descriptive epidemiology and analytical studies were conducted, which identified consumption of nationally distributed pre-packed sandwiches as a common food exposure. The implicated food business operators voluntarily recalled ready-to-eat sandwiches and wraps containing lettuce on 14 June 2024.
The financial burden of hospitalization from life-threatening infectious diseases on the U.S. healthcare system is substantial and continues to increase. The purpose of this study was to identify key predictors of high hospital charges for infective endocarditis at a major university-affiliated cardiac care centre in West Virginia.
A retrospective review of electronic medical records was undertaken for all adult patients admitted for endocarditis between 2014 and 2018. Multiple linear regression was used to model the total charges billed to the patient account for the endocarditis hospitalization.
Hospital charges increased 12-fold during 2014–2018. Among the 486 patients, the median hospital charge was $198 678. About 47% of the patients underwent surgery, incurring 70% of the total charges. Patients with hospital stays of ≥50 days accounted for a third of all charges. The multiple linear regression model accounted for 85% of the linear variance in the hospital charges. Median charges increased by 30.87% for patients with ≥9 consultations, 60.32% for those who died in the hospital, and 81.85% for those who underwent surgical intervention.
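The percentage effects reported above are consistent with charges entering the regression on the log scale, in which case a coefficient $\beta$ maps to a percentage change in median charges via the usual transformation (our reading of the model form, not a detail stated in the abstract):
\[
\%\Delta = 100\,(e^{\beta} - 1), \qquad \text{e.g. } \beta \approx 0.269 \;\Rightarrow\; 100\,(e^{0.269} - 1) \approx 30.87\%.
\]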
The study findings showed that complex care involving multiple consultations, surgical intervention, and longer hospital stays was significantly associated with higher hospital charges for endocarditis treatment.
Data is the foundation of any scientific, industrial, or commercial process. Its journey flows from collection to transport, storage, and processing. While best practices and regulations guide its management and protection, recent events have underscored their vulnerabilities. Academic research and commercial data handling have been marred by scandals, revealing the brittleness of data management. Data is susceptible to undue disclosures, leaks, losses, manipulation, or fabrication. These incidents often occur without visibility or accountability, necessitating a systematic structure for safe, honest, and auditable data management. We introduce the concept of Honest Computing as the practice and approach that emphasizes transparency, integrity, and ethical behaviour within the realm of computing and technology. It ensures that computer systems and software operate honestly and reliably, without hidden agendas, biases, or unethical practices. It enables privacy and confidentiality of data and code by design and by default. We also introduce a reference framework to achieve demonstrable data lineage and provenance, contrasting it with Secure Computing, a related but differently oriented form of computing. At its core, Honest Computing leverages Trustless Computing, Confidential Computing, Distributed Computing, cryptography, and authentication, authorization, and accounting (AAA) security concepts. Honest Computing opens new ways of creating technology-based processes and workflows which permit the migration of regulatory frameworks for data protection from principle-based approaches to rule-based ones. Addressing use cases in many fields, from AI model protection and ethical layering to digital currency formation for finance and banking, trading, and healthcare, this foundational layer approach can help define new standards for appropriate data custody and processing.
The condition assessment of underground infrastructure (UI) is critical for maintaining the safety, functionality, and longevity of subsurface assets like tunnels and pipelines. This article reviews various data acquisition techniques, comparing their strengths and limitations in UI condition assessment. In collecting structured data, traditional methods like strain gauges can only obtain relatively low volumes of data due to low sampling frequency and manual data collection and transmission, whereas more advanced and automatic methods like distributed fiber optic sensing can gather relatively larger volumes of data due to automatic data collection, continuous sampling, and comprehensive monitoring. Upon comparison, unstructured data acquisition methods can provide more detailed visual information that complements structured data. Methods like closed-circuit television and unmanned aerial vehicles produce large volumes of data due to their continuous video recording and high-resolution imaging, posing great challenges to data storage, transmission, and processing, while ground penetrating radar and infrared thermography produce smaller volumes of image data that are more manageable. The acquisition of large volumes of UI data is the first step in its condition assessment. To enable more efficient, accurate, and reliable assessment, it is recommended to (1) integrate data analytics such as artificial intelligence to automate the analysis and interpretation of collected data, (2) develop robust big data management platforms capable of handling large-volume data storage, processing, and analysis, (3) couple different data acquisition technologies to leverage the strengths of each technique, and (4) continuously improve data acquisition methods to ensure efficient and reliable data acquisition.
The course of COVID-19 is highly variable, with genetics playing a significant role. Through large-scale genetic association studies, a link between single nucleotide polymorphisms and disease susceptibility and severity was established. However, individual single nucleotide polymorphisms identified thus far have shown modest effects, indicating a polygenic nature of this trait, and individually have limited predictive performance. To address this limitation, we investigated the performance of a polygenic risk score model in the context of COVID-19 severity in a Russian population. A genome-wide polygenic risk score model including information from over a million common single nucleotide polymorphisms was developed using summary statistics from the COVID-19 Host Genetics Initiative consortium. Low-coverage sequencing (5x) was performed for ~1000 participants, and polygenic risk score values were calculated for each individual. A multivariate logistic regression model was used to analyse the association between polygenic risk score and COVID-19 outcomes. We found that individuals in the top 10% of the polygenic risk score distribution had a markedly elevated risk of severe COVID-19, with an adjusted odds ratio of 2.9 (95% confidence interval: 1.8–4.6, p-value = 4e-06), and a more than four times higher risk of mortality from COVID-19 (adjusted odds ratio = 4.3, p-value = 2e-05). This study highlights the potential of the polygenic risk score as a valuable tool for identifying individuals at increased risk of severe COVID-19 based on their genetic profile.
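For context, a genome-wide polygenic risk score of this kind is conventionally a weighted count of effect alleles (a standard definition rather than this study's exact pipeline):
\[
\mathrm{PRS}_j = \sum_{i=1}^{M} \hat{\beta}_i\, x_{ij},
\]
where $x_{ij} \in \{0, 1, 2\}$ (or an imputed dosage in $[0, 2]$) counts effect alleles of variant $i$ in individual $j$, and the weights $\hat{\beta}_i$ are taken from the consortium summary statistics, here with $M$ over a million SNPs.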
Early investigations of Pólya urns considered drawing balls one at a time. In the last two decades, several authors have considered multiple drawings in each step, but mostly for schemes involving two colors. In this manuscript, we consider multiple drawings from urns of balls of multiple colors, formulating asymptotic theory for specific urn classes and addressing a wider range of applications. The class we consider is affine and tenable, built around a ‘core’ square matrix. We examine cases where the urn is irreducible and relate this property to the matrix irreducibility of its core, with examples provided. An index for the drawing scheme is derived from the eigenvalues of the core. We identify three regimes: small, critical, and large index. In the small-index regime, we find an asymptotic Gaussian law. In the critical-index regime, we also find an asymptotic Gaussian law, albeit with a different scale factor, which involves logarithmic terms. In both of these regimes, we have explicit forms for the structure of the mean and the covariance matrix of the composition vector (both exact and asymptotic). In all three regimes we have strong laws.
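For orientation, in the classical single-drawing theory (e.g., Janson's work on generalized Pólya urns) the analogous index is a ratio of eigenvalues of the replacement matrix, with the same three thresholds; the manuscript's index for multiple drawings is derived from the core's eigenvalues in a similar spirit:
\[
\Lambda = \frac{\operatorname{Re} \lambda_2}{\lambda_1}, \qquad \Lambda < \tfrac{1}{2} \ \text{(small)}, \quad \Lambda = \tfrac{1}{2} \ \text{(critical)}, \quad \Lambda > \tfrac{1}{2} \ \text{(large)},
\]
with fluctuations of order $\sqrt{n}$ in the small regime and $\sqrt{n \log n}$ at criticality, matching the logarithmic correction to the scale factor noted above.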
Structural convergence is a framework for the convergence of graphs by Nešetřil and Ossona de Mendez that unifies the dense (left) graph convergence and Benjamini-Schramm convergence. They posed a problem asking whether for a given sequence of graphs $(G_n)$ converging to a limit $L$ and a vertex $r$ of $L$, it is possible to find a sequence of vertices $(r_n)$ such that $L$ rooted at $r$ is the limit of the graphs $G_n$ rooted at $r_n$. A counterexample was found by Christofides and Král’, but they showed that the statement holds for almost all vertices $r$ of $L$. We offer another perspective on the original problem by considering the size of definable sets to which the root $r$ belongs. We prove that if $r$ is an algebraic vertex (i.e. belongs to a finite definable set), the sequence of roots $(r_n)$ always exists.
Group A streptococcal (Streptococcus pyogenes) infections have been increasing since the COVID-19 pandemic. We describe the epidemiology of S. pyogenes pharyngitis and invasive disease in Alberta, Canada, from 2018 to 2023. Positive pharyngitis specimens were identified from throat swabs collected from patients with pharyngitis. Invasive S. pyogenes disease was defined as the isolation of S. pyogenes from a normally sterile site or severe skin infection. S. pyogenes isolates were emm typed. Pharyngitis and invasive disease displayed seasonal trends preceding the COVID-19 pandemic, followed by a sharp decrease during COVID-19 intervention measures. After the lifting of interventions, rates of pharyngitis and invasive disease rose. There were 182 983 positive pharyngitis specimens between 2018 and 2023, for a positivity rate of 17.6%. The highest rates occurred in the 0–9 age group in 2023 (41.5%). Invasive disease increased in 2022–2023, driven by emm1 and emm12 types. The M1UK strain was the most frequent M1 type associated with invasive disease (59% of M1 isolates sequenced). Notably, out of the 182 983 pharyngitis cases, 111 cases of invasive S. pyogenes were detected, for an invasive disease rate of 0.06%. This descriptive epidemiology of S. pyogenes pharyngitis and invasive disease highlights the rapid increase in S. pyogenes cases occurring in western Canada and illustrates the critical need for a vaccine.
We consider the task completion time of a repairable server system in which the server experiences randomly occurring service interruptions, during which it works slowly. Every service-state change preempts the task that is being processed. The server may then resume the interrupted task, replace the task with a different one, or restart the same task from the beginning under the new service state. The total time that the server takes to complete a task of random size, including interruptions, is called the completion time. We study the completion time of a task under the last two cases as a function of the task size distribution, the service interruption frequency/severity, and the repair frequency. We derive closed-form expressions for the completion time distribution in the Laplace domain under the replace and restart recovery disciplines and present their asymptotic behavior. In general, heavy-tailed behavior of completion times arises from heavy-tailedness of the task time. However, under the preempt-restart discipline, even when the server continues to serve during interruptions, albeit at a slower rate, completion times may exhibit power-tail behavior for task-time distributions with exponential tails. Furthermore, we present an $M/G/\infty$ queue with exponential service time and Markovian service interruptions. Our results reveal that the stationary first-order moments, that is, the expected system time and the expected number in the system, are insensitive to the way the service modulation affects the servers: system-wide modulation affecting every server simultaneously versus identical modulation affecting each server independently.
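The mechanism behind power tails under restart is easiest to see in the simpler variant where the server halts entirely during interruptions (a textbook simplification; the paper's slow-service model generalizes it). With i.i.d. up periods $U$ and repair periods $D$, the replace discipline satisfies the fixed-point relation
\[
\mathbb{E}\!\left[e^{-sT}\right] = \frac{\mathbb{E}\!\left[e^{-sX}\,\mathbf{1}\{X \le U\}\right]}{1 - \mathbb{E}\!\left[e^{-s(U+D)}\,\mathbf{1}\{U < X\}\right]},
\]
while under restart the number of attempts for a task of size $x$ is geometric with success probability $\Pr(U \ge x)$; mixing this exponentially growing attempt count over an exponentially distributed task size is what produces power-law completion-time tails.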