Guillain-Barré syndrome (GBS) has previously been associated with Zika virus infection. We analysed data from all patients with a GBS diagnosis admitted to a referral hospital in Tapachula City between January 2013 and August 2016, comparing GBS incidence according to the temporality of the Zika outbreak in southern Mexico. Additionally, we describe the clinical and epidemiological characteristics of GBS patients admitted before and after the Zika outbreak. We observed a sharp increase in the number of patients hospitalised for GBS from the time the first confirmed Zika cases appeared in Mexico. Clinically, GBS cases before the Zika outbreak more frequently had a history of respiratory/gastrointestinal symptoms, whereas GBS cases during the Zika outbreak significantly more often had a recent history of rash/conjunctivitis. Although we cannot affirm that the increased GBS cases have a specific aetiological association with Zika, our results suggest that the observed GBS outbreak in Tapachula might have been associated with the emerging local Zika epidemic, and that rare complications of acute infections (such as GBS) might be useful in surveillance systems for emerging infections.
Digital image correlation (DIC) techniques were used to evaluate strain distributions along tensile gage lengths immediately after yielding of a medium manganese steel (7 wt% Mn) in samples cold rolled in the range of 1–6 pct. With an increase in cold work, DIC confirmed that the yielding behavior transitioned from nucleation and propagation of a single localized deformation zone (a Lüders band) to uniform deformation, that is, no evidence of strain localization. At intermediate amounts of cold work, a unique yielding behavior was evident in which an initially low positive strain hardening rate increased with tensile strain until conventional strain hardening (i.e., a decrease in strain hardening rate with strain) set in. This intermediate yielding behavior was associated with the development of multiple non-propagating regions of strain localization, an observation not previously evident without the use of DIC.
This paper investigates how socio-spatial aspects of creativity, operationalized as the causal relations between the built environment and perceived creativity in university campuses’ public spaces, are currently applied in practice. Moreover, it discusses practitioners’ perceptions of research-generated evidence on socio-spatial aspects of creativity according to three effectiveness aspects: credibility, relevance, and applicability. The “research-generated evidence” is herein derived from data-driven knowledge generated by multi-disciplinary methodologies (e.g., self-reported perceptions, participatory tools, geospatial analysis, observations). We conducted a thematic analysis of interviews with practitioners involved in the (re)development of public spaces at inner-city campuses and science parks in Amsterdam, Utrecht, and Groningen. We found that socio-spatial aspects of creativity were addressed at the decision-making level only for Utrecht Science Park. Correspondingly, while most practitioners considered the presented evidence relevant for practice, perceptions of credibility and applicability varied according to institutional goals, practitioners’ habits in practice, and their roles and phases of involvement in projects. The newfound interrelationships between the three effectiveness aspects highlighted (a) institutional fragmentation issues in campus and public space projects, (b) the research-practice gap related to such projects, which occurs beyond the university campus context, and (c) insights on the relationship between evidence generated through research-based data-driven knowledge and the urban planning practice, policy, and governance of knowledge environments.
We conclude that if research-generated evidence on socio-spatial aspects of creativity is to be integrated into evidence-based practice for campuses’ public spaces, alignment between researchers, the multiple actors involved, policy framing, and goal achievement is fundamental.
Let $V$ be a finite-dimensional vector space over $\mathbb{F}_p$. We say that a multilinear form $\alpha \colon V^k \to \mathbb{F}_p$ in $k$ variables is $d$-approximately symmetric if the partition rank of the difference $\alpha (x_1, \ldots, x_k) - \alpha (x_{\pi (1)}, \ldots, x_{\pi (k)})$ is at most $d$ for every permutation $\pi \in \textrm{Sym}_k$. In a work concerning the inverse theorem for the Gowers uniformity $\|\!\cdot\! \|_{\mathsf{U}^4}$ norm in the case of low characteristic, Tidor conjectured that any $d$-approximately symmetric multilinear form $\alpha \colon V^k \to \mathbb{F}_p$ differs from a symmetric multilinear form by a multilinear form of partition rank at most $O_{p,k,d}(1)$ and proved this conjecture in the case of trilinear forms. In this paper, somewhat surprisingly, we show that this conjecture is false. In fact, we show that approximately symmetric forms can be quite far from the symmetric ones, by constructing a multilinear form $\alpha \colon \mathbb{F}_2^n \times \mathbb{F}_2^n \times \mathbb{F}_2^n \times \mathbb{F}_2^n \to \mathbb{F}_2$ which is 3-approximately symmetric, while the difference between $\alpha$ and any symmetric multilinear form is of partition rank at least $\Omega (\sqrt [3]{n})$.
Let $A \subseteq \{0,1\}^n$ be a set of size $2^{n-1}$, and let $\phi \,:\, \{0,1\}^{n-1} \to A$ be a bijection. We define the average stretch of $\phi$ as
$${\sf avgStretch}(\phi ) = \mathbb{E}\left[ \mathrm{dist}\left(\phi (x), \phi (x')\right)\right],$$
where $\mathrm{dist}(\cdot,\cdot)$ denotes Hamming distance and the expectation is taken over uniformly random $x,x' \in \{0,1\}^{n-1}$ that differ in exactly one coordinate.
In this paper, we continue the line of research studying mappings on the discrete hypercube with small average stretch. We prove the following results.
For any set $A \subseteq \{0,1\}^n$ of density $1/2$ there exists a bijection $\phi _A \,:\, \{0,1\}^{n-1} \to A$ such that ${\sf avgStretch}(\phi _A) = O\left(\sqrt{n}\right)$.
For $n = 3^k$ let ${A_{\textsf{rec-maj}}} = \{x \in \{0,1\}^n \,:\,{\textsf{rec-maj}}(x) = 1\}$, where ${\textsf{rec-maj}} \,:\, \{0,1\}^n \to \{0,1\}$ is the function recursive majority of 3’s. There exists a bijection $\phi _{{\textsf{rec-maj}}} \,:\, \{0,1\}^{n-1} \to{A_{\textsf{rec-maj}}}$ such that ${\sf avgStretch}(\phi _{{\textsf{rec-maj}}}) = O(1)$.
Let ${A_{{\sf tribes}}} = \{x \in \{0,1\}^n \,:\,{\sf tribes}(x) = 1\}$. There exists a bijection $\phi _{{\sf tribes}} \,:\, \{0,1\}^{n-1} \to{A_{{\sf tribes}}}$ such that ${\sf avgStretch}(\phi _{{\sf tribes}}) = O(\!\log (n))$.
These results answer the questions raised by Benjamini, Cohen, and Shinkar (Isr. J. Math 2016).
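For small $n$, the ${\sf avgStretch}$ quantity can be computed by brute force directly from its definition. The sketch below uses a hypothetical bijection `phi_dict` onto the density-$1/2$ “dictator” set $A = \{x : x_1 = 0\}$ (not one of the sets studied above) purely to illustrate the quantity:

```python
from itertools import product

def avg_stretch(phi, n):
    """Average Hamming distance between images of neighbouring inputs.

    phi maps tuples in {0,1}^(n-1) to tuples in {0,1}^n; the expectation
    runs over a uniform input x and a uniform single-coordinate flip.
    """
    total, count = 0, 0
    for x in product((0, 1), repeat=n - 1):
        for i in range(n - 1):
            x2 = list(x)
            x2[i] ^= 1  # flip exactly one coordinate
            y, y2 = phi(x), phi(tuple(x2))
            total += sum(a != b for a, b in zip(y, y2))
            count += 1
    return total / count

def phi_dict(x):
    """Bijection onto the 'dictator' half-cube A = {x : x_1 = 0}."""
    return (0,) + x

print(avg_stretch(phi_dict, 4))  # → 1.0
```

Prepending a fixed bit means every single-coordinate flip of the input changes exactly one output bit, so this toy bijection achieves the minimum possible average stretch of 1.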
We revisit the problem of estimating the spot volatility of an Itô semimartingale using a kernel estimator. A central limit theorem (CLT) with an optimal convergence rate is established for a general two-sided kernel. A new pre-averaging/kernel estimator for spot volatility is also introduced to handle the microstructure noise of ultra-high-frequency observations. A CLT for the estimation error of the new estimator is obtained, and the optimal selection of the bandwidth and kernel function is subsequently studied. It is shown that the pre-averaging/kernel estimator’s asymptotic variance is minimal for two-sided exponential kernels, hence justifying the need for working with kernels of unbounded support. Feasible implementation of the proposed estimators with optimal bandwidth is developed as well. Monte Carlo experiments confirm the superior performance of the new method.
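As a minimal illustration of a two-sided kernel spot volatility estimator (not the exact estimator, pre-averaging step, or bandwidth tuning studied in the paper), the sketch below applies a two-sided exponential kernel to simulated increments of a constant-volatility Brownian martingale; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Itô semimartingale: dX = sigma dW with constant sigma.
n, sigma, t0, h = 100_000, 0.5, 0.5, 0.05
dt = 1.0 / n
t = np.arange(1, n + 1) * dt
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)

# Two-sided exponential kernel K(x) = exp(-|x|) / 2 (integrates to 1),
# scaled by bandwidth h: K_h(x) = K(x / h) / h.
w = np.exp(-np.abs(t - t0) / h) / (2 * h)

# Kernel-weighted sum of squared increments estimates sigma^2 at time t0.
spot_var = np.sum(w * dX**2)
print(spot_var)  # close to sigma^2 = 0.25
```

Since the weights approximately integrate to one over the observation window, the expected value of the weighted sum of squared increments is approximately $\sigma^2$ at the target time.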
Regularized quantile regression (QR) is a useful technique for analyzing heterogeneous data under potentially heavy-tailed error contamination in high dimensions. This paper provides a new analysis of the estimation/prediction error bounds of the global solution of $L_1$-regularized QR (QR-LASSO) and the local solutions of nonconvex regularized QR (QR-NCP) when the number of covariates is greater than the sample size. Our results build upon and significantly generalize the earlier work in the literature. For certain heavy-tailed error distributions and a general class of design matrices, the least-squares-based LASSO cannot achieve the near-oracle rate derived under the normality assumption no matter the choice of the tuning parameter. In contrast, we establish that QR-LASSO achieves the near-oracle estimation error rate for a broad class of models under conditions weaker than those in the literature. For QR-NCP, we establish the novel result that all local optima within a feasible region have desirable estimation accuracy. Our analysis applies not only to the hard sparsity setting commonly used in the literature but also to the soft sparsity setting, which permits many small coefficients. Our approach relies on a unified characterization of the global/local solutions of regularized QR via subgradients using a generalized Karush–Kuhn–Tucker condition. The theory of the paper establishes a key property of the subdifferential of the quantile loss function in high dimensions, which is of independent interest for analyzing other high-dimensional nonsmooth problems.
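The analysis above is built around the quantile check loss and its subdifferential. A minimal sketch of both follows; the value returned on the flat piece at zero is one arbitrary choice from the subdifferential interval $[\tau - 1, \tau]$:

```python
def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1 if u < 0 else 0))

def subgrad(u, tau):
    """A subgradient of rho_tau at u.

    The subdifferential at u = 0 is the whole interval [tau - 1, tau];
    returning 0.0 there is one valid choice.
    """
    if u > 0:
        return tau
    if u < 0:
        return tau - 1
    return 0.0

# At tau = 0.5 the check loss is half the absolute loss, so it is
# symmetric: positive and negative residuals of equal size cost the same.
print(check_loss(2.0, 0.5), check_loss(-2.0, 0.5))  # → 1.0 1.0
```

For $\tau \ne 0.5$ the loss is tilted: under-predictions and over-predictions are penalized asymmetrically, which is what makes the minimizer the $\tau$-th conditional quantile rather than the conditional mean.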
Here, we are sharing our second report about children affected by Multisystem Inflammatory Syndrome in Children (MIS-C). The aim of the present study was to update our knowledge about children with MIS-C. Furthermore, we compared the clinical manifestations, laboratory features and final outcomes of patients based on disease severity, in order to better understand the nature of this novel syndrome.
Methods
This retrospective study was conducted at Children's Medical Center Hospital, the hub of excellence in paediatrics in Iran, located in Tehran, Iran. We reviewed medical records of children admitted to the hospital with the diagnosis of MIS-C from July 2020 to October 2021.
Results
One hundred and twenty-two patients were enrolled in the study. Ninety-seven (79.5%) patients had mild to moderate MIS-C (MIS-C without overlap with Kawasaki disease (KD), n = 80; MIS-C overlapping with KD, n = 17) and 25 (20.5%) patients had severe MIS-C. The mean age of all patients was 6.4 ± 4.0 years. Nausea and vomiting (53.3%), skin rash (49.6%), abdominal pain (46.7%) and conjunctivitis (41.8%) were also frequently seen. Headache, chest pain, tachypnea and respiratory distress were significantly more common in patients with severe MIS-C (P < 0.0001, P = 0.021, P < 0.0001 and P < 0.0001, respectively). Positive anti-N severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) IgM and IgG were detected in 14 (33.3%) and 23 (46.9%) of tested patients, respectively. Albumin and vitamin D levels in children with severe MIS-C were significantly lower than in children with mild to moderate MIS-C (P < 0.0001, P = 0.05). Unfortunately, 2 (1.6%) of 122 patients died; both had severe MIS-C.
Conclusion
Patients with MIS-C in our region suffer from a wide range of signs and symptoms. Among laboratory parameters, hypoalbuminemia and low vitamin D levels may predict a more severe course of the disease. Coronary artery dilation is frequently seen among all patients, regardless of disease severity.
The monitoring of infrastructure assets using sensor networks is becoming increasingly prevalent. A digital twin in the form of a finite element (FE) model, as commonly used in design and construction, can help make sense of the copious amount of collected sensor data. This paper demonstrates the application of the statistical finite element method (statFEM), which provides a principled means of synthesizing data and physics-based models, in developing a digital twin of a self-sensing structure. As a case study, an instrumented steel railway bridge of $ 27.34\hskip1.5pt \mathrm{m} $ length located along the West Coast Mainline near Staffordshire in the UK is considered. Using strain data captured from fiber Bragg grating sensors at 108 locations along the bridge superstructure, statFEM can predict the “true” system response while taking into account the uncertainties in sensor readings, applied loading, and FE model misspecification errors. Longitudinal strain distributions along the two main I-beams are both measured and modeled during the passage of a passenger train. The statFEM digital twin is able to generate reasonable strain distribution predictions at locations where no measurement data are available, including at several points along the main I-beams and on structural elements on which sensors are not even installed. The implications for long-term structural health monitoring and assessment include optimization of sensor placement and performing more reliable what-if analyses at locations and under loading scenarios for which no measurement data are available.
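The data/model synthesis that statFEM formalizes can be illustrated, in a deliberately simplified scalar setting that omits the model-misspecification and loading terms of the full method, by a conjugate Gaussian update of an FE prediction with noisy sensor readings; all numbers below are hypothetical:

```python
import numpy as np

# Toy 1-D illustration: a Gaussian FE prior on the true response z,
# conditioned on repeated noisy sensor readings of z.
mu_fe, var_fe = 100.0, 25.0           # FE-predicted strain and its uncertainty
readings = np.array([108.0, 111.0, 109.0])
var_noise = 9.0                        # sensor noise variance (assumed known)

# Conjugate Gaussian update: precision-weighted combination of prior and data.
prec_post = 1.0 / var_fe + len(readings) / var_noise
mu_post = (mu_fe / var_fe + readings.sum() / var_noise) / prec_post
var_post = 1.0 / prec_post

print(round(mu_post, 2), round(var_post, 2))  # → 108.33 2.68
```

The posterior mean sits between the FE prediction and the sensor average, pulled toward whichever source is more precise, and the posterior variance is smaller than either source alone, which is the qualitative behavior that lets the digital twin predict the “true” response at instrumented locations.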
One complication in mortality modelling is capturing the impact of risk factors that contribute to mortality differentials between different populations. Evidence has suggested that mortality differentials tend to diminish over age. Classical methods such as the Gompertz law attempt to capture mortality patterns over age using intercept and slope parameters, possibly causing an unjustified mortality crossover at advanced ages when applied independently to different populations. In recent research, Richards (Scandinavian Actuarial Journal 2020(2), 110–127) proposed a Hermite spline (HS) model that describes the age pattern of mortality differentials using one parameter and circumvents an unreasonable crossover by default. The original HS model was applied to pension data at individual level in the age dimension only. This paper extends the method to model population mortality in both age and period dimensions. Our results indicate that in addition to possessing desirable fitting properties, the HS approach can produce accurate mortality forecasts, compared with the Gompertz and P-splines models.
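The crossover problem mentioned above has a simple closed form: under independently fitted Gompertz laws $\log \mu_i(x) = \alpha_i + \beta_i x$, the two hazards cross at $x^* = (\alpha_1 - \alpha_2)/(\beta_2 - \beta_1)$. A sketch with hypothetical parameters, where the lower-mortality population has the steeper slope:

```python
import math

# Gompertz law: log mu(x) = alpha + beta * x, fitted per population.
# Hypothetical parameters: population 2 starts with lower mortality but a
# steeper slope, forcing an artificial crossover at an advanced age.
alpha1, beta1 = -9.0, 0.085
alpha2, beta2 = -10.0, 0.095

x_cross = (alpha1 - alpha2) / (beta2 - beta1)

def mu(alpha, beta, x):
    """Gompertz force of mortality at age x."""
    return math.exp(alpha + beta * x)

print(x_cross)  # crossover age implied by the two independent fits
```

Below age $x^*$ population 1 has the higher hazard; above it the ordering reverses, even though nothing in the data may justify the reversal. This is the artefact the one-parameter HS differential avoids by construction.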
Let X be a continuous-time strongly mixing or weakly dependent process and let T be a renewal process independent of X. We show general conditions under which the sampled process $(X_{T_i},T_i-T_{i-1})^{\top}$ is strongly mixing or weakly dependent. Moreover, we explicitly compute the strong mixing or weak dependence coefficients of the renewal sampled process and show that exponential or power decay of the coefficients of X is preserved (at least asymptotically). Our results imply that essentially all central limit theorems available in the literature for strongly mixing or weakly dependent processes can be applied when renewal sampled observations of the process X are at our disposal.
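The construction of the renewal-sampled process can be sketched numerically. The OU path and exponential inter-arrival times below are illustrative stand-ins for $X$ and $T$ (the OU process is a standard strongly mixing example, and exponential gaps make $T$ a Poisson process, a special case of a renewal process):

```python
import numpy as np

rng = np.random.default_rng(1)

# Continuous-time process X: an Ornstein-Uhlenbeck path on a fine grid,
# via Euler-Maruyama for dX = -0.5 X dt + dW.
dt, n = 0.001, 200_000
noise = np.sqrt(dt) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] * (1.0 - 0.5 * dt) + noise[i]

# Independent renewal process T: i.i.d. exponential inter-arrival times.
gaps = rng.exponential(scale=0.5, size=300)
T = np.cumsum(gaps)
T = T[T < n * dt]  # keep sampling times inside the observation window

# The renewal-sampled process (X_{T_i}, T_i - T_{i-1}).
idx = (T / dt).astype(int)
sampled = np.column_stack((x[idx], np.diff(np.concatenate(([0.0], T)))))
print(sampled.shape)  # (number of renewal times, 2)
```

The result quoted in the abstract says that, under the stated conditions, mixing or weak dependence of $X$ carries over to this bivariate sampled sequence, so standard central limit theorems apply to statistics computed from `sampled`.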
Financial inclusion depends on providing adjusted services for citizens with disclosed vulnerabilities. At the same time, the financial industry needs to adhere to a strict regulatory framework, which is often in conflict with the desire for inclusive, adaptive, and privacy-preserving services. In this article we study how this tension impacts the deployment of privacy-sensitive technologies aimed at financial inclusion. We conduct a qualitative study with banking experts to understand their perspectives on service development for financial inclusion. We build and demonstrate a prototype solution based on open source decentralized identifiers and verifiable credentials software and report on feedback from the banking experts on this system. The technology is promising thanks to its selective disclosure of vulnerabilities to the full control of the individual. This supports GDPR requirements, but at the same time, there is a clear tension between introducing these technologies and fulfilling other regulatory requirements, particularly with respect to “Know Your Customer.” We consider the policy implications stemming from these tensions and provide guidelines for the further design of related technologies.
We study large-deviation probabilities of Telecom processes appearing as limits in a critical regime of the infinite-source Poisson model elaborated by I. Kaj and M. Taqqu. We examine three different regimes of large deviations (LD) depending on the deviation level. A Telecom process $(Y_t)_{t \ge 0}$ scales as $t^{1/\gamma}$, where t denotes time and $\gamma\in(1,2)$ is the key parameter of Y. We must distinguish moderate LD ${\mathbb P}(Y_t\ge y_t)$ with $t^{1/\gamma} \ll y_t \ll t$, intermediate LD with $ y_t \approx t$, and ultralarge LD with $ y_t \gg t$. The results we obtain essentially depend on another parameter of Y, namely the resource distribution. We solve completely the cases of moderate and intermediate LD (the latter being the most technical one), whereas the ultralarge deviation asymptotics is found for the case of regularly varying distribution tails. In all the cases considered, the large-deviation level is essentially reached by the minimal necessary number of ‘service processes’.
In this short note we introduce two notions of dispersion-type variability orders, namely expected shortfall-dispersive (ES-dispersive) order and expectile-dispersive (ex-dispersive) order, which are defined by two classes of popular risk measures, the expected shortfall and the expectiles. These new orders can be used to compare the variability of two risk random variables. It is shown that either the ES-dispersive order or the ex-dispersive order is the same as the dilation order. This gives us some insight into parametric measures of variability induced by risk measures in the literature.
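As a point of reference for the risk measures involved, here is a minimal empirical expected-shortfall computation. Conventions vary across the literature; this sketch takes the upper $(1-p)$-tail average of the sample:

```python
import math

def expected_shortfall(sample, p):
    """Empirical ES at level p: average of the worst (1 - p) fraction.

    Convention (an assumption of this sketch): larger values are worse,
    and the tail holds ceil(len(sample) * (1 - p)) observations.
    """
    xs = sorted(sample)
    k = math.ceil(len(xs) * (1 - p))
    tail = xs[-k:]
    return sum(tail) / len(tail)

sample = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(expected_shortfall(sample, 0.8))  # average of the worst 20% → 9.5
```

Dispersion-type orders built from such measures compare, roughly, how fast tail averages grow with the level $p$ for two risks, rather than the tail averages themselves.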
Chronic food insecurity remains a challenge globally, exacerbated by climate change-driven shocks such as droughts and floods. Forecasting food insecurity levels and targeting vulnerable households is a priority for humanitarian programming to ensure timely delivery of assistance. In this study, we propose to harness a machine learning approach trained on high-frequency household survey data to infer the predictors of food insecurity and forecast household-level outcomes in near real-time. Our empirical analyses leverage the Measurement Indicators for Resilience Analysis (MIRA) data collection protocol implemented by Catholic Relief Services (CRS) in southern Malawi, a series of sentinel sites collecting household data monthly. When focusing on predictors of community-level vulnerability, we show that a random forest model outperforms other algorithms and that location and self-reported welfare are the best predictors of food insecurity. We also show performance results across several neural networks and classical models for various data modeling scenarios to forecast food security. We pose that problem as binary classification via dichotomization of the food security score based on two different thresholds, which results in two different positive-to-negative class ratios. Our best-performing model has an F1 score of 81% and an accuracy of 83% in predicting food security outcomes when the outcome is dichotomized based on a threshold of 16 and the predictor features consist of the historical food security score along with 20 variables selected by artificial intelligence explainability frameworks. These results showcase the value of combining high-frequency sentinel site data with machine learning algorithms to predict future food insecurity outcomes.
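The dichotomization-plus-classification-metrics pipeline can be sketched in a few lines. The scores, the threshold of 16, and the predictions below are hypothetical illustrations, not MIRA data:

```python
def dichotomize(scores, threshold):
    """Map a food-security score to a binary outcome (1 = insecure)."""
    return [1 if s >= threshold else 0 for s in scores]

def f1_and_accuracy(y_true, y_pred):
    """Standard binary F1 score and accuracy from paired labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return f1, acc

# Hypothetical scores dichotomized at a threshold of 16, as in the study.
scores = [10, 18, 25, 9, 16, 30, 5, 22]
y_true = dichotomize(scores, 16)   # [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]  # one false negative
print(f1_and_accuracy(y_true, y_pred))
```

Changing the threshold shifts the positive-to-negative class ratio, which is why the study reports results under two thresholds: the same model can look quite different under different class balances.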
Scale-free percolation is a stochastic model for complex networks. In this spatial random graph model, vertices $x,y\in\mathbb{Z}^d$ are linked by an edge with probability depending on independent and identically distributed vertex weights and the Euclidean distance $|x-y|$. Depending on the various parameters involved, we get a rich phase diagram. We study graph distance and compare it to the Euclidean distance of the vertices. Our main attention is on a regime where graph distances are (poly-)logarithmic in the Euclidean distance. We obtain improved bounds on the logarithmic exponents. In the light tail regime, the correct exponent is identified.
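In the standard formulation of this model, vertices $x, y$ are linked with probability $1 - \exp\!\big({-}\lambda W_x W_y / |x-y|^{\alpha}\big)$ for i.i.d. heavy-tailed weights $W$. A small sampler on a one-dimensional box follows; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Vertices 0..n-1 of a 1-D box; i.i.d. Pareto-type weights supply the
# "scale-free" heavy tail.
n, lam, alpha, tau = 50, 1.0, 2.5, 2.0
W = rng.pareto(tau, size=n) + 1.0

# Edge {i, j} is open with probability
#   1 - exp(-lam * W_i * W_j / |i - j|**alpha),
# independently across pairs.
adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(i + 1, n):
        p = 1.0 - np.exp(-lam * W[i] * W[j] / abs(i - j) ** alpha)
        adj[i, j] = adj[j, i] = rng.random() < p

print(adj.sum() // 2)  # number of edges in this sample
```

Large weights make long edges likely despite the polynomial distance penalty, which is the mechanism behind the (poly-)logarithmic graph distances studied in the regime above.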
We propose a series-based nonparametric specification test for a regression function when data are spatially dependent, the “space” being of a general economic or social nature. Dependence can be parametric, parametric with increasing dimension, semiparametric or any combination thereof, thus covering a vast variety of settings. These include spatial error models of varying types and levels of complexity. Under a new smooth spatial dependence condition, our test statistic is asymptotically standard normal. To prove the latter property, we establish a central limit theorem for quadratic forms in linear processes in an increasing dimension setting. Finite sample performance is investigated in a simulation study, with a bootstrap method also justified and illustrated. Empirical examples illustrate the test with real-world data.
The distribution of human leukocyte antigens in the population assists in matching solid organ donors and recipients when the typing methods used do not provide sufficiently precise information. This is made possible by linkage disequilibrium (LD), where alleles co-occur more often than random chance would suggest. There is a trade-off between the high bias and low variance of a broad sample from the population and the low bias but high variance of a focused sample. Some of this trade-off could be alleviated if sub-populations shared LD despite having different allele frequencies. These experiments show that Bayesian estimation can balance bias and variance by tuning the effective sample size of the reference panel, but the LD as represented by an additive or multiplicative copula is not shared.
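The linkage disequilibrium referred to here is classically quantified by $D = p_{AB} - p_A p_B$, often reported in the normalized form $D'$. A toy computation with hypothetical haplotype frequencies:

```python
def linkage_disequilibrium(p_ab, p_a, p_b):
    """Classical LD coefficient D = p_AB - p_A * p_B, plus normalized D'.

    D' divides D by its maximum attainable magnitude given the allele
    frequencies, so |D'| = 1 means complete disequilibrium.
    """
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d, d / d_max if d_max else 0.0

# Hypothetical frequencies: alleles A and B co-occur more often than
# independence (0.3 * 0.4 = 0.12) would suggest.
d, d_prime = linkage_disequilibrium(p_ab=0.2, p_a=0.3, p_b=0.4)
print(round(d, 2), round(d_prime, 3))  # → 0.08 0.444
```

A positive $D$ is exactly the "alleles co-occur more often than random chance would suggest" situation that makes donor-recipient haplotype inference possible from incomplete typing.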
A neural network framework is used to design a new Ni-based superalloy that surpasses the performance of IN718 for laser-blown-powder directed-energy-deposition repair applications. The framework utilized a large database comprising physical and thermodynamic properties for different alloy compositions to learn both composition-to-property and property-to-property relationships. The alloy composition space was based on IN718, although W was additionally included, and the limits on Al and Co content were allowed to increase compared with standard IN718, thereby allowing the alloy to approach the composition of ATI 718Plus® (718Plus). The composition with the highest probability of satisfying target properties, including phase stability, solidification strain, and tensile strength, was identified. The alloy was fabricated, and the properties were experimentally investigated. The testing confirms that this alloy offers advantages for additive repair applications over standard IN718.