Many applications in the geosciences require solving inverse problems to estimate the state of a physical system. Data assimilation provides a strong framework for doing so when the system is partially observed and its underlying dynamics are known to some extent. In its variational flavor, it can be seen as an optimal control problem in which the initial conditions are the control parameters. Such problems are often ill-posed, and regularization using explicit prior knowledge may be needed to enforce a satisfactory solution. In this work, we propose to use a deep prior, a neural architecture that generates candidate solutions and acts as implicit regularization. The architecture is trained in a fully unsupervised manner using the variational data assimilation cost, so that gradients are backpropagated through the dynamical model and then through the neural network. To demonstrate its use, we set up a twin experiment with a shallow-water toy model, in which we test various variational assimilation algorithms on the estimation of an ocean-like circulation.
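To fix ideas, here is a minimal sketch of the cost being minimized, in generic 4D-Var-style notation rather than the paper's own: writing $g_\theta$ for the generator network with fixed input $z$, $\mathcal{M}_{0\to k}$ for the dynamical model integrated to observation time $k$, $\mathcal{H}_k$ for the observation operators, $y_k$ for the observations, and $R_k$ for their error covariances, the network weights $\theta$ are optimized on

$$ J(\theta) \;=\; \sum_{k} \bigl\lVert \mathcal{H}_k\bigl(\mathcal{M}_{0\to k}(g_\theta(z))\bigr) - y_k \bigr\rVert^{2}_{R_k^{-1}}, $$

so that the generator replaces both the explicit control on the initial condition and the usual explicit background regularization term.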
In this article, the performance parameters of an electric vehicle were investigated, and its operating point was defined using the core components (battery, inverter, and motor). The test vehicle, a 2023 Cadillac Lyriq provided by General Motors Inc., was driven on specified road segments, and real-time data were retrieved through the integrated controller area network architecture. The neoVI–Fire 2 tool was connected to the vehicle system to record the dynamic data, and Vehicle Spy software was used to convert the data into a readable format. Finally, the vector electric vehicle operating point was proposed, and the corresponding behavior was interpreted. This methodology could assist researchers in understanding the dynamic behavior of electric vehicle parameters in order to develop integrated techniques that augment performance in real time.
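Purely as an illustrative sketch of how an operating point might be assembled from decoded CAN signals (the signal names and values below are hypothetical placeholders, not the vehicle's actual CAN database nor the vector operating point defined in the article):

import math

# Hypothetical decoded CAN sample (placeholder names and values).
sample = {
    "battery_voltage_V": 380.0,
    "battery_current_A": 210.0,   # positive = discharge
    "motor_speed_rpm": 6500.0,
    "motor_torque_Nm": 100.0,
}

# Electrical power drawn from the battery and mechanical power at the motor shaft.
battery_power_kW = sample["battery_voltage_V"] * sample["battery_current_A"] / 1e3
mech_power_kW = sample["motor_torque_Nm"] * sample["motor_speed_rpm"] * 2 * math.pi / 60 / 1e3
efficiency = mech_power_kW / battery_power_kW if battery_power_kW > 0 else float("nan")

# A simple (speed, torque, battery power) operating point for this sample.
operating_point = (sample["motor_speed_rpm"], sample["motor_torque_Nm"], battery_power_kW)
print(operating_point, round(efficiency, 3))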
This paper gives a new proof of confluence for Carraro and Guerrieri’s call-by-value lambda calculus λvσ with permutation rules. We adapt the compositional Z theorem to λvσ.
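As a reminder of the underlying notion, stated generically rather than in the paper's notation: a reduction relation $\to$ on a set of terms satisfies the Z property when there is a map $(\cdot)^\bullet$ such that

$$ t \to t' \;\Longrightarrow\; t' \to^{*} t^{\bullet} \ \text{and}\ t^{\bullet} \to^{*} t'^{\bullet}, $$

and the Z property implies confluence; the compositional variant assembles such a map for a union of reductions from maps defined for its components.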
The uptake of electric vehicles (EVs) and renewable energy technologies is changing the magnitude, variability, and direction of power flows in electricity networks. To ensure a successful transition to a net zero energy system, it will be necessary for a wide range of stakeholders to understand the impacts of these changing flows on networks. However, there is a gap between those with the data and capabilities to understand electricity networks, such as network operators, and those working on adjacent parts of the energy transition jigsaw, such as electricity suppliers and EV charging infrastructure operators. This paper describes the electric vehicle network analysis tool (EVENT), developed to help make network analysis accessible to a wider range of stakeholders in the energy ecosystem who might not have the bandwidth to curate and integrate disparate datasets and carry out electricity network simulations. EVENT analyses the potential impacts of low-carbon technologies on congestion in electricity networks, helping to inform the design of products and services. To demonstrate EVENT’s potential, we use an extensive smart meter dataset provided by an energy supplier to assess the impacts of electricity smart tariffs on networks. Results suggest both network operators and energy suppliers will have to work much more closely together to ensure that the flexibility of customers to support the energy system can be maximized, while respecting safety and security constraints within networks. EVENT’s modular and open-source approach enables integration of new methods and data, future-proofing the tool for long-term impact.
Recent years have seen a surge of interest in building deep learning-based, fully data-driven models for weather prediction. Such deep learning models, if trained on observations, can mitigate certain biases in current state-of-the-art weather models, some of which stem from the inaccurate representation of subgrid-scale processes. However, these data-driven models, being over-parameterized, require a large amount of training data, which may not be available from reanalysis (observational data) products. Moreover, an accurate, noise-free initial condition from which to start forecasting with a data-driven weather model is not available in realistic scenarios. Finally, deterministic data-driven forecasting models suffer from issues with long-term stability and unphysical climate drift, which makes them unsuitable for computing climate statistics. Given these challenges, previous studies have tried to pre-train deep learning-based weather forecasting models on large amounts of imperfect long-term climate model simulations and then re-train them on the available observational data. In this article, we propose a convolutional variational autoencoder (VAE)-based stochastic data-driven model that is pre-trained on an imperfect climate model simulation from a two-layer quasi-geostrophic flow and re-trained, using transfer learning, on a small number of noisy observations from a perfect simulation. This re-trained model then performs stochastic forecasting with a noisy initial condition sampled from the perfect simulation. We show that our ensemble-based stochastic data-driven model outperforms a baseline deterministic encoder–decoder-based convolutional model in terms of short-term skill, while remaining stable in long-term climate simulations and yielding an accurate climatology.
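For reference, one generic form of the objective such a conditional VAE maximizes, in both the pre-training and the transfer-learning stage, is the evidence lower bound (the article's architecture and any loss weighting are not reproduced here): with $x$ the current state and $x'$ the target state,

$$ \mathcal{L}(\phi,\theta) \;=\; \mathbb{E}_{q_\phi(z\mid x,x')}\bigl[\log p_\theta(x'\mid x, z)\bigr] \;-\; D_{\mathrm{KL}}\bigl(q_\phi(z\mid x,x')\,\Vert\,p(z)\bigr), $$

and ensemble forecasts are obtained at inference time by drawing several $z\sim p(z)$ and decoding each one conditioned on the (noisy) initial state.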
This paper extends the standard double-exponential jump-diffusion (DEJD) model to allow successive jumps to have different effects on the asset price process. The double-exponentially distributed jump sizes are no longer assumed to share the same parameters; instead, these parameters may take a series of different values to reflect growing or diminishing effects of the jumps. The mathematical analysis of the stock price requires the introduction of a number of distributions that extend the hypoexponential (HE) distribution. Under this generalized setting, the European option price is derived in closed form, which ensures computational convenience. Through numerical examples, we examine how the growing or diminishing severity of the jumps expected in the near future affects the return distributions, and we investigate how the option prices and the shapes of the implied volatility smiles are influenced by the varying severity of jumps. These results demonstrate the benefits of the modeling flexibility provided by our extension.
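For orientation, a hedged sketch of the dynamics in standard DEJD (Kou-type) notation, which is mine rather than the paper's: the asset price follows

$$ \frac{dS_t}{S_{t^-}} \;=\; \mu\,dt + \sigma\,dW_t + d\Bigl(\sum_{i=1}^{N_t}(V_i - 1)\Bigr), \qquad Y_i = \log V_i \sim f_i(y) = p_i\,\eta_{1,i}\,e^{-\eta_{1,i}y}\,\mathbf{1}_{\{y\ge 0\}} + q_i\,\eta_{2,i}\,e^{\eta_{2,i}y}\,\mathbf{1}_{\{y<0\}}, $$

where the classical DEJD takes the same $(p_i,\eta_{1,i},\eta_{2,i})$ for every jump, while the extension described here lets these parameters vary with the jump index $i$ to encode growing or diminishing jump effects.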
Despite the sine wave’s close links to the birth of electronic music in the mid-twentieth century, it has been little studied aesthetically, and no systematic review of its artistic usages exists. This article presents a brief literature review, followed by the results of a survey on the principles guiding sine wave-based works. On this basis, we put forward a typological framework contributing to an understanding of the application of the sine wave in music.
Open government and open data are often presented as the Asterix and Obelix of modern government: one cannot be discussed without involving the other. Modern government, in this narrative, should open itself up, be more transparent, and allow the governed to have a say in their governance. The use of technologies, and especially the communication of governmental data, is then thought to be one of the crucial instruments helping governments achieve these goals. Much open government data research therefore focuses on the publication of open government data, their reuse, and re-users. Recent research trends, by contrast, move away from this focus on data and emphasize the importance of studying open government data in practice, in interaction with practitioners, while simultaneously paying attention to their political character. This commentary looks more closely at the implications of emphasizing the practical and political dimensions of open government data. It argues that researchers should explicate how and in what way open government data policies present solutions, and to what kind of problems. Such explications should be based on a detailed empirical analysis of how different actors do or do not do open data. The key question to be continuously asked and answered when studying and implementing open government data is how the solutions that openness presents latch onto the problems they aim to solve.
The promised merits of data-driven innovation in general, and of algorithmic systems in particular, hardly need enumeration. However, as decision-making tasks are increasingly delegated to algorithmic systems, questions about accountability arise. These pressing questions of algorithmic accountability, particularly with regard to data-driven innovation in the public sector, deserve ample scholarly attention. This paper therefore brings together perspectives from governance studies and critical algorithm studies to assess how algorithmic accountability succeeds or falls short in practice, and analyses the Dutch System Risk Indication (SyRI) as an empirical case. Dissecting a concrete case teases out to what degree archetypical accountability practices and processes function in relation to algorithmic decision-making processes, and which new questions concerning algorithmic accountability emerge therein. The case is approached through the analysis of “scavenged” material. It was found that while these archetypical accountability processes and practices can be incredibly productive in dealing with algorithmic systems, they are simultaneously at risk. The current accountability configurations hinge predominantly on the ex ante sensitivity and responsiveness of the political fora. When these prove insufficient, mitigation in medias res or ex post is very difficult for other actants. In part, this is not a new phenomenon, but it is amplified in relation to algorithmic systems. Different fora ask the actor different kinds of medium-specific questions, from different perspectives and with varying power relations. These algorithm-specific considerations relate to the decision-making around an algorithmic system, its functionality, and its deployment. Strengthening the attentiveness of ex ante political accountability fora to these algorithm-specific considerations could help mitigate this.
Under the assumption that sequences of graphs equipped with resistances, associated measures, walks and local times converge in a suitable Gromov-Hausdorff topology, we establish asymptotic bounds on the distribution of the $\varepsilon$-blanket times of the random walks in the sequence. The precise nature of these bounds ensures convergence of the $\varepsilon$-blanket times of the random walks if the $\varepsilon$-blanket time of the limiting diffusion is continuous at $\varepsilon$ with probability 1. This result enables us to prove annealed convergence in various examples of critical random graphs, including critical Galton-Watson trees and the Erdős-Rényi random graph in the critical window. We highlight that proving continuity of the $\varepsilon$-blanket time of the limiting diffusion relies on the scale invariance of a finite measure that gives rise to realizations of the limiting compact random metric space, and therefore we expect our results to hold for other examples of random graphs with a similar scale invariance property.
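For orientation, one common formulation of the $\varepsilon$-blanket time of a process with local times $(L_t^x)$, normalized against the relevant invariant measure, is

$$ \tau_{\mathrm{bl}}(\varepsilon) \;=\; \inf\bigl\{ t \ge 0 : L_t^x \ge \varepsilon t \ \text{for all } x \bigr\}, \qquad \varepsilon \in (0,1), $$

that is, the first time at which every point has been visited at least an $\varepsilon$-fraction of the time predicted by stationarity; the paper's precise normalization may differ in detail.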
We introduce an approach for damage detection in gearboxes based on the analysis of sensor data with the multi-resolution dynamic mode decomposition (mrDMD). The application focus is the condition monitoring of wind turbine gearboxes under varying load conditions, in particular irregular and stochastic wind fluctuations. We analyze data stemming from the simulated vibration response of a simple nonlinear gearbox model in healthy and damaged scenarios and under different wind conditions. With mrDMD applied to time-delay snapshots of the sensor data, we can extract components of these vibration signals that highlight features related to damage and enable its identification. A comparison with Fourier analysis, time synchronous averaging, and empirical mode decomposition shows the advantages of the proposed mrDMD-based data analysis approach for damage detection.
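As a minimal, self-contained sketch of the core ingredient (exact DMD applied to time-delay snapshots of a one-dimensional vibration signal); the signal, delay depth, and rank below are illustrative assumptions, and mrDMD would additionally apply this decomposition recursively on successively shorter time windows:

import numpy as np

def hankel_snapshots(x, d):
    # Stack d time-delayed copies of the signal as snapshot columns.
    n = len(x) - d + 1
    return np.column_stack([x[i:i + n] for i in range(d)]).T  # shape (d, n)

def dmd(X, r):
    # Exact DMD: eigenvalues and modes of the best-fit linear map X2 ~ A X1.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Synthetic vibration-like signal: two tones plus noise (purely illustrative).
dt = 0.005
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 23 * t) + 0.05 * np.random.randn(t.size)

eigvals, modes = dmd(hankel_snapshots(x, d=100), r=10)
freqs_hz = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)  # frequencies encoded by the DMD eigenvalues
print(np.sort(freqs_hz))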
In the context of mortality forecasting, “rotation” refers to the phenomenon that mortality decline accelerates at older ages but decelerates at younger ages. Since rotation is typically subtle, it is difficult to confirm and model in a statistical, data-driven manner. In this paper, we attempt to overcome this challenge by proposing an alternative modeling approach. The approach encompasses a new model structure, which includes a component devoted to measuring rotation. It also features a modeling technique known as ANCOVA, which allows us to statistically detect rotation and extrapolate the phenomenon into the future. Our proposed approach yields plausible mortality forecasts that are similar to those produced by Li et al. [Extending the Lee-Carter method to model the rotation of age patterns of mortality decline for long-term projections. Demography 50 (6), 2037–205], and may be considered more advantageous than the approach of Li et al. in the sense that it is able to generate not only static but also stochastic forecasts.
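As background (the paper's exact parameterization is not reproduced here), the classical Lee-Carter structure being “rotated” is

$$ \log m_{x,t} \;=\; a_x + b_x k_t + \varepsilon_{x,t}, $$

where $m_{x,t}$ is the central death rate at age $x$ in year $t$, $a_x$ the age pattern, $k_t$ the period mortality index, and $b_x$ the age response to that index; rotation amounts to the age-response schedule $b_x$ effectively shifting weight from younger to older ages over time, which is what the additional model component is designed to measure.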
Model order reduction (MOR) can provide low-dimensional numerical models for fast simulation. Unlike intrusive methods, nonintrusive methods are attractive because they can be applied even without access to full-order models (FOMs). Since nonintrusive MOR methods rely strongly on snapshots of the FOMs, constructing good snapshot sets becomes crucial. In this work, we propose a novel active-learning-based approach for use in conjunction with nonintrusive MOR methods. It is based on two crucial novelties. First, our approach uses joint space sampling to prepare a data pool of candidate training data. The training data are then selected from this pool using a greedy strategy supported by an error estimator based on Gaussian process regression. Second, we introduce a case-independent validation strategy based on probably approximately correct learning. While the methods proposed here can be applied to different MOR methods, we test them here with artificial neural networks and operator inference.
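A minimal sketch of the greedy, Gaussian-process-guided selection loop, assuming a one-dimensional parameter domain and using the GP's predictive standard deviation as a stand-in for the error estimator; the toy full-order-model output below is illustrative and not the estimator or sampling scheme used in the paper:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def fom_output(mu):
    # Stand-in for an expensive full-order-model quantity of interest.
    return np.sin(3 * mu) + 0.5 * mu**2

pool = np.linspace(0.0, 2.0, 200)                              # candidate parameter pool
selected = list(rng.choice(pool.size, size=3, replace=False))  # small initial design

for _ in range(10):
    X = pool[selected].reshape(-1, 1)
    y = np.array([fom_output(m) for m in pool[selected]])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True).fit(X, y)
    _, std = gp.predict(pool.reshape(-1, 1), return_std=True)
    std[selected] = 0.0                                        # never re-select a chosen parameter
    selected.append(int(np.argmax(std)))                       # greedy step: largest estimated error

print(np.sort(pool[selected]))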
The book graph $B_n ^{(k)}$ consists of $n$ copies of $K_{k+1}$ joined along a common $K_k$. In the prequel to this paper, we studied the diagonal Ramsey number $r(B_n ^{(k)}, B_n ^{(k)})$. Here we consider the natural off-diagonal variant $r(B_{cn} ^{(k)}, B_n^{(k)})$ for fixed $c \in (0,1]$. In this more general setting, we show that an interesting dichotomy emerges: for very small $c$, a simple $k$-partite construction dictates the Ramsey function and all nearly-extremal colourings are close to being $k$-partite, while, for $c$ bounded away from $0$, random colourings of an appropriate density are asymptotically optimal and all nearly-extremal colourings are quasirandom. Our investigations also open up a range of questions about what happens for intermediate values of $c$.
This paper focuses on the design and prototyping of a robotic brace dedicated to the treatment of scoliosis. Scoliosis is an abnormal spinal curvature affecting 1–3% of children and constitutes a major therapeutic problem. In moderate cases of deformity, passive brace treatment is performed. However, this approach can lead to significant patient discomfort. We therefore propose a robotic solution providing greater mobility and the possibility of adapting the procedure to each patient. The robotic brace we built and tested is composed of three specific rings adapted to the patient’s torso. Each independent module, formed by two consecutive rings, is moved by a Stewart–Gough platform-type mechanism. As the robotic brace is lightweight, it offers better portability and improves the patient’s comfort.
The first part of the paper reviews the state of the art of bracing techniques, from passive to active orthoses. Next, the mechatronics of the device is detailed, and the robot’s kinematic models are developed. The motion control principle is then given. In the last part, motion tests conducted with a healthy subject validate the choice of brace architecture and its position and motion control strategies.
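For such a Stewart–Gough-type module linking two consecutive rings, the inverse kinematics is typically available in closed form; in notation that is mine rather than the paper’s, for each leg $i$ joining the lower-ring attachment point $\mathbf{a}_i$ to the upper-ring attachment point $\mathbf{b}_i$,

$$ \ell_i \;=\; \bigl\lVert\, \mathbf{p} + R\,\mathbf{b}_i - \mathbf{a}_i \,\bigr\rVert, \qquad i = 1,\dots,6, $$

where $\mathbf{p}$ and $R$ are the position and orientation of the upper ring relative to the lower one; the forward kinematics, by contrast, generally requires a numerical solution.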
This paper develops an estimation method for the mean and covariance functions of functional data with additional covariate information. Combining the strengths of local linear smoothing and a general weighting scheme, we are able to explicitly characterize the mean and covariance functions while incorporating covariates, both for irregularly spaced and sparsely observed longitudinal data, as typically encountered in engineering or biomedical studies, and for densely measured functional data. Theoretically, we establish the uniform convergence rates of the estimators under the general weighting scheme. A Monte Carlo simulation is conducted to investigate the finite-sample performance of the proposed approach. Two applications, involving children's growth data and a white matter tract dataset obtained from the Alzheimer's Disease Neuroimaging Initiative study, are also provided.
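For concreteness (in generic notation and ignoring the additional covariate), the local linear estimator of the mean function solves, at each time point $t$,

$$ \min_{\beta_0,\beta_1} \sum_{i=1}^{n}\sum_{j=1}^{N_i} w_{ij}\, K_h(t_{ij}-t)\,\bigl\{Y_{ij}-\beta_0-\beta_1(t_{ij}-t)\bigr\}^2, \qquad \hat\mu(t)=\hat\beta_0, $$

where $K_h$ is a kernel with bandwidth $h$ and the weights $w_{ij}$ encode the general weighting scheme (for example, equal weight per observation versus equal weight per subject); the covariance surface is estimated analogously from raw covariances.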
For decades, proponents of the Internet have promised that it would one day provide a seamless way for everyone in the world to communicate with each other, without introducing new boundaries, gatekeepers, or power structures. What happened? This article explores the system-level characteristics of a key design feature of the Internet that helped it to achieve widespread adoption, as well as the system-level implications of certain patterns of use that have emerged over the years as a result of that feature. Such patterns include the system-level acceptance of particular authorities, mechanisms that promote and enforce the concentration of power, and network effects that implicitly penalize those who do not comply with decisions taken by privileged actors. We provide examples of these patterns and offer some key observations, toward the development of a general theory of why they emerged despite our best efforts, and we conclude with some suggestions on how we might mitigate the worst outcomes and avoid similar experiences in the future.
In this paper, we consider a mixed dividend strategy in a dual risk model. The mixed dividend strategy combines a threshold dividend with a dividend subject to Parisian implementation delays under periodic observation. Given a series of discrete observation points, when the surplus level is larger than the predetermined bonus barrier at an observation point, the dividend with Parisian implementation delay is carried out immediately, and the threshold dividend is paid continuously during the delay period. We study the Gerber-Shiu expected discounted penalty function and the expected discounted dividend payments before ruin in this dual risk model. Numerical illustrations are given to study the influence of the relevant parameters on ruin-related quantities and the selection of the optimal dividend barrier for a given initial surplus level.
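As a baseline for the notation (the paper's specific observation and dividend mechanisms are layered on top of this), the classical dual risk model takes the surplus

$$ U(t) \;=\; u - c\,t + \sum_{i=1}^{N(t)} X_i, $$

with expenses paid continuously at rate $c$ and gains $X_i$ arriving at the jumps of a Poisson process $N$; since ruin occurs when the surplus drifts down to zero, the Gerber-Shiu expected discounted penalty function reduces in its simplest form to the Laplace transform of the ruin time $\tau$,

$$ \phi(u) \;=\; \mathbb{E}\bigl[\, e^{-\delta \tau}\,\mathbf{1}_{\{\tau<\infty\}} \,\big|\, U(0)=u \,\bigr]. $$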