This paper studies an optimal deterministic investment problem for a DC pension plan member facing inflation risk. We describe the price processes of the inflation-indexed bond and the stock by a continuous diffusion process and a jump diffusion process with random parameters, respectively. The contribution rate, linked to the income of the DC plan member, is assumed to be a non-Markovian adapted process. Under the mean-variance criterion, we use Malliavin calculus to derive a characterization of the optimal deterministic investment strategy. In some special cases, we obtain explicit expressions for the optimal deterministic strategies.
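For orientation, a representative special case of such a model (with deterministic coefficients, whereas the paper allows random ones) writes the inflation-indexed bond price $I_t$ and the stock price $S_t$ as

\[
\frac{dI_t}{I_t} = \mu_I(t)\,dt + \sigma_I(t)\,dW_t,
\qquad
\frac{dS_t}{S_{t^-}} = \mu_S(t)\,dt + \sigma_S(t)\,dW_t + \gamma(t)\,d\widetilde N_t,
\]

where $W$ is a Brownian motion and $\widetilde N$ a compensated Poisson process, with the mean-variance problem posed over deterministic strategies $\pi$:

\[
\min_{\pi\ \text{deterministic}} \operatorname{Var}\big[X_T^{\pi}\big]
\quad\text{subject to}\quad \mathbb{E}\big[X_T^{\pi}\big] = d,
\]

for terminal wealth $X_T^{\pi}$ and target $d$.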
Generative neural networks (GNNs) have successfully used human-created designs to generate novel 3D models that combine concepts from disparate known solutions, which is an important aspect of design exploration. GNNs automatically learn a parameterization (or latent space) of a design space, as opposed to alternative methods that manually define a parameterization. However, GNNs are typically not evaluated using an explicit notion of physical performance, a critical capability needed for design. This work bridges this gap by proposing a method to extract a set of functional designs from the latent space of a point-cloud-generating GNN, without sacrificing the aforementioned aspects of a GNN that are appealing for design exploration. We introduce a sparsity-preserving cost function and initialization strategy for a genetic algorithm (GA) to optimize over the latent space of a point-cloud-generating autoencoder GNN. We examine two test cases: an example of generating ellipsoid point clouds subject to a simple performance criterion, and a more complex example of extracting 3D designs with a low coefficient of drag. Our experiments show that the modified GA yields a diverse set of functionally superior designs while maintaining similarity to human-generated designs in the training data set.
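As a hedged illustration of the optimization loop (not the paper's implementation; `decode`, `performance`, `encode`, and the penalty weight `lam` are hypothetical stand-ins, and the paper's cost to minimize appears here as a fitness to maximize), a GA over the latent space with a sparsity-preserving penalty and data-driven initialization might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(z, decode, performance, z_ref, lam=0.1):
    """Score a latent vector: physical performance of the decoded design,
    minus a penalty for activating latent coordinates that are (near) zero
    in a reference training code z_ref -- a stand-in for sparsity preservation."""
    penalty = np.abs(z[np.isclose(z_ref, 0.0)]).sum()
    return performance(decode(z)) - lam * penalty

def ga_step(pop, decode, performance, z_ref, sigma=0.05):
    """One generation: rank by fitness, keep the top half, refill by
    mutating survivors with Gaussian noise."""
    scores = np.array([fitness(z, decode, performance, z_ref) for z in pop])
    survivors = pop[np.argsort(scores)[::-1][: len(pop) // 2]]
    children = survivors + sigma * rng.standard_normal(survivors.shape)
    return np.concatenate([survivors, children])

# Initializing from encodings of training designs keeps the search on the
# data manifold (hypothetical `encode` and `training_designs`):
#   pop = np.stack([encode(x) for x in training_designs])
#   for _ in range(200):
#       pop = ga_step(pop, decode, performance, z_ref=pop[0])
```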
In this process-based study, we introduced a collocation tool with a new interface and advanced search features and examined how a class of EFL college students interacted with it. To elicit their tool consultation behaviors, we designed a vocabulary test with collocation questions. The students’ use of the tool to answer the vocabulary questions was screen-recorded for further analysis, serving as the major data source. One-on-one interviews with selected students were then conducted to clarify issues related to the study and their experience of using the tool. The findings indicated that the pattern-based tool was efficient in helping students solve collocation problems. This paper concludes with some pedagogical implications and suggestions for further research.
Answer set programming (ASP) is a paradigm for modeling knowledge-intensive domains and solving challenging reasoning problems. In ASP solving, a typical strategy is to preprocess problem instances by rewriting complex rules into simpler ones. Normalization is a rewriting process that removes extended rule types altogether in favor of normal rules. Recently, such techniques have led to optimization rewriting in ASP, where the goal is to boost answer set optimization by refactoring the optimization criteria of interest. In this paper, we present a novel, general, and effective technique for optimization rewriting based on comparator networks, which are specific kinds of circuits for reordering the elements of vectors. The idea is to connect an ASP encoding of a comparator network to the literals being optimized and to redistribute the weights of these literals over the structure of the network. The encoding captures information about the weight of an answer set in auxiliary atoms in a structured way that is proven to yield exponential improvements during branch-and-bound optimization on an infinite family of example programs. The comparator network used can be tuned freely, for example, to find the best size for a given benchmark class. Experiments show accelerated optimization performance on several benchmark problems.
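To make the central object concrete: a comparator network is a fixed sequence of compare-exchange gates, and on 0/1 inputs a sorting network turns the number of true literals into positional information, which is what allows weights to be redistributed over the network's structure. The paper encodes such networks in ASP; the sketch below, in Python purely for illustration, builds a simple odd-even transposition network (the paper's construction may well differ) and runs it on a 0/1 vector.

```python
def odd_even_transposition_network(n):
    """Comparator network (layers of wire-index pairs) that sorts any
    n-element input: n rounds of alternating even/odd adjacent compares."""
    return [[(i, i + 1) for i in range(r % 2, n - 1, 2)] for r in range(n)]

def apply_network(layers, xs):
    """Run the network: each comparator routes the smaller value to the
    lower-indexed wire. On 0/1 inputs the sorted output makes the count
    of 1s (e.g., satisfied literals) readable from wire positions."""
    xs = list(xs)
    for layer in layers:
        for i, j in layer:
            if xs[i] > xs[j]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs

net = odd_even_transposition_network(4)
assert apply_network(net, [1, 0, 1, 0]) == [0, 0, 1, 1]
```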
Algebraic effects are computational effects that can be described with a set of basic operations and equations between them. As many interesting effect handlers do not respect these equations, most approaches assume a trivial theory, sacrificing both reasoning power and safety. We present an alternative approach where the type system tracks equations that are observed in subparts of the program, yielding a sound and flexible logic, and paving the way for practical optimisations and reasoning tools.
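A standard concrete instance of this tension (not taken from the paper): nondeterministic choice comes with the equation $\mathrm{or}(x, x) = x$, and a handler that collects all results observably violates it. A minimal Python sketch, with computations as explicit trees:

```python
from dataclasses import dataclass

@dataclass
class Pure:
    value: object          # a finished computation

@dataclass
class Choose:
    left: object           # sub-computation for the left branch
    right: object          # sub-computation for the right branch

def handle_first(c):
    """Handler that always takes the left branch; respects or(x, x) = x."""
    while isinstance(c, Choose):
        c = c.left
    return c.value

def handle_collect(c):
    """Handler that collects every result; violates or(x, x) = x, since
    duplicated branches yield duplicated results."""
    if isinstance(c, Pure):
        return [c.value]
    return handle_collect(c.left) + handle_collect(c.right)

prog = Choose(Pure(1), Pure(1))         # or(x, x)
assert handle_first(prog) == 1          # indistinguishable from plain x
assert handle_collect(prog) == [1, 1]   # observably differs from [1]
```

A type system of the kind the abstract describes would record that a handler like `handle_collect` only validates the equations it actually respects.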
While network research often focuses on social integration as a predictor of health, a less-explored idea is that connections to dissimilar others may benefit well-being. As such, this study investigates whether network diversity is associated with changes in four health outcomes over a 3-year period in the U.S.A. Specifically, we focus on how an underexplored measure of network diversity—educational attainment assortativity—is associated with common self-reported outcomes: propensity to exercise, body-mass index, mental health, and physical health. We extend prior research by conducting multilevel analyses using this measure of diversity while adjusting for a range of socio-demographic and network confounders. Data are drawn from a longitudinal probability sample of U.S. adults (n = 10,679) in which respondents reported information about themselves and eight possible alters during three yearly surveys (2013–2015). We find, first, that higher educational attainment is associated with more educationally insular networks, while less-educated adults have more educationally diverse networks. Results further suggest that having educationally similar networks is associated with higher body-mass index among the less educated. Further exploration of the relationship between ego network diversity, tie strength, and health is warranted.
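The diversity measure named here is standard and computable with off-the-shelf tools; a minimal sketch using networkx (the study's per-respondent ego-network computation may differ):

```python
import networkx as nx

# Toy ego network: the "education" attribute holds attainment categories.
G = nx.Graph([("ego", "a1"), ("ego", "a2"), ("a1", "a2")])
nx.set_node_attributes(
    G, {"ego": "college", "a1": "college", "a2": "high_school"}, "education"
)

# Attribute assortativity lies in [-1, 1]: positive values mean ties tend to
# connect people with the same educational attainment (insular networks),
# negative values mean educationally diverse ties.
r = nx.attribute_assortativity_coefficient(G, "education")
print(round(r, 3))  # -0.5 for this toy graph
```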
Individuals filling specialized, interdependent organizational roles achieve coordinated task execution through effective communication channels. Such channels enable regular access to information, opportunities, and assistance that may enhance one’s understanding of the task environment. However, the time and effort devoted to maintaining those channels may detract from one’s duties by turning attention away from the task environment. Disrupted task environments increase information requirements, thus creating a dilemma in which individuals must sustain benefits offered by important communication channels and relieve burdens imposed by ineffective channels. Using separable temporal exponential random graph models (STERGMs), this paper examines the relationship between situational awareness (SA) and the propensity to sustain or dissolve preexisting communication channels during 10 disruptive events experienced sequentially by a large, multifaceted military organization during a 2-week training exercise. Results provide limited evidence that increased SA detracts from tie preservation; instead, SA begins to predict tie preservation during the second week of the exercise. Patterns of organizational adaptation reveal that, over time, improvised coordinative roles increasingly fall upon those with elevated SA. These results suggest that over successive disruptions, the benefits of information provided by communication channels within interdependent, role-specialized organizations begin to outweigh the costs of sustaining those channels.
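STERGMs themselves are typically fit with the R tergm package; purely as a descriptive companion, the dyad-level quantity that the dissolution side of such models concerns (which ties persist between consecutive network snapshots) can be sketched in Python:

```python
import networkx as nx

def tie_persistence(g_prev: nx.Graph, g_next: nx.Graph) -> float:
    """Fraction of ties in the earlier network snapshot that survive
    into the later one."""
    prev_edges = set(map(frozenset, g_prev.edges()))
    next_edges = set(map(frozenset, g_next.edges()))
    if not prev_edges:
        return float("nan")
    return len(prev_edges & next_edges) / len(prev_edges)

week1 = nx.Graph([(1, 2), (2, 3), (3, 4)])
week2 = nx.Graph([(1, 2), (3, 4), (4, 5)])
print(tie_persistence(week1, week2))  # 2 of 3 week-one ties survive
```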
This third edition capitalizes on the success of the previous editions and leverages the important advancements in visualization, data analysis, and sharing capabilities that have emerged in recent years. It serves as an accelerated guide to decision support designs for consultants, service professionals, and students. This 'fast track' enables those who may never have used Excel to ramp up their skills to a level of mastery that will allow them to integrate Excel with widely available associated applications, make use of intelligent data visualization and analysis techniques, automate activity through basic VBA designs, and develop easy-to-use interfaces for customizing use. The content of this edition has been completely restructured and revised, with updates that correspond with the latest versions of software and references to contemporary add-in development across platforms. It also features best practices in design and analytical consideration, including methodical discussions of problem structuring and evaluation, as well as numerous case examples from practice.
This paper investigates volatility in regime-switching models based on geometric Brownian motion, with the drift and volatility coefficients modulated by a Markov chain. By developing explicit formulas for the occupation times of Markov chains, we analyze the difference between the global volatility of this model and the volatility caused by Brownian randomness, in order to measure the volatility caused by regime switching after justifying its existence. Exploiting this volatility structure, we improve methods for estimating the volatility parameters.
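In a standard formulation of such a model (consistent with, though not necessarily identical to, the paper's), the price follows

\[
dS_t = \mu(X_t)\,S_t\,dt + \sigma(X_t)\,S_t\,dW_t,
\]

with $X$ a finite-state Markov chain independent of the Brownian motion $W$, so that

\[
\log S_T = \log S_0 + \int_0^T \Big(\mu(X_t) - \tfrac{1}{2}\sigma^2(X_t)\Big)\,dt + \int_0^T \sigma(X_t)\,dW_t.
\]

Conditioning on the chain's path then decomposes the global variance into a Brownian part and a regime-switching part,

\[
\operatorname{Var}(\log S_T)
= \mathbb{E}\!\left[\int_0^T \sigma^2(X_t)\,dt\right]
+ \operatorname{Var}\!\left(\int_0^T \Big(\mu(X_t) - \tfrac{1}{2}\sigma^2(X_t)\Big)\,dt\right),
\]

and both terms are expressible through the occupation times $L_i(T) = \int_0^T \mathbf{1}\{X_t = i\}\,dt$, which is where explicit occupation-time formulas enter.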
We present a new model of guarded dependent type theory (GDTT), a type theory with guarded recursion and multiple clocks in which one can program with and reason about coinductive types. Productivity of recursively defined coinductive programs and proofs is encoded in types using guarded recursion and can therefore be checked modularly, unlike the syntactic checks implemented in modern proof assistants. The model is based on a category of covariant presheaves over a category of time objects, and quantification over clocks is modelled using a presheaf of clocks. To model the clock irrelevance axiom, crucial for programming with coinductive types, types must be interpreted as presheaves internally right orthogonal to the object of clocks. In the case of dependent types, this translates to a lifting condition similar to the one found in homotopy theoretic models of type theory, but here with an additional requirement of uniqueness of lifts. Since the universes defined by the standard Hofmann–Streicher construction in this model do not satisfy this property, the universes in GDTT must be indexed by contexts of clock variables. We show how to model these universes in such a way that inclusions of clock contexts give rise to inclusions of universes commuting with type operations on the nose.
The Internet of Things (IoT) has recently emerged as an innovative technology capable of empowering various areas such as healthcare, agriculture, smart cities, smart homes and supply chains with real-time, state-of-the-art sensing capabilities. Given its underlying potential, the technology has already seen exponential growth in a wide variety of use-cases across multiple application domains. As researchers around the globe continue to investigate its capabilities, there is collective agreement that to harness its full potential, IoT needs to sit upon a flexible network architecture with strong support for security, privacy, and trust. On the other hand, blockchain (BC) technology has recently come into prominence as a breakthrough technology with the potential to deliver valuable properties such as resiliency, support for integrity, anonymity, decentralization, and autonomous control. Several BC platforms have been proposed that may be suitable for different use-cases, including IoT applications. Consequently, integrating IoT and BC technology is seen as a potential solution to some crucial issues. To achieve this, however, there must be a clear understanding of the requirements of different IoT applications and of whether a BC platform can satisfy a particular application's underlying requirements. This paper aims to achieve this goal by describing an evaluation framework which can be utilized to select a suitable BC platform for a given IoT application.
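Purely as a hypothetical illustration of what such an evaluation framework might reduce to operationally (platform names, criteria, weights, and scores below are invented), a weighted requirements-matching step could look like this:

```python
# Rate each BC platform against the weighted requirements of a given IoT
# application and rank the candidates. All values are illustrative only.
REQUIREMENT_WEIGHTS = {"throughput": 0.4, "privacy": 0.3, "decentralization": 0.3}

PLATFORM_SCORES = {
    "platform_A": {"throughput": 0.9, "privacy": 0.4, "decentralization": 0.5},
    "platform_B": {"throughput": 0.5, "privacy": 0.8, "decentralization": 0.9},
}

def rank(platforms, weights):
    """Sort platforms by weighted sum of per-criterion scores, best first."""
    return sorted(
        platforms,
        key=lambda p: sum(weights[c] * platforms[p][c] for c in weights),
        reverse=True,
    )

print(rank(PLATFORM_SCORES, REQUIREMENT_WEIGHTS))  # ['platform_B', 'platform_A']
```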
The last two decades have seen a wave of exciting new developments in the theory of algorithmic randomness and its applications to other areas of mathematics. This volume surveys much of the recent work that has not been included in published volumes until now. It contains a range of articles on algorithmic randomness and its interactions with closely related topics such as computability theory and computational complexity, as well as wider applications in areas of mathematics including analysis, probability, and ergodic theory. In addition to being an indispensable reference for researchers in algorithmic randomness, the unified view of the theory presented here makes this an excellent entry point for graduate students and other newcomers to the field.
We present an overview of higher randomness and its recent developments. After an introduction, we provide in the second section some background on higher computability, presenting in particular $\Pi^1_1$ and $\Sigma^1_1$ sets from the viewpoint of the computability theorist. In the third section we give an overview of the different higher randomness classes: $\Delta^1_1$-randomness, $\Pi^1_1$-Martin-Löf randomness, higher weak-2 randomness, higher difference randomness, and $\Pi^1_1$-randomness. We then move on to study each of these classes, separating them and inspecting their respective lowness classes. We pay particular attention to $\Pi^1_1$-Martin-Löf randomness and $\Pi^1_1$-randomness: the former is the higher analogue of the most well-known and studied class in classical algorithmic randomness. We show in particular how to lift the main classical randomness theorems to the higher setting by incorporating continuity into higher reductions and relativisations. The latter, as we will see, exhibits many remarkable properties and has no analogue in classical randomness. Finally, in the eighth section, we study randomness along a higher hierarchy of complexity of sets, motivated by the notion of higher weak-2 randomness. We show that this hierarchy eventually collapses.
Highly critical application domains, like medicine and aerospace, require the use of strict design, implementation, and validation techniques. Functional languages have been used in these domains to develop synchronous dataflow programming languages for reactive systems. Causal stream functions and functional reactive programming (FRP) capture the essence of those languages in a way that is both elegant and robust. To guarantee that critical systems can operate under high stress over long periods of time, these applications require clear specifications of possible faults and hazards, and of how they are handled. Modeling failure is straightforward in functional languages, and many functional reactive abstractions incorporate support for failure or termination. However, handling unknown types of faults, and incorporating fault tolerance into FRP, requires a different construction and remains an open problem. This work demonstrates how to extend an existing functional reactive framework with fault tolerance features. At the value level, we tag faulty signals with reliability and probability information and use random testing to inject faults and validate system properties encoded in temporal logic. At the type level, we tag components with the kinds of faults they may exhibit and use type-level programming to obtain compile-time guarantees of key aspects of fault tolerance. Our approach is powerful enough to be used in systems with realistic complexity, and flexible enough to guide system analysis and design, validate system properties in the presence of faults, perform runtime monitoring, and study the effects of different fault tolerance mechanisms.
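The paper's framework is a functional reactive one; as a language-neutral sketch of the value-level idea only (all names here are hypothetical, and Python stands in for the FRP host language), a signal can be wrapped with a fault injector that tags each sample with reliability, and a toy 'globally' operator can check a property under injected faults:

```python
import random

def with_fault_injection(signal, p_fault, corrupt, rng=random.Random(42)):
    """Wrap a time-indexed signal so each sample is corrupted with
    probability p_fault; every sample carries a (value, reliability) tag."""
    def faulty(t):
        value = signal(t)
        if rng.random() < p_fault:
            return corrupt(value), 0.0   # faulty sample, reliability 0
        return value, 1.0 - p_fault     # nominal sample
    return faulty

def always(prop, signal, times):
    """Toy temporal-logic check: the 'globally' operator over sampled times."""
    return all(prop(signal(t)) for t in times)

altitude = with_fault_injection(lambda t: 100.0 + t, 0.05, lambda v: -v)
# Property: either the reading is positive, or it is flagged as unreliable.
print(always(lambda s: s[0] > 0 or s[1] == 0.0, altitude, range(100)))  # True
```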
This chapter explains functions and modules. Functions perform some operation: they can operate directly on data or indirectly via other functions. All the niceties of different kinds of functions are explained. In addition, the chapter treats the concept of modules, bits of code that are available to you or other programmers. The chapter shows how you can use existing modules in your own code and how you can write your own modules for others. Finally, the chapter goes into the proper way to document your functions and modules.
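For instance (assuming a Python context, as in many such texts; the module and function names are invented for illustration), a documented function inside a reusable module looks like this:

```python
"""units.py -- a small illustrative module."""

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit.

    >>> celsius_to_fahrenheit(100.0)
    212.0
    """
    return celsius * 9 / 5 + 32

# Another programmer can then reuse the module:
#   import units
#   print(units.celsius_to_fahrenheit(37.0))
```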
In this introductory survey, we provide an overview of the major developments of algorithmic randomness with an eye towards the historical development of the discipline. First we give a brief introduction to computability theory and the underlying mathematical concepts that later appear in the survey. Next we selectively cover four broad periods in which the primary developments in algorithmic randomness occurred: (1) the mid-1960s to mid-1970s, in which the main definitions of algorithmic randomness were laid out and the basic properties of random sequences were established; (2) the 1980s through the 1990s, which featured intermittent and important work from a handful of researchers; (3) the 2000s, during which there was an explosion of results as the discipline matured into a fully-fledged subbranch of computability theory; and (4) the early 2010s, in which ties between algorithmic randomness and other subfields of mathematics were discovered. The aim of this survey is to provide a point of entry for newcomers to the field and a useful reference for practitioners.