We devise schemes for producing, in the least possible time, p identical objects with n agents that work at differing speeds. This involves halting the process to transfer production across agent types. For the case of two types of agent, we construct schemes based on the Euclidean algorithm that seek to minimize the number of pauses in production.
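The paper's scheme construction is not reproduced here, but the Euclidean algorithm it builds on is easy to sketch. In this illustrative snippet, the count of division steps serves only as a hypothetical proxy for the number of production pauses; the function name and interface are not from the paper.

```python
def euclid_steps(a, b):
    """Run the Euclidean algorithm on (a, b), returning the gcd and the
    number of division steps (an illustrative proxy for pauses)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# For example, euclid_steps(12, 8) performs two division steps:
# (12, 8) -> (8, 4) -> (4, 0), giving gcd 4.
```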
For microscale heterogeneous partial differential equations (PDEs), this article further develops novel theory and methodology for their macroscale mathematical/asymptotic homogenization. It specifically encompasses the case of quasi-periodic heterogeneity with finite scale separation: no scale-separation limit is required. A key innovation herein is to analyse the ensemble of all phase-shifts of the heterogeneity. Dynamical systems theory then frames the homogenization as a slow manifold of the ensemble. Depending upon any perceived scale separation within the quasi-periodic heterogeneity, the homogenization may be done in either one step or two sequential steps: the results are equivalent. The theory not only assures us of the existence and emergence of an exact homogenization at finite scale separation, but also provides a practical, systematic method to construct the homogenization to any specified order. For a class of heterogeneities, we show that the macroscale homogenization is potentially valid down to lengths just twice that of the microscale heterogeneity. This methodology complements existing well-established results by providing a new, rigorous and flexible approach to homogenization that potentially also provides correct macroscale initial and boundary conditions, treatment of forcing and control, and analysis of uncertainty.
We construct a new stochastic interest rate model with two stochastic factors by introducing a stochastic long-run equilibrium level, which follows another Ornstein–Uhlenbeck process, into the Vasicek interest rate model. With the interest rate under the Black–Scholes model assumed to follow the newly proposed model, a closed-form representation of European option prices is presented once the analytical characteristic function of the underlying log-price under a forward measure is derived. To assess the model performance, a preliminary empirical study is conducted using the S&P 500 index and its options, with the Vasicek model and an alternative two-factor Vasicek model taken as benchmarks.
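The model's structure can be sketched with an Euler–Maruyama simulation of a short rate r whose long-run level theta itself follows an Ornstein–Uhlenbeck process. The parameter names (k, a, b, sigma, eta) and values below are illustrative only, not the paper's notation or calibration.

```python
import random

def simulate_two_factor_vasicek(r0, theta0, k, a, b, sigma, eta, dt, n, seed=0):
    """Euler-Maruyama sketch: dr = k(theta - r)dt + sigma dW1,
    dtheta = a(b - theta)dt + eta dW2, with theta the stochastic
    long-run level.  Returns the simulated path of r."""
    rng = random.Random(seed)
    r, theta = r0, theta0
    path = [r]
    for _ in range(n):
        dw1 = rng.gauss(0.0, dt ** 0.5)
        dw2 = rng.gauss(0.0, dt ** 0.5)
        theta += a * (b - theta) * dt + eta * dw2
        r += k * (theta - r) * dt + sigma * dw1
        path.append(r)
    return path
```

With the volatilities set to zero the scheme is deterministic and both factors revert to b, which is a quick sanity check on the drift terms.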
Clustering is a method of allocating data points into groups, known as clusters, based on similarity. The notion of expressing similarity mathematically and then maximizing it (minimizing dissimilarity) can be formulated as an optimization problem. Spectral clustering is one such approach, and it has been successfully applied to the visualization of clusterings and the mapping of points into clusters in two and three dimensions. Higher-dimensional problems remained untouched due to complexity and, most importantly, a lack of understanding of what “similarity” means in higher dimensions. In this paper, we apply spectral clustering to long time-series EEG (electroencephalogram) data. We develop several models, based on different similarity functions and different approaches to spectral clustering itself. The results of the numerical experiments demonstrate that the resulting models are accurate and can be used for time-series classification.
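A minimal illustration of the spectral idea, not the paper's models: build a Gaussian similarity matrix, form the unnormalized graph Laplacian, and split the points by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue). The similarity function and the power-iteration eigen-solver are illustrative choices.

```python
import math

def spectral_bipartition(points, sigma=1.0):
    """Toy spectral clustering for 1-D points: Gaussian similarity W,
    Laplacian L = D - W, labels from the sign of the Fiedler vector."""
    n = len(points)
    w = [[0.0 if i == j else
          math.exp(-((points[i] - points[j]) ** 2) / (2 * sigma ** 2))
          for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in w]
    # Shift so the smallest eigenvalues of L become the largest of M = cI - L.
    c = 2 * max(deg)
    m = [[(c - deg[i]) if i == j else w[i][j] for j in range(n)]
         for i in range(n)]
    # Power iteration on M, projecting out the constant eigenvector of L
    # each step, converges to the Fiedler vector.
    v = [math.sin(i + 1) for i in range(n)]  # arbitrary start vector
    for _ in range(500):
        mean = sum(v) / n
        v = [x - mean for x in v]
        v = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    return [1 if x > 0 else 0 for x in v]
```

On two well-separated groups the Fiedler vector is nearly piecewise constant with opposite signs, so the sign split recovers the clusters.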
We find solutions that describe the levelling of a thin fluid film, comprising a non-Newtonian power-law fluid, that coats a substrate and evolves under the influence of surface tension. We consider the evolution from periodic and localized initial conditions as separate cases. The particular (similarity) solutions in each of these two cases exhibit the generic property that the profiles are weakly singular (that is, higher-order derivatives do not exist) at points where the pressure gradient vanishes. Numerical simulations of the thin film equation, with either periodic or localized initial conditions, are shown to approach the appropriate particular solution.
Many industrial design problems are characterized by the lack of an analytical expression defining the relationship between design variables and chosen quality metrics. Evaluating the quality of new designs is therefore restricted to running a predetermined process such as physical testing of prototypes. When these processes carry a high cost, choosing how to gather further data can be very challenging, whether the end goal is to accurately predict the quality of future designs or to find an optimal design. In the multi-fidelity setting, one or more approximations of a design’s performance are available at varying costs and accuracies. Surrogate modelling methods have long been applied to problems of this type, combining data from multiple sources into a model which guides further sampling. Many challenges still exist, however; foremost among them is choosing when and how to rely on available low-fidelity sources. This tutorial-style paper presents an introduction to the field of surrogate modelling for multi-fidelity expensive black-box problems, including classical approaches and open questions in the field. An illustrative example using Australian elevation data shows the potential pitfalls of blindly trusting or ignoring low-fidelity sources, a question that has recently gained much interest in the community.
We continue discussion of row operations to solve linear systems. In particular, we see how to characterise when a system has no solutions (is inconsistent) and, if consistent, we show how the method can be used to find all (possibly infinitely many) solutions, and to express these in vector notation. Here, the notion of the rank of the system, which determines the number of free parameters in the general solutions, is shown to be important. Continuing the earlier discussion of portfolios, we explain how the existence of an arbitrage portfolio is determined by the existence or otherwise of state prices.
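The rank computation via row operations described above can be sketched as follows; the use of exact rational arithmetic and the helper name are illustrative choices, not the book's notation. For a consistent system in n unknowns, n minus the rank gives the number of free parameters in the general solution.

```python
from fractions import Fraction

def rank(matrix):
    """Rank of a matrix via Gaussian elimination with exact arithmetic."""
    rows = [[Fraction(x) for x in row] for row in matrix]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0])):
        # Find a row at or below position r with a nonzero entry in this column.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate the column entries below the pivot.
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r
```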
We start the chapter with a mathematical model of how consumers might anticipate market trends and what effect this will have on the evolution of prices. This leads us to second-order differential equations. We then embark on describing how to solve linear constant-coefficient second-order differential equations. The general solution is the sum of the solution of a corresponding homogeneous equation and a particular solution. Analogously to the solution of second-order recurrence equations, there is a general method for determining the solution of the homogeneous equation, involving the solution of a corresponding quadratic equation known as the auxiliary equation. We explain how to find particular solutions and how to use initial conditions. We also discuss the behaviour of the solutions obtained.
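As a small sketch of the auxiliary-equation step (function name illustrative): for y'' + a y' + b y = 0, the auxiliary equation is m² + a m + b = 0, and distinct roots m₁, m₂ give the general homogeneous solution A e^{m₁t} + B e^{m₂t}.

```python
import cmath

def auxiliary_roots(a, b):
    """Roots of m^2 + a*m + b = 0, the auxiliary equation of
    y'' + a y' + b y = 0; complex roots signal oscillatory solutions."""
    disc = cmath.sqrt(a * a - 4 * b)
    return ((-a + disc) / 2, (-a - disc) / 2)
```

For example, y'' − 5y' + 6y = 0 has auxiliary roots 3 and 2, so the general solution is A e^{3t} + B e^{2t}, while y'' + y = 0 has roots ±i, giving sines and cosines.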
The derivative is introduced as an instantaneous rate of change and it is shown how this can be determined from first principles. Techniques (sum, product, quotient and composite function rules) are then explained and the connection with small changes is illustrated. Economic interpretations via marginals are given.
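The first-principles definition can be sketched numerically with a difference quotient (the function name and step size are illustrative):

```python
def derivative_first_principles(f, x, h=1e-6):
    """Approximate f'(x) from first principles via (f(x+h) - f(x)) / h
    for a small increment h."""
    return (f(x + h) - f(x)) / h
```

For f(x) = x², this approximates f'(3) = 6 to within roughly the size of h, illustrating the connection between derivatives and small changes.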
Elasticity of demand is introduced and it is shown how this characterises how revenue will change upon a price increase (the two distinct possibilities being represented by elastic and inelastic demand). Profit maximisation is considered in general, and it is shown that, when maximising profit, marginal revenue and marginal cost must be equal. Two very different cases are then studied: that in which the firm is a monopoly and that of perfect competition.
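As an illustrative sketch (names and sign convention are one common choice, not necessarily the book's): point elasticity is e = (p/q)·dq/dp, and |e| > 1 (elastic) means revenue falls when price rises, while |e| < 1 (inelastic) means revenue rises.

```python
def point_elasticity(q, dq_dp, p):
    """Point elasticity of demand e = (p / q) * dq/dp at price p,
    quantity q, with demand slope dq/dp."""
    return (p / q) * dq_dp
```

For the linear demand q = 100 − 2p at p = 10 (so q = 80 and dq/dp = −2), the elasticity is −0.25: demand is inelastic there, so a small price rise increases revenue.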
Optimisation of two-variable functions is motivated via the example of a firm producing two goods, where the concepts of complementary and substitute goods are discussed. The general idea of a critical point is discussed and it is explained that there can be different types of critical point: maxima, minima and saddle points. It is explained how the second partial derivatives may be used to determine what type of critical point one has. This is shown explicitly for the case in which the two-variable function is quadratic, and then stated in general. Examples involving profit-maximisation are given.
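The second-derivative test described above can be sketched as follows (an illustrative helper, not the book's notation): with D = f_xx f_yy − f_xy², D > 0 with f_xx > 0 gives a minimum, D > 0 with f_xx < 0 a maximum, and D < 0 a saddle point.

```python
def classify_critical_point(fxx, fxy, fyy):
    """Second-derivative test at a critical point of f(x, y),
    given the second partial derivatives there."""
    d = fxx * fyy - fxy ** 2
    if d > 0:
        return "minimum" if fxx > 0 else "maximum"
    if d < 0:
        return "saddle"
    return "inconclusive"
```

For instance, f(x, y) = x² + y² has f_xx = f_yy = 2, f_xy = 0 at the origin, a minimum; f(x, y) = x² − y² gives a saddle there.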
The determinants of 2 × 2 and 3 × 3 matrices are defined explicitly, and a more general way of (defining and) calculating determinants of larger matrices is described, involving the use of row operations to transform a matrix to upper-triangular form. It is then explained that a non-zero determinant is equivalent to invertibility. Cramer's rule is presented and a general method (based on the co-factor matrix) is given for inverting 3 × 3 matrices, an alternative to the row operations procedure described in .
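The row-operations procedure for determinants can be sketched as follows (exact rational arithmetic is an illustrative choice): reduce to upper-triangular form, flip the sign for each row swap, and multiply the diagonal entries.

```python
from fractions import Fraction

def det(matrix):
    """Determinant via row reduction to upper-triangular form.
    Row swaps flip the sign; adding multiples of rows leaves it unchanged."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    sign = 1
    for col in range(n):
        pivot = next((i for i in range(col, n) if a[i][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # a zero column: the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        for i in range(col + 1, n):
            factor = a[i][col] / a[col][col]
            a[i] = [x - factor * y for x, y in zip(a[i], a[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]  # product of the diagonal of the triangular form
    return result
```

A zero determinant, as for the singular matrix [[1, 2], [2, 4]], corresponds to non-invertibility.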
Continuing from the previous chapter, this chapter explores the powerful applications of diagonalisation. We demonstrate first how it can be used to determine the powers of a matrix, which can then be applied to solve a coupled system of recurrence equations. An alternative approach to solving such systems also uses diagonalisation, but uses it to effect a change of variable so that the corresponding system in the new variables is much simpler to solve (and can then be used to revert to the solution in the original variables). We show how an analogous approach can be used to solve coupled systems of differential equations. This closing chapter provides an interesting link between the calculus and linear algebra aspects of the course.
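A small worked instance of powers via diagonalisation (the matrix is chosen for illustration, not taken from the book): A = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, −1), so Aⁿ = P Dⁿ P⁻¹, which works out entrywise to the closed form below.

```python
def matrix_power_via_diagonalisation(n):
    """A^n for A = [[2, 1], [1, 2]] via A = P D P^{-1}:
    with P = [[1, 1], [1, -1]] and D = diag(3, 1), the product
    P D^n P^{-1} has the closed-form entries computed here."""
    a = (3 ** n + 1) // 2  # diagonal entries of A^n
    b = (3 ** n - 1) // 2  # off-diagonal entries of A^n
    return [[a, b], [b, a]]
```

Checking n = 1 recovers A itself, and n = 2 agrees with multiplying A by hand, which is the point of the method: no repeated matrix multiplication is needed.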
This chapter involves the input--output model. Here, there are several goods under production, some of each is needed to meet the production of the others, and there is also an external demand for each good. The model involves a matrix known as the technology matrix and a related matrix known as the Leontief matrix. It is shown how to solve such problems and it is explained that, in general (under very reasonable conditions), there will always be a solution. It is also shown how to approximate the solution using powers of the technology matrix.
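The approximation by powers of the technology matrix can be sketched as a Neumann series: the output vector x solving (I − A)x = d is approximated by d + Ad + A²d + ⋯, which converges under the reasonable conditions mentioned (e.g. small enough column sums of A). The matrix and demand below are illustrative.

```python
def leontief_output(a, d, terms=50):
    """Approximate the output x with (I - A) x = d via the Neumann
    series x = d + A d + A^2 d + ..., where A is the technology matrix."""
    n = len(d)
    x = [0.0] * n
    term = [float(v) for v in d]  # current term A^k d, starting with d
    for _ in range(terms):
        x = [xi + ti for xi, ti in zip(x, term)]
        term = [sum(a[i][j] * term[j] for j in range(n)) for i in range(n)]
    return x
```

For A = [[0.2, 0.1], [0.1, 0.3]] and demand d = (10, 20), the series converges to the exact solution x = (9/0.55, 17/0.55) of (I − A)x = d.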
This chapter studies the case of a small efficient firm in a perfectly competitive market. Breakeven and startup points are defined. Relationships between marginal cost, average cost and average variable cost at breakeven and startup points are investigated, and it is shown how to derive the supply set of such firms.
The chapter starts by discussing how we can determine the long-term qualitative behaviour of the solutions to second-order recurrence equations. In particular, in some cases, it can be seen that the solution is oscillatory. In the context of the multiplier-accelerator model, this corresponds to what are known as business cycles. The chapter concludes with an analysis of a dynamic macroeconomic model that is more realistic than the multiplier-accelerator one.
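The qualitative analysis can be sketched from the auxiliary equation of the recurrence (an illustrative helper, not the book's notation): for x_t = a x_{t−1} + b x_{t−2}, the roots of m² − am − b = 0 determine the behaviour, with complex roots giving oscillation (business cycles) and root modulus above 1 giving growth.

```python
import cmath

def recurrence_behaviour(a, b):
    """For x_t = a*x_{t-1} + b*x_{t-2}, return (oscillatory, growing):
    oscillatory when the auxiliary roots are complex, growing when the
    largest root modulus exceeds 1."""
    disc = a * a + 4 * b
    r1 = (a + cmath.sqrt(disc)) / 2
    r2 = (a - cmath.sqrt(disc)) / 2
    return disc < 0, max(abs(r1), abs(r2)) > 1
```

For example, x_t = x_{t−1} − 0.5 x_{t−2} has complex roots of modulus √2/2, so it oscillates and is damped, whereas x_t = 3x_{t−1} − 2x_{t−2} has real roots 2 and 1 and grows without oscillating.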