Observations in psychiatric in-patient settings are used to reduce suicide, self-harm, violence and absconding risk. The study aims were to describe the characteristics of in-patients who died by suicide under observation and examine their service-related antecedents.
Method
A national consecutive case series in England and Wales (2006–2012) was examined.
Results
There were 113 suicides by in-patients under observation, an average of 16 per year. Most were under intermittent observation. Five deaths occurred while patients were under constant observation. Patient deaths were linked with the use of less experienced staff or staff unfamiliar with the patient, deviation from procedures and absconding.
Conclusions
We identified key elements of observation that could improve safety, including using only experienced and skilled staff for the intervention and setting observation levels according to clinical need rather than available resources.
Rancho La Brea is unquestionably the most famous fossil deposit in North America, and its fauna occupies a key position in vertebrate paleontology as the basis for the Rancholabrean Land Mammal Age. The 23 acres now known as Hancock Park yielded the bulk of fossil specimens from this locality. A minimum of 518 taxa have been identified to date including 79 algae (all but one of which are diatoms), 72 vascular plants, 9 nonmarine mollusks (13 taxa of marine mollusks were carried to the site by early Indians), 135 arthropods, 27 lower vertebrates, 138 birds, and 58 mammals. A detailed list of the biota, too lengthy for inclusion here, is available from the authors on request.
Complex Analysis with Mathematica offers a way of learning and teaching a subject that lies at the heart of many areas of pure and applied mathematics, physics, engineering and even art. This book offers teachers and students an opportunity to learn about complex numbers in a state-of-the-art computational environment. The innovative approach also offers insights into many areas too often neglected in a student treatment, including complex chaos and mathematical art. Thus readers can also use the book for self-study and for enrichment. The use of Mathematica enables the author to cover several topics that are often absent from a traditional treatment. Students are also led, optionally, into cubic or quartic equations, investigations of symmetric chaos and advanced conformal mapping. A CD is included which contains a live version of the book: in particular all the Mathematica code enables the user to run computer experiments.
With financial modelling requiring a better understanding of model risk, it is helpful to be able to vary assumptions about underlying probability distributions in an efficient manner, preferably without the noise induced by resampling distributions under Monte Carlo methods. This paper presents differential equations and solution methods for functions of the form Q(x) = F⁻¹(G(x)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Monte Carlo samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and this illuminates the idea that it is a more precise form of the traditional Cornish–Fisher expansion. In this manner the model risk arising from the choice of distribution may be assessed free of the Monte Carlo noise associated with resampling. The method may also be regarded as providing both analytical and numerical bases for carrying out more precise Cornish–Fisher transformations. Examples are given of equations for converting normal samples to Student t, and for converting exponential samples to normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of branching statements is of use in optimal graphics processing unit (GPU) computations as it avoids the effect of branch divergence. We give a branch-free normal quantile that offers performance improvements in a GPU environment while retaining the best precision characteristics of well-known methods. We also offer models with low-probability branch divergence. Comparisons of new and existing forms are made on Nvidia GeForce GTX Titan and Tesla C2050 GPUs. We argue that in both single and double precision, the change-of-variables approach offers the most GPU-optimal Gaussian quantile yet, working faster than the CUDA 5.5 built-in function.
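As a concrete illustration of the sample-recycling idea, here is a minimal Mathematica sketch (not the paper's own code): the composite map Q(x) = F⁻¹(G(x)) is applied directly to existing deviates, with G taken to be the CDF of the unit exponential distribution and F that of the standard normal.

  (* Recycle exponential samples into standard normal samples via Q(x) = F^(-1)(G(x)). *)
  expSamples = RandomVariate[ExponentialDistribution[1], 10^5];
  Q[x_] := InverseCDF[NormalDistribution[0, 1], CDF[ExponentialDistribution[1], x]];
  normalSamples = Q /@ expSamples;
  (* Sanity check: the recycled samples should have mean close to 0 and variance close to 1. *)
  {Mean[normalSamples], Variance[normalSamples]}

The paper's contribution is to characterize Q through differential equations so that such conversions can be carried out without repeated numerical inversion of F; the direct composition above is shown only to fix the idea.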
The Working Group FITS (WG-FITS) is the international control authority for the Flexible Image Transport System (FITS) data format. The WG-FITS was formed by a formal resolution of the IAU XX General Assembly in Baltimore (MD, USA) in 1988, to maintain the existing FITS standards and to approve future extensions to FITS.
In both modern stochastic analysis and more traditional probability and statistics, one way of characterizing a static or dynamic probability distribution is through its quantile function. This paper is focused on obtaining a direct understanding of this function via the classical approach of establishing and then solving differential equations for the function. We establish ordinary differential equations and power series for the quantile functions of several common distributions. We then develop the partial differential equation for the evolution of the quantile function associated with the solution of a class of stochastic differential equations, by a transformation of the Fokker–Planck equation. We are able to utilize the static formulation to provide elementary time-dependent and equilibrium solutions.
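To fix ideas, here is a minimal Mathematica sketch of the ODE approach for the standard normal distribution (an illustration under stated assumptions, not the paper's code). A quantile function Q(u) of a density f satisfies Q'(u) = 1/f(Q(u)), and for the standard normal this leads to the second-order equation Q''(u) = Q(u) Q'(u)², which can be integrated outward from the median.

  (* Solve the normal quantile ODE with conditions at the median: Q(1/2) = 0, Q'(1/2) = Sqrt[2 Pi]. *)
  sol = NDSolve[{q''[u] == q[u] q'[u]^2, q[1/2] == 0, q'[1/2] == Sqrt[2 Pi]},
     q, {u, 0.01, 0.99}];
  (* Compare with the built-in quantile at u = 0.975; both should be about 1.95996. *)
  {q[0.975] /. First[sol], InverseCDF[NormalDistribution[0, 1], 0.975]}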
Such a direct understanding is important because quantile functions find important uses in the simulation of physical and financial systems. The simplest way of simulating any non-uniform random variable is by applying its quantile function to uniform deviates. Modern methods of Monte Carlo simulation, techniques based on low-discrepancy sequences and copula methods all call for the use of quantile functions of marginal distributions. We provide web resources for prototype implementations in computer code. These implementations may variously be used directly in live sampling models or in a high-precision benchmarking mode for developing fast rational approximations also for use in simulation.
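For readers who have not met it, the basic inverse-transform idea mentioned above is illustrated by the following minimal Mathematica sketch, assuming the unit exponential distribution as the target purely for illustration.

  (* Apply the exponential quantile function Q(u) = -Log[1 - u] to uniform deviates on (0, 1). *)
  uniformDeviates = RandomReal[{0, 1}, 10^5];
  Qexp[u_] := -Log[1 - u];
  samples = Qexp /@ uniformDeviates;
  (* The sample mean should be close to 1, the mean of the unit exponential distribution. *)
  Mean[samples]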
The business meeting began with a brief review of the current rules and procedures of the WG, which are documented on the WG web page. Four regional FITS committees have been established by the WG, covering North America, Europe, Japan, and Australia/New Zealand, to provide advice to the WG on pending proposals. While it is recognized that this committee structure might need to be revised to provide representation to other regions, the current system is working well, and there were no motions to make any changes at this time.
In this chapter we introduce the methods by which certain types of definite integral may be evaluated. Similar methods may be used to sum certain types of infinite series. The approach has many applications, and will be considered again in Chapter 16, in applications to Fourier transforms, and in Chapter 17, on Laplace transforms. We begin by establishing the Residue theorem, which relates a contour integral to the residues of the integrand at its various singularities. Then we explore how various types of real integral can be transformed into contour integrals, and then evaluated by an analysis of their singularities. Finally we take a brief look at the summation of series by residue methods.
Mathematica can play various roles in this part of the theory related to the evaluation of integrals by the calculus of residues. It can just be there to help with the algebra in calculating residues. You can use the functions Residue and NResidue to work out the residues directly. Finally you can use Integrate and NIntegrate to do a direct calculation of the answer. In this last case considerable care is required. The symbolic treatment of general integrals is an evolving (black) art and the results, mostly in the way they are displayed and the full details of conditions for the results to hold, will vary from version to version of the software. This matters particularly when the integrand contains parameters.
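For example (a small sketch, not taken from the book's CD), all three roles can be seen for f(z) = 1/(1 + z²), which has simple poles at z = ±i.

  f[z_] := 1/(1 + z^2);
  (* Symbolic and numerical residues at the pole z = I. *)
  Residue[f[z], {z, I}]                                (* gives -I/2 *)
  Needs["NumericalCalculus`"];                         (* package supplying NResidue in recent versions *)
  NResidue[f[z], {z, I}]                               (* approximately -0.5 I *)
  (* Direct evaluation of the corresponding real integral. *)
  Integrate[1/(1 + x^2), {x, -Infinity, Infinity}]     (* gives Pi *)
  NIntegrate[1/(1 + x^2), {x, -Infinity, Infinity}]    (* gives 3.14159... *)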
In this section we give a more precise characterization of complex functions and review their basic properties. We also introduce some formal concepts, such as neighbourhoods and open sets, in order to lay the foundations for a discussion of continuity and differentiability. We shall then make a first definition of basic functions such as the exponential and trigonometric functions, and their inverses, by referring back to real definitions. This will be revisited in Chapter 9 from a power series perspective. We shall also look at the concept of branch points, and the extended complex plane or ‘Riemann sphere’.
We shall also explore various ways of visualizing complex functions using Mathematica. We can build various routines for looking at functions. The first one we will consider takes a two-dimensional point of view, where functions are regarded as mappings taking one region of the complex plane to another. The second regards the function as a pair of functions of two real variables, and we show how to use Mathematica's three-dimensional plotting routines to view simultaneously both the modulus and argument of complex functions. Then we shall develop some plot routines tailored to bring out the folded structure of certain complex functions. Note that, in this chapter, the output of all Mathematica computations is set to appear in TraditionalForm. If you are using Mathematica technology beyond version 5.2, you should explore the options provided in your current version for managing graphics. See also the on-line supplement and enclosed CD.
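As a small foretaste of the second, three-dimensional, point of view (a sketch only, not one of the book's own routines), the modulus of f(z) = z² can be plotted as a surface coloured by the argument of f.

  f[z_] := z^2;
  Plot3D[Abs[f[x + I y]], {x, -2, 2}, {y, -2, 2},
    ColorFunction -> Function[{x, y, z}, Hue[(Arg[f[x + I y]] + Pi)/(2 Pi)]],
    ColorFunctionScaling -> False,
    AxesLabel -> {"Re z", "Im z", "|f(z)|"}]
  (* The surface height shows |f(z)| while the hue encodes Arg[f(z)], so modulus and argument are seen together. *)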
Since 1985, I have been fortunate to have taught the theory of complex variables for several courses in both the USA and the UK. In the USA I lectured a course on advanced calculus for engineers and scientists at MIT, and in the UK I have given tutorials on the subject to undergraduate students in mathematics at both Cambridge and Oxford. Indeed, draft versions of this text have been inflicted on my students at Balliol and, more recently, at St. Catherine's over the last fourteen years. Few topics have given me such pleasure to teach, given the rich yet highly accessible structure of the subject, and it has at times formed the subject of my research, notably in its development into twistor theory, and latterly in its applications to financial mathematics. A parallel thread of my work has been in the applications of computer algebra and calculus systems, and in particular Mathematica®, to diverse topics in applied mathematics. This book is in part an attempt to use Mathematica to illuminate the topic of complex analysis, and draws on both these threads of my experience.
The book attempts also to inject some new mathematical themes into the topic and the teaching of it. These themes I feel are, if not actually missing, under-emphasized in most traditional treatments. It is perfectly possible for students to have had a formal training in mathematics that leaves them unaware of many key and/or beautiful topics.
You have already read about how to motivate the introduction of complex numbers by the need to solve quadratic equations, and have seen how to solve higher order polynomial equations both through ‘pen and paper’ analysis and with the help of Mathematica. In the previous chapter you looked at Newton–Raphson iteration. This is not the only way of defining an iterative solution method, and there is another approach called ‘cobwebbing’ which is the subject of this chapter.
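As a preview (a minimal sketch, not the book's own routine), the cobwebbing idea can be seen for the simple fixed-point iteration x → cos x: successive iterates are joined by alternating vertical steps to the curve y = g(x) and horizontal steps to the line y = x.

  g[x_] := Cos[x];
  orbit = NestList[g, 0.3, 20];                         (* iterates starting from 0.3 *)
  cobweb = Flatten[
     Table[{{orbit[[k]], orbit[[k + 1]]}, {orbit[[k + 1]], orbit[[k + 1]]}},
       {k, Length[orbit] - 1}], 1];
  Show[Plot[{g[x], x}, {x, 0, 1.5}],
    Graphics[Line[Prepend[cobweb, {orbit[[1]], 0}]]]]
  (* The cobweb spirals in to the fixed point of cos, approximately 0.739. *)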
You are now in a position to perform a basic investigation of some of the most fascinating topics in modern mathematics: period doubling and transitions to chaos. This topic can be introduced by considering simple quadratic or cubic functions. However, rather than solving a simple quadratic or cubic equation, you are now going to be concerned with applying a function over and over again, given a starting value. Under certain circumstances, this has the effect of finding the solutions to the original equation, but in other situations you will be led to the solutions of other polynomial equations. Hence the need for a complex view.
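A quick experiment along these lines (an illustrative sketch, not the book's code) uses the real logistic map x → a x(1 − x), a standard quadratic example: plotting the long-run orbit against the parameter a reveals the period-doubling cascade referred to above.

  orbitPoints[a_] := Module[{xs},
     xs = NestList[a # (1 - #) &, 0.5, 600];
     {a, #} & /@ Drop[xs, 500]];                        (* discard the transient *)
  ListPlot[Flatten[Table[orbitPoints[a], {a, 2.8, 4.0, 0.005}], 1],
    PlotStyle -> PointSize[0.002], AxesLabel -> {"a", "x"}]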
There are many good reasons for you to investigate these topics. First, you should appreciate the emergence of complexity and beauty from the iteration (repeated application) of a simple quadratic or cubic map. Second, you should appreciate that there is some value in doing ‘experimental mathematics’. However, here and elsewhere in this book we shall be concerned with appreciating the special role that complex numbers play.
Complex functions have an elegant interpretation in terms of mappings of the complex plane into itself. We explored this briefly in Chapter 8. Now we wish to study the geometrical aspects in rather more detail. Our plan is as follows. First, we shall literally play with Mathematica to get a feel for what some simple mappings do to simple regions. Next we shall look at the property of ‘conformality’ – that holomorphic functions, when interpreted as mappings, preserve angles between curves at most points. Then we shall explore the relationship between the geometry of circles and lines and a special class of mappings called Möbius transforms.
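To anticipate the Möbius discussion (a small numerical illustration, not the book's code), the transform w = (z − 1)/(z + 1) sends the unit circle to a line and the imaginary axis to a circle, a special case of the circle-and-line preserving property explored later.

  mobius[z_] := (z - 1)/(z + 1);
  (* Images of points on the unit circle and the imaginary axis; points near the pole at z = -1 are discarded. *)
  circleImage = Select[Table[mobius[Exp[I t]], {t, 0.01, 2 Pi, 0.01}], Abs[#] < 5 &];
  lineImage = Table[mobius[I y], {y, -10., 10., 0.05}];
  ListPlot[{{Re[#], Im[#]} & /@ circleImage, {Re[#], Im[#]} & /@ lineImage},
    AspectRatio -> Automatic]
  (* The first data set lies along the imaginary axis; the second lies on the unit circle. *)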
This chapter is the foundation for several that follow. In particular, in Chapter 19 we shall explore the application of conformal mapping to problems in physics in 2-dimensional regions. Chapter 23 will explore how some of this material may be generalized to higher dimensions. Chapter 21 will look at how conformal maps, and the Schwarz–Christoffel transformation in particular, can be managed numerically. Chapter 23 will also reveal the real physics underlying the Möbius transform when it is seen in terms of Einstein's theory of special relativity.
Recall of visualization tools
Our first goal is to use Mathematica to explore some simple mappings. We shall do so by loading the ComplexMap package and making use of a pair of functions it provides, CartesianMap and PolarMap.
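For orientation, here is a minimal sketch assuming the legacy Graphics`ComplexMap` package of pre-version-6 Mathematica (later versions manage packages and graphics differently): the two functions show how a map such as z → z² distorts Cartesian and polar grids.

  Needs["Graphics`ComplexMap`"];                 (* package location assumed; see the on-line supplement *)
  CartesianMap[#^2 &, {-2, 2}, {-2, 2}]          (* image of a Cartesian grid under z -> z^2 *)
  PolarMap[#^2 &, {0.1, 1}, {0, 2 Pi}]           (* image of a polar grid under z -> z^2 *)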
In our studies so far we have been concerned with the complex plane interpreted as a two-dimensional Euclidean plane – when the concept of distance has been needed, it has always been the standard Euclidean notion expressed by Pythagoras' theorem. There are concepts of distance other than the standard Euclidean one. Indeed, this notion is at the heart of modern geometrical physics, and finds expression in both the non-positive-definite notions of distance of special relativity, and the non-flat metrics of general relativity.
In this chapter we shall meet the hyperbolic plane, which is perhaps the simplest non-Euclidean geometry. We shall not be able, in one chapter, to do full justice to this concept – indeed, excellent entire books have already been written about it (Coxeter, 1965; Stahl, 1993). What we shall do is explore a little of the geometry through the process of tiling the hyperbolic plane (see the Bibliography for papers by Coxeter and Levy on this particular matter also).
This chapter is based substantially (the sections on triangles and the ‘ghosts and birdies’ tiling) on a project carried out by a former colleague, V. Thomas, for the BBC Open University Production Centre. Gratitude is expressed to A.M. Gallen and, latterly, the Open University for permission to use this material, to Professor R. Penrose F.R.S. (‘R. Penrose’ for short) for several helpful suggestions, and to V.
In the first chapter you saw why you need imaginary and complex numbers, by considering the solution of simple quadratic equations. In this chapter you will see how we set up complex numbers in general, and establish their basic algebraic and geometrical properties.
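As a small preview of what this setting up means in practice (an illustrative sketch, not the book's text), Mathematica already knows the basic algebraic and geometrical quantities attached to a complex number.

  z = 3 + 4 I;
  {Re[z], Im[z], Conjugate[z], Abs[z], Arg[z]}
  (* gives {3, 4, 3 - 4 I, 5, ArcTan[4/3]} *)
  Clear[z]                                       (* tidy up the assignment *)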
We shall assume that you have some understanding of what is meant by a real number. The exact nature and depth of this understanding will not materially affect the discussion throughout most of this book, and this is not a book about the fundamentals of real analysis. We should, however, take a moment to remind ourselves what a ‘real’ number is, before we start defining ‘imaginary’ and ‘complex’ numbers. Students of pure mathematics should remind themselves of the details of these matters — there is really nothing for it but to go for a proper mathematical definition, and experience has shown that one needs to be slightly abstract in order to get it right, in the sense that the resulting definition contains all the numbers ‘we need’. For a full exposition, complete with proofs, you should consult a text on real analysis, such as that by Rudin (1976). For our purposes it will mostly be sufficient to regard real numbers as being all the points on a line (which we call the real axis) extending to infinity in both directions.