Large oil refineries emit heat, vapor, and cloud condensation nuclei (CCN), all of which can affect the formation of clouds and precipitation. The Factor Separation (FS) technique is applied to isolate the net contributions of waste heat, vapor, and CCN to the rainfall of a cumulus cloud developing in the industrial plume. The mutual interactive contributions of two or three of the factors are also computed. The model simulations indicate that the sensible heat provides the major stimulus for cloud development and rain formation. The pure contribution of the industrial CCN is to enhance condensation, causing an increase in the total cloud-water mass. The contributions arising from mutual interactions among two or three factors are quite large and should not be neglected. In particular, the synergistic interaction of the sensible heat and pollution effects contributes to the accumulated rainfall.
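For two factors, the FS decomposition recovers the pure and synergistic contributions from four simulations (f0: neither factor active; f1, f2: one factor each; f12: both). A minimal sketch of the bookkeeping, with purely illustrative numbers (not values from the study):

```python
def factor_separation_2(f0, f1, f2, f12):
    """Alpert-Stein factor separation for two factors.

    f0  : result with neither factor active
    f1  : result with factor 1 only
    f2  : result with factor 2 only
    f12 : result with both factors active
    Returns the pure contribution of each factor and their synergy.
    """
    pure1 = f1 - f0
    pure2 = f2 - f0
    synergy = f12 - (f1 + f2) + f0   # mutual-interaction term
    return pure1, pure2, synergy

# Illustrative numbers only: if rainfall is 0 with neither factor,
# 3 with heat only, 2 with CCN only, and 10 with both, then the
# synergy accounts for 5 of the 10 units.
print(factor_separation_2(0.0, 3.0, 2.0, 10.0))  # (3.0, 2.0, 5.0)
```

The three-factor case works the same way, with eight simulations and inclusion-exclusion signs on the triple-interaction term.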
Introduction
There is considerable interest in the effects of large electrical power plants and oil refineries on meteorological phenomena. Preferential cumulus formation has been observed above electrical power plants and oil refineries (Auer, 1976). Hobbs et al. (1970) reported that in regions adjacent to or downwind of the Port Townsend paper mill (Washington State, USA), the recorded annual rainfall was 30% greater than that at nearby stations, a dramatic increase most plausibly attributed to the presence of the mill. Hobbs et al. speculated that the enhanced rainfall might be due to the large and giant CCN emitted from the paper mill into the pollution plume, a hypothesis later supported by the study of Eagan et al. (1974).
Here, we present three examples of the Alpert–Stein Factor Separation Methodology (hereafter FS) on a medium scale in the atmosphere, often referred to as the meso-scale or, in general, meso-meteorology. The first example is the deep Genoa cyclogenesis (Alpert et al., 1996a, b) observed during the Alpine Experiment (ALPEX) in March 1982 and subsequently studied intensively by several research groups. The second is a small-scale, shallow, short-lived meso-beta-scale cyclone – only tens of kilometers in diameter – over the Gulf of Antalya, Eastern Mediterranean (Alpert et al., 1999). The third, on a much smaller scale still, is an orographic wind, following Alpert and Tsidulko (1994). In each of these three examples, factors relevant to the specific problem are selected, and special focus is given to the role played by the synergies as revealed by the FS approach.
A multi-stage evolution of an ALPEX cyclone: meso-alpha scale
A relatively large number of studies have been devoted to cyclogenesis, with particular attention given to the processes responsible for the lee cyclone generation. Early studies of lee cyclogenesis (henceforth LC) focused on observations, and indicated the regions with the highest frequencies (Petterssen, 1956).
More recently, several theories have been advanced to explain LC features; for convenience, they are frequently separated into two groups: the baroclinic instability approach, as modified by the lower boundary layer and reviewed by Tibaldi et al. (1990) and Pierrehumbert (1985), and the directional wind-shear mechanism suggested by Smith (1984).
We introduce a ‘limiting Frobenius structure’ attached to any degeneration of projective varieties over a finite field of characteristic p which satisfies a p-adic lifting assumption. Our limiting Frobenius structure is shown to be effectively computable in an appropriate sense for a degeneration of projective hypersurfaces. We conjecture that the limiting Frobenius structure relates to the rigid cohomology of a semistable limit of the degeneration through an analogue of the Clemens–Schmidt exact sequence. Our construction is illustrated, and conjecture supported, by a selection of explicit examples.
The computation of growth series for the higher Baumslag–Solitar groups is an open problem first posed by de la Harpe and Grigorchuk. We study the growth of the horocyclic subgroup as the key to the overall growth of these Baumslag–Solitar groups BS(p,q), where 1<p<q. In fact, the overall growth series can be represented as a modified convolution product with one of the factors being based on the series for the horocyclic subgroup. We exhibit two distinct algorithms that compute the growth of the horocyclic subgroup and discuss the time and space complexity of these algorithms. We show that when p divides q, the horocyclic subgroup has a geodesic combing whose words form a context-free (in fact, one-counter) language. A theorem of Chomsky–Schützenberger allows us to compute the growth series for this subgroup, which is rational. When p does not divide q, we show that no geodesic combing for the horocyclic subgroup forms a context-free language, although there is a context-sensitive geodesic combing. We exhibit a specific linearly bounded Turing machine that accepts this language (with quadratic time complexity) in the case of BS(2,3) and outline the Turing machine construction in the general case.
Bipartivity is an important network concept that can be applied to nodes, edges and communities. Here we focus on directed networks and look for subnetworks made up of two distinct groups of nodes, connected by ‘one-way’ links. We show that a spectral approach can be used to find hidden substructures of this form. Theoretical support is given for the idealized case where there is limited overlap between subnetworks. Numerical experiments show that the approach is robust to spurious and missing edges. A key application of this work is in the analysis of high-throughput gene expression data, and we give an example where a biologically meaningful directed bipartite subnetwork is found from a cancer microarray dataset.
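The spectral idea can be sketched as follows: for a directed adjacency matrix, the leading left and right singular vectors of an SVD tend to localize on the source and target groups of a planted one-way bipartite substructure. A toy illustration (the function name and planted example are ours, not from the paper):

```python
import numpy as np

def leading_bipartite_groups(A):
    """Rank candidate source/target nodes of a one-way bipartite
    substructure in a directed network with adjacency matrix A.

    The leading left singular vector weights nodes by how strongly
    they act as sources, the leading right singular vector by how
    strongly they act as targets.
    """
    U, s, Vt = np.linalg.svd(A)
    sources = np.argsort(-np.abs(U[:, 0]))   # most source-like first
    targets = np.argsort(-np.abs(Vt[0, :]))  # most target-like first
    return sources, targets

# Planted structure: nodes 0-2 all link one-way to nodes 3-5.
A = np.zeros((6, 6))
A[:3, 3:] = 1.0
src, tgt = leading_bipartite_groups(A)
print(sorted(src[:3]), sorted(tgt[:3]))  # [0, 1, 2] [3, 4, 5]
```

In this idealized rank-one case the singular vectors are exactly the group indicators; the paper's numerical experiments address the realistic setting with overlap and noisy edges.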
Sure to be influential, this book lays the foundations for the use of algebraic geometry in statistical learning theory. Many widely used statistical models and learning machines applied to information science have a singular parameter space: mixture models, neural networks, HMMs, Bayesian networks, and stochastic context-free grammars are major examples. Algebraic geometry and singularity theory provide the tools needed to study such non-smooth models. Four main formulas are established: (1) the log-likelihood function can be given a common standard form using resolution of singularities, even for complex models; (2) the asymptotic behaviour of the marginal likelihood, or 'evidence', is derived from zeta-function theory; (3) new methods are derived to estimate the generalization errors in Bayes and Gibbs estimation from training errors; (4) the generalization errors of maximum-likelihood and maximum a posteriori methods are clarified by the theory of empirical processes on algebraic varieties.
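For orientation, the second of these formulas takes the following standard form in singular learning theory (a sketch in the usual notation, not quoted from this blurb): the stochastic complexity $F_n = -\log p(X^n)$ of $n$ i.i.d. observations satisfies

```latex
% lambda is the real log canonical threshold (RLCT) of the model and
% m its multiplicity; for a regular model with d parameters,
% lambda = d/2 and m = 1, recovering the BIC penalty.
F_n = n S_n + \lambda \log n - (m-1)\log\log n + O_p(1)
```

where $S_n$ is the empirical entropy of the data. The singular geometry of the parameter space enters only through $\lambda$ and $m$.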
In this paper, we define the geometric median of a probability measure on a Riemannian manifold, give a characterization of it, and provide a natural condition ensuring its uniqueness. To compute the geometric median in practical cases, we also propose a subgradient algorithm and prove its convergence, together with estimates of the approximation error and the rate of convergence. The convergence property of this subgradient algorithm, which generalizes the classical Weiszfeld algorithm from Euclidean spaces to the context of Riemannian manifolds, also improves a recent result of P. T. Fletcher et al. [NeuroImage 45 (2009) S143–S152].
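In the Euclidean special case, the subgradient iteration reduces to the classical Weiszfeld algorithm. A minimal sketch of that fixed-point form (with an ad hoc guard against division by zero at a data point; not the paper's Riemannian algorithm):

```python
import numpy as np

def weiszfeld(points, iters=200, eps=1e-12):
    """Classical Weiszfeld iteration for the geometric median of a
    finite point set in R^n: repeatedly replace the current estimate
    by the distance-weighted average of the points."""
    x = points.mean(axis=0)  # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, eps)          # guard: avoid dividing by zero
        w = 1.0 / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

# Four points symmetric about (1, 0); the geometric median is (1, 0).
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [1.0, -1.0]])
print(weiszfeld(pts))  # approximately [1. 0.]
```

On a general Riemannian manifold the averaging step is replaced by a step along a subgradient direction in the tangent space, which is where the paper's convergence analysis is needed.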
Here we present a non-exhaustive list of software packages that (in most cases) the authors have tried, together with some other useful pointers. Of course, we cannot accept any responsibility for bugs/errors/omissions in any of the software or documentation mentioned here – caveat emptor!
Websites change. If any of the websites mentioned here disappear in the future, you may be able to find the new site using a search engine with appropriate keywords.
Software tools
CLN
CLN (Class Library for Numbers, http://www.ginac.de/CLN/) is a library for efficient computations with all kinds of numbers in arbitrary precision. It was written by Bruno Haible and is currently maintained by Richard Kreckel. It is written in C++ and distributed under the GNU General Public License (GPL). CLN provides some elementary and special functions, and fast arithmetic on large numbers; in particular, it implements Schönhage–Strassen multiplication and the binary splitting algorithm. CLN can be configured to use the GMP low-level mpn routines, which improves its performance.
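Binary splitting keeps the operands of each multiplication balanced in size, which is what lets subquadratic multiplication algorithms such as Schönhage–Strassen pay off. A minimal sketch of the idea, applied to the factorial (illustrative Python, not CLN code):

```python
def prod_range(a, b):
    """Product a * (a+1) * ... * b by binary splitting: recursively
    halve the range so the two recursive products have comparable bit
    lengths before they are multiplied together."""
    if b - a < 5:                      # small base case: multiply directly
        r = 1
        for i in range(a, b + 1):
            r *= i
        return r
    m = (a + b) // 2
    return prod_range(a, m) * prod_range(m + 1, b)

def factorial(n):
    return prod_range(1, n) if n > 0 else 1

print(factorial(10))  # 3628800
```

The naive left-to-right product multiplies a huge partial result by a tiny factor at every step; binary splitting instead performs most of the work in multiplications of similarly sized operands, where fast multiplication helps.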
GNU MP (GMP)
The GNU MP library is the main reference for arbitrary-precision arithmetic. It has been developed since 1991 by Torbjörn Granlund and several other contributors. GNU MP (GMP for short) implements several of the algorithms described in this book. In particular, we recommend reading the “Algorithms” chapter of the GMP reference manual.
This is a book about algorithms for performing arithmetic, and their implementation on modern computers. We are concerned with software more than hardware – we do not cover computer architecture or the design of computer hardware since good books are already available on these topics. Instead, we focus on algorithms for efficiently performing arithmetic operations such as addition, multiplication, and division, and their connections to topics such as modular arithmetic, greatest common divisors, the fast Fourier transform (FFT), and the computation of special functions.
The algorithms that we present are mainly intended for arbitrary-precision arithmetic. That is, they are not limited by the computer wordsize of 32 or 64 bits, only by the memory and time available for the computation. We consider both integer and real (floating-point) computations.
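As a quick illustration of what "not limited by the wordsize" means in practice (using Python's built-in bignums rather than any library discussed in the book):

```python
# Python's int is an arbitrary-precision integer type: results are
# bounded only by available memory, not by a 32- or 64-bit word.
a = 10**100 + 7          # a 101-digit integer
b = 10**100 - 3
c = a * b                # exact product, no overflow
print(len(str(c)))       # 201
```

Libraries such as GMP provide the same unbounded semantics to C and C++ programs, with the fast algorithms of Chapters 1 and 2 underneath.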
The book is divided into four main chapters, plus one short chapter (essentially an appendix). Chapter 1 covers integer arithmetic. This has, of course, been considered in many other books and papers. However, there has been much recent progress, inspired in part by the application to public key cryptography, so most of the published books are now partly out of date or incomplete. Our aim is to present the latest developments in a concise manner. At the same time, we provide a self-contained introduction for the reader who is not an expert in the field.
Chapter 2 is concerned with modular arithmetic and the FFT, and their applications to computer arithmetic.