It is conventional to speak of a “Sunni revival” in the eleventh and twelfth centuries. According to this view, militantly Sunni regimes such as that of the Saljuqs responded to the challenge of the “Shiʿi century,” that period between the mid-tenth and mid-eleventh centuries when much of the central Muslim world was dominated by Shiʿi regimes (the Fatimids, the Buyids) of varying stripes, by vigorously re-asserting – reviving – Sunni identity and claims to dominance. Like many grand historical themes, this one is perhaps a bit too neat and simple. On a political level, for example, the Saljuq seizure of power in Baghdad was not a restoration of pre-Buyid political patterns. It is true that the Buyid amirs, whom the Saljuqs replaced, were Shiʿis, but their power had been in decline for some time previously. Moreover, relations between them and the ʿAbbasid caliphs, still the symbol of Sunni legitimacy, were often cordial; indeed, as the Saljuq armies approached Baghdad in 1055, the caliph intervened with the Saljuq leader, Toghril Beg, seeking protection for the Buyid amir al-Malik al-Rahim. Relations between the Saljuq leader and the ʿAbbasid caliph were hardly warm at the outset: Toghril Beg had been in Baghdad for thirteen months before he met the caliph.
If the notion of a Sunni “revival” is in some ways misleading, there were nonetheless extremely important developments at work that shaped the character that Sunni Islam would carry into the modern period.
Religious knowledge (ʿilm) was perhaps the central cultural lynchpin of the Islamic tradition and of the social patterns in which that tradition was experienced in the Middle Period. This knowledge was embedded in the rich and inter-related body of texts – principally the Koran, collections of hadith, legal treatises and textbooks, and commentaries on them – which formed the substantive basis for the training of those scholars who were known as the ulama. Our principal concern here, however, is less with the intellectual parameters of this ʿilm than with the social uses to which it was put, and with the way in which these uses helped to define Muslim identities and the nature of the ulama's authority.
For the ulama, it was the active process of transmitting religious knowledge that was critical. As we have seen, the ulama were in fact socially quite diverse, and the only thing that marked them as a distinctive group was their command of these highly valued texts, and their control of access to them. In part this was simply a matter of education, that is, of transmitting to students the familiarity with essential texts that was necessary for the proper discharge of the responsibilities they might incur upon appointment to a range of offices – for example, that of the qadi, or that of professor (mudarris) in the myriad religious institutions that sprang up in medieval Islamic cities.
Virtually all accounts of the rise of an Islamic state and then empire in the seventh century stress its extraordinary character, the suddenness of the appearance on the scene of the Muslim Arabs and the wholly unexpected nature of their success – what Marshall Hodgson referred to as “a breach in cultural continuity unparalleled among the great civilizations.” Explanatory models for the Muslim success – at least those which do not focus upon the Arabs themselves, on the demographic, economic, or religious factors propelling them forward – tend to look for causes in the chaotic developments in the Near East in the late sixth and early seventh centuries. In this, of course, there lies the danger of an easy retrospective teleology, of the assumption that the Near Eastern civilizations experienced on the eve of the Muslim conquests a crisis which weakened them fatally, and so rendered those conquests (or something like them) virtually inevitable. The cautious historian should eschew such a dramatic viewpoint, tempting as it may be. On the other hand, conditions in the Near East in the early seventh century were indeed highly charged and unstable. From a broader perspective, they demonstrate, not the inevitability of the Muslim conquests, but the degree to which those events marked a stage in a longer-term process by which the Arabs were drawn into the cultural orbit of the Fertile Crescent and surrounding territories and, in their Muslim guise, contributed to its evolution.
Notes written from memory by Anders Persson (ECMWF) on 16 September 1999. The reader is encouraged to read P. D. Thompson's paper “Charney and the Revival of Numerical Weather Prediction”, reproduced, together with Charney's letters to Thompson, in Lindzen et al. (1990).
History of NWP
In late 1945 Vladimir Zworykin, the “Father of Television”, who worked at RCA, joined with John von Neumann, the “Father of the Computer”, to suggest the use of the computer in meteorology. Zworykin's interest was in weather modification, and von Neumann's was in fluid dynamics. They also had the dream of connecting the TV and the computer into something we today know as a PC or Workstation. Their dream came partially true in Sweden around 1955 when, for the first time, a forecast map made directly and automatically without any human intervention was produced on a screen (an oscilloscope) (see Bergthorsson and Döös (1955) and Bergthorsson et al. (1955); see also the Rossby Memorial Volume).
In early 1946 von Neumann contacted Rossby's group. They told von Neumann why a zonally averaged dynamical model would not work, and instead suggested a barotropic model, which Victor Starr had manually tested for a 72-h forecast at 700 hPa in his 1941 book on weather forecasting. Von Neumann was not satisfied with the simple barotropic approach and in speeches in the spring of 1946 presented more ambitious plans. Von Neumann and Zworykin also appeared at the annual meeting of the AMS (see the Bulletin of the AMS, 1946).
If the numerical model forecasts are skillful, the forecast variables should be strongly related to the weather parameters of interest to the “person in the street” and to other important applications. These include precipitation (amount and type), surface wind, surface temperature, visibility, cloud amount and type, and so on. However, the model output variables are not optimal direct estimates of local weather forecasts. This is because models have biases, the bottom surface of the models is not a good representation of the actual orography, and models may not represent well the effect of local forcings important for local weather forecasts. In addition, models do not forecast some required parameters at all, such as visibility and the probability of thunderstorms.
In order to optimize the use of numerical weather forecasts as guidance for human forecasters, it has been customary to use statistical methods to “post-process” the model forecasts and adapt them to produce local forecasts. In this appendix we discuss three of the methods that have been used for this purpose.
Model Output Statistics (MOS)
This method, when applied under ideal circumstances, is the gold standard of NWP model output post-processing (Glahn and Lowry, 1972; Carter et al., 1989). MOS is essentially multiple linear regression, where the predictors $h_{nj}$ are model forecast variables (e.g., temperature, humidity, or wind at any grid point, either near the surface or in the upper levels), and may also include other astronomical or geographical parameters (such as latitude, longitude, and time of the year) valid at time $t_n$.
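As a concrete illustration, here is a minimal sketch in Python of the regression step behind MOS. The arrays, predictor choices, and coefficients are hypothetical stand-ins: a real MOS system is trained on a long archive of past model forecasts paired with station observations.

```python
# Minimal MOS sketch: fit a multiple linear regression that maps model
# forecast variables (predictors h_nj) to an observed local weather element
# (the predictand), then apply it to new model output.
# All data here are synthetic stand-ins for a real training archive.
import numpy as np

rng = np.random.default_rng(0)

# Training archive: N past forecast times, J model predictors per time
# (e.g., 2-m temperature, 850 hPa humidity, 10-m wind speed at the grid
# point nearest the station), plus the observed station temperature.
N, J = 500, 3
H = rng.normal(size=(N, J))  # predictors h_nj from past model runs
y = 2.0 + H @ np.array([1.5, -0.7, 0.3]) + 0.5 * rng.normal(size=N)  # observations

# Fit coefficients a_0..a_J by least squares: y_n ~ a_0 + sum_j a_j * h_nj.
X = np.column_stack([np.ones(N), H])  # column of ones for the intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Post-process a new model forecast: the regression corrects the raw model
# output toward the local climate and forcings encoded in the training data.
h_new = rng.normal(size=J)  # today's model predictors (hypothetical)
t_local = coef[0] + coef[1:] @ h_new
print(f"MOS local temperature forecast: {t_local:.1f}")
```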
In general, the public is not aware that our daily weather forecasts start out as initial-value problems on the supercomputers of the major national weather services. Numerical weather prediction provides the basic guidance for weather forecasting beyond the first few hours. For example, in the USA, computer weather forecasts issued by the National Centers for Environmental Prediction (NCEP) in Washington, DC, guide forecasts from the US National Weather Service (NWS). NCEP forecasts are performed by running (integrating in time) computer models of the atmosphere that can simulate, given one day's weather observations, the evolution of the atmosphere in the next few days. Because the time integration of an atmospheric model is an initial-value problem, the ability to make a skillful forecast requires both that the computer model be a realistic representation of the atmosphere, and that the initial conditions be known accurately.
NCEP (formerly the National Meteorological Center or NMC) has performed operational computer weather forecasts since the 1950s. From 1955 to 1973, the forecasts included only the Northern Hemisphere; they have been global since 1973. Over the years, the quality of the models and methods for using atmospheric observations has improved continuously, resulting in major forecast improvements.
Figure 1.1.1(a) shows the longest available record of the skill of numerical weather prediction. The “S1” score (Teweles and Wobus, 1954) measures the relative error in the horizontal gradient of the height of the constant pressure surface of 500 hPa (in the middle of the atmosphere, since the surface pressure is about 1000 hPa) for 36-h forecasts over North America.
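For reference, the S1 score is conventionally written as a relative gradient error; the following is a sketch in our own notation rather than that of the original paper (Teweles and Wobus, 1954):

$$ S1 = 100\,\frac{\sum |e_g|}{\sum G_L}, \qquad G_L = \max\bigl(|\Delta z_f|,\ |\Delta z_o|\bigr), $$

where $\Delta z_f$ and $\Delta z_o$ are the forecast and observed height differences between adjacent grid points, $e_g = \Delta z_f - \Delta z_o$ is the error in that horizontal gradient, and the sums run over all adjacent pairs of grid points in the verification region. Lower values indicate better forecasts: a perfect gradient forecast gives $S1 = 0$.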
In Chapter 2 we derived the equations that govern the evolution of the atmosphere, and in Chapter 3 we discussed the numerical discretizations that allow the numerical integration of those equations on a computer. The discretization of the continuous governing equation is limited by the model resolution, i.e., by the size of the smallest resolvable scale. We have seen that in a finite difference scheme, the smallest scales of motion that can be (poorly) resolved are those which have a wavelength of two grid sizes. In spectral models, the motion of the smallest wave present in the solution is more accurately computed, but for these and for any type of numerical discretization there is always a minimum resolvable scale. Current climate models typically have a horizontal resolution of the order of several hundred kilometers, global weather forecast models have resolutions of 50–100 km, and regional mesoscale models of 10–50 km. Storm-scale models have even higher resolution, with grid sizes of the order of 1–10 km. In the vertical direction, model resolution and vertical extent have also been increased substantially, with current models having typically between 10 and 50 vertical levels, and extending from the surface to the stratosphere or even the mesosphere. As computer power continues to increase, so does the resolution of atmospheric models.
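As a quick numerical illustration of the two-grid-interval limit just mentioned, the short Python sketch below converts representative grid spacings into the smallest (and still poorly resolved) wavelength each class of model can represent; the specific spacings are simply mid-range values from the figures quoted above.

```python
# Smallest resolvable wavelength is ~2 * grid spacing in a finite
# difference model; the spacings below are illustrative mid-range values.
grid_spacing_km = {
    "climate model": 300.0,
    "global NWP model": 75.0,
    "regional mesoscale model": 25.0,
    "storm-scale model": 5.0,
}

for model, dx in grid_spacing_km.items():
    # Waves shorter than 2*dx cannot be represented on the grid at all,
    # and waves near that limit are poorly resolved.
    print(f"{model:>24}: dx = {dx:6.1f} km -> smallest wavelength ~ {2*dx:6.1f} km")
```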
Despite the continued increase of horizontal and vertical resolution, it is obvious that there are many important processes and scales of motion in the atmosphere that cannot be explicitly resolved with present or future models.
In his 1951 paper on NWP, Charney indicated that he expected that, even as models improved, there would still be a limited range to skillful atmospheric predictions, but he attributed this to inevitable model deficiencies and finite errors in the initial conditions. Lorenz (1963a, b) discovered that the atmosphere, like any dynamical system with instabilities, has a finite limit of predictability (which he estimated to be about two weeks) even if the model is perfect, and even if the initial conditions are known almost perfectly. He did so by performing what is now called an “identical twin” experiment: he compared two runs made with the same model but with initial conditions that differed only very slightly. He found that, purely as a result of round-off errors, after a few weeks the two solutions were as different from each other as two random trajectories of the model.
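The experiment is easy to reproduce with the three-variable system from Lorenz (1963). Below is a minimal identical-twin sketch in Python: the parameter values are Lorenz's standard choices, while the integration scheme, time step, and perturbation size are our own illustrative assumptions.

```python
# Identical-twin sketch with the Lorenz (1963) system: integrate the same
# model from two initial conditions differing by a tiny perturbation and
# watch the trajectories diverge.
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the Lorenz (1963) system (standard parameters)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta step (illustrative integrator choice)."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 0.01, 4000
a = np.array([1.0, 1.0, 1.0])        # "truth" run
b = a + np.array([1e-9, 0.0, 0.0])   # twin: same model, tiny initial error

for n in range(1, nsteps + 1):
    a = rk4_step(lorenz63, a, dt)
    b = rk4_step(lorenz63, b, dt)
    if n % 500 == 0:
        print(f"t = {n * dt:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
```

The printed separation grows roughly exponentially until it saturates at the size of the attractor, i.e., until the two runs are no more alike than two randomly chosen model states.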
Lorenz (1993) described how this fundamental discovery took place: his original goal had been to show that statistical prediction could not match the accuracy attainable with a nonlinear dynamical model, and therefore that NWP had a potential for predictive skill beyond that attainable purely through statistical methods. He had acquired a Royal-McBee LGP-30 computer, with a memory of 4K words and a speed of 60 multiplications per second, which was very powerful for the late 1950s.