This chapter summarizes various topics related to the numerical solution of the model equations for resolvable scales of motion. The part of an atmospheric model that treats the resolvable scales is called the dynamical core, and is distinct from the representations of the subgrid-scale, parameterized physical processes. An especially important topic is how the numerical approximations that are used to solve the equations can affect the model solution. These nonphysical effects should be thoroughly understood by all model users. Even though basic concepts are described here, and examples provided, this presentation of numerical methods is far from exhaustive. A comprehensive text on this subject, such as Durran (1999), should be consulted if more depth is needed. Step-by-step derivations are frequently left to the reader.
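As a simple illustration of such numerical effects, the following sketch (not drawn from the text; the sine test function and grid sizes are arbitrary choices) shows the truncation error of a second-order centered difference shrinking by roughly a factor of four each time the grid spacing is halved.

```python
import numpy as np

# Illustrative sketch: truncation error of a second-order centered difference
# on a periodic grid. The test function and grid sizes are arbitrary choices.
for n in (64, 128, 256):
    dx = 2.0 * np.pi / n
    x = np.arange(n) * dx
    f = np.sin(x)
    dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)  # centered, periodic
    err = np.max(np.abs(dfdx - np.cos(x)))                # exact derivative is cos(x)
    print(f"n = {n:4d}  dx = {dx:.4f}  max error = {err:.2e}")
```

Halving dx reduces the error by roughly a factor of four, the signature of a second-order scheme.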
Numerical methods used for solving the equations have naturally evolved over the last few decades, partly because of the results of research and partly because of changes in the available computational resources. Various factors are involved in the decision about the numerical methods to use for a particular modeling application, including computational efficiency (speed), accuracy, memory requirements, and code-structure simplicity. The last factor is especially important if the model is going to be used for research, especially by students. Simple methods that are not typically used in current operational models are sometimes described here for pedagogical purposes.
The following brief overview of concepts will help the reader to better understand the specialized material in later sections.
The statistical post-processing, or calibration, of operational NWP-model output is common because it can result in skill metrics that are equivalent to many years of improvement to the basic model. And this greater skill is achieved at relatively little day-to-day expense, compared with traditional approaches to improving skill, such as increasing the model resolution.
Historically, statistical post-processing methods were used to diagnose variables that could not be predicted directly by the low-resolution, early-generation NWP models. Standard model dependent variables associated with the large-scale conditions were statistically related to other poorly predicted or unpredicted weather variables such as freezing rain, fog, and cloud cover. However, many current-generation, high-resolution models can explicitly forecast such variables, and statistical correction methods are primarily employed to reduce systematic errors.
There are various ways of classifying statistical post-processing methods. They may be categorized in terms of the statistical techniques used, as well as by the types of predictor data that are used for development of the statistical relationships. And distinctions are made between static and dynamic methods. With static methods, statistical algorithms for removing systematic error are developed using a long training period based on the same version of the model, and the algorithms are applied without change for a significant period of time. Because calculating the statistical relationships is computationally expensive, models cannot be upgraded frequently, since each upgrade requires recalculating the relationships.
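For concreteness, here is a minimal sketch of a static, MOS-style correction: a linear relationship fitted over a fixed training period and then applied unchanged to later forecasts. All data are synthetic stand-ins, and the linear form is just one of many possible statistical techniques.

```python
import numpy as np

# Minimal sketch of static post-processing: fit y = a*x + b over a training
# period, then apply it without change. Arrays are synthetic stand-ins.
rng = np.random.default_rng(0)
truth = 15.0 + 8.0 * np.sin(np.linspace(0.0, 6.0, 200))           # "observed" 2-m temperature
forecast = 1.1 * truth + 2.0 + rng.normal(0.0, 1.0, truth.size)   # biased raw model output

train = slice(0, 150)                                 # long, fixed training period
a, b = np.polyfit(forecast[train], truth[train], deg=1)

corrected = a * forecast[150:] + b                    # applied unchanged thereafter
raw_bias = np.mean(forecast[150:] - truth[150:])
new_bias = np.mean(corrected - truth[150:])
print(f"raw bias: {raw_bias:+.2f} K   corrected bias: {new_bias:+.2f} K")
```

Note that if the model version changes, the fitted coefficients no longer apply, which is the practical obstacle to frequent upgrades noted above.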
The aim of this chapter is to provide a few examples of some common methods for using models in research studies. Other chapters also discuss experimental designs in the context of the specific subject being discussed. For example, there are many places in Chapter 16 describing experimental methods related to modeling studies of climate change. The summary here is far from complete because experimental methods are obviously closely tied to the objectives of a research project, which can vary widely. Nevertheless, the methods summarized are in wide use, and their strengths and limitations should be understood.
Case studies for physical-process analysis
Model simulations, generally for short time periods, are often used to study some aspect of a meteorological phenomenon. Sometimes the purpose is to better understand the predictability of a process, in terms of the necessary physical-process parameterizations or initial conditions. This is treated in Section 10.7 on predictive-skill studies. More often, the purpose is to use the model to help better understand the dynamics or kinematics of a physical process. The model is integrated from an initialization that is based on observations at the beginning of the study period. The next step is to confirm that a good correspondence exists between the model simulation and the observations that are available during the simulation period. Good verification of the model skill at these observation locations is typically considered to be justification for believing the simulation in the space and time gaps between the observations.
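The verification step might be as simple as the following sketch (hypothetical fields and station values): interpolate the gridded model output to the observation locations and compute an aggregate error measure such as the RMSE.

```python
import numpy as np

# Hypothetical verification sketch: bilinearly interpolate a gridded model
# field (unit grid spacing) to station locations and compute the RMSE.
def bilinear(field, x, y):
    """Interpolate 2-D field to the fractional grid point (x, y)."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * field[j, i] + fx * (1 - fy) * field[j, i + 1]
            + (1 - fx) * fy * field[j + 1, i] + fx * fy * field[j + 1, i + 1])

model = np.fromfunction(lambda j, i: 280.0 + 0.1 * i - 0.05 * j, (50, 50))
stations = [(10.3, 20.7, 280.9), (30.1, 5.5, 282.8)]   # (x, y, observed value)
errors = [bilinear(model, x, y) - obs for x, y, obs in stations]
print("RMSE at stations:", np.sqrt(np.mean(np.square(errors))))
```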
As we have seen in Chapter 3, solving the equations that govern the physical systems that we are modeling is an initial- and boundary-value problem. The lateral, upper, and lower boundary conditions are discussed in Chapters 3 and 5. This chapter describes the procedure by which observations are processed to define initial conditions for the model dependent variables, from which the model integration begins. This process is called model initialization. There are essentially two requirements for the initialization. First, the dependent variables defined on the model grid must faithfully represent conditions in the real atmosphere (e.g., fronts should be in the correct location), and second, the gridded mass-field variables (temperature, pressure) and momentum-field variables (velocity components) should be dynamically consistent, as defined by the model equations. An example of the mass–momentum consistency requirement is that, on the synoptic scale, the gridded initial conditions should be in approximate hydrostatic and geostrophic balance. If they are not, the model will generate potentially large-amplitude inertia–gravity waves after the initialization (so-called initialization shock), and these nonphysical waves will be overlaid on the meteorological part of the model solution until the adjustment process is complete. The final adjusted condition will prevail after the inertia–gravity waves have been damped, or have propagated off the grid of a LAM. However, the model solution will typically be unusable during this adjustment period, which is one reason for the common, historical recommendation that model output not be used for about the first 12 h of the integration.
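As an illustration of the balance requirement, the following sketch (idealized height field; all parameter values are arbitrary) computes the geostrophic wind implied by a gridded height field, which could be compared with the analyzed winds to gauge the degree of imbalance.

```python
import numpy as np

# Sketch of a mass-momentum consistency check (not an initialization scheme):
# the geostrophic wind implied by a height field Z on a pressure surface,
#   u_g = -(g/f) dZ/dy,   v_g = (g/f) dZ/dx.
g, f, dx = 9.81, 1.0e-4, 100.0e3           # gravity, Coriolis parameter, grid spacing (m)
y, x = np.mgrid[0:40, 0:40] * dx
Z = 5500.0 + 100.0 * np.sin(2.0 * np.pi * x / 2.0e6)  # idealized 500-hPa heights (m)

dZdy, dZdx = np.gradient(Z, dx)            # centered differences, axis 0 = y, axis 1 = x
u_g = -(g / f) * dZdy
v_g = (g / f) * dZdx
print(f"max |v_g| = {np.abs(v_g).max():.1f} m/s")
```

A large discrepancy between these implied winds and the analyzed winds is exactly the kind of imbalance that excites the inertia–gravity waves described above.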
The application of models for operational NWP has much in common with their use for answering physical-process questions, and for satisfying practical needs related to the assessment of air quality, evaluating the potential utility of new observing systems with OSSEs, and testing new numerical methods and physical-process parameterizations. Nevertheless, there are some issues that are unique to operational modeling. These will be addressed in this chapter.
It could be argued that the student of NWP should not need this kind of operations-oriented information because only large national modeling centers with experienced staff and large, fast computers are involved in operational prediction. However, there is a rapid growth in the use of operational regional models by consulting companies, universities, and regional governments to satisfy specialized needs. Thus, the student should become familiar with some of the concepts associated with the operational use of models.
Figure 12.1 illustrates the various components of a very simple operational modeling system. It should be kept in mind that the modeling systems that are operated by national weather services have very large infrastructures, and that the one summarized here is more consistent with the many modest-sized, specialized, operational-modeling systems that exist throughout the world. Some of these system components have been discussed before in earlier chapters, for example related to model initialization. To begin with, the system must have real-time connectivity to operational observational-data networks (top box in the figure), where this generally involves separate access to a number of different data providers.
The term climate modeling, as used here, includes (1) forecasts of climate with global AOGCMs that simulate the physical system's response to radiative-forcing scenarios that assume a specific trajectory for anthropogenic and natural emissions of gases and aerosols, (2) initial-value simulations on seasonal to annual time scales, (3) the production of model-based analyses of the present climate, and (4) model experiments that evaluate the response of the climate system to anthropogenic changes in the landscape, say associated with continued urbanization or the expansion of agriculture. Thus, the term climate modeling refers to the use of a model to define the state of Earth's physical system on time scales of seasons to centuries. As we will see, the specifics of the modeling process depend on the time scale. Typically not included are monthly forecasts (e.g., Vitart 2004), which bridge the gap between medium-range forecasting and seasonal forecasting. If the AOGCM forecasts or the global-reanalysis data sets are used as input to a regional (mesoscale) model or a statistical procedure for correlating the large- and small-scale climate of a region, the process is called climate downscaling.
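A minimal sketch of the statistical flavor of downscaling follows (synthetic data; a simple linear regression stands in for the statistical procedure): a local variable is regressed on a large-scale predictor over a training period, and the relationship is then applied to model output.

```python
import numpy as np

# Sketch of statistical downscaling: regress a local (small-scale) variable
# on a large-scale predictor, then apply the relationship to AOGCM output.
# All data here are synthetic placeholders.
rng = np.random.default_rng(1)
large_scale = rng.normal(0.0, 1.0, 300)                      # regional-mean anomaly (predictor)
local = 1.4 * large_scale - 0.3 + rng.normal(0.0, 0.5, 300)  # station-scale response

slope, intercept = np.polyfit(large_scale, local, deg=1)     # training step
aogcm_output = np.array([0.5, 1.0, 1.5])                     # future large-scale anomalies
print("downscaled local anomalies:", slope * aogcm_output + intercept)
```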
The material about the modeling of weather that has been presented so far in this book also has direct application to the problem of climate modeling. The climate is, after all, just the aggregate behavior of many thousands of individual weather events.
When Phillip Thompson began to write the first widely read textbook on numerical weather prediction (NWP), the subject was in its infancy, even though an earlier book, Weather Prediction by Numerical Process by L. F. Richardson (1922), presaged what was to come later in the century after the advent of electronic computers. The availability of computers increased greatly in the 1960s, and universities began to offer courses in atmospheric modeling, but most modelers also had to be model developers because the untested codes had many errors, the numerical schemes for solving the equations and the physical-process representations were not well tested and understood, lateral-boundary conditions for limited-area models produced noisy solutions, and codes for defining the initial conditions needed to be further developed. These early practitioners learned the basics of atmospheric modeling from each other, through journal articles, in seminars and conferences, and from early courses on the subject. During the last 30 years of the twentieth century, graduate-level courses in atmospheric modeling flourished at many universities. And because computer modeling of the atmosphere was increasingly becoming an important tool in research and operational weather prediction, these courses were typically filled. Nevertheless, atmospheric modeling was still somewhat of a specialty, and models were not very accessible beyond national centers and a few research universities. Smagorinsky (1983), Thompson (1983), Shuman (1989), Persson (2005), Lynch (2007), and Harper (2008) should be consulted for additional history on atmospheric modeling.
This textbook provides a general introduction to atmospheric modeling for those using models for either operational forecasting or research. It is motivated by the fact that all those who use such models should be aware of their strengths and limitations. Unlike the many other books that specialize in particular aspects of atmospheric modeling, the aim here is to offer a general treatment of the subject that can be used for self-study or in conjunction with a course on the subject. Even though there is considerable space devoted here to numerical methods, this is not intended to be the major focus. As the reader will see, there are many other subjects associated with the modeling process that must be understood well in order for models to be used effectively for research or operations. For those who need more information on particular topics, each chapter includes references to specialized resources. It is assumed that the reader has a bachelor's degree in atmospheric sciences, with mathematics through differential equations.
Abbreviations or acronyms, as well as symbols, will be defined in the text the first time that they appear, and for future reference they are also defined in the lists that appear before Chapter 1. Even though the student should focus on concepts rather than jargon, a technical vocabulary is still necessary in order to discuss these subjects. Thus, commonly used, important terms will appear in italics the first time, in order to identify them as worth remembering.
The term atmospheric predictability may be defined as the time required for solutions from two models that are initialized with slightly different initial conditions to diverge to the point where the objective (e.g., RMS) difference is the same as that between two randomly chosen observed states of the atmosphere. In the practical context of a forecast, the no-skill limit that defines the predictability may be the forecast lead time when the model-simulated state has no greater resemblance to the observed state of the atmosphere than does a reference forecast based on persistence or climatology. Many of the other chapters in this text address the various components of the modeling process that limit predictability, from data-assimilation systems to numerical methods to physical-process parameterizations, as well as metrics for quantifying it. This chapter will review the general concept of theoretical and practical limits to forecasting skill.
Model error and initial-condition error
As shown in the previous chapter, error that limits predictability originates in both the model and the initial conditions. Refer to Section 7.3 for more information, especially about the various sources of error associated with the model. Often the concept of predictability is discussed in the context of the system's response to infinitesimally small perturbations in the model initial conditions. This predictability is an inherent property of the fluid system and not of the model. Indeed, it is sometimes assumed in this hypothetical discussion that the model is perfect.
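The following sketch illustrates this inherent sensitivity with the classic Lorenz (1963) three-variable system rather than an atmospheric model (the forward-Euler time stepping and all parameter values are arbitrary demonstration choices): two integrations differing initially by 1e-8 in one variable diverge until their RMS difference saturates at the level of two randomly chosen states.

```python
import numpy as np

# Demonstration of sensitivity to small initial perturbations using the
# Lorenz (1963) system, a stand-in for the atmosphere, not a weather model.
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1.0e-8, 0.0, 0.0])       # slightly perturbed initial condition
for n in range(3001):
    if n % 500 == 0:
        rms = np.sqrt(np.mean((a - b) ** 2))
        print(f"t = {n * 0.01:5.1f}   RMS difference = {rms:.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The initially tiny difference grows roughly exponentially before saturating, which is the behavior that defines the inherent predictability limit of the system.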
Many NWP courses involve the coding by students of one- or two-dimensional shallow-fluid models, and the use of these models in experiments to evaluate the influence of different numerical methods on model solutions (described in Chapter 3). This allows students to become familiar with the structural components of models, to gain experience in debugging model code, and to conduct experiments to confirm concepts discussed in the text.
This appendix suggests an overall framework for coding the shallow-fluid equations that are described in Section 2.3.3, as well as some experiments that can be part of a laboratory component of an NWP course. Because the specific programming language used will determine the details of the model code, only a high-level outline will be provided here. The best approach is to start with the development of a one-dimensional model. Figure A.1 shows a schematic of the procedure for solving such a system, using an advection equation as a simple example. The abscissa is the space dimension and the ordinate is time. A predictive equation would of course be required for u, unless a constant mean speed is employed.
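As a concrete starting point, here is a minimal one-dimensional advection model in the spirit of that outline (the grid parameters, a constant advecting speed, and a first-order upwind scheme are illustrative choices; a course may prescribe leapfrog or another scheme).

```python
import numpy as np

# Minimal 1-D advection model on a periodic grid, using a constant mean speed
# and a first-order upwind scheme (valid for u > 0). Courant number = 0.5.
nx, dx, u, dt = 100, 1.0e3, 10.0, 50.0     # grid points, spacing (m), speed (m/s), step (s)
x = np.arange(nx) * dx
h = np.exp(-((x - 25.0e3) / 5.0e3) ** 2)   # initial bell-shaped disturbance

for n in range(200):                        # integrate forward in time
    h = h - u * dt / dx * (h - np.roll(h, 1))   # upwind difference

print(f"peak after 200 steps: {h.max():.3f} (damped by the scheme's implicit diffusion)")
```

The damping of the initial peak is itself instructive: it is exactly the kind of scheme-dependent behavior the laboratory experiments are meant to expose.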
The parameterization problem involves algorithmically or statistically relating the effects of physical processes that cannot be represented directly in a model to variables that are included. Physical processes are parameterized for a few reasons.
The small scales involved make it too computationally expensive to represent a process directly.
The complexity of a process makes it too computationally expensive to represent directly.
There is insufficient knowledge about how a process works to explicitly represent it mathematically.
The representation of atmospheric processes in models takes place within the dynamical core as well as through the so-called model “physics”. The dynamic processes include the propagation of various types of waves (e.g., advective, Rossby, inertia–gravity). Even though the physics processes are parameterized to a large degree, their correct rendering by a model is nevertheless essential for the prediction of virtually all of the dependent variables. The parameterized processes that are discussed in this chapter include cumulus convection, cloud microphysics, turbulence, and radiation. Land-surface processes are also parameterized because they occur on too small a scale to be represented directly, but they are discussed separately in Chapter 5.
Even though parameterizations are typically developed and discussed independently of each other, and of the dynamical core, this separation is artificial and should be avoided. Parameterizations do interact, and the realism of this interaction determines the accuracy of the model. For example, the parameterized spectral solar radiation represents an energy flux at the land surface, and the land-surface parameterization partitions some of it to the sensible heating of the ground.
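To make the idea of a parameterization concrete, the following sketch shows one of the simplest examples, a bulk-aerodynamic estimate of the surface sensible heat flux, which relates an unresolved turbulent flux to resolved model variables (the transfer coefficient and input values are illustrative assumptions, not values from the text).

```python
# Sketch of a bulk-aerodynamic parameterization of surface sensible heat flux:
# an unresolved turbulent flux expressed in terms of resolved variables.
rho_air = 1.2        # air density (kg m^-3)
c_p = 1004.0         # specific heat of air at constant pressure (J kg^-1 K^-1)
C_H = 1.5e-3         # bulk transfer coefficient, dimensionless (assumed value)
wind_10m = 5.0       # 10-m wind speed (m s^-1), a resolved variable
T_sfc, T_air = 300.0, 295.0   # surface and near-surface air temperatures (K)

H = rho_air * c_p * C_H * wind_10m * (T_sfc - T_air)   # sensible heat flux (W m^-2)
print(f"parameterized sensible heat flux: {H:.0f} W m^-2")
```

The tunable coefficient C_H is typical of parameterizations: it stands in for turbulent exchange physics that the model grid cannot resolve.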
As we have seen in previous chapters, there is a variety of generally unavoidable sources of model error, including
initial conditions,
lateral-boundary conditions for LAMs,
land/water-surface conditions,
numerical approximations used in the dynamical core, and
parameterizations of physical processes.
Each of these input data sets or modeling approaches introduces some error in the modeling process, and ensemble prediction involves performing parallel forecasts or simulations using different arbitrary choices for the above imperfect data or methods. The objective of defining the different conditions for each model integration is to sample the uncertainty space associated with the modeling process in order to define how this uncertainty projects onto the uncertainty in the forecasts. As a preliminary example of the sensitivity of model forecasts to the above factors, Fig. 7.1 illustrates an ensemble of 5-day track predictions for Hurricane Katrina in 2005. The forecasts are based on the ECMWF ensemble prediction system. The tracks are strongly dependent on the specific errors in the input observations as well as the model configurations employed.
An ensemble of forecasts is more useful than an individual, deterministic forecast for the following reasons.
The mean of the ensemble of forecasts is generally more accurate than the forecast from an individual ensemble member, when the statistics are computed over a number of forecasts.
The difference (spread, variance) among the ensemble members can be an indication of the flow-dependent quantitative uncertainty in the ensemble-mean forecast, given a proper calibration. Both products are illustrated in the sketch following this list.
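A minimal sketch of these two ensemble products, using synthetic forecasts in place of real model output:

```python
import numpy as np

# Sketch of the two basic ensemble products: the ensemble mean and the spread.
# Members are synthetic stand-ins for perturbed forecasts of a known truth.
rng = np.random.default_rng(2)
n_members, n_grid = 20, 100
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n_grid))
members = truth + rng.normal(0.0, 0.5, (n_members, n_grid))  # truth + member error

ens_mean = members.mean(axis=0)             # typically beats any single member
spread = members.std(axis=0)                # flow-dependent uncertainty estimate

rmse_mean = np.sqrt(np.mean((ens_mean - truth) ** 2))
rmse_members = np.sqrt(np.mean((members - truth) ** 2, axis=1))
print(f"ensemble-mean RMSE: {rmse_mean:.2f}   average member RMSE: {rmse_members.mean():.2f}")
```

Averaging cancels the uncorrelated part of the member errors, which is why the ensemble mean verifies better than the typical member when statistics are accumulated over many forecasts.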
The expression Computational Fluid Dynamics (CFD) modeling comes from engineering, and refers to methods that can be used for the simulation of very fine scales of motion. The terminology is confusing because weather and climate modeling also involves the use of computational methods to solve the dynamic equations for a fluid. When the term CFD modeling is used in its conventional way in the atmospheric sciences, it refers to the simulation of motions that can synonymously be referred to as occurring on the sub-mesogamma scale, the microscale, or the turbulence scale.
Because we are revisiting the concept of the scales of motion that are represented by a model solution, a reminder of the pertinent discussions in Chapter 3 is appropriate. There is a tendency to think of the 2Δx length scale as the resolution limit of a model, although it has been shown by Skamarock (2004) (e.g., Fig. 3.36) and others that spatial filters associated with the finite-differencing scheme and the explicit diffusion in a model can cause the effective resolution to be quite different from this limit. Motions unresolved by the model can generally be referred to as the subfilter-scale (SFS).
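One common way to diagnose effective resolution, sketched below under simple assumptions (a random stand-in for a real wind field; Skamarock (2004)-style analyses use model output), is to compute the power spectrum of a model field and find the wavelength at which it departs from the expected spectrum, which is typically several times the 2Δx limit.

```python
import numpy as np

# Sketch of a spectral diagnostic of effective resolution: compute the power
# spectrum of a model field. A random series stands in for a real wind field.
nx, dx = 512, 4.0e3                          # grid points and spacing (m)
rng = np.random.default_rng(3)
u = rng.normal(0.0, 1.0, nx)                 # stand-in for a wind field at one level

power = np.abs(np.fft.rfft(u)) ** 2          # power at each wavenumber
wavenumber = np.fft.rfftfreq(nx, d=dx)       # cycles per metre
wavelength_km = 1.0 / wavenumber[1:] / 1.0e3 # skip the mean (zero wavenumber)
print(f"shortest representable wavelength (2*dx): {2 * dx / 1.0e3:.0f} km")
print(f"spectrum computed down to {wavelength_km.min():.0f} km")
```

In practice the model spectrum rolls off well before the 2Δx wavelength, and the roll-off point, not 2Δx, marks the effective resolution.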
Types of CFD models
There are three general categories of CFD models, although there are myriad methods for solving the equations, just as with larger-scale models.
This chapter describes methods for (1) the graphical display and interpretation of model output, and observations; (2) the calculation of derived variables from model output, which can help in the analysis of processes; and (3) the mathematical processing of model output, which can reveal properties and patterns that are not apparent from the dependent variables themselves. The comparison of the model output with observations is a type of analysis of course, but Chapter 9 on model verification is devoted to this subject. Also, the application of post-processing algorithms, for example to remove systematic error, is a special type of mathematical processing of the output, and this subject is treated in Chapter 13.
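As a small example of category (2), the following sketch computes a derived variable, relative vorticity, from gridded wind components using centered differences (the wind fields and grid spacing are idealized).

```python
import numpy as np

# Sketch of computing a derived variable from model output: relative vorticity
# zeta = dv/dx - du/dy from gridded winds, centered differences, uniform grid.
dx = 50.0e3                                  # grid spacing (m)
y, x = np.mgrid[0:60, 0:60] * dx
u = -10.0 * np.sin(2.0 * np.pi * y / 3.0e6)  # idealized zonal wind (m/s)
v = 10.0 * np.sin(2.0 * np.pi * x / 3.0e6)   # idealized meridional wind (m/s)

dudy = np.gradient(u, dx, axis=0)
dvdx = np.gradient(v, dx, axis=1)
zeta = dvdx - dudy                           # relative vorticity (s^-1)
print(f"max |zeta| = {np.abs(zeta).max():.1e} s^-1")
```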
Graphical methods for displaying and interpreting model output and observations
Much of the material in this section is covered in courses on meteorological analysis; however, it is provided here because many students of NWP have not had such a course available to them. More in-depth material can be found in texts such as Saucier (1955) and Bluestein (1992a,b).
There have been so many creative ways of displaying model output, and comparing it with observations, that it is impossible to present a thorough treatment here. Nevertheless, some examples will be provided and the student is encouraged to review the literature and become familiar with typical techniques (see chapter Problems 1 and 3). This subject is important because successfully publishing research, whether it is model-based or not, depends on displaying the results in an easily and quickly understood format.