Three areas where machine learning (ML) and physics have been merging: (a) Physical models can have computationally expensive components replaced by inexpensive ML models, giving rise to hybrid models. (b) In physics-informed machine learning, ML models can be trained to satisfy the laws of physics (e.g. conservation of energy, mass, etc.) either approximately or exactly. (c) In forecasting, ML models can be combined with numerical/dynamical models under data assimilation.
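As a rough illustration of approach (b), a soft physics constraint can be added as a penalty term to the usual data-fit error. The Python/NumPy sketch below is purely illustrative; the arrays and the mass-balance relation are hypothetical, not taken from any particular model.

import numpy as np

def physics_informed_loss(y_pred, y_obs, mass_in, mass_out, lam=0.1):
    # Data-fit term: mean squared error between predictions and observations.
    mse = np.mean((y_pred - y_obs) ** 2)
    # Physics penalty: squared violation of a hypothetical mass-balance law
    # (predicted storage change should equal inflow minus outflow).
    residual = y_pred - (mass_in - mass_out)
    penalty = np.mean(residual ** 2)
    # lam controls how strongly the physical law is enforced (approximately);
    # a hard constraint would instead build the law into the model structure.
    return mse + lam * penalty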
A good model aims to learn the underlying signal without overfitting (i.e. fitting to the noise in the data). This chapter has four main parts: The first part covers objective functions and errors. The second part covers various regularization techniques (weight penalty/decay, early stopping, ensemble, dropout, etc.) to prevent overfitting. The third part covers the Bayesian approach to model selection and model averaging. The fourth part covers recent developments in interpretable machine learning.
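As a minimal sketch of one of these regularization techniques, early stopping can be expressed in a few lines of Python; the update_step and validation_error routines below are hypothetical placeholders for one training epoch and for evaluating the model on held-out data.

import numpy as np

def train_with_early_stopping(update_step, validation_error, max_epochs=500, patience=20):
    # update_step(): hypothetical routine performing one training epoch.
    # validation_error(): hypothetical routine returning the error on validation data.
    best_err, best_epoch = np.inf, 0
    for epoch in range(max_epochs):
        update_step()
        err = validation_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # validation error has stopped improving: stop before overfitting
    return best_err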
Kernel methods provide an alternative family of non-linear methods to neural networks, with the support vector machine being the best known among kernel methods. Almost all linear statistical methods have been non-linearly generalized by the kernel approach, including ridge regression, linear discriminant analysis, principal component analysis, canonical correlation analysis, and so on. The kernel method has also been extended to probabilistic models, for example Gaussian processes.
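A minimal NumPy sketch of kernel ridge regression with a Gaussian (RBF) kernel conveys the flavour of the kernel approach; the data here are synthetic and purely illustrative.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * sigma**2))

def kernel_ridge_fit_predict(X_train, y_train, X_test, lam=1e-2, sigma=1.0):
    # Kernel ridge regression: solve (K + lam*I) alpha = y, then predict using
    # the kernel between test and training points.
    K = rbf_kernel(X_train, X_train, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, sigma) @ alpha

# Example: learn y = sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
X_new = np.linspace(0, 6, 100)[:, None]
y_new = kernel_ridge_fit_predict(X, y, X_new)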
Under unsupervised learning, clustering or cluster analysis is first studied. Clustering methods are grouped into non-hierarchical methods (including K-means clustering) and hierarchical clustering. Self-organizing maps can be used as a clustering method or as a discrete non-linear principal component analysis method. Autoencoders are neural network models that can be used for non-linear principal component analysis. Non-linear canonical correlation analysis can also be performed using neural network models.
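For instance, K-means clustering (Lloyd's algorithm) can be sketched in NumPy as follows; the sketch ignores practical details such as empty clusters and multiple random restarts.

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    # Alternate between assigning points to the nearest centre and
    # recomputing each centre as the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        new_centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return labels, centres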
NN models with more hidden layers than the traditional NN are referred to as deep neural network (DNN) or deep learning (DL) models, which are now widely used in environmental science. For image data, the convolutional neural network (CNN) has been developed, where in convolutional layers, a neuron is only connected to a small patch of neurons in the preceding layer, thereby greatly reducing the number of model weights. Popular DNN architectures include the encoder-decoder and U-net models. For time series modelling, the long short-term memory (LSTM) network and the temporal convolutional network have been developed. The generative adversarial network (GAN) can produce highly realistic synthetic (fake) data.
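The local connectivity and weight sharing of a convolutional layer can be seen in the following NumPy sketch of a single valid-mode 2-D convolution; the image and kernel are illustrative only.

import numpy as np

def conv2d_valid(image, kernel):
    # Each output value depends only on a small patch of the input (the
    # receptive field), and the same kernel weights are shared across all
    # patches - the key idea that keeps CNN weight counts small.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Example: a 3x3 edge-detecting kernel applied to a random "image".
img = np.random.default_rng(0).random((8, 8))
edge_kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
feature_map = conv2d_valid(img, edge_kernel)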
Principal component analysis (PCA), a classical method for reducing the dimensionality of multivariate datasets, linearly combines the variables to generate new uncorrelated variables that maximize the amount of variance captured. Rotation of the PCA modes is commonly performed to provide more meaningful interpretation. Canonical correlation analysis (CCA) is a generalization of correlation (for two variables) to two groups of variables, with CCA finding modes of maximum correlation between the two groups. Instead of maximum correlation, maximum covariance analysis extracts modes with maximum covariance.
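A minimal NumPy sketch of PCA via the singular value decomposition, with illustrative variable names:

import numpy as np

def pca(X, n_modes=2):
    # Centre the data, then take the SVD; the right singular vectors give the
    # PCA modes (loading patterns) and the squared singular values give the
    # fraction of variance captured by each mode.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_modes] * s[:n_modes]   # principal components (time series)
    modes = Vt[:n_modes]                    # loading patterns
    explained_var = s**2 / np.sum(s**2)
    return scores, modes, explained_var[:n_modes]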
Forecast verification evaluates the quality of the forecasts made by a model, using a variety of forecast scores developed for binary classes, multiple classes, continuous variables and probabilistic forecasts. Skill scores estimate a model’s skill relative to a reference model or benchmark. Problems such as spurious skill and extrapolation with new data are discussed. Model bias in the output predicted by numerical models is alleviated by post-processing methods, while output from numerical models with low spatial resolution is enhanced by downscaling methods, especially in climate change studies.
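As an example of a skill score, the mean-squared-error skill score relative to a reference forecast (e.g. climatology or persistence) can be computed as in this NumPy sketch:

import numpy as np

def mse_skill_score(forecast, observed, reference):
    # Skill score = 1 - MSE(model) / MSE(reference); 1 is a perfect forecast,
    # 0 means no improvement over the reference, and negative values mean the
    # model performs worse than the reference.
    mse_model = np.mean((forecast - observed) ** 2)
    mse_ref = np.mean((reference - observed) ** 2)
    return 1.0 - mse_model / mse_ref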
Many machine learning methods require non-linear optimization, performed by the backward propagation of model errors, with the process complicated by the presence of multiple minima and saddle points. Numerous gradient descent algorithms are available for optimization, including stochastic gradient descent, conjugate gradient, quasi-Newton and non-linear least squares such as Levenberg-Marquardt. In contrast to deterministic optimization, stochastic optimization methods repeatedly introduce randomness during the search process to avoid getting trapped in a local minimum. Evolutionary algorithms, borrowing concepts from evolution to solve optimization problems, include genetic algorithm and differential evolution.
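A minimal NumPy sketch of plain gradient descent applied to a linear-regression MSE, with synthetic, illustrative data; stochastic gradient descent would instead evaluate the gradient on random mini-batches of the data at each step.

import numpy as np

def gradient_descent(grad, w0, lr=0.1, n_steps=1000):
    # Plain gradient descent: repeatedly step downhill along the negative gradient.
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        w -= lr * grad(w)
    return w

# Example: minimize the MSE of a linear model y = X @ w.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
grad_mse = lambda w: 2 * X.T @ (X @ w - y) / len(y)
w_fit = gradient_descent(grad_mse, np.zeros(3))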
Under supervised learning, when the output variable is discrete or categorical instead of continuous, one has a classification problem instead of a regression problem. Several classification methods are covered: linear discriminant analysis, logistic regression, naive Bayes classifier, K-nearest neighbours, extreme learning machine classifier and multi-layer perceptron classifier. In classification, the cross-entropy objective function is often used in place of the mean squared error function.
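The cross-entropy objective function, for predicted class probabilities and one-hot targets, can be written as in this small NumPy sketch (the numbers are illustrative):

import numpy as np

def cross_entropy(probs, targets, eps=1e-12):
    # probs: predicted class probabilities, shape (n_samples, n_classes).
    # targets: one-hot encoded true classes, same shape.
    # Confident wrong predictions are penalized much more heavily than under
    # the mean squared error.
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))

# Example with 3 classes (the second sample is misclassified):
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
targets = np.array([[1, 0, 0], [0, 0, 1]])
loss = cross_entropy(probs, targets)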
A decision tree is a tree-like model of decisions and their consequences, with the classification and regression tree (CART) being the most commonly used. Being simple models, decision trees are considered ‘weak learners’ relative to more complex and more accurate models. By using a large ensemble of weak learners, methods such as random forest can compete well against strong learners such as neural networks. An alternative to random forest is boosting. While random forest constructs all the trees independently, boosting constructs one tree at a time. At each step, boosting tries to build a weak learner that improves on the previous one.
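A short sketch contrasting the two ensemble strategies, assuming scikit-learn is available; the dataset here is synthetic and purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))                       # hypothetical predictors
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)      # hypothetical binary target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random forest: many trees grown independently on bootstrap samples.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Boosting: trees grown sequentially, each trying to correct its predecessor.
gb = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(rf.score(X_te, y_te), gb.score(X_te, y_te))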
Under time series analysis, one proceeds from Fourier analysis to the design of windows, then spectral analysis (e.g. computing the spectrum, the cross-spectrum between two time series, wavelets, etc.) and the filtering of frequency signals. The principal component analysis method can be turned into a spectral method known as singular spectrum analysis. Auto-regressive processes and Box-Jenkins models are also covered.
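For example, a raw power spectrum (periodogram) of a time series can be estimated with the FFT, as in this NumPy sketch with a synthetic daily series:

import numpy as np

def periodogram(x, dt=1.0):
    # Estimate the power spectrum of a time series via the FFT
    # (no windowing or smoothing applied in this simple sketch).
    x = x - np.mean(x)
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2 / n
    return freqs, power

# Example: 10-day and 30-day oscillations buried in noise (daily data).
t = np.arange(365)
x = np.sin(2 * np.pi * t / 10) + 0.5 * np.sin(2 * np.pi * t / 30) \
    + np.random.default_rng(0).standard_normal(365)
freqs, power = periodogram(x)   # peaks near 0.1 and 0.033 cycles per day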
As probability distributions form the cornerstone of statistics, a survey is made of the common families of distributions, including the binomial distribution, Poisson distribution, multinomial distribution, Gaussian distribution, gamma distribution, beta distribution, von Mises distribution, extreme value distributions, t-distribution and chi-squared distribution. Other topics include maximum likelihood estimation, Gaussian mixtures and kernel density estimation.
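As a small illustration of maximum likelihood estimation, the MLE for a Gaussian distribution has a closed form (the sample mean and the divide-by-n variance), sketched below in NumPy with synthetic data:

import numpy as np

def gaussian_mle(x):
    # Maximum likelihood estimates for a Gaussian: the sample mean and the
    # (biased, divide-by-n) sample variance maximize the log-likelihood.
    mu = np.mean(x)
    var = np.mean((x - mu) ** 2)
    log_lik = -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)
    return mu, var, log_lik

x = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=1000)
mu_hat, var_hat, ll = gaussian_mle(x)   # estimates close to 2.0 and 1.5**2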
Inspired by the human brain, neural network (NN) models have emerged as the dominant branch of machine learning, with the multi-layer perceptron (MLP) model being the most popular. Non-linear optimization and the presence of local minima during optimization led to interest in other NN architectures that only require linear least squares optimization, e.g. extreme learning machines (ELM) and radial basis functions (RBF). Such models readily adapt to online learning, where a model can be updated inexpensively as new data arrive continually. Applications of NN to predict conditional distributions (by the conditional density network and the mixture density network) and to perform quantile regression are also covered.
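The following NumPy sketch shows why an extreme learning machine needs only linear least squares: the hidden-layer weights are fixed at random and only the output weights are fitted (array names are illustrative).

import numpy as np

def elm_fit_predict(X_train, y_train, X_test, n_hidden=50, seed=0):
    # Hidden-layer weights W and biases b are drawn at random and never trained;
    # only the output weights beta are found, by ordinary linear least squares.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X_train.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H_train = np.tanh(X_train @ W + b)
    beta, *_ = np.linalg.lstsq(H_train, y_train, rcond=None)
    return np.tanh(X_test @ W + b) @ beta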
A review of basic probability theory – probability density, expectation, mean, variance/covariance, median, median absolute deviation, quantiles, skewness/kurtosis and correlation – is first given. Exploratory data analysis methods (histograms, quantile-quantile plots and boxplots) are then introduced. Finally, topics including Mahalanobis distance, Bayes’ theorem, classification, clustering and information theory are covered.
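As one small example from these topics, the Mahalanobis distance of a point from a data cloud can be computed as in this NumPy sketch:

import numpy as np

def mahalanobis(x, data):
    # Distance of point x from the centre of a data cloud, measured in units of
    # the cloud's own spread (via the inverse covariance matrix), so that
    # high-variance or correlated directions are downweighted.
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))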
Simple linear regression is extended to multiple linear regression (for multiple predictor variables) and to multivariate linear regression (for multiple response variables). Regression with circular data and/or categorical data is covered. How to select predictors and how to avoid overfitting with techniques such as ridge regression and lasso are discussed, followed by quantile regression. The assumption of Gaussian noise or residuals is removed in generalized least squares, with applications to optimal fingerprinting in climate change studies.
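The ridge regression estimate mentioned above has a simple closed form; a NumPy sketch follows, assuming the predictor matrix X and response y are already centred or include an intercept column.

import numpy as np

def ridge_regression(X, y, lam=1.0):
    # Ridge solution: (X'X + lam*I)^{-1} X'y. The penalty lam shrinks the
    # coefficients, which combats overfitting when predictors are numerous or
    # strongly correlated (lam = 0 recovers ordinary least squares).
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)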
The historical development of statistics and artificial intelligence (AI) is outlined, with machine learning (ML) emerging as the dominant branch of AI. Data science is viewed as being composed of a yin part (ML) and a yang part (statistics), and environmental data science is the intersection between data science and environmental science. Supervised learning and unsupervised learning are compared. Basic concepts of underfitting/overfitting and the curse of dimensionality are introduced.