
Deep kernel learning approach to engine emissions modeling

Published online by Cambridge University Press:  18 June 2020

Changmin Yu
Affiliation:
Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, United Kingdom
Marko Seslija
Affiliation:
Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, United Kingdom
George Brownbridge
Affiliation:
CMCL Innovations, Sheraton House, Castle Park, Cambridge, United Kingdom
Sebastian Mosbach
Affiliation:
Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, United Kingdom CMCL Innovations, Sheraton House, Castle Park, Cambridge, United Kingdom Cambridge Center for Advanced Research and Education in Singapore (CARES), Singapore, Singapore
Markus Kraft*
Affiliation:
Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, United Kingdom CMCL Innovations, Sheraton House, Castle Park, Cambridge, United Kingdom Cambridge Center for Advanced Research and Education in Singapore (CARES), Singapore, Singapore School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, Singapore
Mohammad Parsi
Affiliation:
Perkins Engines Co. Ltd., Frank Perkins Way, Peterborough, United Kingdom
Mark Davis
Affiliation:
Perkins Engines Co. Ltd., Frank Perkins Way, Peterborough, United Kingdom
Vivian Page
Affiliation:
Perkins Engines Co. Ltd., Frank Perkins Way, Peterborough, United Kingdom
Amit Bhave
Affiliation:
CMCL Innovations, Sheraton House, Castle Park, Cambridge, United Kingdom
*
*Corresponding author. E-mail: mk306@cam.ac.uk

Abstract

We apply deep kernel learning (DKL), which can be viewed as a combination of a Gaussian process (GP) and a deep neural network (DNN), to compression ignition engine emissions and compare its performance to a selection of other surrogate models on the same dataset. Surrogate models are a class of computationally cheaper alternatives to physics-based models. High-dimensional model representation (HDMR) is also briefly discussed and acts as a benchmark model for comparison. We apply the considered methods to a dataset, which was obtained from a compression ignition engine and includes as outputs soot and NOx emissions as functions of 14 engine operating condition variables. We combine a quasi-random global search with a conventional grid-optimization method in order to identify suitable values for several DKL hyperparameters, which include network architecture, kernel, and learning parameters. The performance of DKL, HDMR, plain GPs, and plain DNNs is compared in terms of the root mean squared error (RMSE) of the predictions as well as computational expense of training and evaluation. It is shown that DKL performs best in terms of RMSE in the predictions whilst maintaining the computational cost at a reasonable level, and DKL predictions are in good agreement with the experimental emissions data.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s) 2020. Published by Cambridge University Press
Figure 1. Forward propagation in a three-layer feedforward neural network. For each unit in the layers other than the input layer, the output of the unit is the inner product of the outputs of the previous layer with the unit's weights, followed by a nonlinearity (e.g., the ReLU function).
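The forward pass described in the caption can be sketched in a few lines of numpy. This is not code from the paper, just a minimal illustration; the layer sizes and random weights are arbitrary assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate input x through a feedforward network.

    Each hidden layer takes the inner product of the previous layer's
    outputs with its weights, adds a bias, and applies ReLU; the final
    layer is left linear.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

# Illustrative three-layer network: 4 inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((1, 8))]
biases = [np.zeros(8), np.zeros(1)]
y = forward(rng.standard_normal(4), weights, biases)
```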

Figure 2. Backpropagation in a three-layer feedforward neural network. The derivatives of the cost function with respect to the weight parameters are computed using the chain rule, and the parameters are then updated by gradient descent using these derivatives.
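As a hedged sketch of the caption's procedure (not the paper's implementation), one backpropagation step for a one-hidden-layer ReLU network with squared-error loss looks like the following; the network size, learning rate, and training data are illustrative assumptions.

```python
import numpy as np

def train_step(x, t, W1, W2, lr=0.05):
    """One backpropagation step: chain rule for the gradients,
    then a gradient-descent update of both weight matrices."""
    # Forward pass, caching intermediates needed for the backward pass.
    a = W1 @ x               # hidden pre-activation
    h = np.maximum(0.0, a)   # ReLU
    y = W2 @ h               # linear output
    loss = 0.5 * np.sum((y - t) ** 2)
    # Backward pass: apply the chain rule layer by layer.
    dy = y - t               # dL/dy
    dW2 = np.outer(dy, h)    # dL/dW2
    dh = W2.T @ dy           # dL/dh
    da = dh * (a > 0)        # gradient through the ReLU
    dW1 = np.outer(da, x)    # dL/dW1
    # Gradient-descent update.
    return W1 - lr * dW1, W2 - lr * dW2, loss

# Fit a single (input, target) pair to watch the loss decrease.
rng = np.random.default_rng(1)
x, t = rng.standard_normal(3), np.array([1.0])
W1 = 0.5 * rng.standard_normal((5, 3))
W2 = 0.5 * rng.standard_normal((1, 5))
losses = []
for _ in range(50):
    W1, W2, loss = train_step(x, t, W1, W2)
    losses.append(loss)
```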

Figure 3. Deep kernel learning: input data are propagated forward through the hidden layers of the neural network, parameterized by the weight parameters. The low-dimensional, high-level feature vector output by the network is then fed into a GP with a base kernel function $ {k}_{\theta}\left(\cdot, \cdot \right) $ for regression. The posterior mean of the GP regression model is taken as the prediction for the given input data (adapted from Wilson et al., 2016).
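The composition in Figure 3 amounts to evaluating the base kernel on network features, $ k\left(x,{x}^{\prime}\right)={k}_{\theta}\left({g}_w(x),{g}_w\left({x}^{\prime}\right)\right) $, and computing the usual GP posterior mean. The following numpy sketch is an illustration only (an RBF base kernel, fixed random network weights, and a made-up regression problem are all assumptions; the paper trains the network weights and kernel hyperparameters jointly).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential base kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def nn_features(X, W1, W2):
    """Feature extractor g_w(x): a small ReLU network mapping inputs
    to a low-dimensional feature vector."""
    return np.maximum(0.0, X @ W1) @ W2

def dkl_predict(X_train, y_train, X_test, W1, W2, noise=1e-2):
    """GP posterior mean with the base kernel applied to NN features."""
    Z = nn_features(X_train, W1, W2)
    Zs = nn_features(X_test, W1, W2)
    K = rbf_kernel(Z, Z) + noise * np.eye(len(Z))
    Ks = rbf_kernel(Zs, Z)
    return Ks @ np.linalg.solve(K, y_train)

# Illustrative problem: 20 points in 4 dimensions, 2-dimensional features.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))
y = np.sin(X[:, 0])
W1 = rng.standard_normal((4, 6))
W2 = rng.standard_normal((6, 2))
pred = dkl_predict(X, y, X[:5], W1, W2)
```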

Table 1. Specification of the turbocharged four-stroke diesel-fueled compression ignition engine used in this work.

Table 2. Deep kernel learning hyperparameters considered for optimization.

Figure 4. Combined training and blind-test objective function value (Equation 15) for 1,000 Sobol points in the space of hyperparameters of Table 2. The point with the lowest objective value overall is circled.
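The quasi-random global search of Figure 4 can be sketched as follows. This is not the paper's code: a Halton sequence stands in for the Sobol sequence, and the objective, bounds, and hyperparameters (learning rate and depth only) are hypothetical.

```python
import numpy as np

def halton(n, dims):
    """Low-discrepancy Halton points in [0, 1)^dims, computed via the
    radical-inverse construction (a stand-in for a Sobol sequence)."""
    primes = [2, 3, 5, 7, 11, 13][:dims]
    out = np.empty((n, dims))
    for j, p in enumerate(primes):
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= p
                r += f * (k % p)
                k //= p
            out[i, j] = r
    return out

def objective(lr, layers):
    """Hypothetical surrogate-fit objective, minimized near
    lr = 1e-2 and a depth of 3."""
    return (np.log10(lr) + 2) ** 2 + (layers - 3) ** 2

# Map 1,000 unit-cube points onto illustrative hyperparameter ranges,
# evaluate the objective at each, and keep the best point (which a
# conventional grid search could then refine locally).
pts = halton(1000, 2)
lrs = 10 ** (-4 + 3 * pts[:, 0])       # learning rate in [1e-4, 1e-1]
depths = np.rint(1 + 4 * pts[:, 1])    # depth in {1, ..., 5}
scores = [objective(l, d) for l, d in zip(lrs, depths)]
best = int(np.argmin(scores))
```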

Table 3. Best values found for the hyperparameters in deep kernel learning through optimization.

Figure 5. Modeled $ {\mathsf{NO}}_x $ and soot responses against experimental ones using high-dimensional model representation (HDMR) and two sets of deep kernel learning (DKL) architectures and hyperparameter values. Soot values are logarithmic. For confidentiality reasons, no values are shown on the axes.

Table 4. Percentage of predictions within 20% of the experimental value and root mean square errors (RMSE) of $ {\mathsf{NO}}_x $ and soot regressions for high-dimensional model representation (HDMR) and deep kernel learning (DKL).
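Both metrics in Table 4 are straightforward to compute; the sketch below shows one plausible reading of them (RMSE, and the fraction of predictions within a 20% relative tolerance of the experimental value), with made-up example numbers.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean squared error between predictions and observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def pct_within(pred, obs, tol=0.20):
    """Percentage of predictions within `tol` (relative) of the
    corresponding experimental value."""
    return 100.0 * float(np.mean(np.abs(pred - obs) <= tol * np.abs(obs)))

# Illustrative values only.
obs = np.array([1.0, 2.0, 4.0])
pred = np.array([1.1, 1.5, 4.0])
err = rmse(pred, obs)
pct = pct_within(pred, obs)
```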

Figure 6. Densities of prediction values relative to experiment using high-dimensional model representation (HDMR) and deep kernel learning (DKL) regression for $ {\mathsf{NO}}_x $ and soot emissions, respectively. DKL generates better predictions in terms of the number of points within 20% of the experimental values, and the difference is greater for soot.

Table 5. CPU-time comparison between deep kernel learning (DKL) and high-dimensional model representation (HDMR).

Table 6. Root mean squared error (RMSE) performance of the considered surrogates on the diesel dataset.

Figure 7. Loss history of training deep kernel learning (DKL) as well as a plain deep neural network (DNN) for $ {\mathsf{NO}}_x $ regression.
