
Model order reduction based on Runge–Kutta neural networks

Published online by Cambridge University Press:  08 September 2021

Qinyu Zhuang*
Affiliation:
Technology, Siemens AG, Bayern, Germany
Juan Manuel Lorenzi
Affiliation:
Technology, Siemens AG, Bayern, Germany
Hans-Joachim Bungartz
Affiliation:
Chair of Scientific Computing, Technical University of Munich, Bayern, Germany
Dirk Hartmann
Affiliation:
Technology, Siemens AG, Bayern, Germany
*Corresponding author. E-mail: qinyu.zhuang@siemens.com

Abstract

Model order reduction (MOR) methods enable the generation of real-time-capable digital twins, which can unlock various novel value streams in industry. While traditional projection-based methods are robust and accurate for linear problems, incorporating machine learning to deal with nonlinearity has become a popular alternative for reducing complex problems. Such methods are independent of the numerical solver for the full-order model and keep the whole workflow nonintrusive. They usually consist of two steps: dimension reduction by a projection-based method, followed by model reconstruction by a neural network (NN). In this work, we apply modifications to both steps and investigate their impact using three different simulation models. In all cases, proper orthogonal decomposition (POD) is used for dimension reduction. For this step, generating the snapshot database with constant input parameters is compared against generating it with time-dependent input parameters. For the model reconstruction step, three NN architectures are compared: the multilayer perceptron (MLP), the explicit Euler NN (EENN), and the Runge–Kutta NN (RKNN). MLPs learn the system state directly, whereas EENNs and RKNNs learn the derivative of the system state and predict the new state like a numerical integrator. In our tests, RKNNs show the advantage of a network architecture informed by a higher-order numerical scheme.
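To make the dimension-reduction step concrete, the following is a minimal POD sketch in Python on a toy snapshot matrix; the function name and toy data are ours and not from the paper:

```python
import numpy as np

# Minimal POD sketch (ours, not the authors' code): snapshots of the
# full-order state y(t_j) are stored as columns of Y, and the reduced
# basis consists of the leading left singular vectors.
def pod_basis(snapshots, r):
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                     # reduced basis V (n x r)

rng = np.random.default_rng(0)
# Rank-10 toy snapshot matrix: n = 1000 DOFs, m = 200 snapshots.
Y = rng.standard_normal((1000, 10)) @ rng.standard_normal((10, 200))
V = pod_basis(Y, r=10)
Y_r = V.T @ Y                           # reduced coordinates y_r = V^T y
# Relative projection error; ~0 here since the toy data have rank 10.
print(np.linalg.norm(Y - V @ Y_r) / np.linalg.norm(Y))
```

The NN in the second step then operates only on the r-dimensional reduced coordinates rather than on the full-order state.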

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

Figure 1. An example showing the relation and difference between static-parameter sampling (SPS) and dynamic-parameter sampling (DPS). We assume the parameter space is $[0,100]\times[0,100]$. Orange: the constant parameter configuration $\boldsymbol{\mu}=[80,40]$ is selected for running one snapshot simulation. Blue: the time-dependent parameter configuration $\boldsymbol{\mu}(t)=\left[80\,|\sin(4\pi t/100)|,\;40\,|\sin(4\pi t/100)|\right]$ is selected for running one snapshot simulation.
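As a concrete illustration of the two strategies in this caption, here is a short NumPy sketch (variable names are ours) generating one SPS and one DPS input trajectory over the horizon $[0,100]$:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 501)        # simulation time horizon [0, 100]

# SPS: the input parameters stay constant over the whole snapshot simulation.
mu_sps = np.tile([80.0, 40.0], (t.size, 1))

# DPS: the input parameters vary in time within the same parameter space.
envelope = np.abs(np.sin(4.0 * np.pi * t / 100.0))
mu_dps = np.column_stack([80.0 * envelope, 40.0 * envelope])
```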


Figure 2. Schematic of the multilayer perceptron (MLP) architecture. The MLP approximates the new state $\boldsymbol{y}_{r,i}(t_{j+1})$ from the given input $\boldsymbol{y}_{r,i}(t_j)$ and $\boldsymbol{\mu}(t_j)$.
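In code, this variant amounts to a direct one-step map; a minimal sketch of the interface, with a hypothetical helper name:

```python
import numpy as np

# Hypothetical one-step interface for the MLP ROM of Figure 2: the trained
# network maps the current reduced state and input directly to the next state.
def mlp_step(mlp, y_r, mu):
    x = np.concatenate([y_r, mu])   # input: [y_r(t_j), mu(t_j)]
    return mlp(x)                   # output: approximation of y_r(t_{j+1})
```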


Figure 3. Schematic of an explicit Euler neural network (EENN). The EENN uses a multilayer perceptron (MLP) to approximate the right-hand side of Equation (8); the output of the MLP is assembled as described in Equation (15).
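Equation (15) is not reproduced on this page; assuming the standard forward Euler update, the step can be sketched as follows (variable names are ours):

```python
import numpy as np

# EENN step: the core MLP f approximates the reduced right-hand side of
# Equation (8), and the new state is one explicit Euler step of size dt.
def eenn_step(f, y_r, mu, dt):
    return y_r + dt * f(np.concatenate([y_r, mu]))
```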


Figure 4. Schematic of a Runge–Kutta neural network (RKNN). An RKNN has a multilayer perceptron (MLP) as its core network, which approximates the right-hand side of Equation (8); the output of the network is assembled as described in Equation (16).
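Equation (16) is likewise not reproduced here; assuming the classical fourth-order Runge–Kutta scheme, the RKNN step evaluates the same core MLP at the intermediate stages, as in this sketch (variable names are ours):

```python
import numpy as np

# RKNN step: the core MLP f approximates the reduced right-hand side of
# Equation (8); mu is held fixed over the step for simplicity.
def rknn_step(f, y_r, mu, dt):
    g = lambda y: f(np.concatenate([y, mu]))
    k1 = g(y_r)
    k2 = g(y_r + 0.5 * dt * k1)
    k3 = g(y_r + 0.5 * dt * k2)
    k4 = g(y_r + dt * k3)
    return y_r + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```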


Figure 5. Test case: heat sink model.


Figure 6. Test case: gap-radiation model.


Figure 7. Test case: heat exchanger model.


Figure 8. Heat sink model: singular values of static-parameter sampling (SPS) snapshots and dynamic-parameter sampling (DPS) snapshots.


Figure 9. Gap-radiation model: singular values of static-parameter sampling (SPS) snapshots and dynamic-parameter sampling (DPS) snapshots.


Figure 10. Heat exchanger model: singular values of static-parameter sampling (SPS) snapshots and dynamic-parameter sampling (DPS) snapshots.


Table 1. SPS versus DPS: heat sink model, relative error (%).


Table 2. SPS versus DPS: gap-radiation model, relative error (%).


Table 3. SPS versus DPS: heat exchanger model, relative error (%).


Table 4. MLP versus RKNN: heat sink model, relative error (%).


Table 5. MLP versus RKNN: gap-radiation model, relative error (%).


Table 6. MLP versus RKNN: heat exchanger model, relative error (%).


Figure 11. Time grids for collecting snapshots and for prediction with the reduced order model (ROM).


Figure 12. From left to right, the graphs correspond to the heat sink model, the gap-radiation model, and the heat exchanger model. The reduced order models (ROMs) are generated from the coarse and the fine time grids, respectively. Predictions are made with time steps that are always finer than the step sizes used in the training data. The original step sizes of the two training snapshot sets are marked with green (dataset A) and black (dataset B) vertical lines.


Figure 13. From left to right, the graphs correspond to the heat sink model, the gap-radiation model, and the heat exchanger model. The tests aim to evaluate the accuracy of the learned reduced order models (ROMs).


Figure A1. Projection error: heat sink model.


Figure A2. Projection error: gap-radiation model.


Figure A3. Projection error: heat exchanger model.
