
Performance and accuracy assessments of an incompressible fluid solver coupled with a deep convolutional neural network

Published online by Cambridge University Press:  04 February 2022

Ekhi Ajuria Illarramendi*
Affiliation:
ISAE-SUPAERO, Université de Toulouse, Toulouse, France Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, CFD department, 31057 Toulouse, France
Michaël Bauerheim
Affiliation:
ISAE-SUPAERO, Université de Toulouse, Toulouse, France
Bénédicte Cuenot
Affiliation:
Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, CFD department, 31057 Toulouse, France
*
*Corresponding author. E-mail: ekhi.ajuria@cerfacs.fr

Abstract

The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers. Lately, deep learning, and especially convolutional neural networks (CNNs), has been introduced to solve this equation, leading to significant inference time reduction at the cost of a lack of guarantee on the accuracy of the solution. This drawback might lead to inaccuracies and potentially unstable simulations, and prevents fair assessments of the CNN speedup across different network architectures. To circumvent this issue, a hybrid strategy is developed, which couples a CNN with a traditional iterative solver to ensure a user-defined accuracy level. The CNN hybrid method is tested on two flow cases: (a) the flow around a 2D cylinder and (b) variable-density plumes with and without obstacles (both 2D and 3D), demonstrating remarkable generalization capabilities and ensuring both the accuracy and stability of the simulations. The error distribution of the predictions using several network architectures is further investigated in the plume test case. The introduced hybrid strategy allows a systematic evaluation of the CNN performance at the same accuracy level for various network architectures. In particular, the importance of incorporating multiple scales in the network architecture is demonstrated, since it improves both the accuracy and the inference performance compared with feedforward CNN architectures. Thus, in addition to the pure networks' performance evaluation, this study also provides numerous guidelines and results on how to build neural networks and computational strategies to predict unsteady flows with both accuracy and stability requirements.
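The hybrid strategy described above can be illustrated with a minimal sketch: a surrogate (here a placeholder `cnn_predict` callable standing in for the trained CNN) provides an initial pressure guess, and classical Jacobi iterations are applied only while the residual exceeds a user-defined threshold `eps_t`. The function names, the simple Dirichlet setup, and the max-norm residual choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def jacobi_step(p, rhs, h):
    """One Jacobi sweep for the 2D Poisson equation lap(p) = rhs (p = 0 on the boundary)."""
    p_new = np.zeros_like(p)
    p_new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                + p[1:-1, :-2] + p[1:-1, 2:]
                                - h**2 * rhs[1:-1, 1:-1])
    return p_new

def residual_linf(p, rhs, h):
    """Max-norm residual of the discrete Poisson equation on interior nodes."""
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1]) / h**2
    return np.max(np.abs(lap - rhs[1:-1, 1:-1]))

def hybrid_solve(rhs, h, cnn_predict, eps_t, max_iters=10_000):
    """CNN guess first; fall back to Jacobi only while the error exceeds eps_t."""
    p = cnn_predict(rhs)  # fast surrogate with no accuracy guarantee
    it = 0
    while residual_linf(p, rhs, h) > eps_t and it < max_iters:
        p = jacobi_step(p, rhs, h)
        it += 1
    return p, it
```

The better the surrogate's guess, the fewer fallback iterations are needed, which is what makes a same-accuracy comparison of architectures possible: every network is driven to the same threshold, and only the iteration count (hence cost) differs.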

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Open Practices
Open data
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Figure 1. Von Karman test case configuration.


Figure 2. Plume configuration with and without cylinder.


Figure 3. MonoScale network containing 395,601 parameters, where $ \circ $ corresponds to convolution with kernels of size 5 $ \times $ 5, $ \ast $ to kernels of size 3 $ \times $ 3, and $ \square $ to kernels of size 1 $ \times $ 1. R corresponds to the ReLU activation function. Each box indicates the number of feature maps, or channels ($ C $) present at each layer.


Figure 4. MultiScale network, with 418,640 parameters, where $ \circ $ corresponds to convolution with kernels of size 5 $ \times $ 5, $ \ast $ to kernels of size 3 $ \times $ 3, and $ \square $ to kernels of size 1 $ \times $ 1. R corresponds to the ReLU activation function, $ \searrow $ indicates a bilinear downsampling operation, whereas $ \nearrow $ corresponds to the bilinear interpolation. Each box indicates the number of feature maps, or channels ($ C $) present at each layer.


Figure 5. Unet with 443,521 parameters, where $ \ast $ corresponds to kernels of size 3 $ \times $ 3 and $ \square $ to kernels of size 1 $ \times $ 1. $ R $ corresponds to the ReLU activation function, $ \searrow $ indicates a MaxPooling operation, whereas $ \nearrow $ corresponds to the bilinear interpolation. At each scale, the MaxPooling step reduces the image to half of its original size, whereas the interpolation upsamples the image to double its size. Each box indicates the number of feature maps, or channels ($ C $), present at each layer.


Figure 6. Physics-driven neural network learning strategy combining a short-term loss and a long-term loss. The tiled box for the convolutional neural network indicates that the network parameters are frozen (i.e., they are the same as those used in the network at time $ t $).
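The loss combination sketched in Figure 6 can be expressed compactly: a short-term term penalizes the divergence of the corrected velocity at the current step, and a long-term term penalizes the divergence after unrolling the (frozen) network-plus-solver update for several future steps. The names `divergence`, `physics_loss`, and the generic `advance` callable below are illustrative assumptions, not the authors' training code.

```python
import numpy as np

def divergence(u, v, h):
    """Centred-difference divergence du/dx + dv/dy on interior nodes of a 2D grid."""
    return (u[1:-1, 2:] - u[1:-1, :-2] + v[2:, 1:-1] - v[:-2, 1:-1]) / (2.0 * h)

def physics_loss(u, v, h, advance, n_future=4):
    """Short-term loss on the current field plus a long-term loss obtained by
    unrolling `advance` (the frozen network + solver step) for n_future steps."""
    short_term = np.mean(divergence(u, v, h) ** 2)
    uf, vf = u, v
    for _ in range(n_future):  # network parameters inside `advance` are held fixed
        uf, vf = advance(uf, vf)
    long_term = np.mean(divergence(uf, vf, h) ** 2)
    return short_term + long_term
```

Freezing the unrolled copies means gradients only flow through the current-step prediction, which keeps training cost bounded while still penalizing error accumulation over future steps.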


Figure 7. Iso-contours of the z-component of vorticity for the Von Karman test case at Reynolds 100, 300, and 1,000, for the four studied networks and the reference Jacobi 400 solver. Dashed lines correspond to negative iso-contours, and continuous lines to positive iso-contours.


Table 1. Strouhal values and relative error compared to the experimental values.


Figure 8. Iso-contours of the amplitude of the fundamental mode, for the Von Karman test case at Reynolds 100, 300, and 1,000, for the four studied networks and the reference Jacobi 400 solver.


Figure 9. Sketch of the plume-cylinder configuration. $ {\tilde{h}}_x $ and $ {\tilde{h}}_y $ are the coordinates of the plume head location.


Figure 10. Plume head coordinates $ {\tilde{h}}_x $ (······) and $ {\tilde{h}}_y $ (- - -) for the case without (a) and with (b) obstacle at $ {R}_i=14.8 $ obtained by several networks: ▼MonoScale, $ \blacksquare $ MultiScale, $ \bullet $ Unet, and $ \times $ SmallScale. The gray zones show the range of the plume head position obtained with the Jacobi solver, using between 200 and 10,000 iterations.


Figure 11. Divergence error percentiles and density iso-contours (in white) of the four studied networks and a reference Jacobi solver with 400 iterations, at time $ \tilde{t}=0.29 $ for the case with no cylinder (top row) and time $ \tilde{t}=0.41 $ for the cylinder case (bottom row).


Figure 12. Sketch of the hybrid method, which is activated depending on the error $ \mathrm{\mathcal{E}} $ compared with the threshold value $ {\mathrm{\mathcal{E}}}_t $.


Figure 13. $ {\mathrm{\mathcal{E}}}_{\infty } $ (- - -) and $ {\mathrm{\mathcal{E}}}_1 $ (······) for the case without (a) and with (b) cylinder obtained by several architectures: ▼MonoScale, $ \blacksquare $ MultiScale, $ \bullet $ Unet, and $ \times $ SmallScale.


Figure 14. $ {\mathrm{\mathcal{E}}}_{\infty } $ (- - -) and $ {\mathrm{\mathcal{E}}}_1 $ (······), and plume head position $ {\tilde{h}}_y $ (bottom), where (a,c) correspond to $ {\mathrm{\mathcal{E}}}_t=\min \left({\mathrm{\mathcal{E}}}_{\infty}\right) $ and (b,d) $ {\mathrm{\mathcal{E}}}_t=\min \left({\mathrm{\mathcal{E}}}_1\right) $ evaluated on the no cylinder test case, at $ {R}_i=14.8 $, obtained by several networks: ▼MonoScale, $ \blacksquare $ MultiScale, $ \times $ SmallScale, and $ \bullet $ Unet, as well as the $ \star $ Jacobi solver.


Figure 15. $ {\mathrm{\mathcal{E}}}_{\infty } $ (- - -) and $ {\mathrm{\mathcal{E}}}_1 $ (······), and plume head position ($ {\tilde{h}}_y $, $ {\tilde{h}}_x $) (bottom), where (a,c) correspond to $ {\mathrm{\mathcal{E}}}_t=\min \left({\mathrm{\mathcal{E}}}_{\infty}\right) $ and (b,d) $ {\mathrm{\mathcal{E}}}_t=\min \left({\mathrm{\mathcal{E}}}_1\right) $ evaluated on the cylinder test case, at $ {R}_i=14.8 $, obtained by several networks: ▼MonoScale, $ \blacksquare $ MultiScale, $ \times $ SmallScale, and $ \bullet $ Unet, as well as the $ \star $ Jacobi solver.


Figure 16. Kernel density estimation at four times ($ \tilde{t} $ = 0.10, 0.20, 0.29, and 0.39) of the cases where $ \mathrm{\mathcal{E}}={\mathrm{\mathcal{E}}}_{\infty } $ (top) and $ \mathrm{\mathcal{E}}={\mathrm{\mathcal{E}}}_1 $ (bottom) of the no cylinder test case, at $ {R}_i=14.8 $, obtained by several networks: ▼MonoScale, $ \blacksquare $ MultiScale, $ \times $ SmallScale, and $ \bullet $ Unet, as well as the $ \star $ Jacobi solver.


Figure 17. Comparison between the Unet network and a reference Jacobi 400 solver for a 3D plume without obstacle at Richardson 14.8.


Table 2. Inference time of each network to produce a 2D ($ {512}^2 $ grid) and 3D ($ {128}^3 $ grid) pressure field without Jacobi iterations, as well as the inference time of a Jacobi solver.


Figure 18. Evolution of the number of floating-point operations (in giga units) with the 2D domain size (varying from 1,024 to 4.2 × $ {10}^6 $ cells) needed in a single network inference for the four studied networks (a): $ \blacksquare $ MultiScale, $ \times $ SmallScale, ▼MonoScale, and $ \bullet $ Unet, and for the scales composing the Unet network (b): $ \bullet $ Unet, ▼$ {n}^2 $, ▼$ {n}_{1/2}^2 $, $ \times $$ {n}_{1/2}^2 $, $ \star $$ {n}_{1/4}^2 $, + $ {n}_{1/8}^2 $, and ▲$ {n}_{1/16}^2 $.


Figure 19. Number of Jacobi iterations needed to ensure $ {\mathrm{\mathcal{E}}}_t=\min \left({\mathrm{\mathcal{E}}}_1\right) $ on the noncylinder (a) and the cylinder (b) 2D test cases with various networks: ▼MonoScale, $ \blacksquare $ MultiScale, $ \times $ SmallScale, and $ \bullet $ Unet, as well as the $ \star $ Jacobi solver.


Figure 20. Time $ {t}_p $ (a) and acceleration factor $ \eta $ (b) for the noncylinder (- - -) and cylinder 2D test case (······), for the four studied networks: ▼MonoScale, $ \blacksquare $ MultiScale, $ \times $ SmallScale, and $ \bullet $ Unet, as well as the $ \star $ Jacobi solver.


Figure 21. Time taken to perform the network prediction $ {t}_{\mathrm{inf}} $ (a) for the four studied networks: ▼ MonoScale, $ \blacksquare $ MultiScale, $ \times $ SmallScale, and $ \bullet $ Unet, on a 2D grid size varying from 1,024 to 4.2 × $ {10}^6 $ and (b) for the 3D Unet network ($ \bullet $) on a 3D grid size varying from $ {16}^3 $ to $ {192}^3 $.


Figure 22. Time taken by each scale ($ \bullet $$ {n}^2 $, ▼$ {n}_{1/2}^2 $, $ \times $$ {n}_{1/2}^2 $, $ \star $$ {n}_{1/4}^2 $, + $ {n}_{1/8}^2 $, and ▲ $ {n}_{1/16}^2 $) for the MultiScale (a) and Unet (b) networks, to perform a single inference on a 2D grid size varying from 1,024 to 4.2 × $ {10}^6 $ cells.

Supplementary material: PDF

Ajuria Illarramendi et al. supplementary material
