Data-driven multifidelity surrogate models for rocket engines injector design

Published online by Cambridge University Press:  08 January 2025

Jose Felix Zapata Usandivaras*
Affiliation:
Fédération ENAC ISAE-SUPAERO ONERA, Université de Toulouse, Toulouse, France
Michael Bauerheim
Affiliation:
Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), Toulouse, France
Bénédicte Cuenot
Affiliation:
Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), Toulouse, France
Annafederica Urbano
Affiliation:
Fédération ENAC ISAE-SUPAERO ONERA, Université de Toulouse, Toulouse, France
*
Corresponding author: Jose Felix Zapata Usandivaras; Email: jose.zapata-usandivaras@isae-supaero.fr

Abstract

Surrogate models of turbulent diffusion flames could play a strategic role in the design of liquid rocket engine combustion chambers. The present article introduces a method to obtain data-driven surrogate models for coaxial injectors by leveraging an inductive transfer-learning strategy over a U-Net with available multifidelity Large Eddy Simulation (LES) data. The resulting models preserve reasonable accuracy while reducing the offline computational cost of data generation. First, a database of about 100 low-fidelity LES simulations of shear-coaxial injectors, operating with gaseous oxygen and gaseous methane as propellants, has been created. The design of experiments explores three variables: the chamber radius, the recess length of the oxidizer post, and the mixture ratio. Subsequently, U-Nets were trained on this dataset to provide reasonable approximations of the time-averaged two-dimensional flow field. Although neural networks are efficient non-linear data emulators, in purely data-driven approaches their quality is directly impacted by the precision of the data they are trained on. Thus, a high-fidelity (HF) dataset of about 10 simulations has been created, at a much greater cost per sample. The amalgamation of low- and high-fidelity data during the transfer-learning process improves the surrogate model's fidelity without excessive additional cost.
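The transfer-learning idea described above can be illustrated with a deliberately minimal sketch: pretrain a surrogate on plentiful but biased low-fidelity (LF) data, then fine-tune it on a handful of accurate high-fidelity (HF) samples, warm-starting from the pretrained weights. Everything below (the linear model, the data, the 0.8 fidelity bias) is invented for illustration; the paper's actual surrogates are U-Nets trained on LES fields.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, w0, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error, from initial weights w0."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_true = np.array([1.0, -2.0, 0.5])      # hypothetical HF response
X_lf = rng.normal(size=(100, 3))         # ~100 cheap low-fidelity samples
y_lf = X_lf @ (0.8 * w_true)             # LF data: correlated with HF but biased
X_hf = rng.normal(size=(10, 3))          # ~10 expensive high-fidelity samples
y_hf = X_hf @ w_true

w_lf = fit(X_lf, y_lf, np.zeros(3))                  # pretrain on LF data
w_mf = fit(X_hf, y_hf, w_lf, steps=20)               # transfer: warm-start from LF weights
w_scratch = fit(X_hf, y_hf, np.zeros(3), steps=20)   # HF-only baseline, same budget

err_mf = np.linalg.norm(w_mf - w_true)
err_scratch = np.linalg.norm(w_scratch - w_true)
```

With a warm start, fine-tuning begins much closer to the high-fidelity solution, so the same small HF budget yields a smaller error; this is the mechanism the multifidelity strategy exploits.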

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press
Figure 1. Drawing of the geometry of the shear-coaxial injector used, with key dimensions identified. The fuel and oxidizer inlet channels and the outlet are also indicated.

Table 1. Parameter range and reference values for the three parameters of the design of experiments

Figure 2. Joint plots of the DOE points projected onto 2D subspaces, and a consumption-statistics graph. Colored histograms of the abscissae are displayed on top. a) $ {l}_r $ versus $ O/F $, b) $ O/F $ versus $ {d}_c $, c) $ {d}_c $ versus $ {l}_r $, d) CPU hours consumed versus the number of tetrahedral cells in the meshes used.
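A design of experiments over the three parameters can be sketched with a simple Latin hypercube sampler, which places exactly one sample in each of $n$ equal strata per dimension. The parameter bounds below are placeholders for illustration, not the actual values of Table 1.

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """n stratified samples over len(bounds) dimensions, one per stratum."""
    d = len(bounds)
    # each column: a random permutation of strata 0..n-1, jittered within the stratum
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.random((n, d))) / n        # samples in [0, 1)
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

# hypothetical parameter ranges (the paper's Table 1 values differ):
bounds = [(1.5, 4.0),    # mixture ratio O/F
          (0.0, 10.0),   # recess length l_r [mm]
          (5.0, 10.0)]   # chamber radius d_c [mm]
rng = np.random.default_rng(42)
doe = latin_hypercube(100, bounds, rng)          # ~100 LF design points
```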

Figure 3. Mesh quality statistics across the ensemble of datasets: a) histograms of tetrahedral cell aspect ratios, and b) minimum dihedral angle of the tetrahedra.

Figure 4. Instantaneous longitudinal cut of the temperature field at two levels of mesh resolution: a fine mesh ($ {\Delta}_0\sim 50\mu $m) on top, and a coarse one ($ {\Delta}_0\sim 100\mu $m) on the bottom.

Figure 5. Evolution of relevant quantities of interest (QoIs) along the injector axial dimension in a 30° sector, with parameters corresponding to the TUM reference case (Chemnitz et al., 2018). For reference, $ {l}_r=0 $ mm, $ O/F=2.62 $ and $ {d}_c=6.77 $ mm, while $ {l}_c=96.67 $ mm designates the chamber's length. a) Azimuthally and time-averaged wall heat flux $ {\dot{Q}}_{wall} $ for three different mesh sizes, a fine case with an adjusted viscosity ($ \mu $) power-law, and experimental measurements from Chemnitz et al. b) Cross-section and time-averaged temperature ($ \overline{T}(x) $, full lines) and cumulative heat release ($ QHR(x) $, dashed lines) for three different mesh sizes.

Figure 6. Side-by-side comparison of two levels of mesh resolution: a) longitudinal cut of the averaged temperature field with oxygen mass fraction isolines $ {\overline{Y}}_{O_2}=0.1,\hskip0.3em 0.8 $ and the stoichiometric line $ {Z}_{st} $ superimposed, and c) longitudinal cut of the velocity-u root mean squared (RMS) field. The fine-mesh solution, $ {\Delta}_0\sim 50\mu $m ($ {N}_{tet}=1.02\times {10}^7 $), is shown on top; below are the solutions for the coarse case, $ {\Delta}_0\sim 100\mu $m ($ {N}_{tet}=9.9\times {10}^5 $), which corresponds to the resolution adopted in datasets DS1 and DS2. The scale of the figures has been modified to ease visualization.

Table 2. Summary of LES datasets of shear-coaxial injector simulations. The sampled design space in all cases is detailed in Table 1

Figure 7. The U-Net architecture used. The input layer expects a tensor of four channels, each of size $ 128\times 256 $. These channels correspond to the three normalized parameters $ O/F $, $ {l}_r $ and $ {d}_c $, and a one-hot encoding tensor, the mask, which delimits the fluid region. The output layer returns a single channel of the same resolution, corresponding to the normalized predicted quantity. The arrows connecting the encoding layers to the decoding ones are skip-connections.
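The input encoding described in the caption can be sketched as follows: each scalar parameter is normalized (min-max normalization is an assumption here) and broadcast over the $ 128\times 256 $ grid, then stacked with the binary fluid mask into a four-channel tensor. The mask geometry below is invented for illustration.

```python
import numpy as np

H, W = 128, 256

def make_input(of_ratio, l_r, d_c, mask, bounds):
    """Stack three normalized parameters (broadcast over the grid)
    with a binary fluid mask into a (4, H, W) input tensor."""
    params = np.array([of_ratio, l_r, d_c], dtype=float)
    lo, hi = np.array(bounds, dtype=float).T
    norm = (params - lo) / (hi - lo)              # assumed min-max normalization
    channels = np.broadcast_to(norm[:, None, None], (3, H, W))
    return np.concatenate([channels, mask[None]], axis=0)

# hypothetical fluid mask: fluid everywhere except a solid band at the top
mask = np.ones((H, W))
mask[:8, :] = 0.0
# TUM-reference-like parameters, with placeholder bounds
x = make_input(2.62, 0.0, 6.77, mask,
               bounds=[(1.5, 4.0), (0.0, 10.0), (5.0, 10.0)])
```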

Table 3. Best-performing networks resulting from the Bayesian hyperparameter optimization. The parameter values and the network performances over the validation and test dataset (DS2) are given

Table 4. List of the hyperparameters varied for the HF model learning, with their preset intervals and quantization, for each of the field-quantity models provided

Table 5. Best-performing networks resulting from the Bayesian hyperparameter optimization for the MFMs

Table 6. Performance comparison of the LF and MF models

Figure 8. Predictions of the MFM (top) and the LFM (bottom) for the test-dataset sample with parameters $ O/F=2.8104 $, $ {l}_r=7.2 $ mm and $ {d}_c=7.3 $ mm. a) Time-averaged temperature field $ \overline{T} $ with oxygen mass fraction isolines $ {\overline{Y}}_{O_2}=0.1,0.8 $ as well as the stoichiometric line $ {Z}_{st} $ in white. b) Time-averaged axial velocity component $ \overline{u} $ with the predicted stoichiometric line superimposed in dashed black. c) Velocity-u RMS field ($ {u}_{RMS} $) with the predicted stoichiometric line superimposed in dashed black. The figures' aspect ratio has been adjusted to ease visualization.

Figure 9. Cross-section averages of predictions and ground truth for: a) the time-averaged temperature field and b) the time-averaged oxygen mass fraction. Dashed gray lines indicating the positions of one-third ($ {l}_c/3 $) and two-thirds ($ 2{l}_c/3 $) of the chamber length have been added for reference. The origin $ x=0 $ corresponds to the location of the injection plane.
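A cross-section average of this kind reduces a 2D field to a 1D axial profile. A minimal sketch, assuming an axisymmetric field on a uniform radial grid with area weights proportional to radius (the paper's exact quadrature is not specified here):

```python
import numpy as np

def cross_section_average(field, r):
    """Area-weighted cross-section average of an axisymmetric field.
    field: (n_x, n_r) samples; r: radii of the n_r cells (uniform spacing assumed),
    so each cell's annular area is proportional to r * dr."""
    w = r / r.sum()                 # normalized area weights
    return field @ w                # -> 1D profile along x

# synthetic field: temperature rising along x, uniform in r (values are invented)
T = np.tile(np.linspace(300.0, 3000.0, 50)[:, None], (1, 20))
r = np.linspace(0.1, 6.77, 20)
T_bar = cross_section_average(T, r)
```

For a field uniform in r, the weighted average simply recovers the axial profile, which makes the weighting easy to sanity-check.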

Figure 10. Errors between predictions (pred) and ground truth (gt), over the selected test sample, for the MF (top) and LF (bottom) models on: the time-averaged temperature ($ \overline{T} $, a), oxygen mass fraction ($ {\overline{Y}}_{O_2} $, b), and velocity-u ($ \overline{u} $, c), as well as the velocity-u RMS field ($ {u}_{RMS} $, d). The figures' aspect ratio has been adjusted to ease visualization.

Figure 11. MF-model error metrics per field across the samples of the test dataset DS4. The error displayed per sample is estimated by averaging the errors from the models obtained in the six folds conducted during training. The dataset averages of the errors are indicated as black dots superimposed on each bar, and the error bars on each dot indicate the dispersion of the associated metric over DS4. For reference, the error metrics calculated for the predictions of the LFMs over DS4 have been added.
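The per-sample, fold-averaged error described in the caption can be sketched as below. The relative-error definition (a masked relative L1 norm) is an assumption for illustration; the paper's exact formula for $ {\overline{E}}_{rel, test} $ may differ.

```python
import numpy as np

def rel_error(pred, gt, mask):
    """Masked relative L1 error between a predicted and a ground-truth field
    (one plausible definition; the paper's exact metric may differ)."""
    m = mask.astype(bool)
    return np.abs(pred[m] - gt[m]).sum() / np.abs(gt[m]).sum()

def fold_averaged_errors(fold_preds, gts, mask):
    """Per-sample metric averaged over the models from each fold.
    fold_preds: (n_folds, n_samples, H, W); gts: (n_samples, H, W)."""
    per_fold = np.array([[rel_error(p, g, mask) for p, g in zip(fp, gts)]
                         for fp in fold_preds])
    per_sample = per_fold.mean(axis=0)     # one averaged error per test sample
    return per_sample, per_sample.mean()   # plus the dataset average (black dots)

# synthetic check: two folds, one biased 10% high and one 10% low
gts = np.ones((3, 4, 4))
fold_preds = np.stack([1.1 * gts, 0.9 * gts])
per_sample, dataset_avg = fold_averaged_errors(fold_preds, gts, np.ones((4, 4)))
```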

Figure 12. Localized MF models error metrics: a) Injector representative schematic highlighting the locations of the regions identified. b) Bar plot showing the corresponding error metric, either $ {\overline{E}}_{rel, test} $ or $ {\overline{E}}_{\mathit{\operatorname{norm}}, test} $, per field for the different sectors identified.

Figure 13. Feature-gradient statistics of the mean-value U-Net outputs ($ \overline{T} $ models). a) Histogram of gradients in the source network, expressed as a probability density, and b) joint histogram of the gradients of the source network (x-axis) and target network (y-axis). In the latter, both variables have been normalized by their respective statistical averages and standard deviations.
