
Super-resolution of turbulence with dynamics in the loss

Published online by Cambridge University Press:  09 January 2025

Jacob Page*
Affiliation:
School of Mathematics, University of Edinburgh, Edinburgh, EH9 3FD, UK
Email address for correspondence: jacob.page@ed.ac.uk

Abstract

Super-resolution of turbulence refers to the prediction of high-resolution snapshots of a flow from coarse-grained observations. This is typically accomplished with a deep neural network, and training usually requires a dataset of high-resolution images. An approach is presented here in which robust super-resolution can be performed without access to high-resolution reference data, as might be expected in an experiment. The training procedure is similar to data assimilation: the model learns to predict an initial condition that leads to accurate coarse-grained predictions at later times, while only ever being shown coarse-grained observations. Implementing the approach requires a fully differentiable flow solver in the training loop so that predictions can be time marched. A range of models is trained on data generated from forced, two-dimensional turbulence. The networks achieve reconstruction errors similar to those obtained with ‘standard’ super-resolution approaches that use high-resolution data. Furthermore, they are competitive with standard variational data assimilation for state estimation on individual trajectories, outperforming the variational approach at the initial time and remaining robust when unrolled in time, over which the performance of the standard data-assimilation algorithm improves.
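The training objective described in the abstract can be made concrete with a short sketch. The following is a minimal, hedged illustration in JAX of the coarse-only, dynamics-in-the-loss idea, assuming an average-pooling coarse-graining operator; `step`, `apply_net`, `M` and `N_STEPS` are illustrative stand-ins, not the paper's implementation (which uses a full differentiable Navier-Stokes solver and a deep CNN, see figure 2).

```python
# Sketch of a coarse-only, dynamics-in-the-loss training objective.
# All names here are illustrative assumptions, not the paper's code.
import jax
import jax.numpy as jnp

M = 16        # coarse-graining factor (the paper uses M = 16 and 32)
N_STEPS = 50  # solver steps between observations (illustrative)

def step(u, dt=1e-3, nu=1e-3):
    # Placeholder dynamics: one explicit diffusion step on a periodic grid.
    # A real implementation would be a differentiable flow solver.
    lap = (jnp.roll(u, 1, 0) + jnp.roll(u, -1, 0)
           + jnp.roll(u, 1, 1) + jnp.roll(u, -1, 1) - 4.0 * u)
    return u + dt * nu * lap

def coarse_grain(u, m=M):
    # Average-pool a high-resolution field onto the coarse observation grid.
    nx, ny = u.shape
    return u.reshape(nx // m, m, ny // m, m).mean(axis=(1, 3))

def apply_net(params, y):
    # Placeholder 'network': bilinear upsampling plus a learned correction.
    # The paper uses a deep CNN; this stub just keeps the sketch runnable.
    u = jax.image.resize(y, (y.shape[0] * M, y.shape[1] * M),
                         method="bilinear")
    return u + params["correction"]

def coarse_only_loss(params, y_obs):
    # y_obs: coarse observations at times t_0 .. t_K, shape (K+1, n, n).
    # The network proposes a high-resolution initial condition from y_obs[0];
    # the misfit is measured only on the coarse grid at later times, so no
    # high-resolution reference data are ever required.
    u0 = apply_net(params, y_obs[0])
    def obs_interval(u, y_next):
        u = jax.lax.fori_loop(0, N_STEPS, lambda i, v: step(v), u)
        return u, (coarse_grain(u) - y_next) ** 2
    _, sq_err = jax.lax.scan(obs_interval, u0, y_obs[1:])
    return jnp.mean(sq_err)

# Gradients flow back through the time marching by automatic differentiation:
grad_fn = jax.jit(jax.grad(coarse_only_loss))
```

Because the solver sits inside the loss, reverse-mode differentiation propagates sensitivities back through every time step to the super-resolved initial condition; this is the mechanism that allows training from coarse observations alone.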

Information

Type
JFM Rapids
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

Figure 1. Snapshots of spanwise vorticity from example trajectories at $Re=100$ ((a), contours run between $\pm 10$) and $Re=1000$ ((b), contours run between $\pm 15$). The black grids in the leftmost panels highlight coarse-graining by a factor of 16 (thin lines) or 32 (thick lines) relative to the simulation resolution. The Taylor microscale (relative to the length scale $L_x^* / (2\pi)$) is indicated by the labelled vertical red lines, while the blue line measures $5\pi \eta_K$.


Figure 2. Schematic of the neural network architecture adopted in this study. ‘U.S.’ indicates upsampling. The output of the network is time marched to compute the loss function ((2.3) or (2.4)).


Figure 3. Summary of network performance at both $Re=100$ and $Re=1000$. (a) Average test-set errors (symbols; lines show $\pm$ one standard deviation). Models trained using the ‘standard’ super-resolution loss (2.2) are shown in green, the time-advancement loss on the high-resolution grid (2.3) in blue and the coarse-only time-dependent loss (2.4) in red. (b) As (a), but errors are computed after advancing ground truth and predictions forward in time by $T$. (c) Example model performance for a single snapshot at both Reynolds numbers and both coarsening factors. Streamwise velocity is shown with contours running between $\pm 3$. The output of models trained with losses (2.3) and (2.4) is shown.
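The loss functions (2.2)–(2.4) are not reproduced on this page; the forms below are plausible reconstructions consistent with the abstract and this caption, with notation assumed here rather than taken from the paper: $f_\theta$ is the network, $\mathcal{C}$ the coarse-graining operator, $\Phi^{t}$ the time-$t$ flow map of the solver and $\omega(t_k)$ the true state at observation time $t_k$.

$$\mathcal{L}_{SR} = \big\| f_\theta\big(\mathcal{C}\omega(0)\big) - \omega(0) \big\|^2 \qquad \text{(cf. (2.2))},$$

$$\mathcal{L}_{HR} = \sum_{k} \big\| \Phi^{t_k}\big(f_\theta(\mathcal{C}\omega(0))\big) - \omega(t_k) \big\|^2 \qquad \text{(cf. (2.3))},$$

$$\mathcal{L}_{coarse} = \sum_{k} \big\| \mathcal{C}\,\Phi^{t_k}\big(f_\theta(\mathcal{C}\omega(0))\big) - \mathcal{C}\,\omega(t_k) \big\|^2 \qquad \text{(cf. (2.4))}.$$

Only the last form requires nothing but coarse observations: both the prediction and the reference are compared after coarse-graining.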


Figure 4. Comparison of model predictions under time advancement at $Re=1000$ with variational data assimilation for example evolutions at both $M=16$ (a) and $M=32$ (b). Black lines are the time-evolved predictions from data assimilation, red lines show the performance of ‘standard’ super-resolution and blue lines show the performance of the time-dependent loss functions ((2.3) and (2.4)). Solid blue is the coarse-only version, and the symbols identify times at which the fields are visualised in figures 5 and 6. Grey regions highlight the assimilation window, where $T_{DA}=1$, and the unroll times used to train the networks, $T=0.5$ at $M=16$ and $T=1$ at $M=32$.
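For context, variational data assimilation in this setting typically solves a strongly constrained, 4D-Var-style optimisation over the assimilation window $[0, T_{DA}]$; a generic form (assumed here for illustration, not taken from the paper) is

$$\min_{\omega_0} \; \sum_{t_k \in [0,\, T_{DA}]} \big\| \mathcal{C}\,\Phi^{t_k}(\omega_0) - \boldsymbol{y}(t_k) \big\|^2,$$

where $\boldsymbol{y}(t_k)$ are the coarse observations. The key contrast with the network approach is amortisation: data assimilation optimises $\omega_0$ afresh for each trajectory, whereas the trained network produces its estimate in a single forward pass.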


Figure 5. Evolution of the out-of-plane vorticity at $Re=1000$, above the evolution of the reconstructed field from data assimilation and from the coarse-only super-resolution model (coarse-graining here is $M=16$). Note the snapshots correspond to the trajectory reported in figure 4 and are extracted at the times indicated by the symbols in that figure. Contours run between $\pm 15$. For $t\in \{0, 0.1\}$, a low-pass-filtered vorticity has been overlaid in red/blue lines to show the reproduction of the larger-scale features which would otherwise be masked by small-scale noise. The cutoff wavenumber for the filter matches the coarse-graining and the contours are spaced by $\Delta \omega = 3$.


Figure 6. Evolution of streamwise velocity at $Re=1000$, above the evolution of the reconstructed field from data assimilation and from the coarse-only super-resolution model (coarse-graining here is $M=32$). Note the snapshots correspond to the trajectory reported in figure 4 and are extracted at the times indicated by the symbols in that figure. Contour levels run between $\pm 3$.


Figure 7. Energy spectra for coarse-grained network and assimilated fields. (a) Energy spectrum of the initial condition for the evolution in figure 6 (grey), of the assimilated field (black) and of the super-resolved field (from coarse observations only, blue). (b) Energy spectra of the time-advanced super-resolved field (blue) at the times indicated in figure 6, up to and including $t = 1$. The time-averaged energy spectrum of the true evolution is shown with the grey dashed line. Vertical lines indicate the forcing wavenumber $k_f=4$ and the Nyquist cutoff wavenumber associated with the filter. Red lines show the scaling $E(k) \propto k^{-4}$.


Figure 8. Effect of noise on network performance. Mean reconstruction errors (3.1) at $t=0$ (a) and $t=T$ (b) for networks trained on corrupted data with $\sigma =0$ (blue), $\sigma =0.05$ (green) and $\sigma =0.1$ (red); vertical lines show $\pm$ one standard deviation. The test dataset is contaminated with noise of varying levels, $\sigma \in \{0, 0.05, 0.1\}$, as indicated on the $x$-axes (blue/red data are offset slightly in the horizontal to aid visualisation).
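As a concrete illustration of the set-up in figure 8, a corruption step might look like the sketch below, assuming additive i.i.d. Gaussian noise of standard deviation $\sigma$ applied to the coarse observations; the exact noise model is not specified in this caption.

```python
# Hypothetical noise-corruption step for the robustness study of figure 8,
# assuming additive i.i.d. Gaussian noise on the coarse observations.
import jax

def corrupt(key, y_obs, sigma):
    # sigma is one of the levels {0, 0.05, 0.1} used in the study.
    return y_obs + sigma * jax.random.normal(key, y_obs.shape)
```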