
Deep learning in fluid dynamics

Abstract

It was only a matter of time before deep neural networks (DNNs) – deep learning – made their mark in turbulence modelling, or more broadly, in the general area of high-dimensional, complex dynamical systems. In the last decade, DNNs have become a dominant data mining tool for big data applications. Although neural networks have been applied previously to complex fluid flows, the article featured here (Ling et al., J. Fluid Mech., vol. 807, 2016, pp. 155–166) is the first to apply a true DNN architecture, specifically to Reynolds averaged Navier–Stokes turbulence models. As one often expects with modern DNNs, performance gains are achieved over competing state-of-the-art methods, suggesting that DNNs may play a critically enabling role in the future of modelling complex flows.

1 Introduction

Neural networks were inspired by the Nobel-prize-winning work of Hubel and Wiesel on the primary visual cortex of cats (Hubel & Wiesel 1962). Their seminal experiments showed that neuronal networks were organized in hierarchical layers of cells for processing visual stimuli. The first mathematical model of a neural network, termed the Neocognitron (Fukushima 1980), had many of the characteristic features of today’s deep neural networks (DNNs), which typically have 7–10 layers but more recently have been scaled to hundreds of layers for certain applications. The recent success of DNNs has been enabled by two critical components: (i) the continued growth of computational power and (ii) exceptionally large labelled data sets that take advantage of the power of a multi-layer (deep) architecture. Indeed, although the theoretical inception of DNNs has an almost four-decade history, the analysis and training of a DNN using the ImageNet data set in 2012 (Krizhevsky et al. 2012) provided a watershed moment for deep learning (LeCun et al. 2015). DNNs have since transformed the field of computer vision by dominating the performance metrics in almost every meaningful computer vision task intended for classification and identification. Remarkably, DNNs were not even listed among the top 10 algorithms of data mining in 2008 (Wu et al. 2008). But in 2016, their growing list of successes on challenge data sets makes them perhaps the most important data mining tool for our emerging generation of scientists and engineers.

Data-driven methods are certainly not new in the fluids community. Computational fluid dynamics has capitalized on machine learning efforts with dimensionality-reduction techniques such as proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD), which compute interpretable low-rank modes and subspaces that characterize spatio-temporal flow data (Holmes et al. 1998; Kutz et al. 2016). POD and DMD are based on the singular value decomposition (SVD), which is ubiquitous in the dimensionality reduction of physical systems. When coupled with Galerkin projection, POD reduction forms the mathematical basis of reduced-order modelling, which provides an enabling strategy for computing high-dimensional discretizations of complex flows (Benner et al. 2015).
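
As a concrete illustration, the snippet below computes POD modes of a snapshot matrix directly from the SVD; the random matrix simply stands in for real flow snapshots, so the shapes and rank are illustrative assumptions.

```python
import numpy as np

# Snapshot matrix X: each column is one flow-field snapshot
# (here random data stand in for real snapshots).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))        # (n_points, n_snapshots)

# POD: subtract the temporal mean, then take the thin SVD.
X_mean = X.mean(axis=1, keepdims=True)
U, S, Vh = np.linalg.svd(X - X_mean, full_matrices=False)

# Columns of U are the POD modes; S**2 measures the energy in each mode.
r = 10                                     # rank of the reduced basis
Phi = U[:, :r]                             # leading POD modes
energy = (S[:r] ** 2).sum() / (S ** 2).sum()
print(f"fraction of energy captured by {r} modes: {energy:.3f}")
```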

The success of dimensionality reduction in fluids is enabled by (i) significant performance gains in computational speed and memory and (ii) the generation of physically interpretable spatial and/or spatio-temporal modes that dominate the physics. Thus computations are enabled and critical physical intuition is gained. Such success is tempered by two well-known failings of POD/DMD-based reductions: (i) their inability to capture transient, intermittent and/or multi-scale phenomena without significant tuning and (ii) their inability to capture invariances due to translation, rotation and/or scaling. DNNs are almost diametrically opposed in their pros and cons. Specifically, DNNs are well suited for extracting multi-scale features, as the DNN decomposition shares many similarities with wavelet decompositions, which are the computational workhorse of multi-resolution analysis. Moreover, translations, rotations and other invariances are known to be easily handled in the DNN architecture. These performance gains are tempered by the tremendous computational cost of building a DNN from a large training set and the inability of DNNs to produce easily interpretable physical modes and/or features.

2 Overview of DNNs in turbulence applications

Turbulent flows generally exhibit multi-scale (spatial and temporal) physics that are high dimensional, with rotational and translational intermittent structures also present. Such data provide an opportunity for DNNs to make an impact in the modelling and analysis of turbulent flow fields. Ling, Kurzawski & Templeton (2016) have proposed using DNNs for Reynolds averaged Navier–Stokes (RANS) models, which are widely used because of their computational tractability in modelling the rich set of dynamics induced by turbulent flows. In this highlighted body of work, the specific aim is to use DNNs to build an improved representation of the Reynolds stress anisotropy tensor from high-fidelity simulation data. Remarkably, despite the widespread success of DNNs at providing high-quality predictions in complex problems, there have been only limited attempts to apply deep learning techniques to turbulence, and those attempts have been limited to a couple of hidden layers (Zhang & Duraisamy 2015). Ling et al. (2016) move to DNNs by constructing 8–10 hidden layers, making theirs a truly deep network. But this highlighted work does much more than simply build a DNN. Indeed, the authors construct a specialized neural network architecture which directly embeds Galilean invariance into the neural network predictions. This neural network is able to predict not only the anisotropy eigenvalues but the full anisotropy tensor, while preserving Galilean invariance. This invariance-preserving DNN is critical for respecting physical properties and provides significant performance gains in prediction. The DNN is trained and evaluated on a database of flows for which both RANS and high-fidelity data are available.

The specific DNN architecture used by the authors is referred to as the tensor basis neural network. This network architecture embeds rotational invariance by enforcing that the predicted anisotropy tensor lies on a basis of isotropic tensors. Rotational invariance signifies that the physics of the fluid flow does not depend on the orientation of the coordinate frame of the observer. This is a fundamental physical principle, and it is important that any turbulence closure obeys it; otherwise, the machine learning model evaluated on identical flows with the axes defined in different directions could yield different predictions. Enforcement of rotational invariance in this DNN yields substantial improvement over a more generic feed-forward multi-layer perceptron that does not embed Galilean invariance. Training of their DNN was performed on a database of flows for which both high-fidelity and RANS results were available. The flow database represents various flow configurations; thus the DNN is not simply interpolating or matching to similar flows. Rather, the DNN extracts information about the underlying flow in a principled manner.
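
Concretely, the tensor basis network builds on Pope’s general eddy-viscosity expansion, in which the normalized anisotropy tensor is expressed on a finite basis of isotropic tensors,

\[ \mathbf{b} \;=\; \sum_{n=1}^{10} g^{(n)}(\lambda_1, \ldots, \lambda_5)\, \mathbf{T}^{(n)}, \]

where the basis tensors \(\mathbf{T}^{(n)}\) are formed from the normalized mean strain-rate and rotation tensors and the \(\lambda_i\) are their five scalar invariants. The network learns only the scalar coefficient functions \(g^{(n)}\), so any prediction transforms correctly under rotations by construction. The sketch below illustrates a forward pass with this structure; the layer sizes, activation and function names are hypothetical illustrations, not the authors’ implementation.

```python
import numpy as np

def tbnn_forward(lmbda, T, weights):
    """Hypothetical forward pass of a tensor-basis network.

    lmbda   : (5,) scalar invariants at one spatial point
    T       : (10, 3, 3) isotropic basis tensors at the same point
    weights : list of (W, b) pairs for a small MLP (assumed already trained)
    """
    h = lmbda
    for W, b in weights[:-1]:
        h = np.tanh(W @ h + b)             # hidden layers on the invariants
    W, b = weights[-1]
    g = W @ h + b                          # the 10 coefficients g^(n)
    # A linear combination of isotropic basis tensors rotates correctly
    # whenever the inputs do, which is how the invariance is embedded.
    return np.einsum('n,nij->ij', g, T)    # predicted (3, 3) anisotropy tensor
```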

The authors demonstrated that their DNN configuration was significantly more accurate than either of two conventional RANS models on two different test cases. Moreover, on a wavy-wall test case whose geometry differed from any of the training cases, their DNN provided improved predictions, suggesting the method was doing more than simply interpolating the training data. Additionally, on a duct-flow test, the DNN improved predictions despite the test set being at a significantly different Reynolds number. Ultimately, the results suggest that a physics-respecting DNN trained with embedded Galilean invariance can outperform, often significantly, other RANS turbulence models.

3 The future of DNNs for fluids modelling

DNNs will almost certainly have a transformative impact on modelling high-dimensional complex systems such as turbulent flows. Their successes with many complex data sets will compel researchers to utilize this rapidly emerging data analysis tool for improving predictive capabilities. DNNs represent a paradigm shift for the community. Whereas many innovations have often been inspired by expert-in-the-loop intuition and physically interpretable models, DNNs have challenged these traditional notions by building prediction engines that simply outperform competing methods without providing clear evidence of why they do so. To some extent, the application of DNNs to turbulent flows will bring awareness to the fluids community of the two cultures of statistics and data science (Breiman 2001). These two outlooks are centred around machine learning and statistical learning: the former focuses on prediction (DNNs), while the latter is concerned with inference of interpretable models from data (POD/DMD reductions). Although both methodologies have achieved significant success across many areas of big data analytics, the physical and engineering sciences have primarily focused on interpretable methods.

Despite these successes, significant challenges remain for DNNs. Simple questions remain wide open: (i) How many layers are necessary for a given data set? (ii) How many nodes at each layer are needed? (iii) How big must my data set be to properly train the network? (iv) What guarantees exist that the mathematical architecture can produce a good predictor of the data? (v) What is my uncertainty and/or statistical confidence in the DNN output? (vi) Can I actually predict data well outside of my training data? (vii) Can I guarantee that I am not overfitting my data with such a large network? The list goes on. These questions remain central to the long-term viability of DNNs. The good news is that such topics are currently being intensely investigated by academic researchers and industry (Google, Facebook, etc.) alike. Undoubtedly, the next decade will witness significant progress in addressing these issues. From a practical standpoint, the work of Ling et al. (2016) determines the number of layers and nodes based upon prediction success, i.e. the network is enlarged only until additional layers and nodes no longer improve performance. Additionally, cross-validation is imperative to suppress overfitting; as a general rule, one should never trust the results of a DNN unless rigorous cross-validation has been performed. Cross-validation plays the same critical role as a convergence study of a numerical scheme.
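
As a concrete illustration of that safeguard, the sketch below implements plain k-fold cross-validation; it is a generic recipe, not the authors’ specific protocol, and model_fit and model_eval are hypothetical callables supplied by the user.

```python
import numpy as np

def k_fold_cv(model_fit, model_eval, X, y, k=5, seed=0):
    """k-fold cross-validation: train on k-1 folds, validate on the held-out fold.

    model_fit(X_train, y_train) -> trained model
    model_eval(model, X_val, y_val) -> scalar validation error
    """
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = model_fit(X[train], y[train])
        errors.append(model_eval(model, X[val], y[val]))
    # A validation error far above the training error is the classic
    # signature of overfitting.
    return float(np.mean(errors)), float(np.std(errors))
```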

Given the computational maturity of DNNs and how readily available they are (see Google’s open-source software TensorFlow: tensorflow.org), it is perhaps time for part of the turbulence modelling community to adopt what has become an important and highly successful part of the machine learning culture: challenge data sets. Donoho (2015) argues, and I am in complete agreement, that challenge data sets allow researchers a fair comparison of their DNN innovations on training data (publicly available to all) and test data (not publicly available, but accessible with your algorithm). Importantly, this would give the fluids community its own ImageNet data sets to help generate reproducible and validated performance gains for DNNs applied to complex flows. Perhaps Ling, Kurzawski and Templeton can help push the community forward in this way.
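
For readers who want to experiment, the following is a minimal, self-contained sketch of fitting a small fully connected network with the Keras API that ships with TensorFlow; the toy data, layer sizes and hyperparameters are arbitrary choices for illustration, not a turbulence model.

```python
import numpy as np
import tensorflow as tf

# Toy regression data standing in for (flow features -> closure target) pairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 5)).astype("float32")
y = np.sin(X).sum(axis=1, keepdims=True).astype("float32")

# A small fully connected network; depth and width are arbitrary here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(5,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Holding out a validation split is a first, coarse guard against overfitting.
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, verbose=0)
print("training-set MSE:", model.evaluate(X, y, verbose=0))
```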

References

Benner, P., Gugercin, S. & Willcox, K. 2015 A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57, 483–531.
Breiman, L. 2001 Statistical modeling: the two cultures (with comments and a rejoinder by the author). Stat. Sci. 16 (3), 199–231.
Donoho, D. L. 2015 50 Years of Data Science. Tukey Centennial Workshop.
Fukushima, K. 1980 Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 36, 193–202.
Holmes, P., Lumley, J. & Berkooz, G. 1998 Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge University Press.
Hubel, D. H. & Wiesel, T. N. 1962 Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160, 106–154.
Krizhevsky, A., Sutskever, I. & Hinton, G. 2012 ImageNet classification with deep convolutional neural networks. Adv. Neural Inform. Proc. Syst., pp. 1097–1105.
Kutz, J. N., Brunton, S., Brunton, B. & Proctor, J. 2016 Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM.
LeCun, Y., Bengio, Y. & Hinton, G. 2015 Deep learning. Nature 521 (7553), 436–444.
Ling, J., Kurzawski, A. & Templeton, J. 2016 Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, 155–166.
Wu, X., Kumar, V., Quinlan, J., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G., Ng, A., Liu, B., Philip, S. & Zhou, Z. 2008 Top 10 algorithms in data mining. Knowl. Inf. Syst. 14 (1), 1–37.
Zhang, Z. & Duraisamy, K. 2015 Machine learning methods for data-driven turbulence modeling. AIAA Paper 2015-2460.