
Accurate prediction of the particle image velocimetry flow field and rotor thrust using deep learning

Published online by Cambridge University Press: 23 March 2022

Sehyeok Oh
Affiliation:
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan 44919, South Korea
Department of Materials AI & Big-Data, Korea Institute of Materials Science (KIMS), 797 Changwon-daero, Changwon 51508, South Korea
Seungcheol Lee
Affiliation:
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan 44919, South Korea
Myeonggyun Son
Affiliation:
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan 44919, South Korea
Jooha Kim*
Affiliation:
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan 44919, South Korea
Hyungson Ki*
Affiliation:
Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-gil, Ulsan 44919, South Korea
Email addresses for correspondence: hski@unist.ac.kr, kimjooha@unist.ac.kr

Abstract

In particle image velocimetry (PIV), cross-correlation and optical flow methods have mainly been adopted to obtain the velocity field from particle images. In this study, a novel artificial intelligence (AI) architecture is proposed to predict an accurate flow field and drone rotor thrust from high-resolution particle images. As the ground truth, flow fields past a high-speed drone rotor obtained with a fast Fourier transform (FFT)-based cross-correlation algorithm were used, along with thrusts measured by a load cell. Two deep-learning models were developed. For instantaneous flow-field prediction, a generative adversarial network (GAN) was employed for the first time: a spectral-norm-based residual conditional GAN translator that provides stable adversarial training and high-quality flow generation, with a prediction accuracy of 97.21 % (coefficient of determination, R²). Subsequently, a deep convolutional neural network was trained to predict the instantaneous rotor thrust from the flow field; it is the first AI architecture to predict this thrust. With the generated flow field as input, the network achieved an R² accuracy of 94.57 %. To understand the prediction pathways, the internal activations of the model were examined using a class activation map, which showed that the model recognized the region receiving kinetic energy from the rotor and based its predictions on it. The proposed architecture is accurate and nearly 600 times faster than the cross-correlation PIV method for real-world complex turbulent flows. In this study, the rotor thrust was calculated directly from the flow field using deep learning for the first time.
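The abstract names two standard building blocks that a short sketch can make concrete. First, the ground-truth velocity fields come from FFT-based cross-correlation of particle-image interrogation windows. Below is a minimal NumPy sketch of that step for a single window pair; the function name and the single-pass, integer-pixel treatment are illustrative simplifications, not the authors' implementation (production PIV codes add sub-pixel peak fitting, window overlap and outlier validation).

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Mean particle displacement between two interrogation windows,
    estimated by FFT-based cross-correlation (integer-pixel accuracy)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Convolution theorem: cross-correlate the two windows in one pass.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)  # zero lag moves to the window centre
    # Peak offset from the centre = displacement of frame B w.r.t. frame A
    # (circular wrap-around at the window edges is ignored here).
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.asarray(peak) - np.asarray(corr.shape) // 2
    return dx, dy
```

Second, the stability device behind the generator, spectral normalization, rescales each weight matrix by a power-iteration estimate of its largest singular value. The sketch below is likewise a hedged illustration (in practice the vector `u` is persisted across training steps and the rescaling is applied per layer inside the framework):

```python
def spectral_normalize(w, n_iters=1, eps=1e-12):
    """Divide a weight tensor by a power-iteration estimate of the
    largest singular value of its (out_features x rest) matricisation."""
    w_mat = w.reshape(w.shape[0], -1)
    u = np.random.randn(w_mat.shape[0])  # re-used across steps in practice
    for _ in range(n_iters):
        v = w_mat.T @ u
        v /= np.linalg.norm(v) + eps
        u = w_mat @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ w_mat @ v  # ~ sigma_max(w_mat) after a few iterations
    return w / sigma
```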

Type: JFM Papers
Copyright: © The Author(s), 2022. Published by Cambridge University Press

