
Making global sensitivity analysis feasible using neural network surrogates

Published online by Cambridge University Press: 28 November 2025

Gihan Weerasinghe*
Affiliation: Digital Technology Group, Arup, Manchester, UK
Ramaseshan Kannan
Affiliation: Digital Technology Group, Arup, Manchester, UK
Samila Bandara
Affiliation: Digital Technology Group, Arup, London, UK
*Corresponding author: Gihan Weerasinghe; Email: gihan.weerasinghe@arup.com

Abstract

How can we make global sensitivity analysis accessible and viable for engineering practice? In this translational article, we present a methodology that enables sensitivity analysis in structural and geotechnical engineering for built-environment design and assessment workflows. Our technique wraps computational mechanics and geomechanics finite element (FE) simulations, combining high-performance computing on the public cloud with machine-learning surrogate models. A key question we address is: “Is there a noticeable loss in the fidelity of sensitivity analysis results when a simulation model is replaced with a surrogate model?” We answer this question for both linear and nonlinear FE simulations.
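To make the surrogate substitution concrete, the sketch below trains a small neural-network regressor on precomputed FE input-output samples and uses it as a drop-in replacement for the simulation. The placeholder data, network size, and use of scikit-learn are illustrative assumptions, not the configuration used in this study.

```python
# Minimal sketch (assumptions): a small scikit-learn MLP standing in for the
# FE simulation. X_fe are sampled input parameters, y_fe the corresponding
# FE outputs, both assumed to be precomputed (e.g., on cloud HPC).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_fe = rng.uniform(0.5, 1.5, size=(2048, 8))                    # placeholder FE inputs
y_fe = (X_fe ** 2).sum(axis=1) + 0.01 * rng.normal(size=2048)   # placeholder FE outputs

X_train, X_test, y_train, y_test = train_test_split(X_fe, y_fe, test_size=0.2, random_state=0)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
)
surrogate.fit(X_train, y_train)
print("held-out R^2:", surrogate.score(X_test, y_test))

# The fitted surrogate can now be evaluated in place of the FE model wherever
# the sensitivity analysis needs a model response.
```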

Information

Type
Translational Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press
Figure 1. Schematic illustrating the workflow of Sobol analysis. Using a defined input space of parameters and a model, Sobol indices are computed using quasirandom samples. These indices are then used to identify a subset of sensitive parameters, the labels of which are denoted as $ \mathbf{u} $.
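A minimal sketch of this workflow, assuming the SALib library and a generic model callable (the FE simulation or its surrogate): quasi-random Saltelli samples are drawn from the defined input space, first- and total-order Sobol indices are computed, and the sensitive subset $ \mathbf{u} $ is selected by thresholding the total-order index. The parameter names, bounds, and threshold are illustrative.

```python
# Minimal sketch (assumptions): Sobol analysis with SALib for a generic model
# callable; parameter names, bounds, and the selection threshold are illustrative.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["deck_stiffness", "cable_area", "spring_k"],
    "bounds": [[0.5, 1.5], [0.8, 1.2], [1e3, 1e5]],
}

def model(x):
    # Stand-in for the FE simulation or its neural-network surrogate.
    return x[0] ** 2 + 0.5 * x[1] + 0.01 * np.log(x[2])

X = saltelli.sample(problem, 1024)       # quasi-random sample matrix
Y = np.array([model(x) for x in X])      # model evaluations
Si = sobol.analyze(problem, Y)           # first- and total-order Sobol indices

threshold = 1e-3
u = [name for name, st in zip(problem["names"], Si["ST"]) if st >= threshold]
print("sensitive parameters u:", u)
```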

Figure 2. GSA model of a 900-m span cable-stayed bridge (image upscaled using Microsoft Copilot v19.2509.59141.0, GPT-4).

Table 1. Notation for section modifiers used in the GSA model

Table 2. Parameter groups and corresponding section indices for the GSA model. Constraints are applied to each parameter and are enforced during sampling

Table 3. Spring stiffnesses and constraints for the GSA model. Constraints are applied to each parameter and are enforced during sampling

Figure 3. Schematic showing our geotechnical case study in Oasys Gofer: a 22-m-deep excavation problem with a staged construction sequence tailored to typical Central London ground conditions.

Table 4. Young’s modulus ($ E $) and cohesion ($ C $) input parameters for the Gofer model. Constraints are applied to each parameter and are enforced during sampling

Table 5. Summary of parameters used for Sobol analysis

Table 6. Parameters defining the scope of investigation for GSA and Gofer surrogates

Table 7. Summary of parameters used for training GSA and Gofer surrogates

Figure 4. Heatmap showing the rank differences (averaged over trials) for input parameters of the GSA model. The surrogate model used here had $ 6065 $ parameters. The Sobol index for each input parameter from the FE simulation is shown on the y-axis. The percentage of the reference dataset used to train the surrogate model is shown on the x-axis. The color intensity of the heatmap represents the rank differences between the FE and surrogate output. Each panel represents these differences for each output of the model. Only Sobol indices $ \ge {10}^{-3} $ are shown.
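The rank differences shown in these heatmaps can be computed directly from the two sets of Sobol indices. The sketch below is one plausible reading of that post-processing (rank parameters by descending index, then take absolute per-parameter rank differences); it is an assumption, not the article's exact implementation.

```python
# Minimal sketch (assumption): per-parameter rank differences between FE and
# surrogate Sobol indices, ranking parameters by descending index (rank 1 = most sensitive).
import numpy as np
from scipy.stats import rankdata

def rank_differences(s_fe, s_surrogate):
    rank_fe = rankdata(-np.asarray(s_fe), method="ordinal")
    rank_sur = rankdata(-np.asarray(s_surrogate), method="ordinal")
    return np.abs(rank_fe - rank_sur).astype(int)

# Illustrative Sobol indices for four input parameters.
s_fe = [0.60, 0.25, 0.10, 0.002]
s_surrogate = [0.55, 0.12, 0.28, 0.003]
print(rank_differences(s_fe, s_surrogate))   # -> [0 1 1 0]
```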

Figure 5. Averaged normalized weighted rank differences as a function of the percentage of reference data for different GSA neural network surrogates. Each surrogate is described by $ {N}_{nn} $ parameters. Averaging was performed over parameters, outputs, and trials. The error bars are the standard error of the mean when averaged over trials. Only the lowest and highest complexity models are shown here.
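The exact weighting and normalization behind this metric are defined in the full article; purely as an illustrative assumption, the sketch below weights each parameter's rank difference by its FE Sobol index and normalizes by the largest possible rank shift.

```python
# Illustrative assumption only: one plausible normalized, weighted aggregate of
# per-parameter rank differences, weighting by the FE Sobol index so that
# disagreements on influential parameters count more. This is not necessarily
# the article's exact definition.
import numpy as np

def normalized_weighted_rank_difference(rank_diff, s_fe):
    weights = np.asarray(s_fe, dtype=float)
    weights /= weights.sum()                    # weights sum to one
    max_shift = max(len(rank_diff) - 1, 1)      # largest possible rank shift
    return float(np.sum(weights * np.asarray(rank_diff)) / max_shift)

print(normalized_weighted_rank_difference([0, 1, 1, 0], [0.60, 0.25, 0.10, 0.002]))
```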

Figure 6. Set differences between outputs from (post-processed) Sobol analysis for surrogates and GSA. $ {u}_{GSA} $ and $ {u}_{surrogate} $ denote the set of outputs for GSA and surrogate, respectively. $ \mid {u}_A-{u}_B\mid $ denotes the size of the set difference, that is, all those elements in $ {u}_A $ but not in $ {u}_B $. Each surrogate is described by $ {N}_{nn} $ parameters. Averaging was performed over parameters, outputs, and trials. The error bars are the standard error of the mean when averaged over trials.
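These set-difference counts reduce to simple set arithmetic over the selected parameter labels; a minimal sketch with illustrative labels is below.

```python
# Minimal sketch: size of the set difference |u_A - u_B| between the sensitive
# parameter sets selected from the FE (GSA) and surrogate Sobol analyses.
u_gsa = {"deck_stiffness", "cable_area", "spring_k"}        # illustrative labels
u_surrogate = {"deck_stiffness", "cable_area", "pylon_E"}

missed_by_surrogate = len(u_gsa - u_surrogate)   # in u_GSA but not in u_surrogate
extra_in_surrogate = len(u_surrogate - u_gsa)    # in u_surrogate but not in u_GSA
print(missed_by_surrogate, extra_in_surrogate)   # -> 1 1
```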

Figure 7. Heatmap showing the rank differences (averaged over trials) for input parameters of the Gofer model. The surrogate model used here had $ 471 $ parameters. The Sobol index for each input parameter from the FE simulation is shown on the y-axis. The percentage of the dataset used to train the surrogate model is shown on the x-axis. The color intensity of the heatmap represents the rank differences between the FE and surrogate output. Each panel represents these differences for each output of the model. Only Sobol indices $ \ge {10}^{-3} $ are shown.

Figure 8. Heatmap showing the rank differences (averaged over trials) for input parameters of the Gofer model. The surrogate used here was the lowest-complexity model we considered, with $ 252 $ parameters. The Sobol index for each input parameter from the FE simulation is shown on the y-axis. The percentage of the dataset used to train the surrogate is shown on the x-axis. The color intensity of the heatmap represents the rank differences between the FE and surrogate output. Each panel represents these differences for each output of the model. Only Sobol indices $ \ge {10}^{-3} $ are shown.

Figure 9. Averaged normalized weighted rank differences as a function of percentage of training data for different Gofer surrogates. Each surrogate is described by $ {N}_{nn} $ parameters. Averaging was performed over parameters, outputs, and trials. The error bars are the standard error of the mean when averaged over trials. Only the lowest and highest complexity models are shown here.
