Upcoming DCE Webinars



Speaker: Philipp Hennig, Chair for the Methods of Machine Learning at the Eberhard Karls University of Tübingen; Adjunct Scientist at the Max Planck Institute for Intelligent Systems, Tübingen, Germany.

Google Scholar

Webpage

Date/Time: Wednesday, 1st December 2021, 2pm UK / 9am Eastern US / 11pm Japan

Title: Probabilistic Numerics — Computation as Machine Learning


Past Webinars and Recordings



Speaker: Philip Jonathan, Statistical Modeller at Shell and Chair in Environmental Statistics and Data Science at Lancaster University, UK.

Google Scholar

Webpage

Date/Time: Wednesday, November 17th 2021, 4pm UK

Title: Environmental decision support: rare events, monitoring and inversion, and uncertainty quantification

Abstract: Environmental data science offers the opportunity to combine theory and measurement in characterising our natural environment rationally and quantitatively, to understand its evolution, and to take decisions regarding our interactions with it wisely in the presence of uncertainty. The practical challenge is to develop and adapt ideas from statistical theory and method to address complex problems in the natural environment in a physically realistic manner, creating useful tools to quantify risk, and support real-world decision making.

Characterising rare events from observations is problematic by definition, since their rate of occurrence is low. In an environmental context, we discuss how extreme value theory is used to understand extreme ocean storms, floods, heat-waves and earthquakes. Practical application of extreme value methods poses numerous significant challenges, including estimation of the extreme value threshold, non-stationarity with respect to multiple covariates, and reliable characterisation of extremal dependence. These challenges become ever more acute as the numbers of responses and covariates increase. Given observational data for thousands of variables of interest in some applications, the computational complexity of extreme value analysis is enormous.
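The peaks-over-threshold idea mentioned above can be sketched in a few lines: pick a threshold, fit a generalized Pareto distribution to the exceedances, and read off a return level. This is a minimal illustration on synthetic data, not the speaker's analysis; the threshold choice (here simply the 95th percentile) is exactly the kind of decision the abstract flags as hard.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic stand-in for an environmental response (e.g. wave heights).
rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=5000)

u = np.quantile(data, 0.95)          # extreme value threshold (a modelling choice)
exceedances = data[data > u] - u     # peaks over threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# 100-observation return level: the value exceeded once per 100 observations.
p_exceed = len(exceedances) / len(data)
return_level = u + genpareto.ppf(1 - 0.01 / p_exceed, shape, loc=0, scale=scale)
```

In practice the shape and scale would themselves be modelled as functions of covariates (season, direction, location) to handle the non-stationarity the abstract describes.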

Remote sensing technologies (including satellite, airborne, line-of-sight and point sensors) provide observations of the earth’s atmosphere and surface at different scales and resolutions. We discuss how these data sources are used in statistical models to detect, locate and quantify sources of natural and man-made methane and carbon dioxide emissions into the atmosphere, and to monitor atmospheric concentrations within specific domains.

Bayesian inference provides a natural framework for learning about large-scale physical systems, quantifying uncertainty in predictions regarding those systems, and facilitating optimal decision-making. We discuss how the computational complexity of practical inference is managed by adopting suitable modelling frameworks and efficient numerical schemes.
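One "suitable modelling framework" in the sense above is a conjugate linear-Gaussian model, where the posterior is available in closed form and no sampling is needed. The sketch below is a generic illustration on synthetic data, not the speaker's model; all names and numbers are arbitrary.

```python
import numpy as np

# Bayesian linear regression with known noise and Gaussian prior w ~ N(0, tau^2 I).
rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.1                                   # known observation noise std
y = X @ w_true + sigma * rng.normal(size=n)

tau = 10.0                                    # prior std on the weights
# Posterior precision and mean in closed form -- conjugacy avoids MCMC entirely.
A = X.T @ X / sigma**2 + np.eye(d) / tau**2
post_cov = np.linalg.inv(A)
post_mean = post_cov @ (X.T @ y) / sigma**2
```

For large-scale physical systems the same structure appears with expensive forward models in place of `X`, which is where efficient numerical schemes (sparse solvers, low-rank approximations, emulators) come in.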



Speaker: Takemasa Miyoshi, Team Leader of the Data Assimilation Research Team at the RIKEN Center for Computational Science; Deputy Director of the RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program, Wakō, Saitama, Japan

Google Scholar

Webpage

Date/Time: Wednesday 3rd November 2021, 9.45 - 11.00 am UK / 6.45pm - 8pm Japan




Title: Fusing Big Data and Big Computation in Numerical Weather Prediction

Abstract: At RIKEN, we have been exploring a fusion of big data and big computation, now augmented with AI and machine learning (ML) techniques. Japan’s new flagship supercomputer “Fugaku” is designed to be efficient for both double-precision big simulations and reduced-precision ML applications, aiming to play a pivotal role in creating super-smart “Society 5.0.” Our group at RIKEN has been pushing the limits of numerical weather prediction (NWP) through computations two orders of magnitude bigger, using Japan’s previous flagship “K computer”. The efforts include 100-m mesh, 30-second update “Big Data Assimilation” (BDA) fully exploiting the big data from a novel Phased Array Weather Radar. We achieved the first-ever real-time BDA application with 500-m mesh NWP in the summer of 2020, using the Oakforest-PACS supercomputer of the University of Tokyo and the University of Tsukuba. In 2021, we used the new Fugaku to perform real-time 30-second update NWP during the Tokyo Olympic and Paralympic Games. With Fugaku, we have been exploring ideas for fusing BDA and AI. The data produced by NWP models are becoming bigger, and moving them to other computers for ML may not be feasible. A next-generation computer like Fugaku, good for both big NWP computation and ML, may bring a breakthrough toward a new methodology fusing data-driven (inductive) and process-driven (deductive) approaches in meteorology. This presentation will introduce the most recent results from data assimilation and NWP experiments, followed by perspectives toward a general theory of prediction and control beyond meteorology.
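At the heart of data assimilation is the analysis step that blends a forecast ensemble with observations. The toy sketch below shows a stochastic ensemble Kalman filter update on a linear observation operator; RIKEN's system uses the more elaborate LETKF at vastly larger scale, so this is only a schematic of the idea, with all sizes and noise levels made up.

```python
import numpy as np

# Stochastic EnKF analysis step on a toy 1-D "atmospheric" state.
rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 40, 10, 20

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))
H = np.zeros((n_obs, n_state))                # observe every 4th grid point
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0
R = 0.01 * np.eye(n_obs)                      # observation error covariance
y = H @ truth + 0.1 * rng.normal(size=n_obs)

# Prior (forecast) ensemble: truth plus background error.
ensemble = truth[None, :] + 0.5 * rng.normal(size=(n_ens, n_state))

# Kalman gain from the ensemble-estimated covariance; perturbed observations.
Xp = ensemble - ensemble.mean(axis=0)
P = Xp.T @ Xp / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
y_pert = y[None, :] + 0.1 * rng.normal(size=(n_ens, n_obs))
analysis = ensemble + (y_pert - ensemble @ H.T) @ K.T
```

The 30-second-update BDA workflow repeats this forecast/analysis cycle continuously as radar data stream in; localization and inflation (omitted here) are what make it work with small ensembles.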


Speaker: Karthik Duraisamy, Associate Director for the Michigan Institute for Computational Discovery & Engineering; Associate Professor in Aerospace Engineering at the University of Michigan, USA

Google Scholar

Webpage

Date/Time: Wednesday 6th October 2021, 4pm BST/11am EDT/12am JST

Title: Data-driven Reduced Order Models for Multi-scale, Multi-physics Systems


Abstract: This talk presents advances towards the development of effective reduced order models (ROMs) for complex multi-scale, multi-physics problems. As a representative application, we consider combustion dynamics in a rocket engine, which is characterized by the coupling between heat release, hydrodynamics and acoustics. The first part of the talk is focused on improving robustness and consistency: a structure-preserving transformation of the state variables is used along with a discretely consistent least squares formulation to yield symmetrized model operators in both explicit and implicit time integration settings. The resulting reduced order model is well-conditioned and globally stable. Local stability is promoted via limiters that enforce physical realizability. The second part of the talk is focused on improving accuracy: dimension reduction approaches based on static linear manifolds are not effective in addressing multi-scale problems with significant convection. To help mitigate this issue, we present formulations using adaptive spaces. Opportunities for further improvement are also highlighted. The last part of the talk discusses tractability: ROMs are used to enable computations of problems for which full order models are not affordable. In particular, we develop a multi-fidelity framework in which component-level ROMs are trained on small domains and integrated to enable full-system predictions in an affordable manner. In essence, geometry-specific training is replaced by the response generated by perturbing the characteristics at the boundary of the truncated domain. This training method is shown to enhance predictive capabilities and robustness of the resulting ROMs, including conditions outside the training range.
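The "static linear manifold" baseline that the second part of the talk improves upon is proper orthogonal decomposition (POD): take snapshots of the full-order state, extract a low-dimensional basis via the SVD, and evolve only the reduced coordinates. The sketch below uses synthetic low-rank data and illustrates only this baseline, not the talk's symmetrized or adaptive formulations.

```python
import numpy as np

# POD basis from a snapshot matrix, via the SVD.
rng = np.random.default_rng(3)
n, m, r = 500, 60, 5                  # full state dim, snapshot count, ROM dim

# Synthetic snapshots with rank-r structure plus small noise.
modes = rng.normal(size=(n, r))
coeffs = rng.normal(size=(r, m))
snapshots = modes @ coeffs + 1e-3 * rng.normal(size=(n, m))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                          # reduced basis, n x r, orthonormal columns

# Project a full state onto the basis and reconstruct; a ROM would evolve
# the r coefficients V.T @ x instead of the n-dimensional state.
x = snapshots[:, 0]
x_hat = V @ (V.T @ x)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

For strongly convective problems this fixed basis decays slowly with `r`, which is precisely the motivation for the adaptive spaces discussed in the talk.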



Speaker: Julie McCann, Professor of Computer Systems and Head of the Adaptive Emergent Systems Engineering (AESE) Group, Imperial College London, UK

Google Scholar

Webpage

Date/Time: Wednesday 8th September 2021, 4pm BST/11am EDT/12am JST




Title: Does a Digital Twin have a Digital Twin?

Abstract: Since the early days of what is known today as the Internet of Things, there have been over twenty years of research. Such systems are key sources of the data that feed Digital Twins, indeed possibly in real time. However, when one talks to real users in the worlds of civil engineering, environment modelling, digital agriculture, the industrial IoT etc., they complain about IoT systems being flaky and completely unusable after a few years. Given that these users are designing infrastructures that are required to last potentially ten to hundreds of years, what are the implications for the field of IoT if we cannot deliver? In my talk I will discuss some of the issues we have been dealing with and put forward some of my thoughts on what we need to start looking at as a community. Real users of networked sensor systems want smart infrastructures; my question is, will Digital Twin driven systems thinking be the answer?

You can watch a recording of Julie McCann's talk here.



Speaker: Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, UK

Google Scholar

Webpage

Date/Time: Wednesday 5th May 2021, 4pm BST/11am EDT/12am JST





Title: Machine Learning and the Physical World

Abstract: Machine learning technologies have underpinned the recent revolution in artificial intelligence. But at their heart, they are simply data driven decision making algorithms. While the popular press is filled with the achievements of these algorithms in important domains such as object detection in images, machine translation and speech recognition, there are still many open questions about how these technologies might be implemented in domains where we have existing solutions but we are constantly looking for improvements. Roughly speaking, we characterise this domain as “machine learning in the physical world”. How do we design, build and deploy machine learning algorithms that are part of a decision making system that interacts with the physical world around us? In particular, machine learning is a data driven endeavour, but real world systems are physical and mechanistic. In this talk we will introduce some of the challenges for this domain and propose some ways forward in terms of solutions.

You can watch a recording of Neil Lawrence's talk here.



Speaker: George Em Karniadakis, The Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics and Engineering, Brown University, USA

Google Scholar

Webpage

Date/Time: Wednesday 7th April 2021, 4pm BST/11am EDT/12am JST




Title: Approximating functions, functionals, and operators using deep neural networks for diverse applications

Abstract: We will present a new approach to develop a data-driven, learning-based framework for predicting outcomes of physical systems and for discovering hidden physics from noisy data. We will introduce a deep learning approach based on neural networks (NNs) and generative adversarial networks (GANs). Unlike other approaches that rely on big data, here we “learn” from small data by exploiting the information provided by the physical conservation laws, which are used to obtain informative priors or regularize the neural networks, as in physics-informed neural networks (PINNs). We will demonstrate the power of PINNs for several inverse problems, and we will demonstrate how we can use multi-fidelity modeling in monitoring ocean acidification levels in the Massachusetts Bay. We will also introduce new NNs that learn functionals and nonlinear operators from functions and corresponding responses for system identification. The universal approximation theorem of operators is suggestive of the potential of NNs in learning from scattered data any continuous operator or complex system. We first generalize the theorem to deep neural networks, and subsequently we apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning many scales, trained by diverse sources of data simultaneously.
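The branch/trunk architecture described above is simple to write down: the branch net encodes the input function sampled at fixed sensors, the trunk net encodes each query location, and the operator output is their inner product, G(u)(y) ≈ Σ_k b_k(u) t_k(y). The sketch below is an untrained forward pass with arbitrary random weights and widths, purely to show the data flow, not the trained DeepONet of the talk.

```python
import numpy as np

rng = np.random.default_rng(4)
m, p, width = 50, 32, 64                # sensor count, latent dim, hidden width

def mlp(x, W1, W2):
    """One-hidden-layer network with tanh activation."""
    return np.tanh(x @ W1) @ W2

# Random (untrained) branch and trunk weights, for illustration only.
Wb1 = rng.normal(size=(m, width)) / np.sqrt(m)
Wb2 = rng.normal(size=(width, p))
Wt1 = rng.normal(size=(1, width))
Wt2 = rng.normal(size=(width, p))

sensors = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * sensors)         # input function sampled at the sensors
ys = np.linspace(0, 1, 100)[:, None]    # query points for the output function

branch = mlp(u[None, :], Wb1, Wb2)      # b(u), shape (1, p)
trunk = mlp(ys, Wt1, Wt2)               # t(y) for each query point, shape (100, p)
G_u = trunk @ branch.T                  # G(u)(y) at all 100 query points
```

Training fits both networks jointly on (input function, query point, output value) triples, after which new input functions are handled in a single forward pass.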

You can watch George Karniadakis' talk here.



Speaker: Gianluca Iaccarino, Professor of Mechanical Engineering and Director of the Institute for Computational and Mathematical Engineering, Stanford University, USA

Google Scholar

Webpage

Date/Time: Wednesday 2nd June 2021, 4pm BST/11am EDT/12am JST

Title: Data-Free and Data-Driven Uncertainty Quantification in Turbulent Flow Simulations

Abstract: Fluid simulations and predictions of turbulent motions play a critical role in many fluid flow scenarios, particularly when the flow is prone to separation. Despite continued advances in high-fidelity turbulent flow simulations, closure models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are projected to remain in use for considerable time to come, especially when rapid responses are required or a large number of conditions need to be studied. However, it is common knowledge that RANS predictions are corrupted by epistemic model-form uncertainty to a degree which is unknown a priori. Hence, to obtain a computational framework of predictive utility, a model-form Uncertainty Quantification (UQ) framework is indispensable. UQ enables the characterization of the potential errors in the simulations and leads to prudent estimates of the impact of turbulence assumptions on the quantity of interest. Over the last several years we have developed an approach that uses a spectral decomposition of the modeled Reynolds-Stress Tensor (RST), which is the building block of all RANS closures. This strategy allows for the introduction of decoupled perturbations into the baseline intensity (kinetic energy), shape (eigenvalues), and orientation (eigenvectors) of the Reynolds stresses. Within this perturbation framework, we look for a priori known limiting physical bounds. Since these bounds are universal, they can be used to constrain uncertainty estimates in any predictive flow scenario. Thus, even in the absence of relevant training data, we can maximize the spectral perturbations in order to obtain conservative uncertainty intervals. The approach has been proven useful in a variety of applications. On the other hand, any reference data (experiments or high-fidelity resolved turbulence simulations) can be used to further constrain the uncertainty estimates using commonly available data assimilation techniques. This provides a data-driven path towards UQ estimates. We will demonstrate our framework on canonical flow problems using two common data-driven approaches, random forest regression and deep neural networks. We consider a database of high-fidelity simulation data for separated flows and compare the resulting uncertainty estimates to conventional RANS closures and available experimental data.
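The eigenvalue (shape) perturbation described above can be sketched concretely: decompose the anisotropy of a modeled Reynolds-stress tensor, nudge its eigenvalues toward a realizability-limiting state, and reconstruct. The tensor, limiting state and perturbation size below are illustrative assumptions, not values from the talk.

```python
import numpy as np

k = 1.0                                        # turbulent kinetic energy
R = np.array([[0.9, 0.1, 0.0],                 # modeled Reynolds-stress tensor
              [0.1, 0.7, 0.0],                 # (symmetric, trace 2k)
              [0.0, 0.0, 0.4]])

# Anisotropy tensor and its spectral decomposition.
a = R / (2 * k) - np.eye(3) / 3
eigvals, eigvecs = np.linalg.eigh(a)           # eigenvalues in ascending order

# Perturb the eigenvalues a fraction delta toward the one-component limiting
# state (-1/3, -1/3, 2/3), one corner of the realizable (Lumley) triangle.
limit = np.array([-1.0 / 3, -1.0 / 3, 2.0 / 3])
delta = 0.2
eigvals_pert = (1 - delta) * eigvals + delta * limit

# Reconstruct the perturbed Reynolds stresses; the trace (2k) is preserved
# because both eigenvalue sets of the anisotropy sum to zero.
a_pert = eigvecs @ np.diag(eigvals_pert) @ eigvecs.T
R_pert = 2 * k * (a_pert + np.eye(3) / 3)
```

Running a RANS solve with perturbed stresses such as `R_pert` toward each limiting state brackets the quantity of interest, yielding the data-free uncertainty intervals; the data-driven variant learns where in the flow the perturbation magnitude `delta` should be large.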

You can watch Gianluca Iaccarino's talk here.