Learning nonlinear correlations between multifidelity models using deep neural networks

Leveraging information from multiple sources to construct accurate surrogate models

Abstract

A typical scenario in computational science is the availability of a suite of simulators that solve the same physical problem at varying degrees of accuracy (or fidelity). Higher-fidelity solvers adhere to the underlying physics more faithfully, requiring fewer simplifications/approximations in the solution methodology, but they typically incur a higher computational cost. A persistent challenge in computational science has thus been developing methods for efficiently assimilating information from these varying sources in order to construct accurate surrogate models for physical systems. O'Hagan's classical linear autoregressive model for multifidelity data fusion is the seminal work on this topic, improved upon most notably by Le Gratiet in 2013. Perdikaris et al. (2018) proposed a nonlinear extension to the classical linear autoregressive scheme using Gaussian processes. We propose a simple deep neural network approach to learn nonlinear correlations across models of varying fidelity, enabling us to scale to higher-dimensional stochastic inputs.
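
The essence of the approach can be sketched as learning a nonlinear map y_HF(x) ≈ F(x, y_LF(x)) with a small feedforward network: the cheap low-fidelity output is fed to the network as an extra input feature, and the network is trained on the scarce high-fidelity samples. The snippet below is a minimal illustrative sketch of that idea in PyTorch, not the exact architecture or benchmark from the talk; the toy low-/high-fidelity functions, network size, and training hyperparameters are assumptions chosen for illustration.

```python
# Minimal sketch (illustrative, not the talk's exact setup): learn the
# nonlinear cross-fidelity correlation y_HF(x) ≈ F(x, y_LF(x)).
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy low-/high-fidelity pair (assumed for illustration): the high-fidelity
# response is a nonlinear transformation of the low-fidelity one.
def y_lf(x):
    return np.sin(8.0 * np.pi * x)

def y_hf(x):
    return (x - np.sqrt(2.0)) * y_lf(x) ** 2

# Only a few expensive high-fidelity samples are available; the low-fidelity
# model can be evaluated cheaply wherever it is needed.
x_hf = np.linspace(0.0, 1.0, 15).reshape(-1, 1)
features = np.hstack([x_hf, y_lf(x_hf)])   # network input: (x, y_LF(x))
targets = y_hf(x_hf)                       # network target: y_HF(x)

X = torch.tensor(features, dtype=torch.float32)
Y = torch.tensor(targets, dtype=torch.float32)

# Small fully connected network modelling the (possibly nonlinear) map
# from (x, y_LF(x)) to y_HF(x).
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(X), Y)
    loss.backward()
    opt.step()

# Prediction at new points: evaluate the cheap low-fidelity model, then pass
# it through the learned correlation to approximate the high-fidelity response.
x_test = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X_test = torch.tensor(np.hstack([x_test, y_lf(x_test)]), dtype=torch.float32)
with torch.no_grad():
    y_pred = net(X_test).numpy()
print("max abs error:", np.abs(y_pred - y_hf(x_test)).max())
```

In this sketch the low-fidelity model plays the role of an informative feature rather than a prior mean, which is what allows the learned correlation between fidelities to be nonlinear instead of the linear scaling assumed by the classical autoregressive scheme.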

Date
Mar 1, 2017
Event
SIAM Annual Meeting
Location
Pittsburgh, PA
