
Released

Journal Article

Adaptive weighting of Bayesian physics informed neural networks for multitask and multiscale forward and inverse problems.

MPS-Authors

Perez, Sarah
Max Planck Institute for Molecular Cell Biology and Genetics, Max Planck Society;


Maddu, Suryanarayana
Max Planck Institute for Molecular Cell Biology and Genetics, Max Planck Society;


Sbalzarini, Ivo F.
Max Planck Institute for Molecular Cell Biology and Genetics, Max Planck Society;


Poncet, Philippe
Max Planck Institute for Molecular Cell Biology and Genetics, Max Planck Society;

Citation

Perez, S., Maddu, S., Sbalzarini, I. F., & Poncet, P. (2023). Adaptive weighting of Bayesian physics informed neural networks for multitask and multiscale forward and inverse problems. Journal of Computational Physics, 491: 112342, pp. 1-34. doi:10.1016/j.jcp.2023.112342.


Cite as: https://hdl.handle.net/21.11116/0000-000E-AA9E-1
Abstract
In this paper, we present a novel methodology for automatic adaptive weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we demonstrate that it makes it possible to robustly address multi-objective and multiscale problems. BPINNs are a popular framework for data assimilation that combines Uncertainty Quantification (UQ) with Partial Differential Equation (PDE) constraints. The relative weights of the terms in the BPINN target distribution are directly related to the inherent uncertainty in the respective learning tasks. Yet, they are usually set manually a priori, which can lead to pathological behavior, stability concerns, and conflicts between tasks; these obstacles have deterred the use of BPINNs for inverse problems with multiscale dynamics. The present weighting strategy automatically tunes the weights by considering the multitask nature of the target posterior distribution. We show that this remedies the failure modes of BPINNs and provides efficient exploration of the optimal Pareto front, leading to better convergence and stability of BPINN training while reducing sampling bias. The determined weights moreover carry information about the task uncertainties, reflecting the noise levels in the data and the adequacy of the PDE model. We demonstrate this in numerical experiments on Sobolev training, where we compare against an analytically ε-optimal baseline, and on a multiscale Lotka-Volterra inverse problem. Finally, we apply the framework to an inpainting task and to an inverse problem involving latent-field recovery for incompressible flow in complex geometries.
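The abstract's central idea, scaling each task's loss term according to its inherent uncertainty, can be illustrated with a minimal sketch. Note that this is not the paper's algorithm (which operates inside a Bayesian sampling scheme for the posterior); it shows only the simpler, widely used device of learning a log-variance parameter s_k per task so that each term is weighted by exp(-s_k). All task names, loss values, and step sizes below are illustrative.

```python
import numpy as np

def weighted_objective(task_losses, log_vars):
    # total = sum_k exp(-s_k) * L_k + s_k, where s_k = log(sigma_k^2)
    # is a learned per-task log-variance.
    return np.sum(np.exp(-log_vars) * task_losses + log_vars)

# Two mock task losses, e.g. a data-fit term and a PDE-residual term.
task_losses = np.array([4.0, 0.25])
log_vars = np.zeros(2)

# Gradient descent on the log-variances; the gradient w.r.t. s_k is
# -exp(-s_k) * L_k + 1.
for _ in range(200):
    grad = -np.exp(-log_vars) * task_losses + 1.0
    log_vars -= 0.1 * grad

# At the optimum exp(-s_k) = 1 / L_k, so each task's effective weight
# adapts inversely to its current loss magnitude: the noisier (larger-
# loss) task is down-weighted, the cleaner task is up-weighted.
weights = np.exp(-log_vars)
print(weights)  # ≈ [0.25, 4.0]
```

The design point this sketch shares with the paper's motivation: fixed, hand-set weights would have to be re-tuned whenever noise levels or scales change, whereas uncertainty-linked weights adjust automatically as the task losses evolve during training.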