MS364 - Making Decisions With Imperfect Models: Methods To Quantify Model Reliability And Address Model-Form Errors

Organized by: K. Maupin (Sandia National Laboratories, United States) and T. Portone (Sandia National Laboratories, United States)
Keywords: model discrepancy, model reliability, decision-making under uncertainty, model-form error, validation
Computational models are frequently used to make predictions affecting high-consequence engineering design and policy decisions. However, simplifying assumptions (e.g., constitutive models, neglecting interactions) are often made to maintain computational tractability or due to a lack of adequate information about the modeled phenomenon. These assumptions, if incorrect, can lead to model-form errors that impact model reliability. Validation and model-form uncertainty (MFU) approaches seek to quantify and assess the reliability of model predictions in the presence of model-form errors. Validation methods and processes probe the adequacy of a model for a target use case by measuring its agreement with data. MFU approaches quantify uncertainty in model predictions arising from simplifying assumptions and model-form errors. Despite advances in validation and MFU methods, quantifying the reliability of predictions that extrapolate beyond available data remains an ongoing research challenge. This minisymposium presents a range of approaches to assess and quantify model trustworthiness, including: novel validation methods, processes, and metrics, especially those supporting model extrapolation beyond available data; methods to quantify model-form uncertainty, e.g., model-form error and model discrepancy approaches; and methods accounting for model misspecification in inference, design optimization, and other outer-loop processes.