Abstract
In the field of heterogeneous catalysis, first-principles-based microkinetic modeling has proven to be an essential tool for gaining a deeper understanding of the microscopic interplay between reactions. Because such models are not fitted to experimental data, they avoid the associated bias, which allows us to extract information about a material's properties that cannot be drawn from experiments alone. Unfortunately, catalytic models take their input from electronic structure theory (e.g., Density Functional Theory), which carries a sizable error due to the intrinsic approximations required to keep the computational cost feasible. Although these errors are well known and commonly accepted, this work analyses how significant their impact on the model outcome can be. We first explain how these errors propagate into a model outcome, e.g., the turnover frequency (TOF), and how strongly the outcome is affected. Secondly, we quantify the propagation of individual errors by local and global sensitivity analysis, including a discussion of their respective advantages and disadvantages for catalytic models.

The global approach requires the numerical quadrature of high-dimensional integrals, as the catalytic model often depends on many parameters. We tackle this with a locally and dimension-adaptive sparse grid (SG) approach. Sparse grids have proven very useful for medium-dimensional problems, since their adaptivity allows for an accurate surrogate model with a modest number of points. Despite the models' high dimensionality, the outcome is mostly dominated by a fraction of the input parameters, which implies strong refinement in only a fraction of the dimensions (dimension-adaptivity). Additionally, the kinetic data exhibit sharp transitions between "non-active" and "active" regions, which require locally increased refinement (local adaptivity). The efficiency of the adaptive SG, including the sensitivity analysis, is tested on different toy models and on a realistic first-principles model. Results show that for catalytic models a local, derivative-based sensitivity analysis gives only limited information, whereas the global approach can identify the important parameters and allows more detailed information to be extracted from more complex models.

The SG approach is useful for reducing the total number of points, but what if evaluating a single point is itself very expensive? The second part of this work concentrates on solving high-dimensional integrals for models whose evaluations are costly because they are, e.g., only implicitly given by a Monte Carlo model. Each evaluation then carries an error due to finite sampling, and lowering this error requires a high number of samples and thus increased computational effort. To tackle this problem, we extend the SG method with a multilevel approach that lowers the cost. Unlike existing approaches, we do not use the telescoping sum but utilise the hierarchical structure intrinsic to the sparse grid. We assume that not all SG points need the same accuracy, but that we can double the points' variance and halve the number of drawn samples with every refinement step. We demonstrate the methodology on different toy models and on a realistic kinetic Monte Carlo model of CO oxidation, comparing the non-multilevel adaptive sparse grid (ASG) with the multilevel adaptive sparse grid (MLASG). Results show that with the multilevel extension we can save up to two orders of magnitude in computational cost without compromising the accuracy of the surrogate model compared to a non-multilevel SG.
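As a brief illustration of the error propagation discussed in the first part (a standard textbook relation, not specific to this work): rate constants typically enter the model in Arrhenius form, so an error $\Delta E$ in a DFT-computed barrier $E_a$ scales the rate constant exponentially,
\[
k = \nu \exp\!\left(-\frac{E_a}{k_B T}\right), \qquad \frac{k(E_a+\Delta E)}{k(E_a)} = \exp\!\left(-\frac{\Delta E}{k_B T}\right).
\]
At $T = 600\,\mathrm{K}$ ($k_B T \approx 0.05\,\mathrm{eV}$), a typical DFT error of $\Delta E \approx 0.2\,\mathrm{eV}$ already changes $k$, and hence potentially the TOF, by a factor of roughly $50$.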
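To make the contrast between the two sensitivity measures concrete (assuming variance-based Sobol indices for the global analysis, a common choice that the abstract itself does not name): the local analysis evaluates derivatives at a nominal parameter point $\mathbf{x}^0$, whereas the global first-order index measures each parameter's share of the output variance,
\[
s_i^{\mathrm{loc}} = \left.\frac{\partial f}{\partial x_i}\right|_{\mathbf{x}^0}, \qquad S_i = \frac{\operatorname{Var}_{x_i}\!\big(\mathbb{E}[f \mid x_i]\big)}{\operatorname{Var}(f)},
\]
where the expectation and variances are high-dimensional integrals over the parameter distribution, precisely the quadratures the sparse grid surrogate is built to evaluate.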
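A minimal sketch of the sampling schedule implied by "double the variance, halve the samples" (the symbols $N_\ell$ and $N_0$ are illustrative and not defined in this work): if a sparse grid point created at refinement step $\ell$ receives $N_\ell$ Monte Carlo samples, then by the standard $1/N$ scaling of the Monte Carlo variance of the mean,
\[
N_\ell = 2^{-\ell} N_0 \quad\Longrightarrow\quad \sigma_\ell^2 \propto \frac{1}{N_\ell} = \frac{2^{\ell}}{N_0},
\]
i.e., the per-point statistical variance doubles with each refinement step while the per-point sampling cost halves, so the most deeply refined (and most numerous) points become the cheapest to evaluate.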