
Released

Paper

Extrapolation to complete basis-set limit in density-functional theory by quantile random-forest models

MPS-Authors
/persons/resource/persons265086

Speckhard, Daniel
NOMAD, Fritz Haber Institute, Max Planck Society;

/persons/resource/persons21413

Carbogno, Christian
NOMAD, Fritz Haber Institute, Max Planck Society;

/persons/resource/persons21549

Ghiringhelli, Luca M.
NOMAD, Fritz Haber Institute, Max Planck Society;

/persons/resource/persons22064

Scheffler, Matthias
NOMAD, Fritz Haber Institute, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

2303.14760.pdf
(Preprint), 3 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Speckhard, D., Carbogno, C., Ghiringhelli, L. M., Lubeck, S., Scheffler, M., & Draxl, C. (in preparation). Extrapolation to complete basis-set limit in density-functional theory by quantile random-forest models.


Cite as: https://hdl.handle.net/21.11116/0000-000C-F7FE-0
Abstract
The numerical precision of density-functional-theory (DFT) calculations depends on a variety of computational parameters, one of the most critical being the basis-set size. The ultimate precision is reached with an infinitely large basis set, i.e., in the limit of a complete basis set (CBS). Our aim in this work is to find a machine-learning model that extrapolates finite-basis-size calculations to the CBS limit. We start with a data set of 63 binary solids investigated with two all-electron DFT codes, exciting and FHI-aims, which employ very different types of basis sets. A quantile-random-forest model is used to estimate the total-energy correction with respect to a fully converged calculation as a function of the basis-set size. The random-forest model achieves a symmetric mean absolute percentage error below 25% for both codes and outperforms previous approaches in the literature. Our approach also provides prediction intervals, which quantify the uncertainty of the models' predictions.
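The abstract's core idea, a quantile random forest that predicts an energy correction together with a prediction interval, and evaluation by the symmetric mean absolute percentage error (SMAPE), can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn; the features, targets, and model settings here are placeholders, not the authors' actual model or data. Quantiles are taken over per-tree predictions, a common approximation to a full quantile regression forest.

```python
# Illustrative sketch only (synthetic data, assumed scikit-learn API);
# not the paper's actual model, features, or data set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: a "total-energy correction" y that decays with a
# scalar "basis-set size" feature x, plus noise.
X = rng.uniform(1.0, 10.0, size=(500, 1))
y = 1.0 / X[:, 0] + rng.normal(0.0, 0.02, size=500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

X_test = np.array([[2.0], [5.0], [9.0]])

# The spread of per-tree predictions approximates the conditional
# distribution; its quantiles give a prediction interval.
per_tree = np.stack([tree.predict(X_test) for tree in forest.estimators_])
lower, median, upper = np.quantile(per_tree, [0.05, 0.5, 0.95], axis=0)

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    return 100.0 * np.mean(
        2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred))
    )
```

The interval `[lower, upper]` plays the role of the prediction intervals mentioned in the abstract, and `smape` is the error measure for which the paper reports values below 25%.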