

Journal Article

Understanding machine-learned density functionals

MPS-Authors

Rupp, Matthias
Theory, Fritz Haber Institute, Max Planck Society;
Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials (MARVEL), Department of Chemistry, University of Basel;

Fulltext (public)

1404.1333.pdf
(Preprint), 750 KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Li, L., Snyder, J. C., Pelaschier, I. M., Huang, J., Niranjan, U., Duncan, P., et al. (2016). Understanding machine-learned density functionals. International Journal of Quantum Chemistry, 116(11), 819-833. doi:10.1002/qua.25040.


Cite as: http://hdl.handle.net/11858/00-001M-0000-002A-D796-3
Abstract
Machine learning (ML) is an increasingly popular statistical tool for analyzing either measured or calculated data sets. Here, we explore its application to a well-defined physics problem, investigating how the underlying physics is handled by ML and how self-consistent solutions can be found by limiting the domain in which ML is applied. The particular problem is how to find accurate approximate density functionals for the kinetic energy (KE) of noninteracting electrons. Kernel ridge regression is used to approximate the KE of noninteracting fermions in a one-dimensional box as a functional of their density. The properties of different kernels and methods of cross-validation are explored, reproducing the physics faithfully in some cases but not others. We also address how self-consistency can be achieved with information on only a limited electronic density domain. Accurate constrained optimal densities are found via a modified Euler-Lagrange constrained minimization of the machine-learned total energy, despite the poor quality of its functional derivative. A projected gradient descent algorithm is derived using local principal component analysis. Additionally, a sparse grid representation of the density can be used without degrading the performance of the methods. The implications for machine-learned density functional approximations are discussed.
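The core regression step described in the abstract — kernel ridge regression mapping a gridded density to a kinetic energy — can be sketched on a toy problem. The Python snippet below is an illustration under simplified assumptions, not the paper's code or dataset: it uses a single noninteracting particle in a 1D box of varying length L (density n(x) = (2/L) sin²(πx/L), exact KE = π²/2L² in atomic units) and picks the Gaussian kernel width with a median-distance heuristic, whereas the paper selects hyperparameters by cross-validation.

```python
import numpy as np

# Toy stand-in (an assumption, not the paper's dataset): one particle in a
# 1D box of length L. Ground-state density on a fixed grid and exact KE.
def density(L, grid):
    return np.where(grid < L, (2.0 / L) * np.sin(np.pi * grid / L) ** 2, 0.0)

grid = np.linspace(0.0, 1.0, 100)
L_train = np.linspace(0.5, 1.0, 20)       # training box lengths
L_test = np.array([0.62, 0.83])           # held-out box lengths

X_train = np.array([density(L, grid) for L in L_train])
y_train = np.pi ** 2 / (2.0 * L_train ** 2)
X_test = np.array([density(L, grid) for L in L_test])
y_test = np.pi ** 2 / (2.0 * L_test ** 2)

def sq_dists(A, B):
    # Pairwise squared Euclidean distances between density vectors.
    return (np.sum(A ** 2, axis=1)[:, None]
            + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)

# Kernel width from the median pairwise distance (a common heuristic);
# the paper tunes such hyperparameters by cross-validation instead.
D2 = sq_dists(X_train, X_train)
sigma = np.sqrt(np.median(D2[D2 > 0]))
lam = 1e-8                                # ridge regularization strength

# Standard KRR closed form: solve (K + lam*I) alpha = y.
K = np.exp(-D2 / (2.0 * sigma ** 2))
alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)

# Predicted KE is a kernel-weighted sum over training densities.
K_test = np.exp(-sq_dists(X_test, X_train) / (2.0 * sigma ** 2))
y_pred = K_test @ alpha
print("true KE:", y_test, "predicted KE:", y_pred)
```

This fit only produces total energies; the paper's further machinery (the modified Euler-Lagrange minimization and the local-PCA projected gradient descent) addresses how to find self-consistent densities from such a model despite its poor functional derivative.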