
Released

Paper

Interpretability of machine-learning models in physical sciences

MPS-Authors

Ghiringhelli, Luca M.
NOMAD, Fritz Haber Institute, Max Planck Society;

Fulltext (public)

2104.10443.pdf (Preprint), 115 KB

Citation

Ghiringhelli, L. M. (in preparation). Interpretability of machine-learning models in physical sciences.


Cite as: https://hdl.handle.net/21.11116/0000-0008-6F30-6
Abstract
In machine learning (ML), it is generally challenging to provide a detailed explanation of how a trained model arrives at its predictions. Thus, we are usually left with a black box, which from a scientific standpoint is not satisfactory. Even though numerous methods have recently been proposed to interpret ML models, somewhat surprisingly, there is still no consensus on what interpretability means, with diverse and sometimes contrasting motivations behind it. Reasonable candidate properties of interpretable models are model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?). Here, I review the current debate on ML interpretability and identify key challenges that are specific to ML applied to materials science.