  Interpretability of machine-learning models in physical sciences

Ghiringhelli, L. M. (in preparation). Interpretability of machine-learning models in physical sciences.


Files

2104.10443.pdf (Preprint), 115KB
Name:
2104.10443.pdf
Description:
File downloaded from arXiv at 2021-04-30 14:11
OA-Status:
Green
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Ghiringhelli, Luca M. (1), Author
Affiliations:
(1) NOMAD, Fritz Haber Institute, Max Planck Society, ou_3253022

Content

Free keywords: Condensed Matter, Materials Science, cond-mat.mtrl-sci, Physics, Data Analysis, Statistics and Probability, physics.data-an
Abstract: In machine learning (ML), it is in general challenging to provide a detailed explanation of how a trained model arrives at its prediction. Thus, we are usually left with a black box, which from a scientific standpoint is not satisfactory. Even though numerous methods have recently been proposed to interpret ML models, somewhat surprisingly, interpretability in ML is far from being a consensual concept, with diverse and sometimes contrasting motivations for it. Reasonable candidate properties of interpretable models are model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?). Here, I review the current debate on ML interpretability and identify key challenges that are specific to ML applied to materials science.
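The abstract's distinction between model transparency and post hoc explanations can be made concrete with a minimal sketch of one common post hoc technique, permutation importance, which interrogates a trained black-box model from the outside. This is an illustrative example only (it assumes scikit-learn; the data and model are synthetic placeholders, not from the reviewed paper):

```python
# Post hoc explanation sketch: permutation importance.
# Shuffling one feature at a time and measuring the drop in score
# reveals which inputs the otherwise opaque model relies on.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; a materials-science dataset would go here.
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean_imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean_imp:.3f}")
```

Note that such an explanation says nothing about the model's internal mechanics (transparency); it only reports, after training, which inputs matter for the predictions.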

Details

Language(s): eng - English
Dates: 2021-04-21
Publication Status: Not specified
Pages: 3
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2104.10443
URI: https://arxiv.org/abs/2104.10443
Degree: -
