  A robust estimator of mutual information for deep learning interpretability

Piras, D., Peiris, H. V., Pontzen, A., Lucie-Smith, L., Guo, N., & Nord, B. (2023). A robust estimator of mutual information for deep learning interpretability. Machine Learning: Science and Technology, 4(2): 025006. doi:10.1088/2632-2153/acc444.


Files

A robust estimator of mutual information for deep learning.pdf (Any fulltext), 2MB
 
File Permalink: -
Name: A robust estimator of mutual information for deep learning.pdf
Description: -
OA-Status: -
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata: -
Copyright Date: -
Copyright Info: -
License: -

Creators

 Creators:
Piras, Davide, Author
Peiris, Hiranya V., Author
Pontzen, Andrew, Author
Lucie-Smith, Luisa1, Author           
Guo, Ningyuan, Author
Nord, Brian, Author
Affiliations:
1Physical Cosmology, MPI for Astrophysics, Max Planck Society, ou_2205644              

Content

Free keywords: -
 Abstract: We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning (DL) models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced 'Jimmie'), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient, robust to the choice of hyperparameters and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established MI estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train DL models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables, and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available in this GitHub repository.
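The abstract describes estimating MI by fitting a Gaussian mixture model to the joint distribution of two variables. The following is a minimal sketch of that general idea only, not the authors' GMM-MI package or its API: it fits a scikit-learn GaussianMixture to toy bivariate Gaussian samples (the choice of 3 components is an arbitrary assumption; the paper's hyperparameter selection and uncertainty quantification are omitted) and estimates MI by Monte Carlo as the average of log p(x,y) - log p(x) - log p(y).

# Minimal sketch (not the authors' GMM-MI package): estimate mutual information
# between two scalar variables by fitting a Gaussian mixture to their joint
# samples and averaging log p(x, y) - log p(x) - log p(y) over fresh draws
# from the fitted model. The 3-component choice is arbitrary for illustration.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy bivariate Gaussian data with known ground-truth MI = -0.5 * ln(1 - rho^2).
rho = 0.8
samples = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)

# Fit a Gaussian mixture to the joint (x, y) samples.
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples)

def marginal_log_pdf(model, points, dim):
    """Log-density of one coordinate; the marginal of a GMM is itself a GMM."""
    means = model.means_[:, dim]
    stds = np.sqrt(model.covariances_[:, dim, dim])
    comp_log = norm.logpdf(points[:, None], loc=means, scale=stds)
    return logsumexp(comp_log + np.log(model.weights_), axis=1)

# Monte Carlo average of the log-density ratio using draws from the fitted GMM.
mc, _ = gmm.sample(100_000)
mi = np.mean(gmm.score_samples(mc)
             - marginal_log_pdf(gmm, mc[:, 0], 0)
             - marginal_log_pdf(gmm, mc[:, 1], 1))

print(f"MI estimate: {mi:.3f} nats (ground truth: {-0.5 * np.log(1 - rho**2):.3f})")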

Details

Language(s): eng - English
 Dates: 2023-04-11
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: Peer
 Identifiers: DOI: 10.1088/2632-2153/acc444
 Degree: -

Source 1

Title: Machine Learning: Science and Technology
Abbreviation: Mach. Learn.: Sci. Technol.
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: Bristol, UK : IOP Publishing
Pages: -
Volume / Issue: 4 (2)
Sequence Number: 025006
Start / End Page: -
Identifier: ISSN: 2632-2153
CoNE: https://pure.mpg.de/cone/journals/resource/2632-2153