
Released

Journal Article

Probabilistic autoencoder using Fisher information

MPG Authors
/persons/resource/persons268684

Zacherl,  Johannes
MPI for Astrophysics, Max Planck Society;

/persons/resource/persons202005

Frank,  Philipp
Computational Structure Formation, MPI for Astrophysics, Max Planck Society;

/persons/resource/persons16142

Enßlin,  Torsten A.
Computational Structure Formation, MPI for Astrophysics, Max Planck Society;

External Resources
No external resources are shared
Full texts (restricted access)
There are currently no full texts shared for your IP range.
Full texts (publicly accessible)
There are no publicly accessible full texts available in PuRe
Supplementary Material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Zacherl, J., Frank, P., & Enßlin, T. A. (2021). Probabilistic autoencoder using Fisher information. Entropy, 23(12): 1640. doi:10.3390/e23121640.


Citation link: https://hdl.handle.net/21.11116/0000-0009-CEFC-4
Abstract
Neural networks play a growing role in many scientific disciplines, including physics. Variational autoencoders (VAEs) are neural networks that represent the essential information of a high-dimensional data set in a low-dimensional latent space and admit a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, maps its input onto a position in latent space and additionally provides uncertainty information in terms of a variance around this position. In this work, an extension to the autoencoder architecture is introduced, the FisherNet. In this architecture, the latent space uncertainty is not generated by an additional information channel in the encoder but is instead derived from the decoder by means of the Fisher information metric. This architecture has theoretical advantages: it provides an uncertainty quantification derived directly from the model, and it accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE and that its learning performance also appears to scale better with the number of latent space dimensions.
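The construction described in the abstract can be sketched numerically. The example below is a minimal illustration, not the paper's implementation: it assumes a toy linear decoder and a Gaussian likelihood p(x|z) = N(f(z), σ²I), for which the Fisher information metric is F(z) = J(z)ᵀJ(z)/σ², with J the Jacobian of the decoder at z. The inverse of F then serves as an approximate latent-space covariance, which, unlike a per-dimension variance, also captures cross-correlations between latent dimensions.

```python
import numpy as np

def decoder(z, W):
    # Toy linear decoder f(z) = W z; a real FisherNet would use a neural network.
    return W @ z

def fisher_information(z, W, sigma2=0.1):
    # For a Gaussian likelihood p(x|z) = N(f(z), sigma2 * I), the Fisher
    # information metric is F(z) = J(z)^T J(z) / sigma2, where J is the
    # Jacobian of the decoder at z (here estimated by finite differences).
    eps = 1e-6
    d_latent = z.size
    f0 = decoder(z, W)
    J = np.stack(
        [(decoder(z + eps * np.eye(d_latent)[i], W) - f0) / eps
         for i in range(d_latent)],
        axis=1,
    )
    return J.T @ J / sigma2

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))      # 5-dim data, 2-dim latent space
z = np.array([0.3, -1.2])
F = fisher_information(z, W)
cov = np.linalg.inv(F)           # latent covariance, with cross-correlations
```

For the linear decoder the Jacobian is exactly W, so F reduces to WᵀW/σ²; with a nonlinear decoder F varies over the latent space, which is what makes it a metric.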