

Conference Paper

Intrinsic disentanglement: an invariance view for deep generative models

MPS-Authors

Besserve, M.
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Schölkopf, B.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

Citation

Besserve, M., Sun, R., & Schölkopf, B. (2018). Intrinsic disentanglement: an invariance view for deep generative models. In ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models. Retrieved from https://sites.google.com/view/tadgm/accepted-papers.


Cite as: http://hdl.handle.net/21.11116/0000-0007-8B77-7
Abstract
Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are important tools to capture and investigate the properties of complex empirical data. However, the complexity of their inner elements makes their functioning challenging to interpret and modify; in this respect, these architectures behave as black-box models. In order to better understand the function of such networks, we analyze the modularity of these systems by quantifying the disentanglement of their intrinsic parameters. This concept relates to a notion of invariance to transformations of internal variables of the generative model, recently introduced in the field of causality. Our experiments on the generation of human faces with VAEs support that modularity between weights distributed over layers of the generator architecture is achieved to some degree, and can be used to better understand the functioning of these architectures. Finally, we show that modularity can be enhanced during optimization.
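
To make the idea of transforming internal variables concrete, here is a minimal, hypothetical sketch (not the authors' code: the toy generator, the choice of intervention, and the modularity proxy are all illustrative assumptions). It shifts one hidden unit of a small two-layer generator and measures how concentrated the resulting change in the output is; a highly concentrated change would suggest that the intervened variable influences only a localized part of the output.

    # Hypothetical illustration: intervene on an internal variable of a
    # toy generator and probe how localized the effect on the output is.
    import numpy as np

    rng = np.random.default_rng(0)

    # A toy two-layer generator: latent z -> hidden h -> output x.
    W1 = rng.normal(size=(8, 4))   # layer-1 weights (assumed shapes)
    W2 = rng.normal(size=(16, 8))  # layer-2 weights

    def generate(z, intervention=None):
        """Run the generator; optionally transform one hidden unit."""
        h = np.tanh(W1 @ z)
        if intervention is not None:
            idx, delta = intervention
            h = h.copy()
            h[idx] += delta        # the transformation of an internal variable
        return np.tanh(W2 @ h)

    z = rng.normal(size=4)
    x_ref = generate(z)                          # unperturbed output
    x_int = generate(z, intervention=(3, 1.0))   # shift hidden unit 3

    # A crude proxy for modularity: is the output change concentrated
    # in a few dimensions, or spread over the whole output?
    diff = np.abs(x_int - x_ref)
    top3 = diff[np.argsort(diff)[-3:]].sum()
    print("fraction of output change in top-3 dims:", top3 / diff.sum())
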