  Likelihood Estimation in Deep Belief Networks

Theis, L., Gerwinn, S., Sinz, F., & Bethge, M. (2010). Likelihood Estimation in Deep Belief Networks. Poster presented at Bernstein Conference on Computational Neuroscience (BCCN 2010), Berlin, Germany. doi:10.3389/conf.fncom.2010.51.00116.

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-BDFE-4 Version Permalink: http://hdl.handle.net/21.11116/0000-0002-9D84-8
Genre: Poster


Creators:
Theis, L. (1, 2), Author
Gerwinn, S. (1, 2), Author
Sinz, F. (1, 2), Author
Bethge, M. (1, 2), Author
Affiliations:
1. Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Many models have been proposed to capture the statistical regularities in natural image patches. The average log-likelihood on unseen data offers a canonical way to quantify and compare the performance of statistical models. A class of models that has recently gained increasing popularity for the task of modeling complexly structured data is formed by deep belief networks. Analyses of these models, however, have typically been based on samples from the model, because the model likelihood is computationally intractable. In this study, we investigate whether the apparent ability of a particular deep belief network to capture higher-order statistical regularities in natural images is also reflected in the likelihood. Specifically, we derive a consistent estimator for the likelihood of deep belief networks that is conceptually simpler and more readily applicable than the previously published method [1]. Using this estimator, we evaluate a three-layer deep belief network and compare its density estimation performance with that of other models trained on small patches of natural images. In contrast to an earlier analysis based solely on samples, we provide evidence that the deep belief network under study is not a good model for natural images, by showing that it is outperformed even by very simple models. Further, we confirm existing results indicating that adding more layers to the network has little effect on the likelihood if each layer of the model is trained well enough. Finally, we offer a possible explanation for both the observed performance and the small effect of additional layers by analyzing a best-case scenario of the greedy learning algorithm commonly used for training this class of models.
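The abstract's central quantity, the likelihood of a latent-variable model, is an expectation over hidden states and is generally intractable; consistent estimators such as the one derived in the poster approximate it by sampling. As a minimal illustration of the idea (not the poster's actual estimator for deep belief networks), the sketch below estimates p(x) for a toy two-component latent-variable model via importance sampling, where the exact likelihood is available for comparison. All model choices and parameter values here are assumptions for illustration only.

```python
import numpy as np

# Toy latent-variable model (an assumption, not the poster's DBN):
#   h ~ Bernoulli over two hidden states, x | h ~ N(mu_h, 1).
# The likelihood p(x) = sum_h p(h) p(x | h) is tractable here, so the
# sampling-based estimate can be checked against the exact value.

rng = np.random.default_rng(0)

MU = np.array([0.0, 3.0])      # component means, one per hidden state
PRIOR = np.array([0.5, 0.5])   # p(h)

def gauss(x, mu):
    """Univariate Gaussian density N(x; mu, 1)."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def exact_likelihood(x):
    """Exact p(x), summing over the two hidden states."""
    return float(np.sum(PRIOR * gauss(x, MU)))

def is_estimate(x, num_samples=100_000):
    """Importance-sampling estimate of p(x).

    Proposal q(h) = prior p(h), so the importance weight
    p(x, h) / q(h) reduces to p(x | h)."""
    h = rng.choice(2, size=num_samples, p=PRIOR)
    return float(np.mean(gauss(x, MU[h])))

print(exact_likelihood(1.0), is_estimate(1.0))
```

Averaging log p(x) estimates of this kind over held-out patches gives the average log-likelihood used in the abstract to compare models; the poster's contribution is an estimator of this form that remains consistent for deep belief networks, where the sum over hidden states is intractable.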

Details

Language(s):
 Dates: 2010-09
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.3389/conf.fncom.2010.51.00116
BibTeX Citekey: 6704
 Degree: -

Event

Title: Bernstein Conference on Computational Neuroscience (BCCN 2010)
Place of Event: Berlin, Germany
Start-/End Date: 2010-09-27 - 2010-10-01


Source 1

Title: Frontiers in Computational Neuroscience
Abbreviation: Front Comput Neurosci
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: Lausanne : Frontiers Research Foundation
Pages: -
Volume / Issue: 2010 (Conference Abstract: Bernstein Conference on Computational Neuroscience)
Sequence Number: -
Start / End Page: -
Identifier: Other: 1662-5188
CoNE: https://pure.mpg.de/cone/journals/resource/1662-5188