  Texture synthesis using random shallow neural networks

Ustyuzhaninov, I., Brendel, W., Gatys, L., & Bethge, M. (2016). Texture synthesis using random shallow neural networks. Poster presented at Bernstein Conference 2016, Berlin, Germany.


Creators

Creators:
Ustyuzhaninov, I., Author
Brendel, W., Author
Gatys, L., Author
Bethge, M., Author (affiliations 1, 2)
Affiliations:
1. Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794
2. Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805

Content

Free keywords: -
Abstract: Natural image generation is currently one of the most actively explored fields in Deep Learning. A surprising recent result has been that feature representations from networks trained on a purely discriminative task can be used for state-of-the-art image synthesis (Gatys et al., 2015). However, it is still unclear which aspects of the pre-trained network are critical for high generative performance. It could be, for example, the architecture of the convolutional neural network (CNN) in terms of the number of layers, specific pooling techniques, the connection between filter complexity and filter scale (larger filters are more non-linear), the training task, the network's performance on that task, or the data it was trained on.
To explore the importance of learnt filters and deep architectures, we here consider the task of synthesising natural textures using only a single-layer CNN with completely random filters. Our surprising finding is that we can synthesise natural textures of high perceptual quality that sometimes even rival current state-of-the-art methods (Gatys et al., 2015; Liu et al., 2016), which rely on deep multi-layer representations trained with supervision. We hence conclude that neither supervised training nor architectural depth is indispensable for natural texture generation.
Furthermore, we evaluate the importance of other architectural aspects of random CNNs for natural texture synthesis. To this end we introduce a new quantitative measure of texture quality based on the state-of-the-art parametric texture model by Gatys et al. This measure allows us to objectively quantify the performance of each architecture and to perform a large-scale grid search over CNNs with random filters and different architectures (in terms of number of layers, sizes of convolutional filters, non-linearities, pooling layers, and number of feature maps per layer). The main result is that larger filters and more layers help synthesise textures that are perceptually more similar to the original, though at the cost of reduced variability.
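
As a rough illustration of the synthesis procedure described in the abstract, the following is a minimal sketch, assuming PyTorch: an image is optimised so that the Gram matrix of its feature maps under a single convolutional layer with fixed random filters matches that of a target texture. The filter size, number of feature maps, optimiser settings and placeholder target image are illustrative assumptions, not the exact configuration reported on the poster.

import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # feats: (1, C, H, W) feature maps; returns the (C, C) Gram matrix of the
    # spatially flattened maps, normalised by the number of spatial positions.
    _, c, h, w = feats.shape
    f = feats.view(c, h * w)
    return f @ f.t() / (h * w)

torch.manual_seed(0)

# Single convolutional layer with completely random, fixed filters
# (filter size and feature-map count are illustrative choices).
num_maps, ksize = 128, 11
conv = torch.nn.Conv2d(3, num_maps, ksize, padding=ksize // 2)
for p in conv.parameters():
    p.requires_grad_(False)

def features(img):
    return F.relu(conv(img))

# Target texture: an RGB tensor in [0, 1] of shape (1, 3, H, W).
# A random tensor stands in here for a real texture image.
target = torch.rand(1, 3, 256, 256)
target_gram = gram_matrix(features(target))

# Synthesise by optimising a noise image until the Gram matrix of its
# random-filter responses matches that of the target texture.
synth = torch.rand_like(target).requires_grad_(True)
opt = torch.optim.LBFGS([synth], max_iter=300)

def closure():
    opt.zero_grad()
    loss = F.mse_loss(gram_matrix(features(synth)), target_gram)
    loss.backward()
    return loss

opt.step(closure)

In the multi-layer variants explored in the grid search, the same Gram-matching loss would be summed over the feature maps of several random layers; the single-layer case above is the minimal setting highlighted by the poster.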

Details

Language(s): -
 Dates: 2016-09
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.12751/nncn.bc2016.0226
BibTex Citekey: UstyuzhaninovBGB2016
 Degree: -

Event

Title: Bernstein Conference 2016
Place of Event: Berlin, Germany
Start-/End Date: 2016-09-21 - 2016-09-23

Source 1

Title: Bernstein Conference 2016
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 233 - 234
Identifier: -