  On Implicit Filter Level Sparsity in Convolutional Neural Networks

Mehta, D., Kim, K. I., & Theobalt, C. (2018). On Implicit Filter Level Sparsity in Convolutional Neural Networks. Retrieved from http://arxiv.org/abs/1811.12495.

Files

arXiv:1811.12495.pdf (Preprint), 8MB
Name: arXiv:1811.12495.pdf
Description: File downloaded from arXiv at 2019-02-04 12:15
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Mehta, Dushyant (1), Author
Kim, Kwang In (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Learning, cs.LG; Computer Science, Computer Vision and Pattern Recognition, cs.CV; eess.SP; Statistics, Machine Learning, stat.ML
Abstract: We investigate filter level sparsity that emerges in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques and L2 regularization (or weight decay). We conduct an extensive experimental study casting these initial findings into hypotheses and conclusions about the mechanisms underlying the emergent filter level sparsity. This study allows new insight into the performance gap observed between adaptive and non-adaptive gradient descent methods in practice. Further, analysis of the effect of training strategies and hyperparameters on the sparsity leads to practical suggestions in designing CNN training strategies, enabling us to explore the tradeoffs between feature selectivity, network capacity, and generalization performance. Lastly, we show that the implicit sparsity can be harnessed for neural network speedup on par with or better than explicit sparsification / pruning approaches, without needing any modifications to the typical training pipeline.
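
As an illustrative aside (not the authors' exact procedure), a common proxy for filter level sparsity in Batch Normalization + ReLU networks is the fraction of BatchNorm scale parameters (gamma) driven to near zero, since a near-zero gamma followed by ReLU effectively deactivates the corresponding filter. A minimal PyTorch sketch, assuming a standard Conv-BatchNorm-ReLU architecture and a hypothetical threshold of 1e-3:

    import torch.nn as nn

    def filter_sparsity(model: nn.Module, threshold: float = 1e-3) -> float:
        """Fraction of BatchNorm2d channels whose |gamma| is below `threshold`."""
        total, inactive = 0, 0
        for module in model.modules():
            if isinstance(module, nn.BatchNorm2d):
                gamma = module.weight.detach().abs()  # per-channel scale parameters
                total += gamma.numel()
                inactive += int((gamma < threshold).sum().item())
        return inactive / max(total, 1)

    # Toy Conv-BN-ReLU stack; a freshly initialized network reports ~0% since
    # gamma starts at 1. Per the paper, networks trained with adaptive optimizers
    # and L2 regularization would show a substantial near-zero fraction.
    net = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1, bias=False),
        nn.BatchNorm2d(64),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, 3, padding=1, bias=False),
        nn.BatchNorm2d(128),
        nn.ReLU(inplace=True),
    )
    print(f"Fraction of near-zero BN scales: {filter_sparsity(net):.2%}")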

Details

Language(s): eng - English
 Dates: 2018-11-29, 2018
 Publication Status: Published online
 Pages: 13 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1811.12495
URI: http://arxiv.org/abs/1811.12495
BibTex Citekey: Mehta_arXIv1811.12495
 Degree: -
