Free keywords:
Computer Science - Learning (cs.LG); Computer Science - Computer Vision and Pattern Recognition (cs.CV); Signal Processing (eess.SP); Statistics - Machine Learning (stat.ML)
Abstract:
We investigate filter level sparsity that emerges in convolutional neural
networks (CNNs) which employ Batch Normalization and ReLU activation, and are
trained with adaptive gradient descent techniques and L2 regularization (or
weight decay). We conduct an extensive experimental study casting these initial
findings into hypotheses and conclusions about the mechanisms underlying the
emergent filter level sparsity. This study allows new insight into the
performance gap observed between adaptive and non-adaptive gradient descent
methods in practice. Further, analysis of the effect of training strategies and
hyperparameters on the sparsity leads to practical suggestions in designing CNN
training strategies enabling us to explore the tradeoffs between feature
selectivity, network capacity, and generalization performance. Lastly, we show
that the implicit sparsity can be harnessed for neural network speedup on par
with or better than explicit sparsification / pruning approaches, without needing
any modifications to the typical training pipeline.
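As a rough illustration of the filter level sparsity the abstract refers to, the sketch below (a hypothetical PyTorch example, not code from the paper) counts convolutional filters whose Batch Normalization scale parameters have collapsed toward zero after training; the model and threshold are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def filter_level_sparsity(model: nn.Module, threshold: float = 1e-2) -> float:
    """Fraction of BatchNorm scale (gamma) parameters whose magnitude falls
    below `threshold`, used here as a proxy for effectively pruned filters.
    The threshold value is an illustrative assumption, not from the paper."""
    total, near_zero = 0, 0
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            gamma = module.weight.detach().abs()
            total += gamma.numel()
            near_zero += int((gamma < threshold).sum())
    return near_zero / max(total, 1)

if __name__ == "__main__":
    # Any BatchNorm + ReLU CNN works; an untrained ResNet-18 serves only as a stand-in.
    net = models.resnet18(weights=None)
    print(f"Near-zero filters: {filter_level_sparsity(net):.1%}")
```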