  Bit Error Robustness for Energy-Efficient DNN Accelerators

Stutz, D., Chandramoorthy, N., Hein, M., & Schiele, B. (2021). Bit Error Robustness for Energy-Efficient DNN Accelerators. In A. Smola, A. Dimakis, & I. Stoica (Eds.), Proceedings of the 4th MLSys Conference. mlsys.org.

Basic

Genre: Conference Paper
LaTeX: Bit Error Robustness for Energy-Efficient {DNN} Accelerators

Files

File 1: arXiv:2006.13977.pdf (Preprint), 2MB
File Permalink: -
Name: arXiv:2006.13977.pdf
Description: File downloaded from arXiv at 2020-12-03 07:44
OA-Status: -
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata: -
Copyright Date: -
Copyright Info: -

File 2: MLSys-2021-bit-error-robustness-for-energy-efficient-dnn-accelerators-Paper.pdf (Preprint), 2MB
Name: MLSys-2021-bit-error-robustness-for-energy-efficient-dnn-accelerators-Paper.pdf
Description: -
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata: -
Copyright Date: -
Copyright Info: -
License: -


Creators

Stutz, David (1), Author
Chandramoorthy, Nandhini (2), Author
Hein, Matthias (2), Author
Schiele, Bernt (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Learning, cs.LG; Computer Science, Architecture, cs.AR; Computer Science, Cryptography and Security, cs.CR; Computer Science, Computer Vision and Pattern Recognition, cs.CV; Statistics, Machine Learning, stat.ML
Abstract: Deep neural network (DNN) accelerators have received considerable attention in recent years because they save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further, but it causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random bit errors in (quantized) DNN weights. This yields high energy savings from both low-voltage operation and low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already quite effective at achieving robustness against bit errors. Moreover, we specifically discuss the trade-offs among accuracy, robustness, and precision: without losing more than 1% accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30% are possible at the cost of 2.5% accuracy, even for 4-bit DNNs.
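The ingredients named in the abstract can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the function names, the simple symmetric quantizer, and the independent per-bit flip model are assumptions. Note how the clipping threshold `w_max` shrinks the quantization scale, so each flipped bit perturbs the dequantized weight less, which is the intuition behind weight clipping improving bit-error robustness.

```python
import numpy as np

def quantize_symmetric(w, bits=8, w_max=None):
    """Symmetric fixed-point quantization of weights into signed integer codes.

    w_max acts as a weight-clipping threshold (an assumption for this sketch):
    a smaller range gives a smaller scale, so a flipped high-order bit changes
    the dequantized weight by less.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    w_max = np.max(np.abs(w)) if w_max is None else w_max
    scale = w_max / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int64)
    return q, scale

def inject_random_bit_errors(q, bits=8, p=0.01, seed=None):
    """Flip every stored bit independently with probability p.

    Models random bit errors in low-voltage SRAM holding the quantized
    weights; q is treated as a two's-complement pattern of `bits` bits.
    """
    rng = np.random.default_rng(seed)
    u = q & ((1 << bits) - 1)                  # unsigned bit pattern
    mask = np.zeros_like(u)
    for b in range(bits):
        flip = rng.random(q.shape) < p         # which elements flip bit b
        mask |= flip.astype(np.int64) << b
    u = u ^ mask
    # reinterpret as signed two's complement
    return np.where(u >= 1 << (bits - 1), u - (1 << bits), u)
```

In a RandBET-style training loop one would dequantize the perturbed codes (`q_err * scale`) in the forward pass so the network learns to tolerate the injected errors; the details of how errors are sampled per batch follow the paper, not this sketch.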

Details

Language(s): eng - English
Dates: 2020-06-24, 2020-10-20, 2021
Publication Status: Published online
Pages: 29 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: BibTeX Citekey: StutzMLSYS2021
Degree: -

Event

Title: Fourth Conference on Machine Learning and Systems
Place of Event: Virtual Conference
Start-/End Date: 2021-04-05 - 2021-04-09


Source 1

Title: Proceedings of the 4th MLSys Conference
Abbreviation: MLSys 2021
Source Genre: Proceedings
Creator(s):
Smola, A. (1), Editor
Dimakis, A. (1), Editor
Stoica, I. (1), Editor
Affiliations:
(1) External Organizations, ou_persistent22
Publ. Info: mlsys.org
Pages: 30 p.
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -