Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators

Stutz, D., Chandramoorthy, N., Hein, M., & Schiele, B. (2021). Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. Retrieved from https://arxiv.org/abs/2104.08323.

Basic

Genre: Paper
Latex: Random and Adversarial Bit Error Robustness: {E}nergy-Efficient and Secure {DNN} Accelerators

Files

arXiv:2104.08323.pdf (Preprint), 3MB
Name: arXiv:2104.08323.pdf
Description: File downloaded from arXiv at 2021-11-22 09:36. arXiv admin note: substantial text overlap with arXiv:2006.13977
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Stutz, David (1), Author
Chandramoorthy, Nandhini (2), Author
Hein, Matthias (2), Author
Schiele, Bernt (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Learning, cs.LG; Computer Science, Architecture, cs.AR; Computer Science, Cryptography and Security, cs.CR; Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Deep neural network (DNN) accelerators have received considerable attention in recent years due to their potential to save energy compared to mainstream hardware. Low-voltage operation of DNN accelerators reduces energy consumption further, but causes bit-level failures in the memory storing the quantized DNN weights. Furthermore, DNN accelerators have been shown to be vulnerable to adversarial attacks on voltage controllers or individual bits. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) or adversarial bit error training (AdvBET) significantly improves robustness against random or adversarial bit errors in quantized DNN weights. This leads not only to high energy savings from low-voltage operation and low-precision quantization, but also improves the security of DNN accelerators. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays, and achieves robustness against both targeted and untargeted bit-level attacks. Without losing more than 0.8%/2% in test accuracy, we can reduce energy consumption on CIFAR10 by 20%/30% for 8/4-bit quantization using RandBET. Allowing up to 320 adversarial bit errors, AdvBET reduces test error from above 90% (chance level) to 26.22% on CIFAR10.
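The abstract's RandBET builds on injecting random bit errors into the quantized weights, mimicking the bit-level memory failures seen at low voltage. A minimal NumPy sketch of that bit-error model only, not the authors' implementation: the function name, the symmetric fixed-point scheme, and the parameters are illustrative assumptions, with each bit of the m-bit two's-complement weight representation flipped independently with probability p.

```python
import numpy as np

def inject_random_bit_errors(weights, p, m=8, rng=None):
    """Flip each bit of the m-bit fixed-point representation of `weights`
    independently with probability p, then dequantize back to floats."""
    rng = np.random.default_rng() if rng is None else rng
    # Symmetric fixed-point quantization to m bits (illustrative choice).
    scale = np.abs(weights).max() / (2 ** (m - 1) - 1)
    q = np.clip(np.round(weights / scale), -(2 ** (m - 1)), 2 ** (m - 1) - 1)
    q = q.astype(np.int64) & ((1 << m) - 1)  # two's-complement bit pattern
    # Each of the m bits flips independently with probability p.
    flip = np.zeros_like(q)
    for bit in range(m):
        flip |= (rng.random(q.shape) < p).astype(np.int64) << bit
    q ^= flip
    # Reinterpret as signed two's complement and map back to floats.
    q = np.where(q >= 2 ** (m - 1), q - 2 ** m, q)
    return q * scale
```

In a RandBET-style training loop, perturbed weights of this kind would be used in the forward pass while gradient updates apply to the clean weights; higher bit error rates p would correspond to lower operating voltages.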

Details

Language(s): eng - English
Dates: 2021-04-16, 2021
Publication Status: Published online
Pages: 39 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2104.08323
URI: https://arxiv.org/abs/2104.08323
BibTex Citekey: Stutz2104.08323
Degree: -
