  Bit Error Robustness for Energy-Efficient DNN Accelerators

Stutz, D., Chandramoorthy, N., Hein, M., & Schiele, B. (2021). Bit Error Robustness for Energy-Efficient DNN Accelerators. In A. Smola, A. Dimakis, & I. Stoica (Eds.), Proceedings of the 4th MLSys Conference. mlsys.org.


Basic data

Genre: Conference paper
LaTeX: Bit Error Robustness for Energy-Efficient {DNN} Accelerators

Files

arXiv:2006.13977.pdf (Preprint), 2MB
 
File permalink: -
Name: arXiv:2006.13977.pdf
Description: File downloaded from arXiv at 2020-12-03 07:44
OA status: -
Visibility: Private
MIME type / checksum: application/pdf
Technical metadata: -
Copyright date: -
Copyright info: -

MLSys-2021-bit-error-robustness-for-energy-efficient-dnn-accelerators-Paper.pdf (any fulltext), 2MB
Name: MLSys-2021-bit-error-robustness-for-energy-efficient-dnn-accelerators-Paper.pdf
Description: -
OA status: Green
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: -
Copyright info: -
License: -

External references

Creators

Creators:
Stutz, David (1), Author
Chandramoorthy, Nandhini (2), Author
Hein, Matthias (2), Author
Schiele, Bernt (1), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Keywords: Computer Science, Learning, cs.LG; Computer Science, Architecture, cs.AR; Computer Science, Cryptography and Security, cs.CR; Computer Science, Computer Vision and Pattern Recognition, cs.CV; Statistics, Machine Learning, stat.ML
Abstract: Deep neural network (DNN) accelerators have received considerable attention in recent years due to their energy savings compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further; however, it causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random bit errors in (quantized) DNN weights. This leads to high energy savings from both low-voltage operation and low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already a quite effective way to achieve robustness against bit errors. Moreover, we specifically discuss the involved trade-offs regarding accuracy, robustness, and precision: without losing more than 1% in accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30% are possible at the cost of 2.5% accuracy, even for 4-bit DNNs.
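
To make the three ingredients in the abstract concrete, here is a minimal Python/NumPy sketch of weight clipping, symmetric fixed-point quantization, and random bit error injection. It is an illustration under simple assumptions (a plain uniform quantizer, independent bit flips with rate p); the helper names clip_and_quantize and inject_bit_errors are hypothetical and not taken from the paper's code.

import numpy as np

def clip_and_quantize(w, bits=8, w_max=0.1):
    # Weight clipping to [-w_max, w_max] followed by symmetric
    # fixed-point quantization to signed bits-bit integers.
    # (Assumption: a plain uniform quantizer, not necessarily the
    # paper's exact scheme.)
    w = np.clip(w, -w_max, w_max)
    scale = (2 ** (bits - 1) - 1) / w_max
    q = np.round(w * scale).astype(np.int64)
    return q, scale

def inject_bit_errors(q, bits=8, p=0.01, rng=None):
    # Flip each stored bit independently with probability p, modeling
    # random bit errors in low-voltage SRAM holding quantized weights.
    rng = np.random.default_rng() if rng is None else rng
    u = q & ((1 << bits) - 1)                  # two's-complement bit view
    flips = rng.random(q.shape + (bits,)) < p  # which bits flip
    mask = (flips * (1 << np.arange(bits))).sum(axis=-1)
    u = u ^ mask
    # Convert the corrupted bit pattern back to signed integers.
    return np.where(u >= 1 << (bits - 1), u - (1 << bits), u)

# RandBET-style step (sketch): perturb the quantized weights with random
# bit errors before the forward pass, so training sees the errors the
# hardware would introduce at low voltage.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(4, 4))
q, scale = clip_and_quantize(w, bits=8, w_max=0.1)
q_err = inject_bit_errors(q, bits=8, p=0.01, rng=rng)
w_err = q_err / scale  # dequantized, error-perturbed weights for the forward pass

In a RandBET-style training loop, the loss would be computed with w_err, so the network is penalized for sensitivity to bit flips; clipping to a small w_max also tightens the quantization range, which is in the spirit of the paper's observation that weight clipping alone already improves robustness.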

Details

Language(s): eng - English
Date: 2020-06-24, 2020-10-20, 2021, 2021
Publication status: Published online
Pages: 29 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: BibTeX citekey: StutzMLSYS2021
Degree: -

Event

Title: Fourth Conference on Machine Learning and Systems
Venue: Virtual Conference
Start/end date: 2021-04-05 - 2021-04-09

Decision

Project information

Source 1

Title: Proceedings of the 4th MLSys Conference
Short title: MLSys 2021
Source genre: Conference proceedings
Creators:
Smola, A. (1), Editor
Dimakis, A. (1), Editor
Stoica, I. (1), Editor
Affiliations:
(1) External Organizations, ou_persistent22
Place, publisher, edition: mlsys.org
Pages: 30 p.
Volume / issue: -
Article number: -
Start/end page: -
Identifier: -