
Paper

Deep Shading: Convolutional Neural Networks for Screen-Space Shading

MPS-Authors
Nalbach, Oliver
Computer Graphics, MPI for Informatics, Max Planck Society

Arabadzhiyska, Elena
Computer Graphics, MPI for Informatics, Max Planck Society

Mehta, Dushyant
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:1603.06078.pdf (Preprint), 9MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Nalbach, O., Arabadzhiyska, E., Mehta, D., Seidel, H.-P., & Ritschel, T. (2016). Deep Shading: Convolutional Neural Networks for Screen-Space Shading. Retrieved from http://arxiv.org/abs/1603.06078.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002B-0174-4
Abstract
In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images.
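
For illustration, the mapping the abstract describes (per-pixel attribute buffers in, RGB appearance out) can be sketched as a small convolutional encoder-decoder. The following is a minimal, hypothetical PyTorch sketch, not the architecture, buffer layout, loss, or training setup used in the paper; the choice of inputs (positions, normals, albedo), the channel counts, and all hyperparameters are assumptions made only for this example.

```python
# Minimal sketch of a screen-space shading CNN: a small encoder-decoder with
# skip connections that maps a stack of per-pixel attribute buffers to RGB.
# All architectural details here are illustrative assumptions, not the paper's.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepShadingSketch(nn.Module):
    def __init__(self, in_channels=9, base=32):
        # Assumed input layout: 3 (position) + 3 (normal) + 3 (albedo) channels.
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, base, 3, padding=1), nn.LeakyReLU(0.01))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.01))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.LeakyReLU(0.01))
        self.dec2 = nn.Sequential(nn.Conv2d(base * 4 + base * 2, base * 2, 3, padding=1), nn.LeakyReLU(0.01))
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2 + base, base, 3, padding=1), nn.LeakyReLU(0.01))
        self.out = nn.Conv2d(base, 3, 3, padding=1)  # predicted RGB appearance

    def forward(self, gbuffer):
        e1 = self.enc1(gbuffer)                      # full resolution
        e2 = self.enc2(e1)                           # 1/2 resolution
        e3 = self.enc3(e2)                           # 1/4 resolution
        d2 = F.interpolate(e3, scale_factor=2, mode="bilinear", align_corners=False)
        d2 = self.dec2(torch.cat([d2, e2], dim=1))   # skip connection from encoder
        d1 = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))
        return self.out(d1)

if __name__ == "__main__":
    net = DeepShadingSketch()
    # One fake 256x256 G-buffer (positions, normals, albedo) and a reference image.
    gbuffer = torch.randn(1, 9, 256, 256)
    target = torch.rand(1, 3, 256, 256)
    pred = net(gbuffer)
    loss = F.l1_loss(pred, target)                   # simple image-space loss, an assumption
    loss.backward()
    print(pred.shape, float(loss))
```

In this kind of setup, training pairs would consist of rendered attribute buffers and reference images of a single screen-space effect (or a combination of effects), so that the network learns the shading operator purely from example images rather than from a hand-programmed shader.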