Released

Conference Paper

Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression

MPS-Authors

Herzog, Robert
Computer Graphics, MPI for Informatics, Max Planck Society;
International Max Planck Research School, MPI for Informatics, Max Planck Society;

Kinuwaki, Shinichi
Computer Graphics, MPI for Informatics, Max Planck Society;

Myszkowski, Karol
Computer Graphics, MPI for Informatics, Max Planck Society;

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

Herzog, R., Kinuwaki, S., Myszkowski, K., & Seidel, H.-P. (2008). Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression. In G. Drettakis & R. Scopigno (Eds.), EUROGRAPHICS 2008 (pp. 183-192). Oxford: Blackwell.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-1CD2-8
Abstract
Currently, 3D animation rendering and video compression are completely independent processes, even when rendered frames are streamed on-the-fly within a client-server platform. In such a scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamically adjusting the rendering quality to these requirements can lead to better use of server resources. In this work, we present a framework in which the renderer and the MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec, and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression as well as image-content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side makes it possible to use advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. These error thresholds are then used to control the rendering quality and keep it well aligned with the quality of the compressed stream. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, together with an MPEG-2 implementation. Our results demonstrate clear advantages of coupling rendering with video compression: rendering becomes faster, and its temporal coherence reduces temporal artifacts in the stream.
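
To make the described coupling concrete, below is a minimal C++ sketch of the two-way interface the abstract outlines: the renderer passes per-pixel motion vectors to the codec, and the codec returns per-pixel perceptual error thresholds that the renderer uses as a refinement stopping criterion. This is not the authors' code; all type and function names (MotionVector, perceptualThresholds, renderAdaptively) are hypothetical, and the luminance-masking term is a crude stand-in for the DCT-domain HVS model used in the paper.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

struct MotionVector { float dx, dy; };          // per pixel, computed exactly on the rendering side

struct Frame {
    int width, height;
    std::vector<float> luminance;               // relative luminance per pixel
};

// Codec side (sketch): derive a per-pixel error threshold from the
// bandwidth-dependent quantizer step plus a simple luminance-masking term.
// In the real framework this derivation happens in the DCT domain, where
// contrast-masking models are cheap to evaluate.
std::vector<float> perceptualThresholds(const Frame& f, float quantStep) {
    std::vector<float> thr(f.luminance.size());
    for (size_t i = 0; i < thr.size(); ++i) {
        float lumMasking = 0.02f * std::sqrt(std::max(f.luminance[i], 1e-4f));
        thr[i] = 0.5f * quantStep + lumMasking; // rendering error below thr[i] is masked by compression
    }
    return thr;
}

// Renderer side (sketch): refine each pixel only until its estimated error
// falls below the codec-supplied threshold. In the prototype, a lightcuts-style
// per-pixel error bound would be compared against thr[i] here.
void renderAdaptively(Frame& f, const std::vector<float>& thr) {
    for (size_t i = 0; i < f.luminance.size(); ++i) {
        float error = 0.05f;                    // stand-in per-pixel error estimate
        int refinements = 0;
        while (error > thr[i] && refinements < 16) {
            error *= 0.5f;                      // each refinement step halves the estimated error
            ++refinements;
        }
    }
}

int main() {
    Frame frame{64, 64, std::vector<float>(64 * 64, 0.18f)};
    std::vector<MotionVector> mv(64 * 64, MotionVector{0.0f, 0.0f}); // renderer -> codec

    float quantStep = 0.04f;                    // grows as the transmission bandwidth shrinks
    auto thr = perceptualThresholds(frame, quantStep);              // codec -> renderer
    renderAdaptively(frame, thr);

    std::cout << "motion vectors: " << mv.size()
              << ", threshold at pixel 0: " << thr[0] << "\n";
    return 0;
}
```

The key design point the sketch illustrates is the feedback direction: because the codec knows the quantizer step chosen for the current bandwidth, it, rather than the renderer, decides how much rendering error is visible after compression, so the renderer never spends effort on detail the encoder would discard.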