
Released

Conference Paper

Interactive Volume Caustics in Single-scattering Media

MPG Authors

Dong, Zhao
Computer Graphics, MPI for Informatics, Max Planck Society;

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
There are no external resources on record
Full texts (restricted access)
There are currently no full texts shared for your IP range.
Full texts (publicly accessible)
There are no publicly accessible full texts available in PuRe
Supplementary material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Hu, W., Dong, Z., Ihrke, I., Grosch, T., Yuan, G., & Seidel, H.-P. (2010). Interactive Volume Caustics in Single-scattering Media. In A. Varshney, C. Wyman, D. Aliaga, & M. M. Oliveira (Eds.), Proceedings I3D 2010 (pp. 109-117). New York, NY: ACM. doi:10.1145/1730804.1730822.


Citation link: https://hdl.handle.net/11858/00-001M-0000-000F-175B-9
Abstract
Volume caustics are intricate illumination patterns formed by light first
interacting with a specular surface and subsequently being scattered inside a
participating medium. Although this phenomenon can be simulated by existing
techniques, image synthesis is usually non-trivial and time-consuming.

Motivated by interactive applications, we propose a novel volume caustics
rendering method for single-scattering participating media. Our method is based
on the observation that line rendering of illumination rays into the screen
buffer establishes a direct light path between the viewer and the light source.
This connection is introduced via a single scattering event for every pixel
affected by the line primitive. Since the GPU is a parallel processor, the
radiance contributions of these light paths to each of the pixels can be
computed and accumulated independently. The implementation of our method is
straightforward and we show that it can be seamlessly integrated with existing
methods for rendering participating media.
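The accumulation idea described above can be sketched in a few lines; this is an illustrative CPU toy, not the paper's GPU implementation (the screen buffer size, the `splat_ray` helper, and the simple DDA line walk are our own simplifications of rasterizing line primitives with additive blending):

```python
def splat_ray(buffer, p0, p1, radiance):
    # Rasterize one illumination ray as a line primitive (simple DDA walk)
    # and additively accumulate its single-scattering contribution into
    # every pixel the line touches -- the CPU analogue of rendering line
    # primitives into the screen buffer with additive blending on the GPU.
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        if 0 <= x < len(buffer[0]) and 0 <= y < len(buffer):
            buffer[y][x] += radiance  # each pixel accumulates independently

# Two rays splatted into a small screen buffer; where line primitives
# overlap, their radiance contributions simply add up.
screen = [[0.0] * 8 for _ in range(8)]
splat_ray(screen, (0, 0), (7, 7), 0.5)
splat_ray(screen, (0, 7), (7, 0), 0.5)
```

Because each pixel's accumulation is independent of every other pixel, the per-ray, per-pixel contributions map directly onto the GPU's parallel blending hardware, which is what makes the method interactive.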

We achieve high-quality results at real-time frame rates for large and dynamic
scenes containing homogeneous participating media. For inhomogeneous media, our
method achieves interactive performance that is close to real-time. Since our method
is based on a simplified physical model, it can be used to quickly generate
physically plausible previews of expensive lighting simulations.