Abstract:
In this work, we focus on efficient compression and streaming of frames
rendered from a dynamic 3D model.
Remote rendering and on-the-fly streaming are becoming increasingly attractive for
interactive applications. The data is kept confidential and only images are sent to
the client. Even if the client's hardware resources are modest, the user can
interact with state-of-the-art rendering applications executed on the server.
Our solution focuses on video streams augmented with additional information, e.g.,
depth, which is key to increasing robustness against data loss, supports image
reconstruction, and is an important feature for stereo vision and other client-side
applications.
Two major challenges arise in such a setup: first, the server workload has to
be controlled to support many clients; second, the data transfer needs to be
efficient. Consequently, our contributions are twofold.
First, we reduce server-side computations by exploiting sparse sampling
and temporal consistency to avoid expensive pixel evaluations.
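As a rough illustration of this first contribution (a minimal sketch with assumed
buffer layouts, a hypothetical threshold, and synthetic data, not the actual
implementation), temporal consistency can be exploited by reprojecting the previous
frame's depth into the current view and flagging only inconsistent pixels, e.g.,
disocclusions, for expensive re-evaluation:

    import numpy as np

    def pixels_to_rerender(current_depth, reprojected_depth, depth_eps=1e-2):
        """Consistency test between the current depth buffer and the previous
        frame's depth reprojected into the current view; pixels that fail it
        (disocclusions, moving geometry) must be re-evaluated, all others can
        reuse the reprojected shading result."""
        invalid = ~np.isfinite(reprojected_depth)                  # no valid reprojection
        mismatch = np.abs(current_depth - reprojected_depth) > depth_eps
        return invalid | mismatch                                  # True = evaluate again

    # Hypothetical usage with synthetic HD-ready buffers.
    H, W = 720, 1280
    current_depth = np.random.rand(H, W).astype(np.float32) + 0.1
    reprojected_depth = current_depth + 1e-3 * np.random.randn(H, W).astype(np.float32)
    print(f"pixels re-rendered: {pixels_to_rerender(current_depth, reprojected_depth).mean():.1%}")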
Second, our data-transfer solution takes limited bandwidth into account, is
robust to information loss, and offers compression and decompression that are
efficient enough to support real-time interaction.
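One simple way to make such a depth-augmented transfer robust to information loss is
to encode small tiles independently, so that a dropped packet corrupts only its own
tile and decoding stays cheap on the client; the layout below is purely illustrative
and not the codec used in this work:

    import numpy as np

    def pack_tile(rgb_tile, depth_tile, near=0.1, far=100.0):
        """Pack one 8x8 tile of 8-bit RGB plus 16-bit quantized depth into a
        self-contained byte string that can be decoded without its neighbors
        (illustrative layout only)."""
        d = np.clip((depth_tile - near) / (far - near), 0.0, 1.0)  # normalize depth
        d16 = np.round(d * 65535.0).astype(np.uint16)              # 16-bit quantization
        return rgb_tile.astype(np.uint8).tobytes() + d16.tobytes()

    tile = pack_tile(np.zeros((8, 8, 3)), np.full((8, 8), 5.0))
    print(len(tile), "bytes per 8x8 tile")                         # 192 RGB + 128 depth = 320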
Our key insight is to tailor our method explicitly to rendered 3D content and to
shift some computations to client GPUs, to better balance the server/client
workload.
Our framework is progressive, scalable, and allows us to stream augmented
high-resolution (e.g., HD-ready) frames at low bandwidth on standard
hardware.
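As a back-of-the-envelope motivation for compression at HD-ready resolution, the raw
bandwidth of an uncompressed RGB-plus-depth stream already exceeds typical client
links by a wide margin (frame rate and bit depths below are assumptions):

    # Raw bandwidth of an uncompressed HD-ready RGB + depth stream
    # (assumed 30 fps, 8-bit RGB, 16-bit depth).
    width, height, fps = 1280, 720, 30
    bytes_per_pixel = 3 + 2
    raw = width * height * bytes_per_pixel * fps
    print(f"{raw / 1e6:.0f} MB/s (~{raw * 8 / 1e9:.1f} Gbit/s)")   # ~138 MB/s, ~1.1 Gbit/s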