
Released

Paper

Learning Foveated Reconstruction to Preserve Perceived Image Statistics

MPS-Authors

Myszkowski, Karol
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2108.03499.pdf (Preprint), 33 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Surace, L., Wernikowski, M., Tursun, O. T., Myszkowski, K., Mantiuk, R., & Didyk, P. (2021). Learning Foveated Reconstruction to Preserve Perceived Image Statistics. Retrieved from https://arxiv.org/abs/2108.03499.


Cite as: https://hdl.handle.net/21.11116/0000-0009-73D9-1
Abstract
Foveated image reconstruction recovers a full image from a sparse set of samples distributed according to the human visual system's retinal sensitivity, which drops rapidly with eccentricity. Recently, Generative Adversarial Networks have been shown to be a promising solution for this task, as they can successfully hallucinate missing image information. As with other supervised learning approaches, the definition of the loss function and the training strategy heavily influence the output quality. In this work, we pose the question of how to efficiently guide the training of foveated reconstruction techniques so that they are fully aware of the human visual system's capabilities and limitations and therefore reconstruct visually important image features. Due to the nature of GAN-based solutions, we concentrate on human sensitivity to hallucinations at different input sample densities. We present new psychophysical experiments, a dataset, and a procedure for training foveated image reconstruction. The strategy gives the generator network flexibility by penalizing only perceptually important deviations in the output. As a result, the method aims to preserve perceived image statistics rather than natural image statistics. We evaluate our strategy and compare it to alternative solutions using a newly trained objective metric and user experiments.
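
For intuition, the sketch below (Python/NumPy) illustrates the kind of eccentricity-dependent sparse sampling the abstract describes: each pixel is retained with a probability that decays with distance from the gaze point. The hyperbolic falloff, the pixel-space eccentricity, and the parameters e0 and peak_density are illustrative assumptions for this sketch, not the sampling model used in the paper.

# Illustrative sketch only: the abstract states that input samples are
# distributed according to retinal sensitivity, which falls rapidly with
# eccentricity. The hyperbolic falloff and the parameters below (e0,
# peak_density, gaze_xy) are assumptions for demonstration, not the
# sampling scheme used in the paper.
import numpy as np

def sample_probability_map(height, width, gaze_xy, e0=30.0, peak_density=1.0):
    """Per-pixel probability of being sampled, decaying with distance
    (in pixels) from the gaze point, loosely mimicking acuity falloff."""
    ys, xs = np.mgrid[0:height, 0:width]
    eccentricity = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Hyperbolic falloff: probability drops to half of its peak at e0 pixels.
    return peak_density / (1.0 + eccentricity / e0)

def sparse_samples(image, gaze_xy, rng=None):
    """Keep each pixel with the probability given by the map; return the
    sparse image and the binary mask of retained samples."""
    rng = np.random.default_rng(rng)
    prob = sample_probability_map(*image.shape[:2], gaze_xy=gaze_xy)
    mask = rng.random(image.shape[:2]) < prob
    sparse = image * (mask[..., None] if image.ndim == 3 else mask)
    return sparse, mask

# Example: a random RGB frame with the gaze at the image centre.
frame = np.random.rand(256, 256, 3)
sparse, mask = sparse_samples(frame, gaze_xy=(128, 128), rng=0)
print(f"retained {mask.mean():.1%} of pixels")

A reconstruction network would then be trained to recover the full frame from such sparse inputs and masks; the paper's contribution concerns how that training is guided perceptually.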