Paper

Learning a self-supervised tone mapping operator via feature contrast masking loss

MPS-Authors

Wang, Chao
Computer Graphics, MPI for Informatics, Max Planck Society

Chen, Bin
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Myszkowski, Karol
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2110.09866.pdf
(Preprint), 29MB

Citation

Wang, C., Chen, B., Seidel, H.-P., Myszkowski, K., & Serrano, A. (2021). Learning a self-supervised tone mapping operator via feature contrast masking loss. Retrieved from https://arxiv.org/abs/2110.09866.


Cite as: https://hdl.handle.net/21.11116/0000-0009-710E-9
Abstract
High Dynamic Range (HDR) content is becoming ubiquitous due to the rapid
development of capture technologies. Nevertheless, the dynamic range of common
display devices is still limited, so tone mapping (TM) remains a key challenge
for image visualization. Recent work has demonstrated that neural networks can
achieve remarkable performance in this task compared to traditional methods;
however, the quality of the results of these learning-based methods is limited
by the training data. Most existing works use as their training set a curated
selection of best-performing results from existing traditional tone mapping
operators (often guided by a quality metric); therefore, the quality of newly
generated results is fundamentally limited by the performance of such
operators. This quality might be limited even further by the pool of HDR
content used for training. In this work we propose a learning-based
self-supervised tone mapping operator that is trained at test time specifically
for each HDR image and does not need any data labeling. The key novelty of our
approach is a carefully designed loss function built upon fundamental knowledge
of contrast perception that allows for directly comparing the content in the
HDR and tone-mapped images. We achieve this goal by reformulating classic VGG
feature maps into feature contrast maps that normalize local feature
differences by their average magnitude in a local neighborhood, allowing our
loss to account for contrast masking effects. We perform extensive ablation
studies and parameter explorations, and demonstrate that our solution
outperforms existing approaches with a single set of fixed parameters, as
confirmed by both objective and subjective metrics.
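The core idea of the abstract — normalizing local feature deviations by their average magnitude in a local neighborhood — can be sketched numerically. This is an illustrative approximation only, not the paper's exact formulation: the function name, the window size, and the use of a simple box filter over per-channel VGG-style activations are all assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def feature_contrast_map(feat, k=5, eps=1e-6):
    """Sketch of a feature contrast map (hypothetical helper).

    feat: array of shape (C, H, W), e.g. VGG feature activations.
    Each channel's deviation from its local mean is normalized by the
    local average feature magnitude, so the result is a unitless local
    contrast that can be compared between an HDR image and its
    tone-mapped counterpart.
    """
    # Local spatial mean over a k x k neighborhood, per channel.
    mu = uniform_filter(feat, size=(1, k, k), mode="reflect")
    # Local average magnitude, used as the normalizer.
    mag = uniform_filter(np.abs(feat), size=(1, k, k), mode="reflect")
    return (feat - mu) / (mag + eps)
```

A loss built on such maps would compare `feature_contrast_map` outputs of the HDR input and the tone-mapped output rather than raw feature values, which is what makes the comparison insensitive to the absolute dynamic range.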