  Both Style and Fog Matter: Cumulative Domain Adaptation for Semantic Foggy Scene Understanding

Ma, X., Wang, Z., Zhan, Y., Zheng, Y., Wang, Z., Dai, D., et al. (2022). Both Style and Fog Matter: Cumulative Domain Adaptation for Semantic Foggy Scene Understanding. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 18900-18909). Piscataway, NJ: IEEE. doi:10.1109/CVPR52688.2022.01835.

Basic

Genre: Conference Paper
Latex: Both Style and Fog Matter: {C}umulative Domain Adaptation for Semantic Foggy Scene Understanding

Files

arXiv:2112.00484.pdf (Preprint), 7MB
 
File Permalink:
-
Name:
arXiv:2112.00484.pdf
Description:
File downloaded from arXiv at 2022-03-09 11:34
OA-Status:
Visibility:
Private
MIME-Type / Checksum:
application/pdf
Technical Metadata:
Copyright Date:
-
Copyright Info:
-
Ma_Both_Style_and_Fog_Matter_Cumulative_Domain_Adaptation_for_Semantic_CVPR_2022_paper.pdf (Preprint), 7MB
Name:
Ma_Both_Style_and_Fog_Matter_Cumulative_Domain_Adaptation_for_Semantic_CVPR_2022_paper.pdf
Description:
-
OA-Status:
Green
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
These CVPR 2022 papers are the Open Access versions, provided by the Computer Vision Foundation. Except for the watermark, they are identical to the accepted versions; the final published version of the proceedings is available on IEEE Xplore. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
License:
-

Locators


Creators

Ma, Xianzheng [1], Author
Wang, Zhixiang [1], Author
Zhan, Yacheng [1], Author
Zheng, Yinqiang [1], Author
Wang, Zheng [1], Author
Dai, Dengxin [2], Author
Lin, Chia-Wen [1], Author
Affiliations:
[1] External Organizations, ou_persistent22
[2] Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Although considerable progress has been made in semantic scene understanding under clear weather, it remains a tough problem under adverse weather conditions, such as dense fog, due to the uncertainty caused by imperfect observations. Moreover, the difficulty of collecting and labeling foggy images hinders progress in this field. Given the success of semantic scene understanding under clear weather, it is reasonable to transfer knowledge learned from clear images to the foggy domain. The problem thus becomes bridging the domain gap between clear and foggy images. Unlike previous methods that mainly focus on closing the fog-induced domain gap -- defogging the foggy images or fogging the clear images -- we propose to alleviate the domain gap by considering fog influence and style variation simultaneously. The motivation is our finding that the style-related gap and the fog-related gap can be separated and closed individually by adding an intermediate domain. Thus, we propose a new pipeline that cumulatively adapts style, fog, and the dual factor (style and fog). Specifically, we devise a unified framework that disentangles the style factor and the fog factor separately, and then the dual factor, from images in different domains. Furthermore, we coordinate the disentanglement of the three factors with a novel cumulative loss to disentangle them thoroughly. Our method achieves state-of-the-art performance on three benchmarks and shows generalization ability in rainy and snowy scenes.
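The abstract describes a cumulative schedule in which the style gap, the fog gap, and their combination (the dual factor) are disentangled and closed in turn. The record contains no implementation details, so the following is only a minimal illustrative sketch of what such a cumulative loss could look like; the function name, the staging, and the weights are all assumptions, not the authors' implementation.

```python
def cumulative_loss(l_style, l_fog, l_dual, stage, weights=(1.0, 1.0, 1.0)):
    """Toy cumulative loss: stage 0 uses only the style term, stage 1 adds
    the fog term, and stage 2 adds the dual-factor (style + fog) term.

    l_style, l_fog, l_dual: scalar losses for each disentangled factor.
    stage: 0, 1, or 2 -- how far the cumulative schedule has progressed.
    weights: illustrative per-factor weighting.
    """
    terms = (l_style, l_fog, l_dual)
    return sum(w * t for w, t in zip(weights[: stage + 1], terms[: stage + 1]))

# Early in adaptation only the style gap contributes ...
early = cumulative_loss(0.5, 0.3, 0.2, stage=0)
# ... while at the final stage all three disentanglement terms are summed.
final = cumulative_loss(0.5, 0.3, 0.2, stage=2)
```

In the actual pipeline each term would presumably come from a network's disentanglement objective on the corresponding clear / intermediate / foggy domain pair; here they are placeholder scalars used only to show the cumulative structure.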

Details

Language(s): eng - English
Dates: 2021-12-01 / 2022 / 2022
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: Ma_CVPR2022
DOI: 10.1109/CVPR52688.2022.01835
 Degree: -

Event

Title: 35th IEEE/CVF Conference on Computer Vision and Pattern Recognition
Place of Event: New Orleans, LA, USA
Start-/End Date: 2022-06-19 - 2022-06-24

Legal Case


Project information


Source 1

Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition
Abbreviation: CVPR 2022
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: Piscataway, NJ : IEEE
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 18900 - 18909
Identifier: ISBN: 978-1-6654-6946-3