Item Details

  DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

Hoyer, L., Dai, D., & Van Gool, L. (2022). DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9914-9925). Piscataway, NJ: IEEE. doi:10.1109/CVPR52688.2022.00969.


Basic Information

Item permalink: https://hdl.handle.net/21.11116/0000-000A-16B5-1
Version permalink: https://hdl.handle.net/21.11116/0000-000C-2B17-B
Genre: Conference Paper
LaTeX : {DAFormer}: {I}mproving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

Files

arXiv:2111.14887.pdf (Preprint), 9MB
 
File permalink: -
File name: arXiv:2111.14887.pdf
Description: File downloaded from arXiv at 2022-03-09 14:42
OA-Status:
Visibility: Private
MIME type / checksum: application/pdf
Technical metadata:
Copyright date: -
Copyright info: -
Hoyer_DAFormer_Improving_Network_Architectures_and_Training_Strategies_for_Domain-Adaptive_Semantic_CVPR_2022_paper.pdf (Preprint), 737KB
File permalink: https://hdl.handle.net/21.11116/0000-000C-1395-6
File name: Hoyer_DAFormer_Improving_Network_Architectures_and_Training_Strategies_for_Domain-Adaptive_Semantic_CVPR_2022_paper.pdf
Description: -
OA-Status: Green
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata:
Copyright date: -
Copyright info:
These CVPR 2022 papers are the Open Access versions, provided by the Computer Vision Foundation. Except for the watermark, they are identical to the accepted versions; the final published version of the proceedings is available on IEEE Xplore. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
CC License: -

Related URLs

Creators

Creators:
Hoyer, Lukas1, Author
Dai, Dengxin2, Author
Van Gool, Luc1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547

Content Description

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: As acquiring pixel-wise annotations of real-world images for semantic segmentation is a costly process, a model can instead be trained with more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Even though a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. As the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and then propose a novel UDA method, DAFormer, based on the benchmark results. The DAFormer network consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies to stabilize the training and to avoid overfitting DAFormer to the source domain: While the Rare Class Sampling on the source domain improves the quality of pseudo-labels by mitigating the confirmation bias of self-training towards common classes, the Thing-Class ImageNet Feature Distance and a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer significantly improves the state-of-the-art performance by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes and enables learning even difficult classes such as train, bus, and truck well. The implementation is available at https://github.com/lhoyer/DAFormer.
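The Rare Class Sampling strategy described in the abstract (sampling source images so that rare classes appear more often, countering the confirmation bias of self-training) can be sketched as follows. This is a minimal illustration, not code from the DAFormer repository: the exponential re-weighting with a temperature `T` follows the paper's description, but the function name and the toy class frequencies are assumptions for demonstration.

```python
import math
import random

def rcs_probabilities(class_freq, T=0.01):
    """Map per-class pixel frequencies f_c to sampling probabilities
    P(c) proportional to exp((1 - f_c) / T): the smaller f_c, the
    more often a source image containing class c is sampled."""
    weights = [math.exp((1.0 - f) / T) for f in class_freq]
    total = sum(weights)
    return [w / total for w in weights]

# Toy example: three classes where 'train' is rare in the source data.
freq = [0.70, 0.25, 0.05]            # pixel frequency of road, car, train
probs = rcs_probabilities(freq)      # rare 'train' dominates the sampling
cls = random.choices(range(3), weights=probs)[0]  # class for the next batch
```

A small `T` sharpens the distribution toward the rarest classes, while a large `T` approaches uniform class sampling; in practice one would then draw a source image that actually contains the sampled class.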

Details

Language: eng - English
Dates: 2021-11-29, 2022, 2022
Publication status: Published online
Pages: -
Publishing info: -
Table of contents: -
Review: -
Identifiers (DOI, ISBN, etc.): BibTeX cite ID: Hoyer_CVPR2022
DOI: 10.1109/CVPR52688.2022.00969
Degree: -

Related Event

Event name: 35th IEEE/CVF Conference on Computer Vision and Pattern Recognition
Place of event: New Orleans, LA, USA
Start/End date: 2022-06-19 - 2022-06-24


Publication 1

Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition
Abbreviation: CVPR 2022
Genre: Conference Proceedings
Authors/Editors:
Affiliations:
Publisher, Place: Piscataway, NJ : IEEE
Pages: - Volume / Issue: - Sequence number: -
Start/End page: 9914 - 9925
Identifiers (ISBN, ISSN, DOI, etc.): ISBN: 978-1-6654-6946-3