
Released

Conference Paper

DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

MPS-Authors

Dai, Dengxin
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Citation

Hoyer, L., Dai, D., & Van Gool, L. (2022). DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9914-9925). Piscataway, NJ: IEEE. doi:10.1109/CVPR52688.2022.00969.


Cite as: https://hdl.handle.net/21.11116/0000-000A-16B5-1
Abstract
As acquiring pixel-wise annotations of real-world images for semantic segmentation is a costly process, a model can instead be trained with more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Even though a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. As the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and then propose a novel UDA method, DAFormer, based on the benchmark results. The DAFormer network consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies that stabilize training and avoid overfitting DAFormer to the source domain: while Rare Class Sampling on the source domain improves the quality of pseudo-labels by mitigating the confirmation bias of self-training towards common classes, the Thing-Class ImageNet Feature Distance and a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer significantly improves the state-of-the-art performance, by 10.8 mIoU for GTA->Cityscapes and by 5.4 mIoU for Synthia->Cityscapes, and enables even difficult classes such as train, bus, and truck to be learned well. The implementation is available at https://github.com/lhoyer/DAFormer.
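
To make the Rare Class Sampling idea from the abstract concrete, below is a minimal Python sketch: classes that are rare in the source labels are sampled with higher probability, and a source image containing the drawn class is then selected. This is an illustration under stated assumptions, not the authors' implementation (see the linked repository for that): the function names, the temperature default, and the images_with_class index are hypothetical, and the paper's exact formulation may differ in detail.

    import random

    import numpy as np


    def class_sampling_probs(class_freqs, temperature=0.1):
        """Turn per-class pixel frequencies f_c into sampling probabilities.

        Classes that are rare in the source labels get higher probability;
        a smaller temperature skews sampling more strongly towards rare
        classes (the default here is illustrative, not taken from the paper).
        """
        f = np.asarray(class_freqs, dtype=np.float64)
        logits = (1.0 - f) / temperature
        logits -= logits.max()  # subtract max for numerical stability
        p = np.exp(logits)
        return p / p.sum()


    def sample_source_image(images_with_class, probs, rng=random):
        """Sample a class c ~ probs, then a source image whose labels contain c.

        images_with_class[c] is an assumed precomputed list of indices of
        source images containing at least one pixel of class c.
        """
        c = rng.choices(range(len(probs)), weights=probs, k=1)[0]
        return rng.choice(images_with_class[c]), c


    if __name__ == "__main__":
        # Toy example: class 2 covers only 1% of source pixels, so it is
        # drawn far more often than its pixel share alone would suggest.
        freqs = [0.60, 0.39, 0.01]
        images_with_class = {0: [0, 1, 2, 3], 1: [0, 2, 4], 2: [5]}
        probs = class_sampling_probs(freqs)
        print("class sampling probabilities:", probs.round(3))
        print("sampled (image index, class):",
              sample_source_image(images_with_class, probs))

In a UDA training loop, such a sampler would replace uniform sampling of source images, which is how the abstract's claim about mitigating the confirmation bias of self-training towards common classes would take effect in practice.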