

Poster

Parameter Efficient Self-Supervised Geospatial Domain Adaptation

Linus Scheibenreif · Michael Mommert · Damian Borth

Arch 4A-E Poster #355
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: As large-scale foundation models become publicly available for different domains, efficiently adapting them to individual downstream applications and additional data modalities has turned into a central challenge. For example, foundation models for geospatial and satellite remote sensing applications are commonly trained on large optical RGB or multi-spectral datasets, although data from a wide variety of heterogeneous sensors are available in the remote sensing domain. This leads to significant discrepancies between pre-training and downstream target data distributions for many important applications. Fine-tuning large foundation models to bridge that gap incurs high computational cost and can be infeasible when target datasets are small. In this paper, we address the question of how large, pre-trained foundational transformer models can be efficiently adapted to downstream remote sensing tasks involving different data modalities or limited dataset size. We present a self-supervised adaptation method that boosts downstream linear evaluation accuracy of different foundation models by 4-6% (absolute) across 8 remote sensing datasets while outperforming full fine-tuning when training only 1-2% of the model parameters. Our method significantly improves label efficiency and increases few-shot accuracy by 6-10% on different datasets. Code available at: anonymized.
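The abstract does not spell out the adaptation mechanism, so the sketch below is only an illustration of the general recipe it describes: a pre-trained transformer encoder is frozen, a small number of additional parameters are inserted and trained with a self-supervised objective on the target-modality data, and the adapted encoder is then assessed with a linear probe. The adapter design, the masking ratio, the masked-reconstruction loss, and names such as `Adapter`, `add_adapters`, and `adaptation_step` are assumptions made for this example, not the paper's actual implementation.

```python
# Minimal sketch (assumptions throughout): parameter-efficient self-supervised
# adaptation of a frozen pre-trained transformer via small bottleneck adapters.
# The adapter design and the masked-reconstruction objective are illustrative
# placeholders, not the method described in the paper.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck MLP with a residual connection; initialized as identity."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))


class AdaptedBlock(nn.Module):
    """A frozen transformer block followed by a trainable adapter."""

    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.block(x))


def add_adapters(encoder: nn.Module, dim: int) -> nn.Module:
    """Freeze all encoder weights and insert an adapter after every block.

    Assumes the encoder applies its transformer blocks sequentially via
    `encoder.blocks` (as in common ViT implementations); adjust as needed.
    """
    for p in encoder.parameters():
        p.requires_grad = False  # backbone stays frozen
    encoder.blocks = nn.Sequential(
        *(AdaptedBlock(b, dim) for b in encoder.blocks)
    )
    return encoder  # only the newly created adapter parameters are trainable


def adaptation_step(encoder, decoder, optimizer, images, mask_ratio=0.75):
    """One self-supervised step: reconstruct masked pixels of the target data.

    `decoder` is assumed to be a small trainable head mapping encoder
    features back to image space; gradients reach only adapters and decoder.
    """
    mask = (torch.rand_like(images) > mask_ratio).float()   # 1 = visible pixel
    recon = decoder(encoder(images * mask))
    loss = (((recon - images) ** 2) * (1.0 - mask)).mean()  # masked pixels only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this kind of setup, the optimizer is built only over parameters with `requires_grad=True`, which keeps the trainable fraction at a small percentage of the full model; after adaptation, the frozen encoder with its trained adapters can be evaluated with a linear probe on the downstream labels.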
