dereksgithub / SARoptical_fusion


SAR-Optical Applications

Dataset

We conducted our experiments on a subset of the DFC2020 dataset. This subset consists of 2000 triple samples: SAR-optical image pairs together with more accurate, semi-manually derived high-resolution (10 m) land-cover maps. The SAR images, with dual-polarized (VV and VH) components, were acquired by the Sentinel-1 satellite. The 12-band optical images were taken by the multi-spectral sensor of the Sentinel-2 satellite.
Link: https://ieee-dataport.org/competitions/2020-ieee-grss-data-fusion-contest
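As a point of reference, one triple sample can be pictured as a dual-pol Sentinel-1 patch, a 12-band Sentinel-2 patch, and a per-pixel land-cover map. The sketch below is illustrative only: the patch size, array layout, and dictionary keys are assumptions, not the repository's actual data loader.

```python
import numpy as np

# Hedged sketch of one DFC2020-style triple sample.
# Patch size and channel-first layout are assumed, not taken from this repo.
H = W = 256                                        # assumed patch size
sar = np.zeros((2, H, W), dtype=np.float32)        # Sentinel-1: VV and VH
optical = np.zeros((12, H, W), dtype=np.float32)   # Sentinel-2: 12 bands
labels = np.zeros((H, W), dtype=np.int64)          # per-pixel class ids
triple = {"sar": sar, "optical": optical, "landcover": labels}
```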

SAR-Optical Feature Fusion

The effective combination of the complementary information provided by huge amounts of unlabeled multi-sensor data (e.g., Synthetic Aperture Radar (SAR) and optical images) is a critical issue in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multi-view data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a self-supervised framework for SAR-optical data fusion and land-cover mapping. SAR and optical images are fused using a multi-view contrastive loss at the image level and the super-pixel level, following one of three possible strategies: early, intermediate, and late fusion. For the land-cover mapping task, we assign each pixel a land-cover class through the joint use of pre-trained features and the spectral information of the image itself. Experimental results show that the proposed approach not only achieves comparable accuracy but also reduces the feature dimension with respect to the image-level contrastive learning method. Among the three fusion strategies, intermediate fusion achieves the best performance.
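The multi-view contrastive objective pairs the SAR and optical embeddings of the same location as positives and treats the rest of the batch as negatives. The sketch below shows a symmetric InfoNCE-style loss in NumPy; the function name, temperature value, and batch shapes are illustrative assumptions, not this repository's implementation.

```python
import numpy as np

def info_nce(z_sar, z_opt, temperature=0.1):
    """Symmetric InfoNCE loss between two views (a minimal sketch).

    z_sar, z_opt: (N, D) L2-normalized feature batches; matching rows
    are positive pairs, all other rows in the batch act as negatives.
    """
    # Scaled cosine-similarity matrix between the two views
    logits = z_sar @ z_opt.T / temperature          # (N, N)
    labels = np.arange(len(z_sar))                  # positives on diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average of SAR->optical and optical->SAR directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With this loss, aligned SAR-optical pairs score lower (better) than mismatched ones, which is the signal that drives the fusion training.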

Methods


Results


Training

python train_XX.py

Further Improvements

We further used multi-crops (multi-scale superpixels) and an exponential moving average (EMA) of the model weights to stabilize the training process.

python train_twins2s2_shift_spix.py
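The EMA stabilization mentioned above keeps a slowly moving copy of the model weights. The following is a minimal sketch of such an update over a parameter dictionary; the function name, decay value, and dict-based parameter layout are assumptions for illustration, not this repository's code.

```python
def ema_update(ema_params, params, decay=0.99):
    """Blend current parameters into an exponential moving average.

    ema_params, params: dicts mapping parameter name -> value
    (a stand-in for a model state dict). Mutates and returns ema_params.
    """
    for name in ema_params:
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * params[name]
    return ema_params
```

A higher decay (e.g. 0.99 or 0.999) makes the averaged weights change more slowly, which is what smooths the training signal.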

Unsupervised SAR-Optical Segmentation

SAR and optical images provide complementary information on land-cover categories in terms of both spectral signatures and dielectric properties. This paper proposes a new unsupervised land-cover segmentation approach, based on contrastive learning and vector quantization, that jointly uses SAR and optical images. The approach exploits a pseudo-Siamese network to extract and discriminate features of different categories, where one branch is a ResUnet and the other branch is a Gumbel-softmax vector quantizer. The core idea is to minimize the contrastive loss between the learned features of the two branches. To segment images, the Gumbel-softmax output at each pixel is discretized into a one-hot vector, and its proxy label is chosen as the corresponding class. The proposed approach is validated on a subset of the DFC2020 dataset covering six different land-cover categories. Experimental results demonstrate improvements over current state-of-the-art techniques and the effectiveness of unsupervised land-cover segmentation on SAR-optical image pairs.
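The per-pixel discretization step can be sketched with a Gumbel-softmax sample followed by a hard one-hot assignment. The NumPy version below is a simplified illustration (no gradients, assumed temperature and shapes), not the quantizer branch used in the repository.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None, hard=True):
    """Sample (approximately) one-hot class assignments per pixel.

    logits: (..., K) unnormalized class scores over K proxy classes.
    With hard=True the relaxed sample is discretized to an exact
    one-hot vector via argmax (no straight-through gradient here).
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + g)
    y = np.exp((y - y.max(axis=-1, keepdims=True)) / tau)
    soft = y / y.sum(axis=-1, keepdims=True)      # relaxed sample
    if not hard:
        return soft
    one_hot = np.zeros_like(soft)
    idx = soft.argmax(axis=-1)
    np.put_along_axis(one_hot, idx[..., None], 1.0, axis=-1)
    return one_hot
```

Each pixel's proxy label is then simply the index of the 1 in its one-hot vector.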

Methods


Results


Training

python train_vq_Efusion.py

About

License: MIT


Languages

Language: Python 100.0%