XavierJiezou / cloudseg

Cloud Segmentation for Remote Sensing

Home Page: https://xavierjiezou.github.io/Cloud-Adapter/

CloudSeg is a repository containing implementations of the methods compared in the paper Cloud-Adapter. We have open-sourced pretrained weights for these methods on various datasets, available on Hugging Face.

Leaderboard (mIoU, %, ↑)

Methods     HRC    GF1    GF2    L1C    L2A    L8B
SCNN        57.22  81.68  76.99  22.75  28.76  32.38
CDNetv1     77.79  81.82  78.20  60.35  62.39  34.58
CDNetv2     76.75  84.93  78.84  65.60  66.05  43.63
MCDNet      53.50  85.16  78.36  44.80  46.52  33.85
UNetMobv2   79.91  91.71  80.44  71.65  70.36  47.76
DBNet       77.78  91.36  78.68  65.52  65.65  51.41
HRCloudNet  83.44  91.86  75.57  68.26  68.35  43.51
KappaMask   67.48  92.42  72.00  41.27  45.28  42.12
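
For reference, mIoU is the intersection-over-union between the predicted and ground-truth masks, averaged over classes. A minimal NumPy sketch of the metric (illustrative only, not the repository's evaluation code):

import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union, averaged over the classes present."""
    ious = []
    for cls in range(num_classes):
        pred_c = pred == cls
        target_c = target == cls
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class absent from both masks: skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# toy 2x2 example with two classes (clear / cloud)
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # (1/2 + 2/3) / 2 ≈ 0.583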

Installation

git clone https://github.com/XavierJiezou/cloudseg.git
cd cloudseg
conda create -n cloudseg python=3.11.7
conda activate cloudseg
pip install -r requirements.txt
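
After installation, a quick sanity check can confirm that the environment sees your GPU (this assumes PyTorch is pulled in by requirements.txt, which is an assumption about the dependency list):

import torch  # assumed to be installed via requirements.txt

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))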

We have uploaded the conda virtual environment used in our experiments to Hugging Face. You can download it directly from the link, extract the files, and activate the environment using the following commands:

mkdir envs
tar -zxvf envs.tar.gz -C envs
source envs/bin/activate

Datasets

You can download all datasets from Hugging Face: CloudSeg Datasets. The available datasets are CloudSEN12_High, L8_Biome, GF12MS_WHU, and HRC_WHU; their directory layouts are shown below.
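
If you prefer to fetch the datasets programmatically, huggingface_hub can download a dataset repository in one call. The repo_id below is an assumption; use the CloudSeg Datasets repository linked above:

from huggingface_hub import snapshot_download

# NOTE: repo_id is assumed -- replace it with the CloudSeg Datasets repository linked above
snapshot_download(
    repo_id="XavierJiezou/cloudseg-datasets",
    repo_type="dataset",
    local_dir="data",
)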

Directory Structure

Below is an overview of the directory structure:

cloudseg
├── src
├── configs
├── ...
├── data
│   ├── cloudsen12_high
│   ├── l8_biome
│   ├── gf12ms_whu
│   ├── hrc_whu
CloudSEN12_High
├── train
│   ├── EXTRA_*.dat
│   ├── L1C_B*.dat
│   ├── L2A_*.dat
│   ├── LABEL_*.data
│   ├── S1_*.data
│   └── metadata.csv
├── val
│   ├── EXTRA_*.dat
│   ├── L1C_B*.dat
│   ├── L2A_*.dat
│   ├── LABEL_*.data
│   ├── S1_*.data
│   └── metadata.csv
└── test
    ├── EXTRA_*.dat
    ├── L1C_B*.dat
    ├── L2A_*.dat
    ├── LABEL_*.data
    ├── S1_*.data
    └── metadata.csv

L8_Biome
├── train.txt
├── val.txt
├── test.txt
├── img_dir
│   ├── train
│   ├── val
│   └── test
└── ann_dir
    ├── train
    ├── val
    └── test

GF12MS_WHU
├── GF1MS-WHU
│   ├── TestBlock250
│   │   ├── *_Mask.tif
│   │   └── *.tiff
│   ├── TrainBlock250
│   │   ├── *_Mask.tif
│   │   └── *.tiff
│   ├── TestList.txt
│   └── TrainList.txt
└── GF2MS-WHU
    ├── TestBlock250
    │   ├── *_Mask.tif
    │   └── *.tiff
    ├── TrainBlock250
    │   ├── *_Mask.tif
    │   └── *.tiff
    ├── TestList.txt
    └── TrainList.txt

HRC_WHU
├── train.txt
├── test.txt
├── img_dir
│   ├── train
│   └── test
└── ann_dir
    ├── train
    └── test
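
A small script can verify that the expected dataset folders from the tree above are in place under data/:

from pathlib import Path

# folder names taken from the directory structure above
expected = ["cloudsen12_high", "l8_biome", "gf12ms_whu", "hrc_whu"]
root = Path("data")
for name in expected:
    status = "ok" if (root / name).is_dir() else "MISSING"
    print(f"{name:20s} {status}")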

Usage

Training

Train a model with a chosen experiment configuration from configs/experiment/.

python src/train.py experiment=hrc_whu_cdnetv1.yaml

You can override any parameter from the command line like this:

python src/train.py trainer.devices=["1"]

In this example, the trainer.devices parameter is overridden to use GPU 1 for training.

Evaluation

  1. Download the model weights from Hugging Face and place them under checkpoints/ as shown below; a download sketch follows the tree.
cloudseg
├── src
├── configs
├── ...
├── checkpoints
│   ├── cloudsen12_high_l1c
│   │   ├── cdnetv1.bin
│   │   ├── cdnetv2.bin
│   │   ├── ...
│   ├── cloudsen12_high_l2a
│   ├── l8_biome
│   ├── gf12ms_whu_gf1
│   ├── gf12ms_whu_gf2
│   ├── hrc_whu
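
If the checkpoints are hosted in the XavierJiezou/cloudseg-models repository used by the Gradio demo below (an assumption; adjust the repo_id if the weights live elsewhere), they can be fetched with huggingface_hub:

from huggingface_hub import snapshot_download

# assumption: weights are in the cloudseg-models repo referenced in the Gradio Demo section
snapshot_download(
    repo_id="XavierJiezou/cloudseg-models",
    local_dir="checkpoints",
    allow_patterns=["*.bin"],  # fetch only the .bin weight files, e.g. cdnetv1.bin
)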
  2. General Evaluation Command

To evaluate the performance of models on a specified dataset:

python src/eval/eval_on_experiment.py --experiment_name=dataset_name --gpu="cuda:0"
  • experiment_name: Specifies the name of the dataset.
  • gpu: Specifies the device to use for running the evaluation.
  3. Scene-wise Evaluation (L8_Biome dataset only)

To evaluate model performance on the L8_Biome dataset scene by scene:

python src/eval/eval_l8_scene.py --root="dataset_path" --gpu="cuda:0"
  • root: Specifies the dataset path.
  • gpu: Specifies the device to use for running the evaluation.

Visualization

We have published the visualization results of the pre-trained models on various datasets to Hugging Face. If you prefer not to run the code, you can download the visualization results directly from that repository.

Supported Methods

  • SCNN
  • CDNetv1
  • CDNetv2
  • MCDNet
  • UNetMobv2
  • DBNet
  • HRCloudNet
  • KappaMask

Gradio Demo

We provide a Gradio Demo Application for testing the methods in this repository. You can choose to run the demo locally or access it directly through our Hugging Face Space.

Option 1: Run Locally

git clone https://huggingface.co/XavierJiezou/cloudseg-models
cd cloudseg-models
mkdir envs
tar -xzf envs.tar.gz -C envs
source envs/bin/activate
python app.py
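
For reference, a minimal Gradio segmentation demo has roughly the following shape (an illustrative skeleton, not the repository's actual app.py; the predict function is a placeholder):

import gradio as gr
import numpy as np

def predict(image: np.ndarray) -> np.ndarray:
    # placeholder: a real demo would run one of the pretrained models here
    # and return its predicted cloud mask
    return np.zeros(image.shape[:2], dtype=np.uint8)

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="numpy", label="Remote sensing image"),
    outputs=gr.Image(type="numpy", label="Predicted cloud mask"),
    title="CloudSeg Demo",
)

if __name__ == "__main__":
    demo.launch()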

Option 2: Access on Hugging Face Space

You can also try the demo online without any setup: https://huggingface.co/spaces/caixiaoshun/cloudseg

Citation

If you use our code or models in your research, please cite:

@misc{cloud-adapter,
      title={Adapting Vision Foundation Models for Robust Cloud Segmentation in Remote Sensing Images}, 
      author={Xuechao Zou and Shun Zhang and Kai Li and Shiying Wang and Junliang Xing and Lei Jin and Congyan Lang and Pin Tao},
      year={2024},
      eprint={2411.13127},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.13127}, 
}
