

trash_segmentation_synth

This is my garbage segmentation project. It uses 5 supercategories from the TACO dataset, for a total of 1,093 images, split into train/val/test at 80/10/10. To train the model, run python src/models/train_model.py from the command line; the default training settings are specified in config.yaml. To evaluate the model, run python src/models/eval_model.py. The baseline model is deeplabv3_mobilenet_v3_large from Torchvision, which reaches IoU ~0.27. Mixing in synthetic data (with mixing_proportion between 0.25 and 0.5) improves the result to IoU ~0.33.
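
Below is a minimal sketch (not the repository's actual training/eval code) of how the baseline can be instantiated and how mean IoU can be computed; it assumes only the standard Torchvision API and 6 output channels (background plus the 5 classes listed below).

# Sketch only: baseline model and a simple mean-IoU computation,
# assuming 5 foreground classes plus a background channel.
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

NUM_CLASSES = 6  # background + 5 TACO supercategories

model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=NUM_CLASSES)
model.eval()

def mean_iou(pred, target, num_classes=NUM_CLASSES):
    """Mean IoU over the classes present in either mask (pred/target: [H, W] label maps)."""
    ious = []
    for cls in range(num_classes):
        pred_c, target_c = pred == cls, target == cls
        union = (pred_c | target_c).sum().item()
        if union == 0:  # class absent in both masks, skip it
            continue
        inter = (pred_c & target_c).sum().item()
        ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)

with torch.no_grad():
    image = torch.rand(1, 3, 512, 512)       # dummy input batch
    logits = model(image)["out"]              # [1, NUM_CLASSES, H, W]
    prediction = logits.argmax(dim=1)[0]      # [H, W] predicted label map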

Classes

  • "Plastic bag & wrapper": 1,
  • "Bottle": 2,
  • "Carton": 3,
  • "Can": 4,
  • "Cup": 5

Prediction Examples

image 1

Confusion Matrix

CM
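
For reference, a per-pixel confusion matrix like the one above can be accumulated from predicted and ground-truth masks roughly as follows (a sketch, not the repository's eval_model.py code; predicted_masks and ground_truth_masks are placeholder iterables of [H, W] label arrays):

# Sketch: accumulate a per-pixel confusion matrix over the validation set.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = list(range(6))                 # background + 5 classes
cm = np.zeros((6, 6), dtype=np.int64)

# predicted_masks / ground_truth_masks are hypothetical iterables of [H, W] arrays
for pred_mask, true_mask in zip(predicted_masks, ground_truth_masks):
    cm += confusion_matrix(true_mask.ravel(), pred_mask.ravel(), labels=labels)

# Row-normalize so each row shows how the pixels of a true class are distributed.
cm_normalized = cm / np.maximum(cm.sum(axis=1, keepdims=True), 1)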

Dockerfile

Required:

First, set num_workers: 0 in config.yaml.

Check the contents of /etc/docker/daemon.json. It should look like this:

{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}

Run dockerd (if it's not running):

sudo dockerd

Make sure that "nvidia" appears in the "Runtimes" list:

$ docker info | grep -i runtime
 Runtimes: nvidia runc
 Default Runtime: runc

Build image:

sudo docker build --no-cache -t trash_segmentation .

Run container:

sudo docker run --name=trash_segmentation -p 5000:5000 --memory=16g --gpus all -it -v $PWD/data:/app/data trash_segmentation

Run training:

make train

Tested on an NVIDIA RTX 3090 Ti with CUDA driver 12.0.0.
