ThomasDeLange / AttackDefenseYOLO

This repository contains the adversarial attack/defense implementation for the paper:

Jung Im Choi and Qing Tian. "Adversarial attack and defense of YOLO detectors in autonomous driving scenarios." 2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022.

Model Training

1. Convert your datasets to VOC format for training.

  • Put the label files in the Annotation folder under data.
  • Put the image files in the JPEGImages folder under data.

2. Create the training .txt file by running kitti_annotation.py (see the annotation-format sketch after this list).

  • Create a your_classes.txt file under the model_data folder and write the categories you need to classify in it.
  • Modify the class_path in kitti_annotation.py to model_data/your_cls_classes.txt.

3. Modify the classes_path in adv_training.py and run it to start adversarial training.
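
For reference, here is a minimal sketch of the per-image annotation lines that converters like kitti_annotation.py typically produce for this style of YOLO training pipeline (image path followed by x_min,y_min,x_max,y_max,class_id boxes). The class list, folder layout, and helper name are illustrative assumptions, not the exact interface of this repository.

```python
# Hypothetical sketch: convert one KITTI label file into a single
# training line of the form
#   <image_path> <x_min>,<y_min>,<x_max>,<y_max>,<class_id> ...
# The class list and paths below are example assumptions.

CLASSES = ["Car", "Pedestrian", "Cyclist"]  # e.g. contents of model_data/your_classes.txt

def kitti_label_to_line(image_path, label_path):
    boxes = []
    with open(label_path) as f:
        for row in f:
            fields = row.split()
            cls = fields[0]
            if cls not in CLASSES:
                continue  # skip DontCare and unused categories
            # KITTI stores the 2D box as left, top, right, bottom in columns 4-7
            x_min, y_min, x_max, y_max = (int(float(v)) for v in fields[4:8])
            boxes.append(f"{x_min},{y_min},{x_max},{y_max},{CLASSES.index(cls)}")
    return " ".join([image_path] + boxes)

if __name__ == "__main__":
    print(kitti_label_to_line("data/JPEGImages/000123.png",
                              "data/Annotation/000123.txt"))
```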

Results

1. Attacks

a. KITTI

The table below shows the model performance degradation under various attack strengths for the task-oriented attacks, using FGSM and 10-step PGD on KITTI. Aloc, Acls, Aloc+cls+conf, and Aobj denote attacks sourced from the corresponding task losses (localization, classification, overall, and objectness, respectively). Values are mAP changes in percentage points relative to the clean model; the clean mAP on KITTI is 80.10%. The objectness-oriented attacks decrease mAP the most.
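
To make the attack setup concrete, below is a minimal PyTorch-style sketch of a task-oriented FGSM attack. The compute_losses interface, the loss keys, and the ε scaling (ε = 2 read as 2/255 for images in [0, 1]) are assumptions for illustration; the actual attack implementation lives in this repository's code.

```python
import torch

def fgsm_attack(model, images, targets, loss_key="obj", epsilon=2 / 255):
    """One-step FGSM perturbation sourced from a single YOLO task loss.

    loss_key selects the component driving the attack ("loc", "cls",
    "obj", or "all"); names and interface are illustrative assumptions.
    """
    images = images.clone().detach().requires_grad_(True)
    losses = model.compute_losses(images, targets)  # assumed: dict of task losses
    loss = sum(losses.values()) if loss_key == "all" else losses[loss_key]
    loss.backward()
    # Step each pixel in the direction that increases the chosen loss
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()
```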

| Method | Attack size | Aloc | Acls | Aloc+cls+conf | Aobj |
| --- | --- | --- | --- | --- | --- |
| FGSM | ε = 2 | -0.98 | -0.97 | -8.42 | -10.49 |
| FGSM | ε = 4 | -3.20 | -3.15 | -13.88 | -16.85 |
| FGSM | ε = 6 | -6.08 | -5.65 | -17.88 | -22.59 |
| FGSM | ε = 8 | -10.44 | -9.65 | -22.04 | -27.31 |
| PGD-10 | ε = 2 | -1.22 | -0.87 | -42.44 | -42.64 |
| PGD-10 | ε = 4 | -4.11 | -2.64 | -51.47 | -51.67 |
| PGD-10 | ε = 6 | -7.00 | -5.91 | -54.17 | -54.39 |
| PGD-10 | ε = 8 | -10.66 | -9.59 | -55.48 | -55.83 |

b. COCO_traffic

The table below compares the impact of the different task-loss-based attacks on model performance (mAP) under various attack sizes, using FGSM and PGD-10 on COCO_traffic. Values are mAP changes in percentage points; the clean mAP on COCO_traffic is 66.10%.

| Method | Attack size | Aloc | Acls | Aloc+cls+conf | Aobj |
| --- | --- | --- | --- | --- | --- |
| FGSM | ε = 2 | -0.31 | -0.22 | -7.42 | -7.49 |
| FGSM | ε = 4 | -1.01 | -0.95 | -9.40 | -9.74 |
| FGSM | ε = 6 | -1.85 | -1.86 | -10.54 | -10.97 |
| FGSM | ε = 8 | -3.30 | -3.22 | -12.33 | -12.45 |
| PGD-10 | ε = 2 | -0.19 | -0.15 | -36.55 | -37.55 |
| PGD-10 | ε = 4 | -0.70 | -0.77 | -43.84 | -43.93 |
| PGD-10 | ε = 6 | -1.88 | -2.26 | -45.31 | -45.69 |
| PGD-10 | ε = 8 | -3.24 | -3.58 | -46.88 | -47.08 |
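
For the PGD-10 rows, a 10-step projected variant of the same idea might look like the sketch below; again, the model interface and the step size α are illustrative assumptions rather than the exact settings used in the paper.

```python
import torch

def pgd_attack(model, images, targets, loss_key="obj",
               epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Illustrative multi-step PGD counterpart of the FGSM sketch above."""
    orig = images.clone().detach()
    # Random start inside the epsilon-ball
    adv = (orig + torch.empty_like(orig).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        losses = model.compute_losses(adv, targets)  # same assumed interface as above
        loss = sum(losses.values()) if loss_key == "all" else losses[loss_key]
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            # Project back into the epsilon-ball and valid pixel range
            adv = (orig + (adv - orig).clamp(-epsilon, epsilon)).clamp(0, 1)
    return adv.detach()
```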

2. Defense (Adversarial Training)

The tables below compare the mAP (%) of various adversarially trained YOLO models under PGD-10 attacks on the KITTI and COCO_traffic validation sets. Depending on which losses the adversarial examples originate from, the following adversarially trained models are obtained for each dataset: MSTD (standard training on clean images only), MALL (adversarially trained using the overall loss), MMTD (trained with the multi-task domain algorithm), MLOC, MCLS, and MOBJ (trained with adversarial examples sourced solely from one of the three task losses), and MOA (trained with our objectness-aware AT algorithm).
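
As a rough illustration of the single-loss adversarial training recipe behind models such as MOBJ, the sketch below perturbs each batch with the objectness-sourced FGSM attack from the Attacks section and updates the model on the perturbed images. It is only a schematic under the same assumed interface; the objectness-aware MOA algorithm and the exact training loop are described in the paper and in adv_training.py.

```python
def adversarial_train_epoch(model, loader, optimizer, loss_key="obj", epsilon=2 / 255):
    """Schematic MOBJ-style adversarial training epoch (assumed interface)."""
    model.train()
    for images, targets in loader:
        # Generate adversarial examples from the chosen task loss
        # (fgsm_attack is the sketch from the Attacks section above).
        adv_images = fgsm_attack(model, images, targets,
                                 loss_key=loss_key, epsilon=epsilon)
        optimizer.zero_grad()
        losses = model.compute_losses(adv_images, targets)  # assumed: dict of task losses
        sum(losses.values()).backward()
        optimizer.step()
```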

a. KITTI

| Method | Aobj | Aloc+cls+conf |
| --- | --- | --- |
| MSTD | 28.43 | 28.63 |
| MALL | 39.65 | 40.65 |
| MMTD | 36.13 | 35.94 |
| MLOC | 37.86 | 37.61 |
| MCLS | 39.29 | 39.70 |
| MOBJ | 49.43 | 48.83 |
| MOA | 42.26 | 41.86 |

b. COCO_traffic

| Method | Aobj | Aloc+cls+conf |
| --- | --- | --- |
| MSTD | 22.17 | 22.29 |
| MALL | 34.58 | 33.44 |
| MMTD | 33.26 | 33.20 |
| MLOC | 33.23 | 32.10 |
| MCLS | 31.71 | 31.58 |
| MOBJ | 33.30 | 32.69 |
| MOA | 34.77 | 33.61 |

Citation

If you find this repository helpful to your research, please consider citing our paper:

@InProceedings{choi2022advYOLO,
  title = {Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios},
  author = {Choi, Jung Im and Tian, Qing},
  booktitle = {2022 IEEE Intelligent Vehicles Symposium (IV)},
  year = {2022},
  pages = {1011-1017},
  doi = {10.1109/IV51971.2022.9827222},
}

References

Any pretrained weights and other code needed for YOLOv4 can be found via the link.

Contact

If you have any questions or suggestions, feel free to contact us at choij@bgsu.edu.
