This is a PyTorch implementation of YOLOv4 based on argusswift/YOLOv4-pytorch. You can train the model on your own dataset and easily deploy it to ModelArts.
- Windows
- Python 3.6
Run the following command to install all the dependencies:

```
pip install -r requirements.txt
```
This project supports datasets in Pascal VOC format. You need to place your data as follows:
```
ModelArts_Yolov4
├───data
│   └───Your_dataset
│       ├───Annotations
│       │   ├───1.xml
│       │   └───...
│       └───JPEGImages
│           ├───1.jpg
│           └───...
```
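As a quick sanity check on this layout, a small helper (hypothetical, not part of this repo) can verify that every annotation file has a matching image:

```python
import os

def find_unmatched_annotations(root):
    """Return stems of .xml files in Annotations/ with no matching
    .jpg in JPEGImages/. `root` is e.g. "data/Your_dataset"."""
    xml_dir = os.path.join(root, "Annotations")
    img_dir = os.path.join(root, "JPEGImages")
    stems = [f[:-4] for f in os.listdir(xml_dir) if f.endswith(".xml")]
    return sorted(
        s for s in stems
        if not os.path.exists(os.path.join(img_dir, s + ".jpg"))
    )
```

Running this before the conversion scripts below catches missing or misnamed images early.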
Then:

- Update `DATASET_NAME` and `Customer_DATA` in `config/yolov4_config.py`.
- Split the data into a train set and a test set with `data/gen_img_index_file.py`. After this you will get two files, `train.txt` and `test.txt`, in your dataset folder.
- Convert the Pascal VOC `*.xml` annotations to `*.txt` format (`Image_path xmin0,ymin0,xmax0,ymax0,class0`) using `data/convert_voc_to_txt.py`. You will get `data/train_annotation.txt` and `data/test_annotation.txt`.
- Generate annotation files for each class with `data/gen_cls_anno.py`. These files are written to the `data/your_dataset/ClassAnnos/` directory and are used to calculate APs.
- Run `utils/anchor_kmeans.py`, which runs the k-means algorithm on the ground-truth bounding boxes to find the most representative anchor boxes. Update `MODEL['ANCHORS']` in `config/yolov4_config.py` accordingly.
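The anchor step above can be sketched roughly as follows. This is not the actual `utils/anchor_kmeans.py` code; it is a minimal illustration of the common YOLO approach of clustering (width, height) pairs with k-means using 1 − IoU as the distance:

```python
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (N, 2) box sizes and (K, 2) anchor sizes,
    # treating all boxes as if anchored at the same corner.
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    # Cluster (w, h) pairs; distance = 1 - IoU, so argmax IoU
    # gives the nearest anchor. Returns anchors sorted by area.
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    assign = np.full(len(boxes), -1)
    for _ in range(iters):
        new_assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        if (new_assign == assign).all():
            break  # assignments stable: converged
        assign = new_assign
        for j in range(k):
            if (assign == j).any():
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]
```

The mean is used for the cluster update here; some implementations use the median instead.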
- MobileNetV3 pre-trained weight: mobilenetv3 (code: yolo)
- Make a `weights/` directory in `ModelArts_Yolov4` and put the weight file in it.
Run the following command to start training; see `config/yolov4_config.py` for the training details.

```
python -u train.py
```
During training, backups of the model are saved as `weights/*.pth`. You can interrupt training and resume from one of these backups at any time with the following command:

```
python -u train.py --weight_file your_backup.pth --resume
```
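A resumable backup like the one above can be sketched as follows; the key names, function names, and file name are illustrative, not necessarily what `train.py` actually uses:

```python
import torch
import torch.nn as nn

def save_backup(path, model, optimizer, epoch):
    # Persist everything needed to continue training later.
    torch.save({
        "epoch": epoch,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }, path)

def load_backup(path, model, optimizer):
    # Restore model and optimizer state; return the epoch to resume from.
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"] + 1
```

Saving the optimizer state alongside the model matters here: momentum buffers and learning-rate state are restored, so resumed training behaves like uninterrupted training.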
Run `predict.py` to predict images from the test set one by one.
Copy `weights/best.pth` to `ModelArts/model/best.pth`, then upload the entire `ModelArts` folder to the ModelArts platform.