Some questions about training models
AntyRia opened this issue
Hi, thank you for your outstanding contributions.
One problem I ran into with multi-GPU training via DDP is that each epoch takes progressively longer, even though the GPUs otherwise appear to be working normally.
Is this behavior expected? If not, could you suggest some solutions? Thank you so much.
This is the command I used:

```shell
yolo detect train data=/data/dataset/yaml/base.yaml model=/data/dataset/model/yolov8m.pt \
    epochs=1500 imgsz=640 device=0,1 batch=64 workers=32 \
    name=0705_base_dataset project=train_result_base_tyh exist_ok=True \
    cos_lr=true profile=true degrees=45 scale=0.8 shear=45 flipud=0.5 \
    mosaic=0.8 mixup=0.4 copy_paste=0.4 perspective=0.001 save_period=200
```
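To confirm the slowdown is a real trend rather than run-to-run noise, one option is to log wall-clock time per epoch and compare early epochs against later ones. Below is a minimal, framework-agnostic sketch (the `EpochTimer` helper and the simulated workloads are hypothetical, not part of Ultralytics); the same timing could be hooked into a real training loop:

```python
import time
from contextlib import contextmanager


class EpochTimer:
    """Records the wall-clock duration of each epoch and reports drift."""

    def __init__(self):
        self.durations = []

    @contextmanager
    def epoch(self):
        # Time the body of one epoch and store its duration.
        start = time.perf_counter()
        yield
        self.durations.append(time.perf_counter() - start)

    def drift_ratio(self):
        """Ratio of the last epoch's duration to the first; > 1 means epochs are slowing down."""
        if len(self.durations) < 2:
            return 1.0
        return self.durations[-1] / self.durations[0]


# Synthetic, progressively slower "epochs" standing in for real training work.
timer = EpochTimer()
for delay in (0.01, 0.02, 0.03):
    with timer.epoch():
        time.sleep(delay)

print(f"drift ratio: {timer.drift_ratio():.2f}")  # prints last/first epoch duration
```

If the drift ratio keeps growing across hundreds of epochs, common culprits to check are dataloader worker accumulation, host-memory growth from caching, or augmentation cost interacting with the `workers=` setting.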