open-mmlab / mmyolo

OpenMMLab YOLO series toolbox and benchmark. Implemented RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc.

Home Page: https://mmyolo.readthedocs.io/zh_CN/dev/

"ValueError: not enough values to unpack(expected 2, got 0)" when training

sltlls opened this issue · comments

Prerequisite

🐞 Describe the bug

I used the COCO train2017 dataset to train YOLOv5 on a single TITAN Xp GPU, with the official config file "yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py". The error occurred right after the first training epoch finished, so I guess the problem may happen while saving the .pt weight file (checkpoint). The detailed error info is as follows:

10/05 16:07:06 - mmengine - INFO - Epoch(train) [1][7250/7393] lr: 3.2684e-03 eta: 7 days, 14:59:43 time: 0.2958 data_time: 0.0012 memory: 5200 loss_cls: 1.2465 loss_obj: 1.3546 loss_bbox: 1.2261 loss: 3.8272
10/05 16:07:21 - mmengine - INFO - Epoch(train) [1][7300/7393] lr: 3.2910e-03 eta: 7 days, 14:58:42 time: 0.2950 data_time: 0.0010 memory: 5200 loss_cls: 1.2226 loss_obj: 1.3625 loss_bbox: 1.2239 loss: 3.8091
10/05 16:07:36 - mmengine - INFO - Epoch(train) [1][7350/7393] lr: 3.3135e-03 eta: 7 days, 14:57:19 time: 0.2935 data_time: 0.0010 memory: 5200 loss_cls: 1.2308 loss_obj: 1.3674 loss_bbox: 1.2283 loss: 3.8264
10/05 16:07:51 - mmengine - INFO - Exp name: yolov5_s-v61_syncbn_fast_8xb16-300e_coco_20221005_153017

Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/torch/utils/data/_utils/pin_memory.py", line 28, in _pin_memory_loop
idx, data = r
ValueError: not enough values to unpack (expected 2, got 0)
Traceback (most recent call last):
File "tools/train.py", line 106, in
main()
File "tools/train.py", line 102, in main
runner.train()
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1631, in train
model = self.train_loop.run() # type: ignore
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/mmengine/runner/loops.py", line 88, in run
self.run_epoch()
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/mmengine/runner/loops.py", line 103, in run_epoch
for idx, data_batch in enumerate(self.dataloader):
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 349, in iter
self._iterator._reset(self)
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 852, in _reset
data = self._get_data()
File "/home/amax/anaconda3/envs/openmmlab2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1029, in _get_data
raise RuntimeError('Pin memory thread exited unexpectedly')
RuntimeError: Pin memory thread exited unexpectedly

Environment

sys.platform: linux
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1,2,3: TITAN X (Pascal)
CUDA_HOME: /usr/local/cuda-10.1
NVCC: Cuda compilation tools, release 10.1, V10.1.10
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.7.1+cu101
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 10.1
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75
  • CuDNN 7.6.3
  • Magma 2.5.2
  • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.8.2+cu101
OpenCV: 4.6.0
MMEngine: 0.1.0
MMCV: 2.0.0rc1
MMDetection: 3.0.0rc1
MMYOLO: 0.1.1+59f3d30

Additional information

No response

Hi @sltlls, thank you for your attention to MMYOLO.
According to open-mmlab/mmpretrain#392 (comment), this is an upstream issue in PyTorch. If you want to keep pin_memory and persistent_workers turned on, you need to upgrade PyTorch (>1.8.0, <1.12.0). Otherwise, you can set pin_memory=False and persistent_workers=False in the config file.
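A minimal sketch of such a config override, assuming the official base config is available on the standard MMYOLO path and exposes these keys on train_dataloader (the new file name is illustrative):

```python
# yolov5_s_coco_no_pin_memory.py -- illustrative override config; place it
# next to the official config it inherits from.
_base_ = './yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py'

# Work around the pin-memory thread crash on older PyTorch by disabling
# pinned memory and persistent workers for the training dataloader.
# MMEngine merges this dict into the inherited train_dataloader settings,
# so only these two keys are overridden.
train_dataloader = dict(
    pin_memory=False,
    persistent_workers=False,
)
```

Then launch training with the new file as usual, e.g. `python tools/train.py yolov5_s_coco_no_pin_memory.py`; everything else in the base config is inherited unchanged.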