lfz / DSB2017

The solution of team 'grt123' in DSB2017


TypeError: Caught TypeError in DataLoader worker process 0.

Karna4621 opened this issue · comments

I'm running the pretrained models on scans from two patients. Here's the error log:
starting preprocessing
b83ce5267f3fd41c7029b4e56724cd08 done
b7ef0e864365220b8c8bfb153012d09a done
end preprocessing
Traceback (most recent call last):
File "main.py", line 60, in
test_detect(test_loader, nod_net, get_pbb, bbox_result_path,config1,n_gpu=config_submit['n_gpu'])
File "/Users/bharath/Downloads/DSB2017-master/test_detect.py", line 24, in test_detect
for i_name, (data, target, coord, nzhw) in enumerate(data_loader):
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in next
data = self._next_data()
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/Users/bharath/Downloads/DSB2017-master/data_detector.py", line 125, in getitem
margin = self.split_comber.margin/self.stride)
File "/Users/bharath/Downloads/DSB2017-master/split_combine.py", line 37, in split
data = np.pad(data, pad, 'edge')
File "<array_function internals>", line 6, in pad
File "/Users/bharath/anaconda3/lib/python3.7/site-packages/numpy/lib/arraypad.py", line 738, in pad
raise TypeError('pad_width must be of integral type.')
TypeError: pad_width must be of integral type.

Since I'm running on macOS (CPU only), I commented out the lines below in main.py:
#torch.cuda.set_device(0)
#nod_net = nod_net.cuda()
#cudnn.benchmark = True
#nod_net = DataParallel(nod_net)
Please guide me on how to run the pretrained models on my macOS with a dataset of 2-3 patients.
Thanks in advance.

Hi, have you solved this issue yet? I have the same issue on Windows, running on both GPU and CPU.

@Karna4621 @shakjm
I came across this issue as well. Did you find a solution?

Hi @yusuke0324 @shakjm
First of all, the pad_width error is due to a float value being passed to np.pad: in Python 3, the margin/stride calculation shown in the traceback is true division, so it produces a float.
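A minimal sketch of the kind of integer cast that fixes it (the array shape and the margin/stride values below are illustrative, not the repo's actual config):

import numpy as np

data = np.zeros((1, 64, 64, 64))  # stand-in volume, not a real CT scan
margin, stride = 32, 4

# Python 3's "/" is true division, so margin / stride gives 8.0 (a float),
# and np.pad then raises "pad_width must be of integral type."
pad_width = int(margin / stride)  # cast to int, or use margin // stride

pad = [[0, 0]] + [[pad_width, pad_width]] * 3
padded = np.pad(data, pad, 'edge')
print(padded.shape)  # (1, 80, 80, 80)
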
Also, CUDA can't be run on a CPU; it only works with an NVIDIA GPU.
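That said, a CPU-only machine can still load and run the pretrained weights in PyTorch; it is just much slower. Below is a rough, self-contained sketch of the usual device-agnostic loading pattern (the toy model, checkpoint file name and 'state_dict' key are illustrative stand-ins, not the repo's actual objects):

import torch
import torch.nn as nn

# Toy stand-in for the nodule detector; the real nod_net is built from the
# repo's detector module, so every name below is illustrative only.
nod_net = nn.Sequential(nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU())
torch.save({'state_dict': nod_net.state_dict()}, 'detector_demo.ckpt')

# Pick the device at runtime instead of hard-coding .cuda() calls.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# map_location remaps tensors that were saved on a GPU onto whatever device is
# actually available, so the checkpoint loads even when CUDA is absent.
checkpoint = torch.load('detector_demo.ckpt', map_location=device)
nod_net.load_state_dict(checkpoint['state_dict'])
nod_net = nod_net.to(device)
nod_net.eval()

In main.py, the same map_location idea would go on whatever torch.load call restores the detector and classifier checkpoints (assuming that is how this repo loads them).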
I used Kaggle to run on a GPU and got my results.
Kaggle provides 30 hours of free GPU and TPU access for every account. You can upload your dataset and run it there easily.

Visit: www.kaggle.com