Tramac / Fast-SCNN-pytorch

A PyTorch Implementation of Fast-SCNN: Fast Semantic Segmentation Network

VOC or COCO dataset model file

shivSD opened this issue · comments

Hi,

Do you support the VOC or COCO datasets? If not, can you guide me through the training process (what changes are needed)? If you do support them, could you please share the model files or a training script?

Thanks for your help.

Hi, this project does not include VOC or COCO loaders; this one may help you.

Thanks for the scripts. I used pascal_voc.py to load the data, but I'm running into the following error:

Found 1464 images in the folder ../datasets/voc/VOC2012
Found 1449 images in the folder ../datasets/voc/VOC2012
w/ class balance
Starting Epoch: 0, Total Epochs: 160
Traceback (most recent call last):
File "train.py", line 200, in
trainer.train()
File "train.py", line 127, in train
for i, (images, targets) in enumerate(self.train_loader):
ValueError: too many values to unpack (expected 2)

Is it something to do with the torch version I'm using? I tried both torch 0.4.1 and 1.3.0, and both give the same error.

I see the issue is related to this snippet of code in train.py:

```python
for i, (images, targets) in enumerate(self.train_loader):
    cur_lr = self.lr_scheduler(cur_iters)
    for param_group in self.optimizer.param_groups:
        param_group['lr'] = cur_lr

    images = images.to(self.args.device)
    targets = targets.to(self.args.device)

    outputs = self.model(images)
    loss = self.criterion(outputs, targets)

    self.optimizer.zero_grad()
    loss.backward()
    self.optimizer.step()
```

I appreciate any help in resolving this.

Hi @shivSD, I ran into the same error when using pascal_voc.py. Have you debugged it?

@Liu6697 I think the problem is with what `enumerate(self.train_loader)` yields. If you look at the Cityscapes dataset, the `__getitem__(...)` method in the `CitySegmentation` class returns `(img, mask)`,
but the Pascal VOC data loader script referenced above returns `(img, mask, path)`.
So you need to make sure you unpack all three items that `__getitem__(...)` returns when you iterate over `self.train_loader`.

Something like the following should work:

```python
for i, (images, targets, _) in enumerate(self.train_loader):
    ...
```
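To see why the original loop raises the error, here is a minimal, self-contained sketch (the `ThreeItemDataset` class below is a hypothetical stand-in for the VOC loader, not code from this repo): a dataset whose `__getitem__` returns three items produces three-element batches, so unpacking each batch into only two names fails, while adding a throwaway third name fixes it.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ThreeItemDataset(Dataset):
    """Hypothetical dataset mimicking a VOC-style loader that
    returns (img, mask, path) from __getitem__."""

    def __len__(self):
        return 4

    def __getitem__(self, idx):
        img = torch.zeros(3, 8, 8)                      # fake image tensor
        mask = torch.zeros(8, 8, dtype=torch.long)      # fake segmentation mask
        return img, mask, "image_%d.jpg" % idx          # extra path item

loader = DataLoader(ThreeItemDataset(), batch_size=2)

# Unpacking each batch into two names fails, because the default
# collate function yields a 3-element batch here:
msg = ""
try:
    for i, (images, targets) in enumerate(loader):
        pass
except ValueError as e:
    msg = str(e)
print(msg)  # "too many values to unpack" error, as in the traceback above

# Discarding the third item makes the loop run cleanly:
for i, (images, targets, _) in enumerate(loader):
    print(images.shape)  # torch.Size([2, 3, 8, 8])
```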

@BhargavaRamM Thanks!!!

Hi @Liu6697, how are your training results on VOC?