tsunghan-wu / RandLA-Net-pytorch

:four_leaf_clover: Pytorch Implementation of RandLA-Net (https://arxiv.org/abs/1911.11236)

problem about train

Dreameo opened this issue

I got a bug when I was training:

```
(randlanet) wx@dl-group-workstation:/media/wx/HDD/DQ/RandLA-Net-pytorch-main$ python train_SemanticKITTI.py
100%|██████████| 19130/19130 [00:40<00:00, 473.39it/s]
  0%|          | 0/3826 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "train_SemanticKITTI.py", line 191, in <module>
    main()
  File "train_SemanticKITTI.py", line 187, in main
    trainer.train()
  File "train_SemanticKITTI.py", line 131, in train
    self.train_one_epoch()
  File "train_SemanticKITTI.py", line 120, in train_one_epoch
    loss, end_points = compute_loss(end_points, self.train_dataset, self.criterion)
  File "/media/wx/HDD/DQ/RandLA-Net-pytorch-main/network/loss_func.py", line 28, in compute_loss
    loss = criterion(valid_logits, valid_labels).mean()
  File "/home/wx/anaconda3/envs/randlanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wx/anaconda3/envs/randlanet/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 1152, in forward
    label_smoothing=self.label_smoothing)
  File "/home/wx/anaconda3/envs/randlanet/lib/python3.6/site-packages/torch/nn/functional.py", line 2846, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: weight tensor should be defined either for all 19 classes or no classes but got weight tensor of shape: [1, 19]
```

Could you please help me?
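For context, the error is about the shape of the weight tensor passed to `nn.CrossEntropyLoss`: it must be 1-D with one entry per class, but here it arrives with shape `[1, 19]`. A minimal standalone reproduction (my own sketch, not code from this repo):

```python
import torch
import torch.nn as nn

# CrossEntropyLoss expects a 1-D weight tensor of shape (num_classes,);
# a (1, num_classes) tensor triggers exactly this RuntimeError.
num_classes = 19
logits = torch.randn(8, num_classes)
labels = torch.randint(0, num_classes, (8,))

bad_weights = torch.ones(1, num_classes)   # shape [1, 19] -> fails
good_weights = bad_weights.flatten()       # shape [19]    -> works

try:
    nn.CrossEntropyLoss(weight=bad_weights, reduction='none')(logits, labels)
except RuntimeError as err:
    print(err)

loss = nn.CrossEntropyLoss(weight=good_weights, reduction='none')(logits, labels)
print(loss.shape)  # torch.Size([8])
```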

I've got the same question. Have you solved it yet?

I had the same problem, but I solved it. Modify train_SemanticKITTI.py:

```python
# -------- add '.latten()' to the 'class_weights' argument --------

# Loss Function
class_weights = torch.from_numpy(train_dataset.get_class_weight()).float().cuda().latten()
self.criterion = nn.CrossEntropyLoss(weight=class_weights, reduction='none')
```

I've got the same question. Have you solved it yet?

```python
# Loss Function
class_weights = torch.from_numpy(train_dataset.get_class_weight()).float().cuda().latten()
self.criterion = nn.CrossEntropyLoss(weight=class_weights, reduction='none')
```

I got a bug when I was training:

```
/home/b6/anaconda3/bin/python /home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py
100%|██████████| 19130/19130 [00:28<00:00, 665.05it/s]
  0%|          | 0/3826 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 191, in <module>
    main()
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 187, in main
    trainer.train()
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 131, in train
    self.train_one_epoch()
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 119, in train_one_epoch
    end_points = self.net(batch_data)
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/network/RandLANet.py", line 51, in forward
    f_sampled_i = self.random_sample(f_encoder_i, end_points['sub_idx'][i])
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/network/RandLANet.py", line 93, in random_sample
    pool_features = pool_features.max(dim=3, keepdim=True)[0]  # batch*channel*npoints*1
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity
```

Could you please help me?
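The max() call fails because the tensor it reduces has zero elements, i.e. one of the sampled-index tensors (end_points['sub_idx'][i]) is empty; that usually traces back to a frame whose point cloud ended up with too few points after preprocessing. A small defensive check before the forward pass is sketched below (the helper name and its placement are my own assumption, not part of the repo):

```python
import torch

def has_empty_tensor(obj):
    """Recursively check a batch structure (tensor / list / tuple / dict) for empty tensors."""
    if torch.is_tensor(obj):
        return obj.numel() == 0
    if isinstance(obj, (list, tuple)):
        return any(has_empty_tensor(x) for x in obj)
    if isinstance(obj, dict):
        return any(has_empty_tensor(v) for v in obj.values())
    return False

# Hypothetical use inside train_one_epoch(), right before end_points = self.net(batch_data):
# if has_empty_tensor(batch_data):
#     continue  # skip the degenerate batch instead of crashing in random_sample()
```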

I got a bug when I was training:

```
/home/b6/anaconda3/bin/python /home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py
100%|██████████| 19130/19130 [00:29<00:00, 646.68it/s]
  1%|          | 45/3826 [02:51<4:00:25, 3.82s/it]
Traceback (most recent call last):
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 191, in <module>
    main()
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 187, in main
    trainer.train()
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 131, in train
    self.train_one_epoch()
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/train_SemanticKITTI.py", line 108, in train_one_epoch
    for batch_idx, batch_data in enumerate(tqdm_loader):
  File "/home/b6/anaconda3/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/b6/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/dataset/semkitti_trainset.py", line 36, in __getitem__
    selected_pc, selected_labels, selected_idx, cloud_ind = self.spatially_regular_gen(item, self.data_list)
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/dataset/semkitti_trainset.py", line 46, in spatially_regular_gen
    selected_pc, selected_labels, selected_idx = self.crop_pc(pc, labels, tree, pick_idx)
  File "/home/b6/ww/KITTI(semantic)/KITTI(semantic)/L5 RandLA-Net-pytorch-main/L5 RandLA-Net-pytorch-main/RandLA-Net-pytorch-main/dataset/semkitti_trainset.py", line 68, in crop_pc
    select_idx = search_tree.query(center_point, k=cfg.num_points)[1][0]
  File "sklearn/neighbors/_binary_tree.pxi", line 1342, in sklearn.neighbors._kd_tree.BinaryTree.query
ValueError: k must be less than or equal to the number of training points

Process finished with exit code 1
```

Could you please help me?
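Here the KD-tree query asks for cfg.num_points neighbours, but the cropped cloud contains fewer points than that, so sklearn refuses. One possible workaround (my own sketch, not the repository's official fix; the signature only loosely mirrors crop_pc in dataset/semkitti_trainset.py, with num_points standing in for cfg.num_points) is to query only as many points as exist and pad the selection by re-sampling with replacement:

```python
import numpy as np

def crop_pc(points, labels, search_tree, pick_idx, num_points):
    """Crop num_points points around points[pick_idx], padding if the cloud is too small."""
    center_point = points[pick_idx, :].reshape(1, -1)
    k = min(num_points, points.shape[0])
    select_idx = search_tree.query(center_point, k=k)[1][0]
    if k < num_points:
        # the cloud has fewer than num_points points: pad with duplicated indices
        pad = np.random.choice(select_idx, num_points - k, replace=True)
        select_idx = np.concatenate([select_idx, pad], axis=0)
    return points[select_idx], labels[select_idx], select_idx
```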

I've got the same question. Have you solved it yet?

```python
# Loss Function
class_weights = torch.from_numpy(train_dataset.get_class_weight()).float().cuda().latten()
self.criterion = nn.CrossEntropyLoss(weight=class_weights, reduction='none')
```

Do you mean flatten()?
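For anyone landing here later: `.latten()` in the snippets above looks like a typo for `.flatten()`, which is what turns the `[1, 19]` weight tensor into the 1-D `[19]` tensor that `nn.CrossEntropyLoss` expects. A cleaned-up version of the suggested change (based on the snippet above, not verified against the current repo):

```python
# Loss Function
class_weights = torch.from_numpy(train_dataset.get_class_weight()).float().cuda().flatten()
self.criterion = nn.CrossEntropyLoss(weight=class_weights, reduction='none')
```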