loicland / superpoint_graph

Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs

The problem of partitioning without RGB information

zzusunquan opened this issue · comments

Hi loicland:
I want to use SPG on Paris-Lille-3D, but this dataset has no RGB information. I copied the Semantic3D data format, split the Paris data into a training txt and a labels file, and changed the read_custom_format function in provider.py. partition.py runs fine, but when I run learning/main.py it reports the following error.

Total number of parameters: 214286
Module(
(ecc): GraphNetwork(
(0): RNNGraphConvModule(
(_cell): GRUCellEx(
32, 32
(ini): InstanceNorm1d(1, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(inh): InstanceNorm1d(1, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(ig): Linear(in_features=32, out_features=32, bias=True)
)(ingate layernorm)
(_fnet): Sequential(
(0): Linear(in_features=13, out_features=32, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=32, out_features=128, bias=True)
(3): ReLU(inplace)
(4): Linear(in_features=128, out_features=64, bias=True)
(5): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): Linear(in_features=64, out_features=32, bias=False)
)
)
(1): Linear(in_features=352, out_features=10, bias=True)
)
(ptn): PointNet(
(stn): STNkD(
(convs): Sequential(
(0): Conv1d(11, 64, kernel_size=(1,), stride=(1,))
(1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv1d(64, 64, kernel_size=(1,), stride=(1,))
(4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
(7): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace)
)
(fcs): Sequential(
(0): Linear(in_features=128, out_features=128, bias=True)
(1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Linear(in_features=128, out_features=64, bias=True)
(4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
)
(proj): Linear(in_features=64, out_features=4, bias=True)
)
(convs): Sequential(
(0): Conv1d(8, 64, kernel_size=(1,), stride=(1,))
(1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv1d(64, 64, kernel_size=(1,), stride=(1,))
(4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Conv1d(64, 128, kernel_size=(1,), stride=(1,))
(7): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace)
(9): Conv1d(128, 128, kernel_size=(1,), stride=(1,))
(10): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU(inplace)
(12): Conv1d(128, 256, kernel_size=(1,), stride=(1,))
(13): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(14): ReLU(inplace)
)
(fcs): Sequential(
(0): Linear(in_features=257, out_features=256, bias=True)
(1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Linear(in_features=256, out_features=64, bias=True)
(4): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Linear(in_features=64, out_features=32, bias=True)
)
)
)
Train dataset: 3 elements - Test dataset: 3 elements - Validation dataset: 0 elements
Epoch 0/500 (results/sema3d/best_paris):
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "learning/main.py", line 444, in <module>
main()
File "learning/main.py", line 320, in main
acc, loss, oacc, avg_iou = train()
File "learning/main.py", line 193, in train
embeddings = ptnCloudEmbedder.run(model, *clouds_data)
File "/home/s206/Documents/sunquan/superpoint_graph-ssp-spg/learning/../learning/pointnet.py", line 167, in run_full_monger
out = model.ptn(Variable(clouds), (clouds_global))
File "/home/s206/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/s206/Documents/sunquan/superpoint_graph-ssp-spg/learning/../learning/pointnet.py", line 122, in forward
T = self.stn(input[:,:self.nfeat_stn,:])
File "/home/s206/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/s206/Documents/sunquan/superpoint_graph-ssp-spg/learning/../learning/pointnet.py", line 57, in forward
input = self.convs(input)
File "/home/s206/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/s206/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/s206/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/s206/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 187, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 11, 1], expected input[964, 5, 128] to have 11 channels, but got 5 channels instead
0%| | 0/1 [00:01<?, ?it/s]
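This RuntimeError is PyTorch's channel check on the first Conv1d of the STN: a 1x1 convolution contracts over the channel axis, so the weight's in_channels (11, from --ptn_nfeat_stn 11) must equal the input's channel count, while the loaded clouds here carry only 5 features. A minimal numpy reconstruction of that check (a sketch, not the library code):

```python
import numpy as np

# Sketch (not the library code): a 1x1 Conv1d contracts over the channel
# axis, so the weight's in_channels must equal the input's channel count.
def conv1x1(weight, x):
    """weight: (out_ch, in_ch, 1); x: (batch, in_ch, n_points)."""
    out_ch, in_ch, _ = weight.shape
    if x.shape[1] != in_ch:
        raise RuntimeError(
            f"Given groups=1, weight of size [{out_ch}, {in_ch}, 1], "
            f"expected input{list(x.shape)} to have {in_ch} channels, "
            f"but got {x.shape[1]} channels instead")
    # drop the trailing kernel dim and contract over channels
    return np.einsum('oi,bip->bop', weight[:, :, 0], x)

w = np.zeros((64, 11, 1))    # STN conv built for 11 attributes (--ptn_nfeat_stn 11)
x = np.zeros((964, 5, 128))  # but the loaded clouds carry only 5 features
# conv1x1(w, x) raises the RuntimeError quoted above, so the mismatch is
# already present in the data written at preprocessing time.
```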

During the superpoint generation process, I copied semantic_dataset.py and deleted the RGB-related code, and it successfully formed superpoints. But strangely, parsed/class_count.h5 looks as shown in the figure below; I think this may be the problem. Can you tell me what is wrong?
[screenshot: contents of parsed/class_count.h5]
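For reference, dropping RGB at read time amounts to something like this hypothetical sketch (the function name and column layout are assumptions for illustration only; the repo's actual hook is read_custom_format in provider.py):

```python
import numpy as np

# Hypothetical reader for Paris-Lille-3D without RGB; names and column
# layout are assumptions, not the repo's actual code.
def read_paris_lille(points_txt, labels_txt):
    """Load xyz coordinates and per-point labels from plain-text files."""
    xyz = np.loadtxt(points_txt, dtype=np.float32)[:, :3]  # keep x y z only
    labels = np.loadtxt(labels_txt, dtype=np.int64)
    # No rgb array is returned, so every downstream stage (pruning,
    # feature assembly, PointNet input) must be configured for fewer channels.
    return xyz, labels
```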

#209
I ran:
CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset custom_dataset --SEMA3D_PATH /home/s206/Documents/sunquan/superpoint_graph-ssp-spg/SEMA3D_DIR --db_test_name testred --db_train_name trainval
--epochs 500 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10,f_10' --ptn_nfeat_stn 11
--nworkers 2 --pc_attribs xyzelpsv --odir "results/sema3d/best_paris"

Then I changed --ptn_nfeat_stn 11 to --ptn_nfeat_stn 5, and it reported the following error:
RuntimeError: Given groups=1, weight of size [64, 8, 1], expected input[960, 5, 128] to have 8 channels, but got 5 channels instead
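A sketch of the channel accounting (my inference from the flags, not code from the repo): each letter of --pc_attribs selects one per-point attribute, so the input channel count of the main PointNet convolutions is the length of that string, while --ptn_nfeat_stn only sizes the spatial-transformer input.

```python
# Sketch of the channel accounting (my inference from the flags, not code
# from the repo): each letter of --pc_attribs selects one per-point
# attribute (x/y/z coordinates, r/g/b colour, e elevation, and l/p/s/v
# geometric features), so the input channel count is the string length.
def nfeat_from_attribs(pc_attribs: str) -> int:
    return len(pc_attribs)

# Semantic3D default with colour: 11 channels, matching --ptn_nfeat_stn 11
assert nfeat_from_attribs("xyzrgbelpsv") == 11
# Without RGB, 'xyzelpsv' gives 8 channels -- which is why the conv weight
# became [64, 8, 1]; the clouds on disk, however, still hold only 5 features.
assert nfeat_from_attribs("xyzelpsv") == 8
```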

My code was not up to date. I changed spg.py according to issue #151 and now it works.

#166
I am meeting the same problem as #166.
In the train() function it works, but in the eval() function it fails with:
ValueError: operands could not be broadcast together with shapes (10,) (8,) (10,)

I realized my mistake: label 0 marks unlabeled points. But I think fixing that only changed the error to
ValueError: operands could not be broadcast together with shapes (9,) (8,) (9,)
I am re-partitioning and rebuilding the SPG, so I will see the new results tomorrow. Will the same error occur? Can you give me some guidance?
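For what it's worth, the broadcast error pattern suggests two different class counts in play somewhere in the evaluation path. A reconstruction of the failure mechanism (my guess, not the repo's code): per-class statistics are vectors of length n_classes, and mixing a 10-class model head with an 8-class dataset configuration reproduces exactly this ValueError.

```python
import numpy as np

# Per-class IoU arithmetic needs every per-class vector to share one
# length n_classes; mismatched class counts break elementwise operations.
tp = np.ones(10)  # e.g. true positives from a 10-class confusion matrix
fp = np.ones(8)   # e.g. false positives from a loader set up with 8 classes
fn = np.ones(10)
try:
    iou = tp / (tp + fp + fn)
except ValueError as err:
    print(err)  # operands could not be broadcast together ...
```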