humanpose1 / MS-SVConv

Compute descriptors for 3D point cloud registration using a multi-scale sparse voxel architecture


A question about the testing result.

QWTforGithub opened this issue · comments

Hi, first of all, thank you for your help many times before. I don't mean to be rude, but I tested your published model 'MS_SVCONV_B2cm_X2_3head.pt' at different numbers of sampled points (5000, 2500, 1000, 500, 250) and I could only get between 92% and 96% (95.3%, 95.1%, 94.7%, 94.4%, 92.1%), not as good as the performance in your paper. My test setup is based on what you posted. Is there something wrong with my settings? I tested SpinNet, FCGF and other models, and their results were close to the papers, so I don't think it's the environment.

Hmm, that is strange.
I will re-run the experiments on a different computer to see where the problem is.
If you are using my settings, can you send me the CSV with all the detailed results, please?


I sent it to 'xxx@users.noreply.github.com'.

I did not receive the csv.

Actually, I have detected some errors in the config file fragment3dmatch.
You can find the yaml here; it contains the transforms and everything I use for the experiments on 3DMatch.
The first error I have detected in the yaml: first_subsampling is 0.025 m in fragment3dmatch, but it should be 0.02.
I do not know if that is the reason why you get worse results, but I will test when I have time.
If you want to test it yourself, you will need to remove the directories fragments and matches in data/general3dmatch/processed/test (because we perform a GridSampling as a pre_transform).
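If you want to force that re-processing, a minimal sketch from the repository root (assuming the default data layout under data/general3dmatch mentioned above) would be:

# Delete the cached, pre-transformed test split so the corrected GridSampling
# (0.02 instead of 0.025) is applied the next time the dataset is built.
rm -rf data/general3dmatch/processed/test/fragments
rm -rf data/general3dmatch/processed/test/matches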
Best,


Hi, with the configuration file you gave me, 'hydra-config.yaml', I can only get a result of 96.0. The CSV files can be downloaded here. I don't know if there's something wrong with my settings. If you don't mind, could you help me run the results for (2500, 1000, 500, 250)? I can never get good results by myself. Thank you.

Hi, I tested on a different computer and I can reproduce the results (FMR of 98.4 on 3DMatch with 5000 points). But I gave you a lot of bad indications (and I'm sorry for that).
For example, data should be fragment3dmatch_sparse. In this file, you can modify the number of points you sample during inference (the parameter is num_points).
For example, if you want 1000 points:

python scripts/test_registration_scripts/evaluate.py task=registration \
    models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head \
    data=registration/fragment3dmatch_sparse training=sparse_fragment_reg \
    training.checkpoint_dir=/your/checkpoint/ training.cuda=True data.num_points=1000
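
If you want all the sample sizes in one run, a simple sweep over data.num_points (just a sketch built from the command above; /your/checkpoint/ stays a placeholder) could be:

# Evaluate the released checkpoint at each number of sampled points.
for n in 5000 2500 1000 500 250; do
    python scripts/test_registration_scripts/evaluate.py task=registration \
        models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head \
        data=registration/fragment3dmatch_sparse training=sparse_fragment_reg \
        training.checkpoint_dir=/your/checkpoint/ training.cuda=True data.num_points=$n
done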

But I will have to debug it to pinpoint exactly where the mistake is and make the code more reproducible. I'm busy, though, so it might take some time.
Best,

Hi @QWTforGithub, I have a small request. Can you try with this model?


Sorry, I still can't get good results (96.5%), even using the model you just provided. I did notice data=fragment3dmatch_sparse instead of data=fragment3dmatch. I tried removing the matches and fragments folders from the processed folder again, but it still didn't work. I will use the numbers that appear in your paper where they exist, and for the settings that don't appear there I will use the results I ran myself. I'm sorry about that.

Thank you again for all your help. I believe your reported results are correct.
Best wishes.

Hi @QWTforGithub,

I re-tested and I also got around 96.5% with the models I gave you and the code. Right now I have not yet pinpointed exactly the problem, but with an old version of torch-points3d I can reproduce the FMR of ~98%.

git clone https://github.com/humanpose1/deeppointcloud-benchmarks.git
cd deeppointcloud-benchmarks
git checkout -b "MS_SVCONV_B2cm_X2_3head" d079374da05506762f32bb7b090f35be86a90760

But it uses an old version of omegaconf, hydra and torch, and also an old version of torchsparse (version 1.2 instead of 1.4). And it works with this model.
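
To avoid those old pins clashing with a current install, one option (a sketch; the environment name is just a placeholder) is to give that checkout its own environment:

# Inside the deeppointcloud-benchmarks checkout from above, isolate the old
# omegaconf/hydra/torch/torchsparse 1.2 dependencies in a dedicated environment,
# then install them following the instructions of that revision.
python -m venv .venv-ms-svconv
source .venv-ms-svconv/bin/activate
pip show torchsparse   # once installed, this should report 1.2.x rather than 1.4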

When I have time, I will find the problem (is it the torchsparse version, my old code, or something else?) and fix it so it works with the latest version of tp3d.
Best,


OK, thank you for your reply. By the way, I noticed that the downsampling for training on 3DMatch is 0.02 (first_subsampling: 0.02) and the test one is 0.02 as well. In general, when we test a model, the training and the testing are usually set up the same way, right? For example, if I train the model with a 0.025 downsampling setting, I should also test it with a 0.025 downsampling setting.

Yes, exactly. Even when you change sensors, you should keep the same subsampling (when I test MS-SVConv on ETH, I keep the subsampling at 0.02).
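
To make that explicit at test time, a sketch based on the evaluation command above (assuming the fragment3dmatch_sparse config exposes the same first_subsampling key as a Hydra override; setting it directly in the yaml works just as well, and /your/checkpoint/ stays a placeholder) would be:

# Keep the test-time grid size at the 0.02 used for training.
python scripts/test_registration_scripts/evaluate.py task=registration \
    models=registration/ms_svconv_base model_name=MS_SVCONV_B2cm_X2_3head \
    data=registration/fragment3dmatch_sparse data.first_subsampling=0.02 \
    training=sparse_fragment_reg training.checkpoint_dir=/your/checkpoint/ training.cuda=True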

Hi,

With the current version of torch-points3d, I obtain an FMR of 97.4 instead of 98.4 with tau=0.05 on 3DMatch in a supervised setting. However, with tau=0.2, I obtain an FMR of 91.6 instead of 89.9. In other words, the current version is better in terms of FMR with tau=0.2. If you want to reproduce the previous results, you can use the previous version.