wffancy / 3dssg

3D scene graph generation using GCN

visualization

MY-CODE-1981 opened this issue · comments

commented

I was able to get your program to work. I tried visualize(data_dict, model, obj_class_dict, pred_class_dict) using model_last.pth, but obj_pred_cls is all set to 88 and nothing is drawn on the prediction side of the PDF file saved in the vis folder. I set a breakpoint in VS Code debug mode, but the program never stops in the following code. Am I forgetting some step needed to make it work correctly?

if i[j] >= 0.5:  # keep only relation predictions above the 0.5 score threshold
    pred_list.append(rel_pairs[index] + [j])
    s, o = rel_pairs[index]  # subject and object node indices of this pair
    if s == o or j == 0:  # skip self-loops and relation class 0
        continue
    g1.edge(str(s.item()), str(o.item()), pred_dict[j])  # draw the predicted edge

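For anyone hitting the same symptom, here is a minimal debugging sketch (obj_pred_cls and rel_scores below are stand-ins, not necessarily the repo's exact variable names): if the maximum relation score never reaches 0.5, the if i[j] >= 0.5 branch above is never entered, which would leave the predicted graph empty.

import torch

def summarize_predictions(obj_pred_cls: torch.Tensor, rel_scores: torch.Tensor) -> None:
    # If every object collapses to a single class (e.g. all 88), the object
    # classification head is likely untrained or the wrong checkpoint was loaded.
    print('unique predicted object classes:', torch.unique(obj_pred_cls).tolist())
    # If the maximum relation score never reaches 0.5, no edge passes the
    # threshold and the predicted scene graph stays empty.
    print('max relation score:', rel_scores.max().item())
    print('relation scores >= 0.5:', int((rel_scores >= 0.5).sum().item()))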

Train the default GCN model with the following command:

python scripts/train.py

You can also change the hyperparameters with command-line arguments such as --batch_size, --epoch, etc.
Add the --use_pretrained argument to this command to load a pretrained model. Add --vis for visualization; the results will be saved under the vis folder. The visualization step requires a pretrained model to be loaded first.
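For example, a plausible end-to-end sequence (the hyperparameter values are only illustrative, and the timestamped folder name is whatever train.py writes under outputs, as in the links later in this thread):

python scripts/train.py --batch_size 4 --epoch 50
python scripts/train.py --use_pretrained 2021-06-18_12-37-43 --vis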

commented

I appreciate your reply. Thanks to your 3dssg implementation, I have a better understanding of learning relationships between objects with a GCN. I respect your achievement.

I have forked your code.
https://github.com/Masanori-Yoshihira/3dssg

Yes, I ran the following command:
python3 scripts/train.py

The args were set with the necessary parameters; you can check them at the following link.
https://github.com/Masanori-Yoshihira/3dssg/blob/master/scripts/train.py

Training completed successfully and the necessary folders were written to the outputs folder.
https://github.com/Masanori-Yoshihira/3dssg/tree/master/outputs/2021-06-18_12-37-43

I used model_last.pth to output the results to the vis folder.
https://github.com/Masanori-Yoshihira/3dssg/blob/master/vis/095821f7-e2c2-2de1-9568-b9ce59920e29-1/SG_0.00_0.00.gv.pdf

The PDF file of the output results shows the ground truth correctly, but the prediction side shows nothing. The node labels are also incorrectly predicted as only class 88. Is this because I am using Python 3.6?

If you don't mind my asking, which versions of Python and of libraries such as PyTorch did you use to develop the program?

I use Python 3.6 and PyTorch 1.6.0. However, I am not sure about the problem you met, as I am continuously trying new models and ideas in my code. This repo's code was an older version, and I have updated it to the improved one. I hope this works for you.

commented

My problem is that there is no graph under the word 'predicted' in SG_0.00_0.00.gv.pdf in the vis folder.

It seems that the code that draws the predicted graph in the visualization function is never executed.

A new commit added the requirement.txt file. When args.use_pretrained is not set (the default), use_pretrained_cls = not args.use_pretrained sets use_pretrained_cls to True. Then model = SGPN(use_pretrained_cls, gconv_dim=128, gconv_hidden_dim=512, gconv_pooling='avg', gconv_num_layers=5, mlp_normalization='batch') tries to load ./pointnet_cls_best_model.pth. However, ./pointnet_cls_best_model.pth is not included in the repo. Is pointnet_cls_best_model available somewhere?
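
For context, a minimal sketch of what loading that checkpoint into the point-cloud encoder would typically look like; the helper name, the encoder argument, and the checkpoint layout here are assumptions rather than the repo's exact code:

import torch

def load_pointnet_cls_weights(encoder: torch.nn.Module,
                              ckpt_path: str = './pointnet_cls_best_model.pth') -> None:
    # Copy matching pretrained PointNet-classifier weights into the encoder.
    state = torch.load(ckpt_path, map_location='cpu')
    # Some PointNet checkpoints wrap the weights, e.g. {'model_state_dict': {...}}.
    if isinstance(state, dict) and 'model_state_dict' in state:
        state = state['model_state_dict']
    # strict=False ignores keys that exist only in the ModelNet classification head.
    encoder.load_state_dict(state, strict=False)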

Now I've started running the code with the new commits you've pushed. Note that I changed use_pretrained_cls = not args.use_pretrained to use_pretrained_cls = args.use_pretrained.

Due to my training environment, I changed the following code from
parser.add_argument("--verbose", type=int, help="iterations of showing verbose", default=100) # train iter
parser.add_argument("--val_step", type=int, help="iterations of validating", default=10000) # val iter
into
parser.add_argument("--verbose", type=int, help="iterations of showing verbose", default=10) # train iter
parser.add_argument("--val_step", type=int, help="iterations of validating", default=1000) # val iter
Does reducing verbose and val_step cause the code that draws the predicted graph in the visualization function to go unused?

commented

I'm using a Quadro M4000, and the M4000 is running out of memory. Can you tell me the specs of your GPU?

Specifying the '--vis' argument together with the pretrained SGPN model path, namely '.../model_last.pth', toggles the visualization part of the code. I have uploaded 'pointnet_cls_best_model.pth', which is downloaded from the PointNet repo for the classification task on ModelNet.

The arguments '--verbose' and '--val_step' only influence the logging frequency and have no influence on the visualization part.

The card I use is a TITAN X with CUDA 10.1, which has 12 GB of memory.

However, I suspect you are running out of CPU memory rather than GPU memory. To accelerate training, I load all the JSON files of the dataset into CPU memory first, to avoid loading them one by one from disk every iteration. This needs approximately 15 GB of CPU memory.

If you don't want this, you can comment out the 'self._load_data()' call on line 59 of dataset.py and also line 69. Meanwhile, uncommenting lines 67 to 68 in the same file brings everything back.
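
To illustrate the trade-off described above, here is a generic sketch (not the actual dataset.py): preloading trades CPU RAM for speed, while the lazy path reads each sample from disk in __getitem__.

import json
from torch.utils.data import Dataset

class PreloadableJsonDataset(Dataset):
    def __init__(self, json_paths, preload: bool = True):
        self.json_paths = list(json_paths)
        self.preload = preload
        self.cache = None
        if preload:
            # Read every JSON file once up front; fast iterations, but this is
            # what can require on the order of 15 GB of CPU memory on the full dataset.
            self.cache = []
            for path in self.json_paths:
                with open(path) as f:
                    self.cache.append(json.load(f))

    def __len__(self):
        return len(self.json_paths)

    def __getitem__(self, idx):
        if self.preload:
            return self.cache[idx]
        # Lazy path: lower memory use, one disk read per sample.
        with open(self.json_paths[idx]) as f:
            return json.load(f)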

commented

Thank you for your detailed answer. I decided to use an RTX 2080, which gives me 11 GB of GPU memory. As a result, I was able to finish training with the default args in your train.py. To display the results from the trained model, I typed the following in my terminal:
$ python scripts/train.py --vis --use_pretrained 2021-07-14_14-02-41
The result is still not displayed the way yours is.

@Masanori-Yoshihira How long did it take you to preprocess the data? I spent two hours running "python data/preprocess.py" and got nothing.