alievk / npbg

Neural Point-Based Graphics

Problem with big dataset?

alex4727 opened this issue · comments

Hi, with your help I somehow managed to solve several problems, including the PyTorch RTX 30 series compatibility issue.
I'm trying to apply npbg to a 3D point cloud of our school building.
The dataset consists of 708 photos, and I successfully built the required files with Metashape.
However, when I try to run the following command,
python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names <scene_name>

I get the error below (the entire output is at the bottom of this post).

multiprocessing.pool.MaybeEncodingError: Error sending result: '[(<npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b1358>, <npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b12b0>)]'. Reason: 'error("'i' format requires -2147483648 <= number <= 2147483647",)'

Here is the dataset built with Metashape. Give it a try; it won't take that long.
https://drive.google.com/file/d/1hkX5EBLKmGodBqqdkexlFf-Qxv0M-BPS/view?usp=sharing

When I google the message, it seems to be related to a limitation in Python 3.5/3.6 on sending and receiving large data between processes. Do you have any solution for this?
Thanks
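
For reference, the limit behind this message: before Python 3.8, multiprocessing packs the size of each pickled result into a signed 32-bit header with struct.pack("!i", ...) before sending it back from a pool worker, so any single result larger than 2**31 - 1 bytes (about 2 GiB) fails. Below is a minimal sketch reproducing the same struct error; the 3 GB payload size is made up for illustration, and upgrading to Python 3.8+ or shrinking the dataset are the usual ways around the limit.

```python
import struct

# Before Python 3.8, multiprocessing's connection layer packs the length of a
# pickled result into a signed 32-bit field; results over 2**31 - 1 bytes
# therefore cannot be sent back from a pool worker.
payload_size = 3 * 10**9  # hypothetical size of a pickled ~130M-point dataset

try:
    struct.pack("!i", payload_size)
except struct.error as exc:
    # On 64-bit Linux this prints the same message as in the traceback:
    # 'i' format requires -2147483648 <= number <= 2147483647
    print(exc)
```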

Entire printed output of the above command:

(npbg) alex@alex-System-Product-Name:~/Downloads/npbg-master$ python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names scene
experiment dir: data/logs/01-02_01-55-03___dataset_names^scene
 - ARGV:  
 train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names scene 

                                     Unchanged args

                               batch_size  :  8
                           batch_size_val  :  8
                                  comment  :  <empty>
                               conv_block  :  gated
                           criterion_args  :  {'partialconv': False}
                         criterion_module  :  npbg.criterions.vgg_loss.VGGLoss
                                crop_size  :  (512, 512)
                       dataloader_workers  :  4
                          descriptor_size  :  8
                                   epochs  :  40
                                     eval  :  False
                             eval_in_test  :  True
                            eval_in_train  :  True
                      eval_in_train_epoch  :  -1
                         exclude_datasets  :  None
                               freeze_net  :  False
                      ignore_changed_args  :  ['ignore_changed_args', 'save_dir', 'dataloader_workers', 'epochs', 'max_ds', 'batch_size_val', 'config', 'pipeline']
                                inference  :  False
                           input_channels  :  None
                             input_format  :  uv_1d_p1, uv_1d_p1_ds1, uv_1d_p1_ds2, uv_1d_p1_ds3, uv_1d_p1_ds4
                                 log_freq  :  5
                          log_freq_images  :  100
                                       lr  :  0.0001
                                   max_ds  :  4
                               merge_loss  :  True
                                 multigpu  :  True
                                 n_points  :  0
                                 net_ckpt  :  downloads/weights/01-09_07-29-34___scannet/UNet_stage_0_epoch_39_net.pth
                                 net_size  :  4
                               num_mipmap  :  5
                               paths_file  :  configs/paths_example.yaml
                               reg_weight  :  0.0
                                 save_dir  :  data/logs
                                save_freq  :  1
                                     seed  :  2019
                              simple_name  :  True
                            splitter_args  :  {'train_ratio': 0.9}
                          splitter_module  :  npbg.datasets.splitter.split_by_ratio
                            supersampling  :  1
                       texture_activation  :  none
                             texture_ckpt  :  None
                               texture_lr  :  0.1
                             texture_size  :  None
                       train_dataset_args  :  {'keep_fov': False, 'random_zoom': [0.5, 2.0], 'random_shift': [-1.0, 1.0], 'drop_points': 0.0, 'num_samples': 2000}
                                 use_mask  :  None
                                 use_mesh  :  False
                         val_dataset_args  :  {'keep_fov': False, 'drop_points': 0.0}

                                      Changed args

                                   config  :  configs/train_example.yaml (default None)
                            dataset_names  :  ['scene'] (default [])
                                 pipeline  :  npbg.pipelines.ogl.TexturePipeline (default None)

loading pointcloud...
=== 3D model ===
VERTICES:  129908024
EXTENT:  [-77.48674774 -54.43938446 -22.40940666] [127.75872803 147.50030518  17.70900345]
================
gl_frame False
image_size (512, 512)
gl_frame False
image_size (512, 512)
Traceback (most recent call last):
  File "train.py", line 477, in <module>
    pipeline.create(args)
  File "/home/alex/Downloads/npbg-master/npbg/pipelines/ogl.py", line 90, in create
    self.ds_train, self.ds_val = get_datasets(args)
  File "/home/alex/Downloads/npbg-master/npbg/datasets/dynamic.py", line 332, in get_datasets
    for ds_train, ds_val in pool_out.get():
  File "/home/alex/anaconda3/envs/npbg/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[(<npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b1358>, <npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b12b0>)]'. Reason: 'error("'i' format requires -2147483648 <= number <= 2147483647",)'

Hi @alex4727, I tried to download the dataset, but the file seems to have been deleted. Could you re-upload it? Thank you.

Oh, sorry for the late reply. I somehow managed to scale the data, and the problem is now solved :) I will close the issue.
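
For anyone else hitting this with a very large cloud (about 130M vertices here), one way to "scale the data" before training is to voxel-downsample the point cloud. The exact approach used above isn't specified, so this is only a sketch assuming Open3D and a hypothetical point_cloud.ply exported from Metashape:

```python
import open3d as o3d  # assumed helper library, not part of npbg

# Read the dense export, merge points on a voxel grid, and write a smaller
# cloud so the pickled dataset stays well under the 2 GiB transfer limit.
pcd = o3d.io.read_point_cloud("point_cloud.ply")  # hypothetical path
print("points before:", len(pcd.points))

small = pcd.voxel_down_sample(voxel_size=0.05)  # voxel size in scene units; tune per scene
print("points after:", len(small.points))

o3d.io.write_point_cloud("point_cloud_downsampled.ply", small)
```

The scene's paths config would then point at the downsampled file instead of the original export.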