kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning

Home Page: https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV

It seems that batch size is controlled by chunk_size

mirlansmind opened this issue

In train.py:

    def train_dataloader(self):
        return DataLoader(self.train_dataset,
                          shuffle=True,
                          num_workers=4,
                          batch_size=self.hparams.batch_size,
                          pin_memory=True)
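
To make the question concrete, here is a toy, self-contained version of that loader (the tensor shape and sizes are made up, not taken from the repo). Whether one item is a whole image or a single ray depends entirely on what the dataset's __getitem__ returns, but either way the loader collates batch_size items per step:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Toy stand-in for the train split (shapes are made up). Whether one
    # "item" is an image or a single ray depends on what __getitem__
    # returns in the real dataset.
    items = torch.rand(1000, 8)
    loader = DataLoader(TensorDataset(items), shuffle=True, batch_size=256)

    batch, = next(iter(loader))
    print(batch.shape)  # torch.Size([256, 8]): batch_size items per step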

Here batch_size only dictates the number of images loaded from the train split. Then, during rendering, the chunk size is used for batched inference, and batch_size does not really appear anywhere:

    for i in range(0, B, self.hparams.chunk):
        rendered_ray_chunks = \
            render_rays(self.models,
                        self.embeddings,
                        rays[i:i+self.hparams.chunk],
                        ts[i:i+self.hparams.chunk],
                        self.hparams.N_samples,
                        self.hparams.use_disp,
                        self.hparams.perturb,
                        self.hparams.noise_std,
                        self.hparams.N_importance,
                        self.hparams.chunk, # chunk size is effective in val mode
                        self.train_dataset.white_back)

I am a bit confused here because it seems that chunk_size is the actual batch size per training step (see the sketch below). Please clarify.
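
To make my reading concrete, here is a minimal, self-contained sketch of what the loop above appears to do (B, chunk, and model_fn are made-up placeholders for the batch size, the chunk hyperparameter, and render_rays; the real code presumably concatenates the per-chunk outputs afterwards):

    import torch

    B, chunk = 1024, 256                  # made-up batch and chunk sizes
    rays = torch.rand(B, 8)               # one training batch of rays

    def model_fn(r):                      # stand-in for render_rays
        return r.sum(-1, keepdim=True)

    # Chunking only bounds how many rays pass through the model at once;
    # the concatenated output still covers the whole batch of B rays.
    out = torch.cat([model_fn(rays[i:i + chunk])
                     for i in range(0, B, chunk)], dim=0)
    assert out.shape == (B, 1)

In a sketch like this, chunk would only bound peak memory per forward pass, which is the source of my confusion about which knob is the real batch size.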