NVlabs / RVT

Official Code for RVT-2 and RVT

Home Page: https://robotic-view-transformer-2.github.io/

how to render with real camera pose

Wangweiyao opened this issue

Hi, thanks for your great codebase. I was trying to use PyTorch3D to render views that match the real observations, but failed. I wonder if you could also share some code from your ablation study that uses real camera views? Thanks!

Hi,

Thanks for your interest in our work.

One easy way is to directly use the input images instead of trying to re-render them using PyTorch3D.

If you still want to use PyTorch3D, I will be happy to help with setting it up and share some code. I think the following solution should help. Let me know how it goes and if you face any issues.

Essentially, what one needs is the camera intrinsics and extrinsics.

Attached is a replay_sample which is saved from here.
replay_sample.pkl.zip
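
If you first want to see what the sample contains, here is a quick sketch for inspecting the camera-related entries (assuming they are stored as tensors, which matches how they are used below):

import pickle as pkl

# Peek at the camera-related entries in the attached sample.
with open("replay_sample.pkl", "rb") as file:
    replay_sample = pkl.load(file)

for key in sorted(replay_sample):
    if "camera" in key:
        print(key, tuple(replay_sample[key].shape))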

Here is some boilerplate code to load the sample and preprocess it. The preprocessing is the same as done here. Note that we cannot apply 3D augmentations in this case.

import torch
import pickle as pkl
import numpy as np
from PIL import Image

from pytorch3d.renderer import FoVPerspectiveCameras

import rvt.utils.peract_utils as peract_utils
import rvt.utils.rvt_utils as rvt_utils
from mvt.renderer import BoxRenderer

with open("replay_sample.pkl", "rb") as file:
    replay_sample = pkl.load(file)

cameras = ["front", "left_shoulder", "right_shoulder", "wrist"]
# Preprocess the observations and extract the fused point cloud with
# per-point image features, as in the training pipeline.
obs, pcd = peract_utils._preprocess_inputs(replay_sample, cameras)
pc, img_feat = rvt_utils.get_pc_img_feat(obs, pcd)
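
As a quick sanity check (a sketch; the exact point count depends on your data), you can verify that the fused point cloud and the per-point features line up:

# pc and img_feat are per-sample: pc[0] holds the fused 3D points and
# img_feat[0] the corresponding per-point RGB features.
print(pc[0].shape)        # expected (N, 3)
print(img_feat[0].shape)  # expected (N, 3)
assert pc[0].shape[0] == img_feat[0].shape[0]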

For the camera intrinsics, you can use something like this:

fov_cam = FoVPerspectiveCameras()
cam_intr = {}
# Per-camera far clipping plane and field of view (in degrees).
zfar = {
    "front": 4.5,
    "left_shoulder": 3.2,
    "right_shoulder": 3.2,
    "wrist": 3.5,
}
fov = {
    "front": 40,
    "left_shoulder": 40,
    "right_shoulder": 40,
    "wrist": 60,
}
for cam in cameras:
    # Build a batched (1, 4, 4) projection matrix for each camera.
    cam_intr[cam] = fov_cam.compute_projection_matrix(
        znear=0.01, zfar=zfar[cam], fov=fov[cam], aspect_ratio=1.0, degrees=True
    )
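
If useful, here is a quick check that each projection matrix comes out batched as expected (my understanding is that compute_projection_matrix returns a (1, 4, 4) tensor for scalar inputs):

# Each camera should have a single (4, 4) projection matrix.
for cam in cameras:
    assert cam_intr[cam].shape == (1, 4, 4), cam_intr[cam].shape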

Then you need to extract the camera extrinsics and convert them to PyTorch3D's format. Here is an example:

R = []; T = []; K = []; scale = []
for cam in cameras:
    # Extrinsics for the zeroth sample; camera-to-world, shape (1, 4, 4).
    extr = replay_sample[f'{cam}_camera_extrinsics'][0]
    assert extr.shape == (1, 4, 4), f"extr.shape={extr.shape}"
    _R = extr[0:1, 0:3, 0:3]
    _T = extr[0:1, 0:3, 3]
    # Convert the camera center to a PyTorch3D translation: T = -C @ R.
    _T = (-_T[0] @ _R[0]).unsqueeze(0)
    _scale = torch.ones(_T.shape)
    _K = cam_intr[cam]
    R.append(_R); T.append(_T); scale.append(_scale); K.append(_K)

# Stack the per-camera parameters into batched tensors for the renderer.
R = torch.cat(R, 0)
T = torch.cat(T, 0)
K = torch.cat(K, 0)
scale = torch.cat(scale, 0)
dyn_cam_info = (R, T, scale, K)
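
For intuition on the -_T[0] @ _R[0] step (my reading of the conventions, so treat this as a sketch): PyTorch3D transforms row-vector points as p @ R + T, so the camera center C taken from the camera-to-world extrinsic must satisfy C @ R + T = 0. You can verify this directly:

# Sanity check: each camera center should map to the origin of its
# camera frame under PyTorch3D's p @ R + T convention.
for i, cam in enumerate(cameras):
    C = replay_sample[f'{cam}_camera_extrinsics'][0][0, 0:3, 3]
    assert torch.allclose(C @ R[i] + T[i], torch.zeros(3), atol=1e-4)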

Then we can render with PyTorch3D as follows:

renderer = BoxRenderer(img_size=(128, 128), device="cuda:0")
out = renderer(pc[0], img_feat[0], fix_cam=False, dyn_cam_info=[dyn_cam_info,])
# out holds one 128x128 RGB image per camera, with values in [0, 1].
img = (255 * out.cpu().numpy()).astype(np.uint8)
# Tile the four views side by side into a single 128x512 image.
Image.fromarray(img.transpose(1, 0, 2, 3).reshape(128, 128 * 4, 3))
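
If you want to write the result to disk instead of displaying it inline, something like this should work:

# Save the tiled 128x512 image of all four views.
tiled = Image.fromarray(img.transpose(1, 0, 2, 3).reshape(128, 128 * 4, 3))
tiled.save("rendered_views.png")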

Here is my output:
[Screenshot: the four rendered camera views, tiled side by side]

Hope this example is self-contained. Let me know if you have any other questions.

Thank you so much, this helps tremendously! Just want to double-check: why can't we do 3D augmentation in this case? From these real views, if we re-render using PyTorch3D after 3D augmentation, wouldn't that work?

We cannot do 3D augmentation because the pose of the camera with respect to the point cloud would then change, so the rendered images would no longer be from the real camera poses.
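
To make that concrete (a hypothetical sketch, not code from the repo): if you applied a rigid transform A to the point cloud, keeping the renders consistent would require moving every camera by A as well, at which point the cameras are no longer at the real poses:

# Hypothetical illustration: a yaw rotation A applied to the scene.
theta = torch.tensor(0.3)
A = torch.eye(4)
A[0, 0] = torch.cos(theta); A[0, 1] = -torch.sin(theta)
A[1, 0] = torch.sin(theta); A[1, 1] = torch.cos(theta)
A = A.to(pc[0].device)

pc_aug = pc[0] @ A[:3, :3].T  # rotate every point by A
# To render the same pixels from pc_aug, each camera-to-world extrinsic
# would have to become A @ extr -- i.e. the cameras follow the
# augmentation and stop being the real camera poses.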

Thank you!