How to deal with sideways camera shots for new view generation?
buddyhs opened this issue
First of all, thank you for your excellent work. I ran into a problem while debugging your code: because of the field-of-view limitation, we rotated the camera 90 degrees, so the human body appears sideways in the image. With this setup, effective novel view synthesis does not seem to be possible. Is this because the training dataset is all positive? And where do I set this up so that it can work effectively?
Sorry, I don't fully understand your camera setup, could you show an example pair of input images?
Also, what does "the training dataset is all positive" mean? I think it would be enough to synthesize training data that matches the target hardware setup.
What does the input image pair look like after rectification?
I think it would be better to adjust the cameras' FoV, or to pad the collected real-world images so the entire human body stays in frame, rather than rotating the camera.
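As a concrete sketch of the padding suggestion: if the physical camera is rotated 90 degrees, you could rotate the capture back upright in software and then letterbox it to the resolution the model expects, instead of changing the camera. This is only a minimal illustration in plain NumPy; the rotation direction `k` and the target resolution are assumptions that depend on your rig and training config, not values taken from this repository.

```python
import numpy as np

def upright_and_pad(img, target_hw=(1024, 1024), k=1):
    """Rotate a sideways capture back upright, then resize and
    center-pad (never crop) so the full body stays in frame.

    img: HxWx3 uint8 array from the rotated camera.
    k: number of 90-degree CCW rotations needed to undo the physical
       camera rotation (hypothetical; depends on your mounting).
    target_hw: (height, width) the model was trained on (assumed).
    """
    img = np.rot90(img, k=k)  # undo the 90-degree camera rotation
    h, w = img.shape[:2]
    th, tw = target_hw
    # uniform scale so the whole body fits inside the target frame
    scale = min(th / h, tw / w)
    nh, nw = int(h * scale), int(w * scale)
    # nearest-neighbor resize via index lookup (keeps dependencies minimal)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    img = img[ys][:, xs]
    # center-pad the leftover borders with black, preserving aspect ratio
    pt, pl = (th - nh) // 2, (tw - nw) // 2
    out = np.zeros((th, tw, img.shape[2]), dtype=img.dtype)
    out[pt:pt + nh, pl:pl + nw] = img
    return out
```

This keeps the training-time aspect ratio and orientation intact, so the model sees upright bodies as it did during training; only the black padding regions differ from the original data.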