zhangzc21 / DynTet


Is it possible to put the generated Video back on the source video?

schxnhxlz opened this issue

Great project! Would it be possible to easily put the generated video back onto the original video, to retain the body and hand movements?

Thanks!

Hi, the following code is my simple implementation for merging the rendered head back into the video frame:

### merge ###
import kornia
import torch

# Build a soft blending mask from the rendered alpha channel.
mask = buffers['shaded'][..., -1:].expand(-1, -1, -1, 3).permute(0, 3, 1, 2)  # (N, H, W, 1) -> (N, 3, H, W)
for _ in range(2):
    mask = kornia.morphology.erosion(mask, torch.ones(9, 9).to(mask))  # shrink the mask to cut boundary artifacts
mask = kornia.filters.gaussian_blur2d(mask, (9, 9), (1.5, 1.5)).permute(0, 2, 3, 1)  # feather the edge, back to (N, H, W, 3)

# video_frame is the ground truth and (x1, y1, x2, y2) is your bounding box.
# Here you can try Poisson fusion to get a better effect.
merge_data = video_frame[:, y1:y2, x1:x2, :3] * (1 - mask) + buffers['shaded'][..., :3] * mask
### merge ###

video_frame[:, y1:y2, x1:x2, :3] = merge_data
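
For the Poisson fusion mentioned in the comment above, a minimal sketch using OpenCV's seamlessClone could look like this; it assumes uint8 BGR numpy arrays rather than tensors, and poisson_merge is only an illustrative name, not part of the repo:

# Minimal Poisson-fusion sketch with OpenCV; inputs are assumed to be
# uint8 BGR numpy arrays, and poisson_merge is only an illustrative helper.
import cv2
import numpy as np

def poisson_merge(video_frame, rendered_head, alpha, x1, y1, x2, y2):
    # Binarize the rendered alpha (values in [0, 1]) into an 8-bit mask.
    mask = (alpha > 0.5).astype(np.uint8) * 255
    # Center of the bounding box in full-frame (x, y) coordinates.
    center = ((x1 + x2) // 2, (y1 + y2) // 2)
    # Gradient-domain blending of the head crop into the full frame.
    return cv2.seamlessClone(rendered_head, video_frame, mask, center, cv2.NORMAL_CLONE)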


Awesome! Thank you. Where do you place this code? In infer.py?

How do you get video_frame and (x1, y1, x2, y2)?
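
One way to get them, as a minimal sketch: load the original frames with OpenCV and reuse the crop from preprocessing. The video path and crop coordinates below are hypothetical placeholders; the crop must match the one used when training the model.

# Minimal sketch: load the original frames with OpenCV; the path and
# bounding box are hypothetical and must match the preprocessing crop.
import cv2
import torch

cap = cv2.VideoCapture('data/video.mp4')  # hypothetical path to the source video
x1, y1, x2, y2 = 100, 50, 612, 562        # hypothetical crop coordinates

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)                  # BGR uint8 -> RGB
    t = torch.from_numpy(rgb).float().div(255).unsqueeze(0)       # (1, H, W, 3), values in [0, 1]
    frames.append(t)
cap.release()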

@zhangzc21 Can you provide more details about the head paste-back code? When I follow the code above, the pasted-back head is misplaced.

Hi @schxnhxlz, @einsqing, since the current code is only for research purposes, I have not formally tested the paste-back function yet. But I think it is not hard: you will just need to modify

DynTet/infer.py, line 184 (commit 87c5808):

fg_image = buffers['shaded'][..., 0:3]

accordingly, and comment out

DynTet/infer.py, line 147 (commit 87c5808):

dataset_validate.mv = dataset_validate.smooth_mv # use smooth mv to eliminate shaking

Note that you will also need to load the original video.
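
Putting the pieces together, here is a hedged sketch of what the code around line 184 could become; frames, frame_idx, and the bounding box (x1, y1, x2, y2) are assumptions that must be prepared outside the render loop (for example, as in the frame-loading sketch above), not part of the repo's API:

# Sketch of the edit around infer.py line 184; `frames`, `frame_idx`, and
# (x1, y1, x2, y2) are assumed to be prepared elsewhere.
import kornia
import torch

# was: fg_image = buffers['shaded'][..., 0:3]
mask = buffers['shaded'][..., -1:].expand(-1, -1, -1, 3).permute(0, 3, 1, 2)
for _ in range(2):
    mask = kornia.morphology.erosion(mask, torch.ones(9, 9).to(mask))
mask = kornia.filters.gaussian_blur2d(mask, (9, 9), (1.5, 1.5)).permute(0, 2, 3, 1)

video_frame = frames[frame_idx].to(mask)  # original full frame, shape (1, H, W, 3)
head = buffers['shaded'][..., :3]         # rendered head crop, shape (1, y2-y1, x2-x1, 3)
video_frame[:, y1:y2, x1:x2, :3] = video_frame[:, y1:y2, x1:x2, :3] * (1 - mask) + head * mask
fg_image = video_frame                    # save the merged full frame instead of just the crop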