Gallery
ashawkey opened this issue
- a gummy jellyfish (df_ep0100_rgb.mp4)
- a koi fish (df_ep0100_rgb.mp4)
- a bread (df_ep0100_rgb.mp4)
- a delicious cake (df_ep0100_rgb.mp4)
- a DSLR photo of a yellow duck (ngp_ep0150_rgb.mp4)
- a rabbit, animated movie character, high detail 3d model (ngp_ep0100_rgb.mp4)
- a DSLR photo of a rose (ngp_ep0100_rgb.mp4)
- a high quality photo of a pineapple (ngp_ep0150_rgb.mp4)
- a DSLR photo of a hamburger (ngp_ep0150_rgb.mp4)
- a DSLR photo of cthulhu (ngp_ep0150_rgb.mp4)
Failure cases:
- A DSLR photo of a squirrel (ngp_ep0150_rgb.mp4)
- A DSLR photo of a frog wearing sweater (ngp_ep0150_rgb.mp4)
I also tried a DSLR photo of a panda and got a similarly failed result. Curious: any guesses as to why it failed?
Maybe it's because small prey animals have evolved shapes and color patterns that make it difficult to guess their true orientation, and hence which direction they'll scurry or hop away.
These are my first tests; each took about 90 minutes:
- `--text "a hamburger"` (df_ep0100_rgb.mp4)
- `--text "jeweled dagger"` (df_ep0150_rgb.mp4)
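For reference, a full invocation along these lines might look like the sketch below. The script name `main.py` and the `-O`, `--workspace`, and `--test` flags are taken from the stable-dreamfusion README; the exact options may differ across commits, so treat this as an assumption rather than the precise command used here:

```shell
# Sketch of a stable-dreamfusion training run (flags per the repo README;
# workspace name is arbitrary):
python main.py --text "a hamburger" --workspace trial_hamburger -O

# After training, render the turntable video shown in the gallery:
python main.py --workspace trial_hamburger -O --test
```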
Certainly a lot more to tune and play with.
Some results from the pure-PyTorch vanilla NeRF backbone:
- a hamburger (df_ep0100_rgb.mp4)
- a pineapple (df_ep0100_rgb.mp4)
@ashawkey Hi, could you please provide the commands to reproduce the new result videos here?
https://github.com/ashawkey/stable-dreamfusion/tree/8d7899ae758e0f683a1cef51b876f2a37cba0c55
@elliottzheng Hi, they were run with the default configs, with prompts like "a superman". You could also refer to this: #128 (comment).