facebookresearch / AnimatedDrawings

Code to accompany "A Method for Animating Children's Drawings of the Human Figure"

Same image different results (demo vs local)

javier-pkg-mda opened this issue · comments

I'm using the same image in the demo ( https://sketch.metademolab.com/ ) and also running image_to_animation.py locally, and I'm getting different results. I've noticed the mask isn't done well when running locally, but the mask in the demo is much better. I've read that the model the demo uses is trained a bit more than the one provided in this repository. Is it possible to ship the same trained model in this repo as the one used in the demo, please? It just performs much better.

Hi @javier-pkg-mda

Can you share some images showing the demo and local predictions you are referring to? The keypoint model I've shared here was actually trained on substantially more data than the version used by the demo, so I'm interested to see the image you're referring to.

The masking method used in this repo is basically the same one used in the demo. It's not a model but an image processing algorithm.
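To give a sense of what "image processing, not a model" means here, below is a minimal, simplified sketch of a threshold-based masking step. It is only a toy illustration, not the actual algorithm in this repo (which also does things like flood filling and hole closing); the function name and threshold value are made up.

```python
import numpy as np

def simple_mask(image: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Return a binary mask: 255 where the drawing is, 0 for background.

    `image` is an H x W x 3 uint8 RGB array. Pixels darker than
    `threshold` (pen strokes on white paper) are treated as foreground.
    This is a toy stand-in for the repo's real masking pipeline.
    """
    gray = image.mean(axis=2)  # crude grayscale conversion
    return np.where(gray < threshold, 255, 0).astype(np.uint8)
```

A mask produced this way will fail on shaded or photographed drawings, which is one reason padding and clean backgrounds matter for the local pipeline.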

Thanks for the quick comment @hjessmith. Here are the files:

Original image:
javi_test

Then running image_to_animation.py gives following mask:

mask

joint_overlay:
joint_overlay

animation:
video

If you use the same image in your online demo, the results become much better. Any idea why this is happening?

It looks like the bounding box prediction is cutting off part of the character, which is causing the image segmentation to fail. The easiest way to fix this would be to add some padding around the character in the original image. Try to make sure there's at least a 20 pixel buffer on each side. My guess is, if you do that, it should fix your issue.

How can I create the joint_overlay with Photoshop? I'm having problems with Docker.

@pingfai I think joint_overlay.png is only for human readability. As far as I know, you can edit the joints yourself in two ways:

  1. Edit your char_cfg.yaml and set the X and Y coordinates of every joint. You can get the coordinates by opening the mask or texture in any image editor.
  2. Use the following command:
     python fix_annotations.py path_to_folder_with_char_cfg.yaml
     It launches a service on localhost:5050. Open it in a browser, move the points, and click save; the new coordinates are written back to your char_cfg.yaml automatically.
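For option 1, the skeleton entries in char_cfg.yaml look roughly like the fragment below. The joint names, dimensions, and coordinates here are illustrative only; `loc` is the [x, y] pixel position you would edit to match what you measured in the image editor:

```yaml
# Illustrative char_cfg.yaml fragment; values are made up.
height: 590
width: 480
skeleton:
  - loc: [240, 300]   # x, y in mask/texture pixel coordinates
    name: root
    parent: null
  - loc: [240, 260]
    name: torso
    parent: root
```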