csxmli2016 / DFDNet

Blind Face Restoration via Deep Multi-scale Component Dictionaries (ECCV 2020)

About updating the face dictionary

Nise-2-meet-U opened this issue · comments

Dear xiaomingLi,

You have done a great job! Thanks for sharing!
I am trying to update the face dictionary by following your code and the instructions in your paper.
However, there are a few questions that need your clarification.

  1. Your paper mentions that you used 10,000 face images to construct the dictionaries. I have checked a pre-trained dictionary file such as "right_eye_256_center.npy", and it seems to contain only 512 faces (each face feature has 128 feature maps). I guess you used k-means to select the most representative samples from the 10,000 images, is that right?
    After the k-means step, how do you obtain the representative component? (By averaging all feature vectors in one cluster? A sketch of this is given right after this list.)

  2. If I update the face dictionaries without re-training DFDNet, will the face reconstruction results be better or worse?

  3. If my guess in (1) is right, storing all those feature maps takes a huge amount of GPU memory even though the dictionary only contains about 500 faces. Is there a simple way to put more samples into the dictionary while keeping GPU memory usage low?
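
A minimal sketch of the dictionary-construction step implied by question 1 (my own reading, not the authors' released code): run k-means with 512 clusters over the component features extracted from the 10,000 training faces, then take either the cluster centroids (which are exactly the average of all feature vectors in a cluster) or the real sample nearest to each centroid as the dictionary entries. The file name comes from the pre-trained model mentioned above; the flattened feature shape and the helper names are assumptions.

```python
# Hedged sketch (not the authors' code): build a 512-entry component dictionary
# from per-face component features with k-means.
import numpy as np
from sklearn.cluster import KMeans

def build_component_dict(features, num_entries=512):
    """features: (N, D) array, one row per flattened component feature
    (e.g. N = 10000 faces, D = 128 feature maps x spatial size)."""
    km = KMeans(n_clusters=num_entries, n_init=10, random_state=0).fit(features)
    # Option A: cluster centroids, i.e. the mean of all feature vectors in a cluster.
    centroids = km.cluster_centers_
    # Option B: the real sample closest to each centroid, so every entry is an
    # actual face component rather than an average.
    nearest = np.empty_like(centroids)
    for k in range(num_entries):
        members = features[km.labels_ == k]
        dists = np.linalg.norm(members - centroids[k], axis=1)
        nearest[k] = members[np.argmin(dists)]
    return centroids, nearest

# Hypothetical usage with right-eye features from 10,000 aligned faces:
# feats = np.load("right_eye_features_10000.npy").reshape(10000, -1)
# centroids, nearest = build_component_dict(feats)
# np.save("right_eye_256_center.npy", nearest.astype(np.float32))
```

Whether the released "right_eye_256_center.npy" stores centroids, nearest samples, or unflattened feature maps is exactly what question 1 asks, so treat the saved format here as a placeholder.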

This is by far the best face reconstruction method so far!
Thanks for your work; your early reply will be appreciated!

+1 Also interested

Hi, I am a fan of this excellent work as well. Regarding guess (1), I think those 512 faces are the k-means cluster centers computed over the 10,000 faces. When I worked on SRNTT, I used this strategy and got decent results.
I then built another face dictionary for DFDNet while keeping the original generative network. The outputs change somewhat, but not significantly.
I hope we can keep discussing DFDNet in the future.
(Darn, I typed in English all this time only to find out you are also a ** person.)
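
If you want to try a replacement dictionary without retraining, as described above, a quick sanity check (my own suggestion, not from the authors) is to load the released .npy file first and make sure the new dictionary matches its shape and dtype exactly before swapping it in:

```python
# Sketch: inspect a released DFDNet part dictionary and save a replacement
# in the same format so the original generator can load it unchanged.
import numpy as np

orig = np.load("right_eye_256_center.npy", allow_pickle=True)
print(type(orig), getattr(orig, "shape", None), getattr(orig, "dtype", None))

# new_dict: your newly clustered features (e.g. from the k-means sketch above).
# new_dict = ...
# assert new_dict.shape == orig.shape and new_dict.dtype == orig.dtype
# np.save("right_eye_256_center.npy", new_dict)  # back up the original first
```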

I like this work as well. It would be nice if you could release the dictionary-generation code and the training code.

Building the dictionary from custom training faces that include the target identity seems to be the way to go for better results on known low-resolution enhancements. I too am interested in this.

I also find that there is no big change when you replace the face dictionaries without retraining the whole network. Have you ever tried to retrain the whole network? Thanks. (On GitHub it's still better to write in English, hahahaha.)