Open-Debin / Emotion-FAN

ICIP 2019: Frame Attention Networks for Facial Expression Recognition in Videos


Validation accuracy

mathieulutfallah1 opened this issue · comments

Hi,
I am running the code with the frames cropped around the faces, as stated in the paper, without changing anything in the code. I am only achieving accuracies around 30%. Is this the same network you used to reach 51% accuracy?
Thank you

Thanks for your attention. Something must be wrong, because achieving over 45% is easy with deep-learning-based methods.

Hi, can you please guide me to what could be wrong in my implementation? I am using the exact same code with the AFEW dataset, cropping the frames around the faces and resizing them to 224 pixels. I am also loading the FER weights you propose. Unfortunately, I still haven't managed to reach 45% validation accuracy.

@mathieulutfallah1 Thanks for your attention. The FER weights I proposed are pretrained weights, which means you still need to train the model on the AFEW training set. Also, please use the right data augmentation strategy. Keep an eye on the training-set accuracy as well: if you train the model but its training accuracy cannot exceed 70%, something must be wrong. Finally, please try different learning rates; if you use a very small learning rate, you need more epochs to train.
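The checklist above can be sketched in code. This is a minimal illustration, not code from the repo: the function name `diagnose_training`, the epoch threshold, and the accuracy values are all hypothetical; only the "training accuracy should exceed 70%" rule of thumb comes from the comment itself.

```python
def diagnose_training(train_acc_history, target_acc=0.70, min_epochs=30):
    """Rough verdict on a fine-tuning run from its per-epoch training
    accuracies (values in [0, 1]). Thresholds are illustrative."""
    if len(train_acc_history) < min_epochs:
        # A very small learning rate may simply need more epochs.
        return "keep-training"
    best = max(train_acc_history)
    # Per the advice above: if training accuracy never clears ~70%,
    # suspect the setup (augmentation, face cropping, weight loading).
    return "ok" if best >= target_acc else "check-setup"


# Hypothetical learning-rate sweep, as the reply suggests trying
# several values rather than a single fixed one.
for lr in (1e-2, 1e-3, 1e-4):
    print(f"would fine-tune from the FER weights with lr={lr}")
```

Each sweep run would load the pretrained FER weights, fine-tune on the AFEW training set, and feed the resulting accuracy history to a check like this.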

I hope my guidance will help you.

[Screenshot from 2020-12-13 23-06-32]
Hi, sorry for the questions, but I am new to machine learning. Could you also help me understand the different numbers in the screenshot? What do the 70 and 43 mean, as well as the 25, 49, and 18? Thank you.

@mathieulutfallah1
Merry Christmas! I recently updated Emotion-FAN; the new features include data processing, environment installation, CK+ code, and baseline code. You can also find the old version's directory of Emotion-FAN in the README.md. I hope the new updates help you greatly. Please see Emotion-FAN for more details.