Open-Debin / Emotion-FAN

ICIP 2019: Frame Attention Networks for Facial Expression Recognition in Videos

ck

jianbaba opened this issue

Hello, I used the demo training script to train on CK+. The data was split into 10 folds: 9 for training and 1 for testing. The accuracy was only 92.5% (the learning rate was set according to the paper). Is this a problem with the demo training script? (I removed the contempt data.)

The performance is too low. I think the accuracy can easily go beyond 97% if you use my method.
You can use the pretrained model that I provide.
My accuracy is based on 10-fold cross-validation: we achieve 100% accuracy on 9 folds and over 95% on the remaining fold, for an overall accuracy of 99.69%. I will update my 10-fold list.

I used the demo script Demo_AFEW_Attention.py. Is this the reason?

I have updated the CK+ 10-fold list.
Answer:
The core code is similar; only some parameters differ, such as the learning rate. Just try a few different settings. CK+ is simple, take it easy.
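
For anyone adapting the AFEW demo to CK+, here is a rough PyTorch sketch of the kind of change meant above; the placeholder model and the learning rate value are assumptions for illustration only, not settings from the paper or this repository.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the Frame Attention Network
# (hypothetical dimensions, for illustration only).
model = nn.Linear(512, 7)

# The training loop stays the same as in the AFEW demo; only hyperparameters
# such as the learning rate are swapped out here (1e-4 is a placeholder value).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
```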

thanks

Is there any guidance for training on the CK+ dataset? How should the 10-fold list be used?

@EternalImmortal Thanks for your attention. Please see my published CK+ dataset list. The list is divided into 10 folds; select 9 folds for training and the remaining 1 fold for testing. For example, use [1,2,3,4,5,6,7,8,9] for training and [10] for testing, or [1,2,3,5,6,7,8,9,10] for training and [4] for testing. There are 10 such options in total, and you should report the average accuracy over the folds used for testing: (fold1_acc + fold2_acc + ... + fold10_acc) / 10.

Note that you should train 10 models, one per split, rather than training a single model and testing it on all splits!! This is the basic rule (see the sketch below).

I hope my answer helps you.
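
For clarity, here is a minimal sketch of the 10-fold protocol described above; `train_model` and `evaluate` are hypothetical placeholders for the repository's actual CK+ training and testing code, not functions from Emotion-FAN.

```python
def train_model(train_folds):
    """Placeholder: train a fresh network on the listed folds."""
    return {"trained_on": train_folds}

def evaluate(model, test_fold):
    """Placeholder: return the accuracy (%) of the model on the held-out fold."""
    return 100.0

fold_accuracies = []
for test_fold in range(1, 11):
    # Use the other 9 folds for training and hold out one fold for testing.
    train_folds = [f for f in range(1, 11) if f != test_fold]
    model = train_model(train_folds)  # train a new model for every split
    fold_accuracies.append(evaluate(model, test_fold))

# Report the average over all 10 held-out folds:
# (fold1_acc + fold2_acc + ... + fold10_acc) / 10
mean_acc = sum(fold_accuracies) / len(fold_accuracies)
print(f"10-fold CK+ accuracy: {mean_acc:.2f}%")
```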

Thanks a lot for your reply. I tried to run the model but got a new error. Please refer to the newest issue.

@EternalImmortal @jianbaba
Merry Christmas! I recently updated Emotion-FAN; the new features include data processing, environment installation, CK+ code, and baseline code. You can also find the old-version directory of Emotion-FAN in the README.md. I hope my new updates help you greatly. Please see the Emotion-FAN repository for more details.

Hi, I recently ran your demo on CK+, but my 10-fold accuracy is around 92%. Could you please give me some advice? I used your pretrained model and the 10-fold list.
