zyh-uaiaaaa / Erasing-Attention-Consistency

Official implementation of the ECCV2022 paper: Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition

About the pre-trained models

balabala-h opened this issue

Hi, it seems that the pre-trained models are not available anymore.
I also tried to train the model on RAF-DB without the pre-trained weights and got unsatisfactory results. Could you please provide the results without pre-training?

You can find the pretrained model here:
https://drive.google.com/file/d/1yQRdhSnlocOsZA4uT_8VO0-ZeLXF4gKd/view?usp=sharing
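Not from the authors, but for anyone stuck at this step: below is a minimal sketch of loading a downloaded checkpoint into a PyTorch ResNet-50 backbone. The file name `pretrained_backbone.pth`, the state-dict layout, and the use of torchvision's ResNet-50 are assumptions; adapt the keys to whatever the checkpoint from the link actually contains.

```python
# Minimal sketch: load pretrained backbone weights, keep the FER head untrained.
# File name and state-dict keys are assumptions, not the repo's exact setup.
import torch
from torchvision.models import resnet50

model = resnet50(num_classes=7)  # 7 basic expressions in RAF-DB

checkpoint = torch.load("pretrained_backbone.pth", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # unwrap if nested

# Drop the classifier weights so the expression head is trained from scratch.
state_dict = {k: v for k, v in state_dict.items() if not k.startswith("fc.")}
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```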

Yes, pretraining is important for the performance: the attention consistency module needs a relatively strong backbone to compute meaningful attention maps. Without pretraining, the accuracy on RAF-DB with 10%, 20%, and 30% noise is around 73.36%, 71.21%, and 68.20%, respectively.
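
To illustrate why the backbone quality matters here, the following is a rough sketch of a flip attention-consistency loss, assuming CAM-style attention maps obtained by weighting the backbone's final feature maps with the classifier weights. The `model.backbone` / `model.fc` names are hypothetical; this shows the general idea, not the repository's exact implementation.

```python
# Sketch of a flip attention-consistency loss (assumed structure, not the
# repo's exact code). A weak backbone yields noisy feature maps, so the
# resulting attention maps carry little signal for this loss.
import torch
import torch.nn.functional as F

def attention_maps(features, fc_weight):
    # features: (B, C, H, W) from the last conv block
    # fc_weight: (num_classes, C) classifier weights
    # returns (B, num_classes, H, W) class activation maps
    return torch.einsum("bchw,kc->bkhw", features, fc_weight)

def flip_consistency_loss(model, images):
    feats = model.backbone(images)                          # (B, C, H, W)
    feats_flip = model.backbone(torch.flip(images, dims=[3]))
    cams = attention_maps(feats, model.fc.weight)
    cams_flip = attention_maps(feats_flip, model.fc.weight)
    # Flip the flipped-image maps back so both are in the same frame,
    # then penalize disagreement between the two attention maps.
    cams_flip = torch.flip(cams_flip, dims=[3])
    return F.mse_loss(cams, cams_flip)
```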