LetheSec / Fer2013-Facial-Emotion-Recognition-Pytorch

This method achieves a state-of-the-art single-model accuracy of 73.70% on FER2013 without using extra training data.

Fer2013 - Facial Emotion Recognition

This work is the final project of the Computer Vision course at USTC. It achieves the highest single-network classification accuracy on FER2013 using a ResNet18-based model. To the best of my knowledge, this work reaches a state-of-the-art single-network accuracy of 73.70% on FER2013 without using extra training data, exceeding the 73.28% reported by the previous work [1]. (Chinese post)

Method       Private Test Accuracy
[1]          73.28%
This work    73.70%

The official model checkpoint and training log can be found below:

Fer2013 Leaderboard: Here

Environment

  • GPU: 2080Ti
  • Python: 3.7

Other

  • CUDA: 10.2
  • cuDNN: 7605 (7.6.5)
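
To confirm that a local install matches these versions, a quick check with PyTorch (assumed to be installed, as the repository name implies) is:

    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA:", torch.version.cuda)               # expected: 10.2
    print("cuDNN:", torch.backends.cudnn.version())  # expected: 7605 (i.e. 7.6.5)
    print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU found")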

Usage

First, download the official FER2013 dataset and place it in the project root folder with the following structure: datasets/fer2013/fer2013.csv
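
For reference, fer2013.csv stores each sample as a 48x48 grayscale face encoded as a space-separated pixel string, an emotion label in 0-6, and a Usage column marking the Training / PublicTest / PrivateTest split. A minimal parsing sketch is shown below; the helper name is only illustrative, and train.py uses its own data loader:

    import numpy as np
    import pandas as pd

    def load_fer2013(csv_path="datasets/fer2013/fer2013.csv", usage="Training"):
        """Parse the official FER2013 CSV into (images, labels) NumPy arrays."""
        df = pd.read_csv(csv_path)
        df = df[df["Usage"] == usage]  # "Training", "PublicTest" or "PrivateTest"
        images = np.stack([
            np.asarray(pixels.split(), dtype=np.uint8).reshape(48, 48)
            for pixels in df["pixels"]
        ])
        labels = df["emotion"].to_numpy()
        return images, labels

    # Example: the private test split used for the 73.70% figure.
    test_images, test_labels = load_fer2013(usage="PrivateTest")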

To train your own model, run the following:

python train.py --name='your_version'

To evaluate the model, run the following:

python evaluate.py --checkpoint='xxx/best_checkpoint.tar'
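
The model is based on ResNet18 and is trained from scratch on FER2013 (no extra data). The sketch below shows one plausible way to adapt a torchvision ResNet18 to 48x48 grayscale inputs with 7 emotion classes and to load a saved checkpoint for evaluation; the stem modifications and the checkpoint key are assumptions, not necessarily what train.py actually does:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    def build_model(num_classes=7):
        # Standard ResNet18 trained from scratch (no pretrained weights / extra data).
        model = resnet18(num_classes=num_classes)
        # FER2013 faces are 48x48 grayscale: accept 1 input channel and keep more
        # spatial resolution than the ImageNet stem would (assumed modification).
        model.conv1 = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1, bias=False)
        model.maxpool = nn.Identity()
        return model

    # Loading a checkpoint for evaluation; the dictionary key is hypothetical and
    # depends on how train.py saves its checkpoints.
    model = build_model()
    checkpoint = torch.load("xxx/best_checkpoint.tar", map_location="cpu")
    model.load_state_dict(checkpoint["model_state_dict"])
    model.eval()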

Result

Citation

If you would like to cite this work, please use the following BibTeX entry:

@misc{yuan2021fer,
  title        = {Fer2013-Facial-Emotion-Recognition-Pytorch},
  author       = {Yuan, Xiaojian},
  year         = {2021},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/LetheSec/Fer2013-Facial-Emotion-Recognition-Pytorch}},
}

Reference

[1] Khaireddin, Yousif, and Zhuofa Chen. "Facial Emotion Recognition: State of the Art Performance on FER2013." arXiv preprint arXiv:2105.03588 (2021).

License: MIT

