av-savchenko / face-emotion-recognition

Efficient face emotion recognition in photos and videos

Valence and arousal

AmaiaBiomedicalEngineer opened this issue

Hello again!
I've read your paper and I've seen that you use the circumplex model's variables arousal and valence.
How do those variables appear in the code? I can't find them :(
Thank you,
Amaia

Hello! Please take a look at the section "Multi-task: FER+Valence-Arousal" of train_emotions-pytorch.ipynb. Starting from the line PATH='../models/affectnet_emotions/enet_b0_8_va_mtl.pt', you can load the model and run it. The first 8 outputs are the logits for the facial expression classes, and the last two outputs are valence and arousal. The metrics on the validation set of AffectNet are computed in the last two lines of that section, right before "Example usage". BTW, in that example you can see the predicted valence and arousal in the titles of the photos of my children. I should say, though, that my estimates of valence and arousal are not the best of the best; I used them mainly for multi-task learning, to improve the facial embeddings learned by the model.
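For reference, loading the checkpoint and splitting its outputs might look roughly like the sketch below. It assumes the checkpoint was saved as a whole PyTorch module (as in the notebook) and uses a 224x224 input with ImageNet normalization, which is typical for the EfficientNet-B0 models here; face.jpg is a placeholder for a pre-cropped face image. Treat train_emotions-pytorch.ipynb as the authoritative version.

```python
# Minimal sketch: load the multi-task model and split its 10 outputs.
# Assumptions: the checkpoint is a whole saved module (as in the notebook),
# and inputs are 224x224 RGB face crops normalized with ImageNet statistics.
import torch
from PIL import Image
from torchvision import transforms

PATH = '../models/affectnet_emotions/enet_b0_8_va_mtl.pt'
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = torch.load(PATH, map_location=device)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('face.jpg').convert('RGB')  # placeholder: a cropped face
with torch.no_grad():
    scores = model(preprocess(img).unsqueeze(0).to(device))[0]

expression_logits = scores[:8]  # logits for the 8 expression classes
valence, arousal = scores[8].item(), scores[9].item()
print('valence:', valence, 'arousal:', arousal)
```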

Closing due to inactivity