In facestretch we describe how we exploited dlib's facial landmarks to measure face deformation and perform an expression recognition task. We implemented multiple approaches, mainly based on supervised and weakly-supervised metric learning, neural networks, and geodesic distances on a Riemannian manifold computed on a transformation of the detected landmarks. To train the metric-learning and neural-network models we built a small dataset made of eight facial expressions per subject.
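To make the idea of measuring deformation from landmarks concrete, here is a minimal sketch of comparing a detected expression against a neutral reference after removing translation and scale. The function names and the specific normalization are illustrative assumptions, not the project's actual API.

```python
import numpy as np

def normalize_landmarks(pts):
    """Center landmarks on their centroid and rescale to unit RMS radius,
    so that comparisons ignore face position and size in the frame.
    (Illustrative normalization; the paper's transformation may differ.)"""
    pts = np.asarray(pts, dtype=float)
    centered = pts - pts.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

def deformation(neutral, expression):
    """Mean per-landmark displacement between two normalized landmark sets:
    a simple scalar proxy for how far a face has moved from its neutral pose."""
    a = normalize_landmarks(neutral)
    b = normalize_landmarks(expression)
    return np.linalg.norm(a - b, axis=1).mean()
```

A deformation of zero means the expression matches the neutral reference up to translation and scale; larger values indicate stronger facial movement.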
For more information read the paper located in the docs directory.
To get a local copy up and running, follow these simple steps.
The project provides a Pipfile that can be managed with pipenv. Installing pipenv is strongly encouraged in order to avoid dependency/reproducibility problems.
- Install pipenv
  ```shell
  pip3 install pipenv
  ```
- Clone the repo
  ```shell
  git clone https://gitlab.com/reddeadrecovery/facestretch
  ```
- Install Python dependencies
  ```shell
  pipenv install
  ```
The repo already contains the trained models described in the paper.
To run these trained models, just launch the app by executing the file `detect_landmarks.py`.
You can control the app through the keyboard:
- Press `s` to save the neutral facial expression
- Press `a` or `d` to switch the reference facial expression
- Press `w` or `x` to switch models
- Press `c` to display the landmarks
- Press `n` to display the (out-of-scale) normalized landmarks
- Press `q` to exit the app
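The controls above can be modeled as a simple key-to-action dispatch table. This is an illustrative sketch only; the actual app reads keypresses inside its video loop, and these action names are hypothetical.

```python
# Hypothetical mapping mirroring the keyboard controls listed above.
KEY_ACTIONS = {
    "s": "save_neutral",
    "a": "previous_reference_expression",
    "d": "next_reference_expression",
    "w": "previous_model",
    "x": "next_model",
    "c": "toggle_landmarks",
    "n": "toggle_normalized_landmarks",
    "q": "quit",
}

def handle_key(key):
    """Return the action bound to a key, or None for unbound keys."""
    return KEY_ACTIONS.get(key)
```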
To train new models from scratch with a new dataset, follow these steps:
- Delete the `.gitkeep` file from `dataset_metric_learning`, `dataset_neural_training` and `dataset_neural_validation`
- Copy the dataset into the folder `dataset_metric_learning` using the format `subject_action.ext`. Remember to assign the format `subject_neutro.ext` to the neutral images
- Split the dataset into training and validation sets, then copy the splits into `dataset_neural_training` and `dataset_neural_validation`, again in the format `subject_action.ext`
- Run `reference_landmarks.py`
- Run `train.py`, selecting the model to train
- Run `neural_networks.py`, copying the best model into the `models` folder at the end
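The `subject_action.ext` naming convention above can be parsed with a small helper. The function names here are illustrative, not part of the project's codebase.

```python
import os

def parse_sample(filename):
    """Split a dataset filename of the form 'subject_action.ext'
    into a (subject, action) pair."""
    stem, _ext = os.path.splitext(os.path.basename(filename))
    subject, _, action = stem.partition("_")
    return subject, action

def is_neutral(filename):
    """The action 'neutro' marks a subject's neutral reference image."""
    return parse_sample(filename)[1] == "neutro"
```

For example, `alice_smile.png` parses to subject `alice` with action `smile`, while `alice_neutro.png` would be picked up as her neutral reference.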
Once the new models are trained, you can run `detect_landmarks.py`.
Every file with extension `.py` is executable. If you have pipenv installed, executing them so that the Python interpreter can find the project dependencies is as easy as running `pipenv run python $file`.
Here's a brief description of every executable file:

- `detect_landmarks.py`: Runs the application which detects facial expressions
- `dataset.py`: Dataset building
- `neural_networks.py`: Neural network training
- `reference_landmarks.py`: Facial expression reference landmarks calculation
- `train.py`: Metric learning training
- `utils.py`: Utility functions
Image and Video Analysis © Course held by Professor Pietro Pala - Computer Engineering Master Degree @ University of Florence