ABaldrati / facestretch

In facestretch we describe how we exploited dlib's facial landmarks to measure face deformation and perform an expression recognition task. We implemented multiple approaches, based on metric learning, neural networks and geodesic distances.


facestretch

About The Project

In facestretch we describe how we exploited dlib's facial landmarks to measure face deformation and perform an expression recognition task. We implemented multiple approaches, mainly based on supervised and weakly-supervised metric learning, neural networks, and geodesic distances on a Riemannian manifold computed on a transformation of the detected landmarks. To train the metric-learning and neural-network models we built a small dataset made of eight facial expressions for each subject.

For more information read the paper located in the docs directory.
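
As an illustration of the idea, the sketch below shows how dlib's 68 facial landmarks can be detected and compared against a saved neutral reference to quantify deformation. It is an assumption, not the project's actual code: the predictor file path, the normalization and the deformation measure are placeholders.

# Hypothetical sketch (not the project's exact code): detect dlib's 68 facial
# landmarks and measure deformation against a saved neutral expression.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The 68-point model must be downloaded separately; this path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(image):
    """Return the 68 landmarks of the first detected face as a (68, 2) array."""
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

def normalize(points):
    """Remove translation and scale so that only the face shape remains."""
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered)

def deformation(neutral, current):
    """Per-landmark displacement between normalized landmark sets."""
    return np.linalg.norm(normalize(current) - normalize(neutral), axis=1)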

Built With

Getting Started

To get a local copy up and running follow these simple steps.

Prerequisites

The project provides a Pipfile that can be managed with pipenv. Installing pipenv is strongly encouraged to avoid dependency/reproducibility problems.

  • pipenv
pip3 install pipenv

Installation

  1. Clone the repo
git clone https://gitlab.com/reddeadrecovery/facestretch
  2. Install Python dependencies
pipenv install

Usage

App Usage

The repo already contains the trained models described in the paper. To run them, just launch the app by executing detect_landmarks.py.
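
With pipenv installed, that amounts to:

pipenv run python detect_landmarks.py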

You can control the app through the keyboard:

  • Press s to save the neutral facial expression
  • Press a or d to switch the reference facial expression
  • Press w or x to switch models
  • Press c to display the landmarks
  • Press n to display the (out of scale) normalized landmarks
  • Press q to exit from the app
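
Keyboard controls like these are typically wired through OpenCV's waitKey loop. The snippet below is only an illustrative sketch of such a loop; the structure and names are assumptions, not the project's actual implementation.

# Illustrative only: how keyboard controls like those above can be wired
# with OpenCV. Names and structure here are assumptions, not the repo's code.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("facestretch", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('s'):
        pass  # save the current landmarks as the neutral expression
    elif key in (ord('a'), ord('d')):
        pass  # switch the reference facial expression
    elif key in (ord('w'), ord('x')):
        pass  # switch the active model
    elif key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()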

Training Usage

For training from scratch new models with a new dataset follow these steps:

  • Delete the .gitkeep file from dataset_metric_learning, dataset_neural_training and dataset_neural_validation
  • Copy the dataset into the dataset_metric_learning folder using the naming format subject_action.ext. Remember to name the neutral images subject_neutro.ext
  • Split the dataset into training and validation sets, then copy them into dataset_neural_training and dataset_neural_validation, again in the subject_action.ext format (see the sketch after this list)
  • Run reference_landmark.py
  • Run train.py, selecting the model to train
  • Run neural_network.py and, at the end, copy the best model into the models folder
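
For the split step, here is a hypothetical helper (not part of the repo) that copies a subject_action.ext dataset into the training and validation folders; the 80/20 split, the random seed and the function name are assumptions.

# Illustrative helper, not part of the repo: copy a subject_action.ext dataset
# into the training/validation folders used by the neural-network scripts.
import random
import shutil
from pathlib import Path

def split_dataset(source="dataset_metric_learning",
                  train_dir="dataset_neural_training",
                  val_dir="dataset_neural_validation",
                  val_fraction=0.2, seed=42):
    files = sorted(p for p in Path(source).iterdir() if p.is_file())
    random.Random(seed).shuffle(files)
    n_val = int(len(files) * val_fraction)
    for i, path in enumerate(files):
        target = Path(val_dir if i < n_val else train_dir)
        # Keep the subject_action.ext naming expected by the training scripts.
        shutil.copy(path, target / path.name)

if __name__ == "__main__":
    split_dataset()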

Once the new models are trained, you can run detect_landmarks.py.

Every file with the .py extension is executable. If you have pipenv installed, running pipenv run python $file executes it so that the Python interpreter can find the project dependencies.

Here is a brief description of each executable file:

  • detect_landmarks.py: Run the application which detects facial expressions
  • dataset.py: Dataset building
  • neural_networks.py: Neural Network training
  • reference_landmarks.py: Facial expression reference landmarks calculation
  • train.py: Metric Learning training
  • utils.py: Utils file

Authors

Acknowledgments

Image and Video Analysis © Course held by Professor Pietro Pala - Computer Engineering Master Degree @University of Florence


