ETH-XGaze baseline

Official implementation of ETH-XGaze dataset baseline.

ETH-XGaze dataset

ETH-XGaze is a gaze estimation dataset of over one million high-resolution images with large variation in gaze under extreme head poses. We established a simple baseline on ETH-XGaze and other datasets; this repository includes the code and a pre-trained model. Please find more details about the dataset on our project page. Note that this repository does not handle dataset downloads, and I will not respond to dataset download requests here. Thank you for your understanding.

License

The code is released under the CC BY-NC-SA 4.0 license.

Requirements

  • Python 3.5
  • PyTorch 1.1.0 and torchvision
  • opencv-python

For model training

  • h5py to load the training data
  • configparser

For testing

  • dlib for face and facial landmark detection (a minimal sketch follows this list).
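
For reference, here is a minimal sketch of face and landmark detection with the standard dlib pipeline; the 68-point predictor file and the example image path are assumptions and must be obtained separately:

import cv2
import dlib

# Standard dlib pipeline: detect faces, then predict 68 facial landmarks.
# 'shape_predictor_68_face_landmarks.dat' is a separate download from the
# dlib model zoo, not part of this repository.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

img = cv2.imread('example/input/face.jpg')  # hypothetical image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for face in detector(gray, 1):              # upsample once to catch small faces
    shape = predictor(gray, face)
    landmarks = [(p.x, p.y) for p in shape.parts()]  # 68 (x, y) points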

Training

  • You need to download the ETH-XGaze dataset for training. After downloading, make sure you have the pre-processed version with 224×224-pixel face patches. Put the data under '\data\xgaze' (a data-loading sketch follows this list).
  • Run 'python main.py' to train the model.
  • The trained model will be saved under the 'ckpt' folder.
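
For orientation, here is a minimal sketch of reading one subject file with h5py. The file name and the dataset keys 'face_patch' and 'face_gaze' are assumptions based on the pre-processed release; print the keys of your own files to confirm:

import h5py
import numpy as np

# Open one pre-processed subject file (224x224 face patches).
# Key names below are assumptions; inspect list(f.keys()) on your download.
with h5py.File('data/xgaze/train/subject0000.h5', 'r') as f:  # hypothetical file name
    images = f['face_patch']   # assumed shape (N, 224, 224, 3), uint8 images
    gazes = f['face_gaze']     # assumed shape (N, 2): pitch, yaw in radians
    img0 = np.asarray(images[0])
    gaze0 = np.asarray(gazes[0])
    print(img0.shape, gaze0)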

Test

The 'demo.py' script shows how to perform gaze estimation on an input image. An example image is provided in the 'example/input' folder.

  • First, download the pre-trained model and put it under the 'ckpt' folder.
  • Then run 'python demo.py' to test it (a minimal inference sketch follows this list).
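
For a sense of what the demo does, here is a minimal inference sketch. It stands in a torchvision ResNet-50 with a 2-unit (pitch, yaw) head for the repository's gaze network; the checkpoint file name, its key names, and the image path are assumptions:

import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# Stand-in network: ResNet-50 with a 2-unit head predicting (pitch, yaw).
# The repository's own model definition and checkpoint layout may differ.
model = models.resnet50(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 2)
ckpt = torch.load('ckpt/epoch_24_ckpt.pth.tar', map_location='cpu')  # hypothetical file
model.load_state_dict(ckpt['model_state'], strict=False)             # key name is an assumption
model.eval()

# Pre-process a normalized 224x224 face patch with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
face = cv2.cvtColor(cv2.imread('example/input/face.jpg'), cv2.COLOR_BGR2RGB)  # hypothetical path
face = cv2.resize(face, (224, 224))

with torch.no_grad():
    pitch, yaw = model(preprocess(face).unsqueeze(0))[0].tolist()
print('pitch: %.3f rad, yaw: %.3f rad' % (pitch, yaw))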

Data normalization

The 'normalization_example.py' script gives an example of data normalization, from the raw dataset to the normalized data.
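
In spirit, the normalization warps the raw image so that a virtual camera looks straight at the 3D face center with the roll removed, at a fixed distance and focal length. Below is a condensed sketch of that warp; the constants (focal length 960, distance 600, 224×224 output) mirror common face-normalization settings, and 'normalization_example.py' remains the authoritative version:

import cv2
import numpy as np

def normalize_face(img, camera_matrix, face_center, head_rot):
    """Warp 'img' to a virtual camera that looks at 'face_center' with zero roll.

    camera_matrix: 3x3 intrinsics of the real camera.
    face_center:   (3,) 3D face center in camera coordinates.
    head_rot:      3x3 head rotation matrix.
    Constants are assumptions mirroring common normalization settings.
    """
    focal_norm, distance_norm, roi = 960.0, 600.0, (224, 224)

    distance = np.linalg.norm(face_center)
    forward = face_center / distance              # z-axis: look straight at the face
    down = np.cross(forward, head_rot[:, 0])      # use the head x-axis to cancel roll
    down /= np.linalg.norm(down)
    right = np.cross(down, forward)
    right /= np.linalg.norm(right)
    R = np.stack([right, down, forward])          # rotation of the virtual camera

    cam_norm = np.array([[focal_norm, 0.0, roi[0] / 2],
                         [0.0, focal_norm, roi[1] / 2],
                         [0.0, 0.0, 1.0]])
    S = np.diag([1.0, 1.0, distance_norm / distance])  # rescale to the fixed distance

    W = cam_norm @ S @ R @ np.linalg.inv(camera_matrix)
    return cv2.warpPerspective(img, W, roi)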

Citation

If you use this code base and/or the ETH-XGaze dataset in your research, please cite the following publication:

@inproceedings{Zhang2020ETHXGaze,
  author    = {Xucong Zhang and Seonwook Park and Thabo Beeler and Derek Bradley and Siyu Tang and Otmar Hilliges},
  title     = {ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation},
  year      = {2020},
  booktitle = {European Conference on Computer Vision (ECCV)}
}

FAQ

Q: Where are the test set labels?
You can submit your test results to our leaderboard to get the evaluation scores. Please complete the registration first; otherwise, your request will be ignored. Link to the leaderboard. Please make sure your submission follows the order of the subjects in the configuration file.

Q: What is the data normalization?
As described in our paper, data normalization is a method for cropping the face/eye image so that there is no head rotation around the roll axis. Please refer to the following paper for details: Revisiting Data Normalization for Appearance-Based Gaze Estimation. We also provide code in this repository to convert the Gaze360, MPIIFaceGaze, and ETH-XGaze datasets into the normalized space.

Q: Why convert 3D gaze direction (vector) to 2D gaze direction (pitch and yaw)? How to convert between 3D and 2D gaze directions?
Essentially, 2D pitch and yaw are enough to describe the gaze direction in the head coordinate system, and using 2D instead of 3D makes model training easier. The 'utils.py' file contains code for converting between the two representations: pitchyaw_to_vector and vector_to_pitchyaw.
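
For illustration, here is a sketch of the two conversions under one common convention (unit gaze vector (x, y, z) with pitch = arcsin(y) and yaw = arctan2(x, z)); the signs and axis order in the repository's 'utils.py' are authoritative:

import numpy as np

def pitchyaw_to_vector(pitchyaw):
    # (pitch, yaw) in radians -> unit 3D gaze vector (x, y, z).
    # Convention assumed here; check utils.py for the repository's exact signs.
    pitch, yaw = pitchyaw
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

def vector_to_pitchyaw(vec):
    # Unit 3D gaze vector -> (pitch, yaw) in radians (inverse of the above).
    vec = vec / np.linalg.norm(vec)
    return np.array([np.arcsin(vec[1]), np.arctan2(vec[0], vec[2])])

# Round trip: recover the angles we started from.
g = pitchyaw_to_vector(np.array([0.1, -0.2]))
print(vector_to_pitchyaw(g))  # -> approximately [ 0.1 -0.2]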
