This repo provides the code of GMAA, built on the Lightning-Hydra-Template. All training and testing experiments are conducted on a single NVIDIA Tesla V100 32GB.
1. Prerequisites:

   Note that GMAA has only been tested on Ubuntu with the following environments; it may also work on other operating systems (e.g., Windows), but we do not guarantee that it will.

   - Create a virtual environment in the terminal: `conda create -n GMAA python=3.8`
   - Install the necessary packages: `pip install -r requirements.txt`
2. Prepare the data/pretrained weights:

   - Download the CelebA-HQ dataset. Assign your custom path to `--src_hq_path` and run `data/CelebAHQ/process.py` to filter the original CelebA-HQ dataset by valid AUs (the confidence reported by OpenFace must be greater than or equal to 0.95).
   - Download the pretrained face recognition models from Google Drive (from Adv-Makeup) and move them to `pretrained/FRmodels`.
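The AU-filtering step can be sketched roughly as below. This is an illustrative sketch, not the repo's actual `process.py`: the column names (`input` for the image path, `confidence` for the detection confidence) are assumptions about the OpenFace output CSV.

```python
import csv

# Keep only images whose OpenFace detection confidence is >= 0.95,
# as described in the data-preparation step above.
CONF_THRESHOLD = 0.95

def filter_valid_aus(csv_path, threshold=CONF_THRESHOLD):
    """Return image names from an OpenFace CSV whose confidence meets the threshold.

    Column names ("input", "confidence") are assumed, not taken from the repo.
    """
    valid = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["confidence"]) >= threshold:
                valid.append(row["input"])
    return valid
```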
3. Training Configuration:

   - Just enjoy it via running `bash script/train.sh` in your terminal. The training results are saved in `logs/train/gmaa/runs/%Y-%m-%d_%H-%M-%S`.
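Because each run is saved under a timestamped directory, the most recent run can be located programmatically. This helper is a convenience sketch, not part of the repo; only the layout `logs/train/gmaa/runs/<timestamp>` is taken from the text above.

```python
from pathlib import Path

def latest_run_dir(runs_root="logs/train/gmaa/runs"):
    """Return the most recent %Y-%m-%d_%H-%M-%S run directory, or None if empty.

    The timestamp format sorts lexicographically, so comparing directory
    names directly picks the newest run.
    """
    runs = [p for p in Path(runs_root).iterdir() if p.is_dir()]
    return max(runs, default=None, key=lambda p: p.name)
```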
4. Testing Configuration:

   - After step 3, replace your trained model path (`--ckpt_path`) in `script/eval.sh`. The trained model path is `logs/train/gmaa/runs/%Y-%m-%d_%H-%M-%S/checkpoints/epoch_019.ckpt` (`max_epoch` is set to 20).
   - Just enjoy it via running `bash script/eval.sh` in your terminal. The evaluation results directory is `logs/eval/gmaa/runs/%Y-%m-%d_%H-%M-%S`. The generated adversarial examples of the test dataset are in `test_vis` under the evaluation results directory.
5. Evaluation Configuration:

   - Replace your testing adversarial examples directory (`--res_root`) in `metric/test_asr.py` and `metric/test_faceplusplus.py`. The testing adversarial examples directory of step 4 is `test_vis` under the evaluation results directory.
   - Just enjoy it via running `python metric/test_asr.py` to get the attack success rate. The result is saved in `test_asr` under the evaluation results directory.
   - Just enjoy it via running `python metric/test_faceplusplus.py` to get the Face++ confidence score. The result is saved in `test_faceplusplus` under the evaluation results directory. Please note: you need to fill in your own API `--key` and `--secret` obtained from Face++.
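For orientation, the kind of request `metric/test_faceplusplus.py` issues can be sketched as follows. This is an assumption-laden sketch, not the repo's code: the endpoint and field names follow the public Face++ Compare API, and the key/secret are placeholders you must replace with your own.

```python
import requests

# Public Face++ Compare API endpoint (US region); see Face++ docs.
FACEPP_COMPARE_URL = "https://api-us.faceplusplus.com/facepp/v3/compare"

def build_compare_payload(key, secret, adv_b64, target_b64):
    """Assemble the form fields for one Face++ Compare call.

    adv_b64 / target_b64 are base64-encoded images; field names follow
    the public Face++ Compare API, not the repo's exact script.
    """
    return {
        "api_key": key,
        "api_secret": secret,
        "image_base64_1": adv_b64,
        "image_base64_2": target_b64,
    }

def facepp_confidence(key, secret, adv_b64, target_b64):
    """POST one comparison and return the similarity confidence (0-100)."""
    resp = requests.post(
        FACEPP_COMPARE_URL,
        data=build_compare_payload(key, secret, adv_b64, target_b64),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["confidence"]
```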
The evaluation results directory is organized as follows:
logs/eval/gmaa/runs/%Y-%m-%d_%H-%M-%S
└- test_asr # Attack Success Rate
└- test_faceplusplus # Face++ confidence score
└- test_vis # Generated adversarial examples of test dataset
└- ...
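As background on the `test_asr` result, an attack success rate of this kind is commonly computed as the fraction of adversarial examples whose similarity to the target identity exceeds a model-specific threshold. The sketch below illustrates that idea with cosine similarity; the threshold value and function names are illustrative, not the repo's exact implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def attack_success_rate(adv_embeddings, target_embedding, threshold=0.24):
    """ASR = fraction of adversarial embeddings that match the target.

    The threshold is face-recognition-model specific; 0.24 here is
    illustrative only.
    """
    hits = sum(
        cosine_similarity(e, target_embedding) > threshold
        for e in adv_embeddings
    )
    return hits / len(adv_embeddings)
```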
The final project structure should look like this:

GMAA
└- data
   └- CelebAHQ
   └- CelebA-pairs
   └- typical_au.txt
└- logs
   └- eval
   └- train
└- pretrained
   └- exper_edit
   └- ...
   └- FRmodels
      └- facenet.pth
      └- ir152.pth
      └- irse50.pth
      └- mobile_face.pth
└- ...
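Once everything is in place, a quick sanity check against the layout above can catch missing data or weights before training. The paths come from the project tree; the helper itself is just a convenience sketch, not part of the repo.

```python
from pathlib import Path

# Expected paths, taken from the project tree above.
REQUIRED = [
    "data/CelebAHQ",
    "pretrained/FRmodels/facenet.pth",
    "pretrained/FRmodels/ir152.pth",
    "pretrained/FRmodels/irse50.pth",
    "pretrained/FRmodels/mobile_face.pth",
]

def missing_paths(root="."):
    """Return the expected paths that do not exist under root."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).exists()]
```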
Some of the code is built upon AMT, and the pretrained face recognition models are from Adv-Makeup.