As outlined in Szegedy et al. (2014), an adversarial example for an image classification neural network is defined as "an imperceptible non-random perturbation to a test image" that changes the network's prediction.
This project entails the following:
1 - The creation of an adversarial attack in which my own images (the me set) are perturbed to resemble the target (in our case, Emmanuel Macron).
2 - The creation of a face mask that directly mimics the target's face. Several mask colors and intensity levels were used, producing a diverse set of face-mask variations and corresponding attacks (see the sketch below).
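As an illustration, color/intensity variants of a mask image can be generated like this; the file name target_mask.png and the scaling factors are assumptions for the sketch, not values from the project:

```python
# Sketch: generate color/intensity variants of a face-mask image.
# "target_mask.png" and the factors below are illustrative assumptions.
import torch
from torchvision import transforms
from PIL import Image

mask = transforms.ToTensor()(Image.open("target_mask.png").convert("RGB"))  # (3, H, W) in [0, 1]

variants = []
for intensity in (0.6, 0.8, 1.0, 1.2):            # darker to brighter versions
    for tint in (torch.tensor([1.0, 1.0, 1.0]),   # neutral
                 torch.tensor([1.1, 0.9, 0.9]),   # reddish
                 torch.tensor([0.9, 0.9, 1.1])):  # bluish
        v = (mask * intensity * tint.view(3, 1, 1)).clamp(0.0, 1.0)
        variants.append(v)
```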
Six adversarial attacks from the AdverTorch library (https://advertorch-test.readthedocs.io/en/latest/advertorch/attacks.html) were used; an instantiation sketch follows the list.
1 - MomentumIterativeAttack(Resnet - ImageNet)
2 - SparseL1DescentAttack(Resnet - ImageNet)
3 - PGDAttack(Resnet - ImageNet)
4 - MomentumIterativeAttack(Resnet - casia-webface)
5 - SparseL1DescentAttack(Resnet - casia-webface)
6 - PGDAttack(Resnet - casia-webface)
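A minimal sketch of instantiating the three attack classes against the ResNet/ImageNet backbone; the hyperparameters shown are illustrative, not the values used in the experiments:

```python
# Sketch: the three AdverTorch attacks on a ResNet trained on ImageNet.
import torch.nn as nn
from torchvision.models import resnet50
from advertorch.attacks import (MomentumIterativeAttack,
                                SparseL1DescentAttack, PGDAttack)

model = resnet50(pretrained=True).eval()  # ResNet / ImageNet backbone

loss_fn = nn.CrossEntropyLoss(reduction="sum")
attacks = [
    MomentumIterativeAttack(model, loss_fn=loss_fn,
                            eps=0.3, nb_iter=40, eps_iter=0.01, targeted=True),
    SparseL1DescentAttack(model, loss_fn=loss_fn,
                          eps=10.0, nb_iter=40, eps_iter=0.5, targeted=True),
    PGDAttack(model, loss_fn=loss_fn,
              eps=0.3, nb_iter=40, eps_iter=0.01, targeted=True),
]
# With targeted=True, perturb(images, y) pushes images toward the target labels y:
# adv_images = attacks[0].perturb(images, target_labels)
```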
Adversarial attacks were created against two face recognition models (face-mask resource: https://arxiv.org/pdf/2111.10759.pdf); a loading sketch follows the list.
1. Resnet using ImageNet dataset
2. Resnet using casia-webface dataset
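A minimal loading sketch, assuming the casia-webface backbone comes from the facenet-pytorch package; the repository may use a different checkpoint:

```python
# Sketch: load the two backbones the attacks target.
from torchvision.models import resnet50
from facenet_pytorch import InceptionResnetV1

imagenet_model = resnet50(pretrained=True).eval()                   # ResNet / ImageNet
casia_model = InceptionResnetV1(pretrained="casia-webface").eval()  # ResNet / casia-webface
```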
A loss function was defined to compute the distance between the set of me images and the set of target images; a minimal sketch is shown below.
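The sketch assumes embed is a model mapping a batch of images to embeddings and uses the mean target embedding as the anchor; the project's exact loss may differ:

```python
# Sketch: distance loss between "me" embeddings and the target identity.
import torch
import torch.nn.functional as F

def embedding_loss(embed, me_images, target_images):
    me_emb = F.normalize(embed(me_images), dim=1)          # (N, D)
    target_emb = F.normalize(embed(target_images), dim=1)  # (M, D)
    target_center = target_emb.mean(dim=0, keepdim=True)   # average target embedding
    # Mean squared L2 distance to the target centre; minimising it pushes
    # the perturbed images toward the target identity.
    return ((me_emb - target_center) ** 2).sum(dim=1).mean()
```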
Note: To request access to the shared file: https://drive.google.com/file/d/1lSMbi6lwSk54wdDds9QYY8Gyw47snHBK/view?usp=sharing
- You can easily create your own me folder from a set of your own images.
- You can make use of the provided target folder to experiment.
Define the mask as the variable to optimize, then perform gradient descent on that variable, which already has the right shape; a minimal sketch is shown below.
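The sketch reuses embedding_loss and casia_model from the sketches above; the random tensors, learning rate, iteration count, and perturbation budget are illustrative assumptions:

```python
# Sketch: optimise a mask variable by gradient descent.
import torch

# Illustrative tensors; real images would be loaded from the me/target folders.
me_images = torch.rand(8, 3, 224, 224)
target_images = torch.rand(8, 3, 224, 224)

mask = torch.zeros(3, 224, 224, requires_grad=True)  # the variable to optimise, already the right shape
optimizer = torch.optim.Adam([mask], lr=0.01)

for step in range(200):
    optimizer.zero_grad()
    masked = (me_images + mask).clamp(0.0, 1.0)                # overlay the mask on every image
    loss = embedding_loss(casia_model, masked, target_images)  # loss from the sketch above
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        mask.clamp_(-0.3, 0.3)  # keep the perturbation within a budget
```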