Shawn-Shan / fawkes

Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes

No effect on AWS Rekognition?

pospielov opened this issue · comments

I just downloaded two images, an original and a cloaked one, from your website and uploaded them to AWS Rekognition. The result is 100% "similarity".
Did you upload the wrong images? (I checked all of them, including the Obama set; the original and cloaked files do have different sizes.)
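For reference, this kind of check can be scripted instead of done through the console. Below is a minimal sketch using boto3's Rekognition `compare_faces` call; the file names are placeholders, and it assumes boto3 is installed and AWS credentials are configured:

```python
# Minimal sketch: score two local photos with AWS Rekognition CompareFaces.
# The file names are placeholders; boto3 and AWS credentials are assumed.

def face_similarity(source_path: str, target_path: str) -> float:
    """Return the highest CompareFaces similarity (0-100) between two images."""
    import boto3  # imported lazily so the sketch reads without AWS set up
    client = boto3.client("rekognition")
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=0,  # keep all matches so the raw score is visible
        )
    matches = response.get("FaceMatches", [])
    return max((m["Similarity"] for m in matches), default=0.0)

# usage (hypothetical files):
#   face_similarity("obama_original.jpg", "obama_cloaked.jpg")
```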

I see the same effect with my photos. In case I don't get back to this: I realized I had only run this with --mode=low. I'm currently processing with --mode=high, but it's taking a while.

OK, I processed the images myself, and this test was run with --mode=high. The alterations are visible to the naked eye, yet AWS Rekognition still thinks they are the same person.

Comparison Original to High

Then I considered that comparing a before-and-after pair isn't a realistic test: what we really need to compare are two different photos of the same person, both run through Fawkes. I did that, and the results are not good: 99.9%, 99.5%, and 99.3% similarity for low, mid, and high, respectively. I suspect these facial recognition services saw all the publicity around Fawkes and trained their networks to recognize cloaked images. Maybe generative adversarial networks would be a good next step, but I'm not an expert on this. Either way, we would need scripts to automate this kind of testing.
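The automated testing mentioned above could be sketched like this. The naming scheme (`obama_a_<mode>.png` / `obama_b_<mode>.png`) and the `compare` callable are assumptions; `compare` would wrap whatever face API is under test (for example, a Rekognition CompareFaces helper):

```python
# Sketch of a cloak-vs-cloak test harness.
# Assumptions: two cloaked photos per fawkes mode exist on disk with
# hypothetical names like obama_a_low.png / obama_b_low.png, and `compare`
# is any callable returning a 0-100 similarity score.
from pathlib import Path

MODES = ("low", "mid", "high")

def run_suite(image_dir, compare):
    """Compare the a/b cloaked pair for each fawkes mode; return {mode: score}."""
    results = {}
    for mode in MODES:
        a = Path(image_dir) / f"obama_a_{mode}.png"
        b = Path(image_dir) / f"obama_b_{mode}.png"
        results[mode] = compare(a, b)
    return results

if __name__ == "__main__":
    # Dummy comparator so the harness can be exercised without API access.
    print(run_suite("photos", lambda a, b: 0.0))
```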

Low
Comparison Obama Low
Mid
Comparison Obama Mid
High
Comparison Obama High

https://www.theregister.com/2022/03/15/research_finds_data_poisoning_cant/
This article suggests that the big players have already trained their models to resist data poisoning.
Fawkes is about done for.