mapooon / SelfBlendedImages

[CVPR 2022 Oral] Detecting Deepfakes with Self-Blended Images https://arxiv.org/abs/2204.08376

FF++ c23 c40 result

LOOKCC opened this issue · comments

In the paper, Section 4.4 (Cross-Manipulation Evaluation) states, “We use the raw version for evaluation as well as the competitors.” However, most CVPR 2021 and ICCV 2021 papers report FF++ results on c23 for comparison, and there are no c23 or c40 results in your paper. As we know, the raw results on FF++ are nearly 100% AUC, so comparing on the raw version is not very meaningful.

What's more, compared with FTCN and LipForensics, the "robustness to unseen perturbations" evaluation is missing from the paper, so I would like to know how robust Self-Blended Images is.

Have you ever run the code on c40 and obtained any results?

> Have you ever run the code on C40 and get any results?

I did some experiments on c23 and the results were not very satisfying to me.

Thanks for your generous sharing.

@LOOKCC Would you tell me how to test on the FF++ dataset? Should we revise the code in inference._dataset.py?
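For what it's worth, FF++ results for detectors like this are usually reported as video-level AUC: the model's per-frame fake probabilities are averaged into one score per video before computing the metric. A minimal sketch of that aggregation step (the function and variable names here are illustrative, not from this repo's code):

```python
from collections import defaultdict
from statistics import mean

def aggregate_video_scores(frame_preds):
    """Average per-frame fake probabilities into one score per video.

    frame_preds: iterable of (video_id, fake_probability) pairs,
    e.g. collected while running the model over sampled frames.
    """
    per_video = defaultdict(list)
    for video_id, prob in frame_preds:
        per_video[video_id].append(prob)
    return {vid: mean(probs) for vid, probs in per_video.items()}

# toy example: two frames each from a real and a fake video
preds = [("real_000", 0.1), ("real_000", 0.3),
         ("fake_000", 0.8), ("fake_000", 0.6)]
scores = aggregate_video_scores(preds)
```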

I conducted experiments on c40, and the generalization to other datasets yielded good results. However, the performance was quite poor in the cross-manipulation evaluation on FF++. The AUC results were approximately NT-60.67, DF-66.72, F2F-64.54, and FS-56.19.
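For anyone trying to reproduce numbers like these, the AUC in question is the standard ROC-AUC over fake-vs-real scores. A self-contained, dependency-free equivalent of `sklearn.metrics.roc_auc_score` (via the Mann-Whitney U statistic) can be handy for sanity-checking an evaluation pipeline:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that
    a randomly chosen fake sample (label 1) receives a higher score
    than a randomly chosen real one (label 0); ties count as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # perfect separation: 1.0
```

A result around 0.5 means chance-level discrimination, which is a useful baseline when cross-manipulation AUCs drop to the 56-66 range reported above.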