mapooon / SelfBlendedImages

[CVPR 2022 Oral] Detecting Deepfakes with Self-Blended Images https://arxiv.org/abs/2204.08376


Training valid AUC too low

dandelion915 opened this issue · comments

Thanks for your brilliant work! I followed your video processing code and trained the model with batch_size=28 on two 1080 Ti GPUs for nearly 80 epochs, but the validation loss is still very high (about 3.8) and the validation accuracy is stuck at 0.53. The training log is below. I tried three times from scratch, but the results were similarly bad each time. On my laptop (a single 1060), memory limits forced a batch size of 4; there the validation accuracy climbs above 90, but the cross-dataset test on CDF reaches only 66% AUC, far from the 93% of your released pre-trained model. Can I ask what the main problem is here, and how I can get a better result?

Epoch 1/100 | train loss: 0.6957, train acc: 0.5051, val loss: 0.6945, val acc: 0.4991, val auc: 0.4983
Epoch 2/100 | train loss: 0.6949, train acc: 0.5020, val loss: 0.6938, val acc: 0.5055, val auc: 0.5062
Epoch 3/100 | train loss: 0.6939, train acc: 0.5061, val loss: 0.6905, val acc: 0.5345, val auc: 0.5479
Epoch 4/100 | train loss: 0.6931, train acc: 0.5092, val loss: 0.6911, val acc: 0.5195, val auc: 0.5345
Epoch 5/100 | train loss: 0.6929, train acc: 0.5121, val loss: 0.6911, val acc: 0.5256, val auc: 0.5413
Epoch 6/100 | train loss: 0.6927, train acc: 0.5152, val loss: 0.6899, val acc: 0.5389, val auc: 0.5537
Epoch 7/100 | train loss: 0.6922, train acc: 0.5182, val loss: 0.6886, val acc: 0.5470, val auc: 0.5739
Epoch 8/100 | train loss: 0.6918, train acc: 0.5210, val loss: 0.6892, val acc: 0.5391, val auc: 0.5636
Epoch 9/100 | train loss: 0.6912, train acc: 0.5286, val loss: 0.6890, val acc: 0.5345, val auc: 0.5636
Epoch 10/100 | train loss: 0.6906, train acc: 0.5307, val loss: 0.6897, val acc: 0.5330, val auc: 0.5577
Epoch 11/100 | train loss: 0.6906, train acc: 0.5286, val loss: 0.6888, val acc: 0.5459, val auc: 0.5651
Epoch 12/100 | train loss: 0.6894, train acc: 0.5396, val loss: 0.6880, val acc: 0.5488, val auc: 0.5694
Epoch 13/100 | train loss: 0.6883, train acc: 0.5418, val loss: 0.6875, val acc: 0.5478, val auc: 0.5714
Epoch 14/100 | train loss: 0.6872, train acc: 0.5494, val loss: 0.6857, val acc: 0.5651, val auc: 0.5857
Epoch 15/100 | train loss: 0.6859, train acc: 0.5514, val loss: 0.6844, val acc: 0.5600, val auc: 0.5864
Epoch 16/100 | train loss: 0.6829, train acc: 0.5643, val loss: 0.6815, val acc: 0.5787, val auc: 0.6059
Epoch 17/100 | train loss: 0.6788, train acc: 0.5791, val loss: 0.6808, val acc: 0.5748, val auc: 0.6029
Epoch 18/100 | train loss: 0.6709, train acc: 0.6011, val loss: 0.6721, val acc: 0.5950, val auc: 0.6340
Epoch 19/100 | train loss: 0.6565, train acc: 0.6383, val loss: 0.6596, val acc: 0.6409, val auc: 0.6729
Epoch 20/100 | train loss: 0.6043, train acc: 0.7358, val loss: 0.6487, val acc: 0.6480, val auc: 0.6844
Epoch 21/100 | train loss: 0.4083, train acc: 0.8870, val loss: 0.6878, val acc: 0.6033, val auc: 0.7112
Epoch 22/100 | train loss: 0.1422, train acc: 0.9700, val loss: 1.0582, val acc: 0.5720, val auc: 0.6615
Epoch 23/100 | train loss: 0.0659, train acc: 0.9852, val loss: 1.4780, val acc: 0.5259, val auc: 0.6610
Epoch 24/100 | train loss: 0.0353, train acc: 0.9918, val loss: 1.9196, val acc: 0.5036, val auc: 0.5579
Epoch 25/100 | train loss: 0.0349, train acc: 0.9921, val loss: 2.0033, val acc: 0.5147, val auc: 0.6118
Epoch 26/100 | train loss: 0.0154, train acc: 0.9982, val loss: 2.2417, val acc: 0.5201, val auc: 0.5975
Epoch 27/100 | train loss: 0.0135, train acc: 0.9975, val loss: 2.7043, val acc: 0.5063, val auc: 0.5585
Epoch 28/100 | train loss: 0.0102, train acc: 0.9988, val loss: 2.6171, val acc: 0.5312, val auc: 0.5991
Epoch 29/100 | train loss: 0.0094, train acc: 0.9978, val loss: 2.8837, val acc: 0.5129, val auc: 0.5996
Epoch 30/100 | train loss: 0.0079, train acc: 0.9987, val loss: 3.0676, val acc: 0.5196, val auc: 0.5721
Epoch 31/100 | train loss: 0.0086, train acc: 0.9970, val loss: 3.2782, val acc: 0.5121, val auc: 0.5628
Epoch 32/100 | train loss: 0.0055, train acc: 0.9986, val loss: 3.3657, val acc: 0.5152, val auc: 0.5466
Epoch 33/100 | train loss: 0.0046, train acc: 0.9992, val loss: 3.3215, val acc: 0.5300, val auc: 0.5612
Epoch 34/100 | train loss: 0.0034, train acc: 0.9990, val loss: 3.5200, val acc: 0.5254, val auc: 0.5206
Epoch 35/100 | train loss: 0.0026, train acc: 0.9995, val loss: 3.8392, val acc: 0.5340, val auc: 0.5411
Epoch 36/100 | train loss: 0.0083, train acc: 0.9968, val loss: 3.8063, val acc: 0.5348, val auc: 0.5629
Epoch 37/100 | train loss: 0.0021, train acc: 0.9999, val loss: 4.0034, val acc: 0.5317, val auc: 0.5678
Epoch 38/100 | train loss: 0.0084, train acc: 0.9968, val loss: 4.0282, val acc: 0.5080, val auc: 0.5787
Epoch 39/100 | train loss: 0.0024, train acc: 1.0000, val loss: 4.4494, val acc: 0.5214, val auc: 0.5455
Epoch 40/100 | train loss: 0.0022, train acc: 0.9998, val loss: 4.4703, val acc: 0.5241, val auc: 0.5653
Epoch 41/100 | train loss: 0.0017, train acc: 0.9998, val loss: 4.7169, val acc: 0.5313, val auc: 0.5503
Epoch 42/100 | train loss: 0.0040, train acc: 0.9986, val loss: 4.8927, val acc: 0.5358, val auc: 0.5620
Epoch 43/100 | train loss: 0.0024, train acc: 0.9993, val loss: 4.4155, val acc: 0.5335, val auc: 0.5910
Epoch 44/100 | train loss: 0.0033, train acc: 0.9985, val loss: 5.8817, val acc: 0.5286, val auc: 0.5152
Epoch 45/100 | train loss: 0.0063, train acc: 0.9988, val loss: 5.2544, val acc: 0.5322, val auc: 0.5107
Epoch 46/100 | train loss: 0.0035, train acc: 0.9987, val loss: 5.5359, val acc: 0.5273, val auc: 0.5316
Epoch 47/100 | train loss: 0.0014, train acc: 0.9998, val loss: 5.3060, val acc: 0.5277, val auc: 0.5540
Epoch 48/100 | train loss: 0.0009, train acc: 1.0000, val loss: 5.3945, val acc: 0.5143, val auc: 0.5317
Epoch 49/100 | train loss: 0.0006, train acc: 1.0000, val loss: 5.8465, val acc: 0.5263, val auc: 0.5461
Epoch 50/100 | train loss: 0.0006, train acc: 1.0000, val loss: 5.1220, val acc: 0.5559, val auc: 0.5769
Epoch 51/100 | train loss: 0.0046, train acc: 0.9988, val loss: 4.3058, val acc: 0.5523, val auc: 0.5456
Epoch 52/100 | train loss: 0.0008, train acc: 1.0000, val loss: 4.6946, val acc: 0.5429, val auc: 0.5244
Epoch 53/100 | train loss: 0.0005, train acc: 1.0000, val loss: 4.9155, val acc: 0.5375, val auc: 0.5274
Epoch 54/100 | train loss: 0.0008, train acc: 1.0000, val loss: 5.2772, val acc: 0.5273, val auc: 0.5283
Epoch 55/100 | train loss: 0.0010, train acc: 0.9999, val loss: 5.0639, val acc: 0.5277, val auc: 0.5495
Epoch 56/100 | train loss: 0.0011, train acc: 0.9999, val loss: 4.7789, val acc: 0.5344, val auc: 0.5525
Epoch 57/100 | train loss: 0.0008, train acc: 1.0000, val loss: 4.4146, val acc: 0.5406, val auc: 0.5842
Epoch 58/100 | train loss: 0.0005, train acc: 1.0000, val loss: 5.5026, val acc: 0.5210, val auc: 0.5553
Epoch 59/100 | train loss: 0.0019, train acc: 0.9994, val loss: 5.3561, val acc: 0.5694, val auc: 0.5495
Epoch 60/100 | train loss: 0.0052, train acc: 0.9984, val loss: 8.7304, val acc: 0.5268, val auc: 0.5271
Epoch 61/100 | train loss: 0.0008, train acc: 1.0000, val loss: 8.1215, val acc: 0.5165, val auc: 0.5369
Epoch 62/100 | train loss: 0.0026, train acc: 0.9989, val loss: 8.4821, val acc: 0.5281, val auc: 0.5232
Epoch 63/100 | train loss: 0.0025, train acc: 0.9991, val loss: 7.9619, val acc: 0.5335, val auc: 0.5330
Epoch 64/100 | train loss: 0.0021, train acc: 0.9990, val loss: 4.6351, val acc: 0.5040, val auc: 0.5348
Epoch 65/100 | train loss: 0.0006, train acc: 0.9999, val loss: 4.4135, val acc: 0.5098, val auc: 0.5105
Epoch 66/100 | train loss: 0.0006, train acc: 0.9998, val loss: 4.6362, val acc: 0.5013, val auc: 0.5327
Epoch 67/100 | train loss: 0.0006, train acc: 0.9999, val loss: 4.1294, val acc: 0.5264, val auc: 0.5434
Epoch 68/100 | train loss: 0.0005, train acc: 1.0000, val loss: 4.2482, val acc: 0.5357, val auc: 0.5387
Epoch 69/100 | train loss: 0.0004, train acc: 1.0000, val loss: 4.5430, val acc: 0.5219, val auc: 0.5367
Epoch 70/100 | train loss: 0.0004, train acc: 1.0000, val loss: 4.7145, val acc: 0.5143, val auc: 0.5129
Epoch 71/100 | train loss: 0.0006, train acc: 0.9999, val loss: 4.8528, val acc: 0.5183, val auc: 0.5158
Epoch 72/100 | train loss: 0.0002, train acc: 1.0000, val loss: 4.9054, val acc: 0.5183, val auc: 0.5200
Epoch 73/100 | train loss: 0.0002, train acc: 1.0000, val loss: 4.9477, val acc: 0.5184, val auc: 0.5129
Epoch 74/100 | train loss: 0.0003, train acc: 1.0000, val loss: 5.0406, val acc: 0.5201, val auc: 0.5104
Epoch 75/100 | train loss: 0.0008, train acc: 0.9997, val loss: 5.0495, val acc: 0.5188, val auc: 0.5066
Epoch 76/100 | train loss: 0.0003, train acc: 1.0000, val loss: 5.5569, val acc: 0.5282, val auc: 0.5319
Epoch 77/100 | train loss: 0.0003, train acc: 1.0000, val loss: 5.0770, val acc: 0.5326, val auc: 0.5362

Thank you for your interest in our work!
Our code does not currently support multiple GPUs, so it may cause problems.
Please modify the code with synchronized batch norm to support it.
I will work on that, but I can't give you a definite date when that will be.
Also, a small batch size is expected to give poor results.
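For reference, here is a minimal sketch of that modification in plain PyTorch, assuming a standard DistributedDataParallel setup; this is not the repository's official multi-GPU code, and the helper name wrap_for_multi_gpu is hypothetical:

```python
# Hypothetical sketch: convert BatchNorm layers to SyncBatchNorm before wrapping
# the model in DistributedDataParallel, so batch statistics are synchronized
# across GPUs instead of being computed per device.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_for_multi_gpu(model, local_rank):
    model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = model.cuda(local_rank)
    # One process per GPU; torch.distributed must already be initialized,
    # e.g. via torchrun and dist.init_process_group(backend="nccl").
    return DDP(model, device_ids=[local_rank])
```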

Me too. I followed the preprocessing procedure and extracted RetinaFace crops from FF++, then trained the network on an A100 following the training instructions. The only thing I changed was the input size (380 -> 224). The cross-dataset AUC on CDF is only 76.62; the training log is below. I have also trained at size 380 and got 78.94 AUC on CDF (taking the latest checkpoint as the final one), still far from the reported results. We also tested the provided checkpoint sbi.tar and got 93+ AUC on CDF, which indicates that our evaluation procedure is correct. What really confuses us is the training procedure. Can you provide any help on this?
Epoch 1/100 | train loss: 0.6846, train acc: 0.5465, val loss: 0.6709, val acc: 0.6066, val auc: 0.6650
Epoch 2/100 | train loss: 0.6501, train acc: 0.6222, val loss: 0.6317, val acc: 0.6681, val auc: 0.7428
Epoch 3/100 | train loss: 0.6258, train acc: 0.6694, val loss: 0.5935, val acc: 0.7205, val auc: 0.8004
Epoch 4/100 | train loss: 0.6002, train acc: 0.6829, val loss: 0.5504, val acc: 0.7622, val auc: 0.8324
Epoch 5/100 | train loss: 0.5533, train acc: 0.7315, val loss: 0.4909, val acc: 0.7983, val auc: 0.8735
Epoch 6/100 | train loss: 0.4990, train acc: 0.7784, val loss: 0.4470, val acc: 0.8288, val auc: 0.8941
Epoch 7/100 | train loss: 0.4585, train acc: 0.8004, val loss: 0.4122, val acc: 0.8184, val auc: 0.8994
Epoch 8/100 | train loss: 0.4183, train acc: 0.8150, val loss: 0.3443, val acc: 0.8618, val auc: 0.9345
Epoch 9/100 | train loss: 0.3769, train acc: 0.8434, val loss: 0.3224, val acc: 0.8750, val auc: 0.9399
Epoch 10/100 | train loss: 0.3602, train acc: 0.8473, val loss: 0.3099, val acc: 0.8858, val auc: 0.9392
Epoch 11/100 | train loss: 0.3273, train acc: 0.8622, val loss: 0.2743, val acc: 0.8986, val auc: 0.9545
Epoch 12/100 | train loss: 0.3086, train acc: 0.8739, val loss: 0.2738, val acc: 0.9028, val auc: 0.9517
Epoch 13/100 | train loss: 0.2929, train acc: 0.8803, val loss: 0.2565, val acc: 0.8924, val auc: 0.9582
Epoch 14/100 | train loss: 0.2843, train acc: 0.8810, val loss: 0.2370, val acc: 0.9073, val auc: 0.9693
Epoch 15/100 | train loss: 0.2671, train acc: 0.8917, val loss: 0.1967, val acc: 0.9222, val auc: 0.9781
Epoch 16/100 | train loss: 0.2433, train acc: 0.9105, val loss: 0.2363, val acc: 0.8937, val auc: 0.9661
Epoch 17/100 | train loss: 0.2394, train acc: 0.9048, val loss: 0.2278, val acc: 0.9101, val auc: 0.9682
Epoch 18/100 | train loss: 0.2395, train acc: 0.9041, val loss: 0.2455, val acc: 0.8986, val auc: 0.9575
Epoch 19/100 | train loss: 0.2315, train acc: 0.9130, val loss: 0.1784, val acc: 0.9378, val auc: 0.9815
Epoch 20/100 | train loss: 0.2228, train acc: 0.9158, val loss: 0.2006, val acc: 0.9181, val auc: 0.9761
Epoch 21/100 | train loss: 0.2183, train acc: 0.9144, val loss: 0.1781, val acc: 0.9365, val auc: 0.9815
Epoch 22/100 | train loss: 0.2174, train acc: 0.9151, val loss: 0.1532, val acc: 0.9451, val auc: 0.9858
Epoch 23/100 | train loss: 0.2128, train acc: 0.9169, val loss: 0.2064, val acc: 0.9233, val auc: 0.9717
Epoch 24/100 | train loss: 0.2209, train acc: 0.9212, val loss: 0.1771, val acc: 0.9337, val auc: 0.9810
Epoch 25/100 | train loss: 0.1960, train acc: 0.9336, val loss: 0.1772, val acc: 0.9431, val auc: 0.9773
Epoch 26/100 | train loss: 0.1940, train acc: 0.9318, val loss: 0.1883, val acc: 0.9233, val auc: 0.9752
Epoch 27/100 | train loss: 0.1922, train acc: 0.9290, val loss: 0.2072, val acc: 0.9125, val auc: 0.9716
Epoch 28/100 | train loss: 0.1923, train acc: 0.9318, val loss: 0.1836, val acc: 0.9316, val auc: 0.9765
Epoch 29/100 | train loss: 0.1842, train acc: 0.9350, val loss: 0.1633, val acc: 0.9354, val auc: 0.9809
Epoch 30/100 | train loss: 0.1909, train acc: 0.9244, val loss: 0.1572, val acc: 0.9458, val auc: 0.9839
Epoch 31/100 | train loss: 0.1734, train acc: 0.9375, val loss: 0.1675, val acc: 0.9372, val auc: 0.9809
Epoch 32/100 | train loss: 0.1701, train acc: 0.9379, val loss: 0.1518, val acc: 0.9413, val auc: 0.9856
Epoch 33/100 | train loss: 0.1770, train acc: 0.9343, val loss: 0.1769, val acc: 0.9285, val auc: 0.9803
Epoch 34/100 | train loss: 0.1672, train acc: 0.9389, val loss: 0.1638, val acc: 0.9417, val auc: 0.9809
Epoch 35/100 | train loss: 0.1675, train acc: 0.9379, val loss: 0.1600, val acc: 0.9347, val auc: 0.9823
Epoch 36/100 | train loss: 0.1547, train acc: 0.9418, val loss: 0.1247, val acc: 0.9604, val auc: 0.9892
Epoch 37/100 | train loss: 0.1674, train acc: 0.9418, val loss: 0.1581, val acc: 0.9417, val auc: 0.9863
Epoch 38/100 | train loss: 0.1590, train acc: 0.9446, val loss: 0.1733, val acc: 0.9406, val auc: 0.9742
Epoch 39/100 | train loss: 0.1632, train acc: 0.9414, val loss: 0.1443, val acc: 0.9413, val auc: 0.9884
Epoch 40/100 | train loss: 0.1606, train acc: 0.9414, val loss: 0.1457, val acc: 0.9424, val auc: 0.9846

Me too. I also trained with batch size 32; the AUC on CDF is 78.94.

Thank you for your interest in our work! Our code does not currently support multiple GPUs, so it may cause problems. Please modify the code with synchronized batch norm to support it. I will work on that, but I can't give you a definite date when that will be. Also, a small batch size is expected to give poor results.

Thank you for your tips! I followed your advice and trained on a single 3090 for about 250 epochs with the default settings, and the validation AUC reaches 99+, but the test AUC on CDF is merely 84, still far from 93. Could you tell me what I can do to further improve the cross-dataset test result?

I started again by cloning this repository and trained a model.
I got the same checkpoint as the one I distribute, so there are no problems in the code.
To reproduce the experimental results, please follow the instructions strictly, including installing the optional landmark augmentation.

I started again by cloning this repository and trained a model. I got the same checkpoint as the one I distribute, so there are no problems in the code. To reproduce the experimental results, please follow the instructions strictly, including installing the optional landmark augmentation.

Thanks for your advice. I included the landmark augmentation and actually achieved CDF AUC of 93.7! Remarkable work!

Thank you for your interest in our work! Our code does not currently support multiple GPUs, so it may cause problems. Please modify the code with synchronized batch norm to support it. I will work on that, but I can't give you a definite date when that will be. Also, a small batch size is expected to give poor results.

I trained the model according to the official guide on two RTX 3090s, but got only 0.8215 AUC on CDF. Maybe there is a problem with my training setup; do you have any plans to release an official multi-GPU training version?

Thank you so much for the great code accompanying the article; I have rarely seen code that runs immediately for both inference and training. However, after training I cannot obtain the same result on Celeb-DF as the provided checkpoint (0.9381 -> 0.8983).
What I've done:

  • double-checked the training videos by hashes against what I had;
  • saved the last checkpoint, even if it is not in the top 5;
  • restarted training with the provided Docker image;
  • added the landmark augmentation;

On the FFIW dataset, the decrease in accuracy is smaller (0.8466 -> 0.8376).
I think the issue may be the declared batch size (32 in both the article and the config), because in the training code the batch size passed to the dataloader is divided by 2 (link).
Maybe you halved it to fit into an A100 40GB, while your released model was trained on an A100 80GB?
During training I also occasionally got an OpenCV error when resizing an image because img is empty:
OpenCV(4.5.4) /tmp/pip-req-build-khv2fx3p/opencv/modules/imgproc/src/resize.cpp:4051: error: (-215:Assertion failed) !ssize.empty() in function 'resize'. Did you have something similar?
I would appreciate any hint on how to solve this problem.

@jiaming-lee Sorry, I have no plan to release a multi-GPU version for now.

@AlexanderParkin
The actual number of images in a batch is twice the batch_size set in the DataLoader, because each batch is composed of real images and their SBIs. That is why we set batch_size=32/2.
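As a rough illustration of that pairing (a sketch under the assumption that each dataset item returns a (real, SBI) tensor pair; this is not the repository's exact SBI_Dataset code), a collate function can stack both halves so that a DataLoader batch_size of 16 still yields 32 images per step:

```python
# Hypothetical sketch of why the effective batch is 2x the DataLoader batch_size:
# each dataset item contributes a real image and its self-blended counterpart.
import torch

def collate_real_and_sbi(batch):
    reals = torch.stack([item[0] for item in batch])  # real images, label 0
    fakes = torch.stack([item[1] for item in batch])  # self-blended images, label 1
    imgs = torch.cat([reals, fakes], dim=0)           # 2 * batch_size images
    labels = torch.cat([torch.zeros(len(batch)), torch.ones(len(batch))]).long()
    return imgs, labels

# DataLoader(dataset, batch_size=32 // 2, collate_fn=collate_real_and_sbi)
# therefore feeds 32 images per training step, matching the batch size in the paper.
```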
The OpenCV error also appears in my training, so I don't think it has anything to do with the performance degradation.
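If the crash itself is a nuisance, one possible defensive check (just a sketch, not part of the official pipeline; load_and_resize is a hypothetical helper) is to skip frames that fail to load before calling cv2.resize:

```python
# Hypothetical guard against "(-215:Assertion failed) !ssize.empty()" in cv2.resize:
# skip frames that could not be read instead of resizing an empty image.
import cv2

def load_and_resize(path, size=(380, 380)):
    img = cv2.imread(path)
    if img is None or img.size == 0:
        # cv2.imread returns None for missing or corrupt files; resizing would crash.
        return None
    return cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
```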

I started again by cloning this repository and trained a model. I got the same checkpoint as the one I distribute, so there are no problems in the code. To reproduce the experimental results, please follow the instructions strictly, including installing the optional landmark augmentation.

Thanks for your advice. I included the landmark augmentation and actually achieved CDF AUC of 93.7! Remarkable work!

Hi dandelion915, can you explain how you included the landmark augmentation in the code? I have put the repo inside the mentioned folder, but how do you use it inside the SBI training code?

Landmark augmentation issue resolved.

Thanks to an enthusiastic collaborator, we found a bug in crop_dlib_ff.py. We have just fixed it, so please try again from the preprocessing step onward.