AKASH2907 / deepfakes_video_classification

Deepfakes Video classification via CNN, LSTM, C3D and triplets [IWBF'20]


Layers Trainable in Extraction and Evaluation code

abacus-A opened this issue · comments

Hello, I would like to discuss the following code included in 07.evaluate_CNN_LSTM.py and 05.lstm_features.py.

Since these scripts only extract features and classify, what is the purpose of making all layers trainable (fine-tuning), given that the loaded weights were already fine-tuned in the CNN file?

```python
for layer in baseModel.layers:
    layer.trainable = True
```

Please guide me. Thank you.

In extraction and classification there's no need to keep the base model's layers trainable. Either way it has no effect: we aren't doing any backprop or updating the model's weights, we are just loading weights and doing a forward pass. You can pass False; it won't change your accuracy.

Since it has no effect, I simply copied the architecture from the training script into the evaluation script.
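To illustrate the point above, here is a minimal sketch (a hypothetical toy model, not the repo's actual CNN) showing that toggling `trainable` does not change a pure forward pass; the flag only matters when `fit()` later computes gradient updates:

```python
import numpy as np
import tensorflow as tf

# Hypothetical small model standing in for the loaded base model
base = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

x = np.random.rand(4, 8).astype("float32")
preds_trainable = base(x, training=False).numpy()

for layer in base.layers:
    layer.trainable = False  # only affects future weight updates, not inference

preds_frozen = base(x, training=False).numpy()
print(np.allclose(preds_trainable, preds_frozen))  # True: identical outputs
```

Setting `trainable = False` before extraction is still good practice, since it documents intent and avoids accidental weight updates if the model is later compiled and trained.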

Thanks for the clarification. Have you tried passing features from the CNN to the RNN using global average pooling instead of the FC1 layer on the FaceForensics++ dataset? I will try this approach later on my dataset as well; I hope it doesn't make a major difference in the accuracy results. Any suggestions will be helpful. Thank you.

I haven't tried that. You can print a summary for both approaches and compare the number of parameters; the outcome may depend on that. I don't think it will have a major impact on the results, though.