prajwalsingh / EEGStyleGAN-ADA

PyTorch code for the paper "Learning Robust Deep Visual Representations from EEG Brain Recordings". [WACV 2024]


About EEG2Feat

7ASSEL opened this issue

Hello,
I was training EEG2Feat-LSTM on the CVPR40 5_95Hz dataset. After 1000+ epochs, the k-means accuracy on the training set was close to 1.0, but the k-means accuracy on the validation set was only around 0.14.
Should I change the network or the hyperparameters, or should I switch to the raw CVPR40 dataset to reproduce the 0.9+ validation result?
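
For reference, the k-means accuracy discussed in this thread is usually computed by clustering the extracted features and matching clusters to ground-truth classes. Below is a minimal sketch of that metric; the function name and the Hungarian-matching step are assumptions for illustration, not the repository's exact evaluation code.

```python
# Minimal sketch of k-means clustering accuracy over extracted EEG features.
# NOTE: the Hungarian matching of clusters to classes is an assumption of how
# this metric is commonly computed, not necessarily the repo's implementation.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment


def kmeans_accuracy(features: np.ndarray, labels: np.ndarray, n_classes: int, seed: int = 0) -> float:
    """Cluster features with k-means, map clusters to classes with the
    Hungarian algorithm, and return the resulting classification accuracy."""
    preds = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(features)

    # Confusion matrix between predicted cluster ids and ground-truth labels.
    cost = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, y in zip(preds, labels):
        cost[p, y] += 1

    # Hungarian matching maximizes the number of correctly assigned samples.
    row_ind, col_ind = linear_sum_assignment(cost, maximize=True)
    return cost[row_ind, col_ind].sum() / len(labels)


# Example usage (hypothetical arrays): feats_val is an (N, D) feature matrix
# from the LSTM encoder and y_val holds class labels 0..39 for CVPR40.
# acc = kmeans_accuracy(feats_val, y_val, n_classes=40)
```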

Hi @7ASSEL, the triplet-loss-based LSTM feature extraction network works on the raw data only, so you have to use the raw dataset to reach a validation accuracy of 90%+.
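
For context, a minimal sketch of a triplet-loss training step for an LSTM EEG encoder is shown below. The layer sizes, sequence length, and the way (anchor, positive, negative) triplets are formed are placeholders, not the repository's exact architecture or sampling pipeline.

```python
# Sketch of one triplet-loss update for an LSTM feature extractor on raw EEG.
# Shapes and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn


class LSTMEncoder(nn.Module):
    def __init__(self, n_channels: int = 128, hidden: int = 128, feat_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, feat_dim)

    def forward(self, x):  # x: (batch, time, channels) raw EEG
        _, (h, _) = self.lstm(x)
        return nn.functional.normalize(self.proj(h[-1]), dim=-1)


encoder = LSTMEncoder()
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Anchor and positive come from the same class, negative from a different one
# (random tensors stand in for real EEG batches here).
anchor = torch.randn(8, 440, 128)
positive = torch.randn(8, 440, 128)
negative = torch.randn(8, 440, 128)

loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```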

Thanks a lot! It really worked.
Besides the EEGClip experiment on the filtered 5-95Hz dataset mentioned in your paper, is there any other use for the 5-95Hz dataset?

@7ASSEL, other than that experiment, we haven't used it for anything else. At present we are accumulating different EEG-Image datasets for diversity.