Kaixhin / Autoencoders

Torch implementations of various types of autoencoders

WTA-AE

wenqingchu opened this issue · comments

Have you tested the WTA-AE on MNIST as in the NIPS paper? In my own experiments, the performance is worse than that reported in the paper.

If you're looking at Table 1, it looks like the shallow FC WTA-AE uses 2000 units and 5% sparsity. They don't provide many training details - optimiser, minibatch size, number of unsupervised epochs on MNIST, etc. In the appendix they note that they tie the weights of the FC WTA-AEs, which is not implemented here. The SVM from Table 1 seems to be trained on features from the whole MNIST dataset, but again I can't see the training details.
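
For reference, here's a rough sketch of what tying the FC WTA-AE weights could look like in Torch, assuming the 2000-unit shallow setup from Table 1. The layer names and sizes are illustrative, not this repo's actual code:

```lua
require 'nn'

-- Illustrative sketch (not the repo's implementation): a shallow FC WTA-AE
-- encoder/decoder with tied weights, as described in the paper's appendix.
local inputSize, hiddenSize = 784, 2000  -- MNIST pixels, Table 1 hidden units

local encoder = nn.Linear(inputSize, hiddenSize)
local decoder = nn.Linear(hiddenSize, inputSize)

-- Tie the decoder weight to the transpose of the encoder weight by sharing
-- storage; gradients are shared the same way, so updates from both layers
-- accumulate into the same parameters. Biases stay untied.
decoder.weight:set(encoder.weight:t())
decoder.gradWeight:set(encoder.gradWeight:t())

local model = nn.Sequential()
  :add(encoder)
  :add(nn.ReLU(true))
  -- lifetime sparsity (keep the top 5% of each unit's activations per
  -- minibatch) would be applied here before decoding
  :add(decoder)
```

Whether storage sharing like this interacts cleanly with `getParameters()` would need checking; the simpler alternative is to copy `encoder.weight:t()` into `decoder.weight` after every update.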

It's worth having a look at my code to see if there are any issues, but I wouldn't worry about replicating results exactly unless you've done a lot of hyperparameter searching.

I've now added code that visualises the decoder weights at the end of training, so it's worth checking whether you can tune training to match Figure 1 in the paper. I've tuned training a little as well, but by no means exhaustively.
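
If it helps, here's a hypothetical sketch of laying out the decoder weights as 28x28 filters for a Figure-1-style comparison; the layer name and output path are made up, not the repo's actual visualisation code:

```lua
require 'nn'
require 'image'

-- Hypothetical sketch: arrange each hidden unit's 784-dim decoder column as a
-- 28x28 image in a grid, similar in spirit to Figure 1 in the paper.
local decoder = nn.Linear(2000, 784)  -- placeholder; use the trained decoder layer
local filters = decoder.weight:t():contiguous():view(2000, 1, 28, 28)
-- Show the first 100 units in a 10x10 grid
local grid = image.toDisplayTensor{input = filters:narrow(1, 1, 100), nrow = 10, padding = 1}
image.save('decoder_weights.png', grid)
```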