This repo follows the UFLDL tutorial on the linear autoencoder. The training set is no longer MNIST, but the more interesting 3-channel RGB images.

1. ============================================
Training consists of fine-tuning only; for this autoencoder we do no layer-wise pretraining. Just run the fine-tuning (~45 min) using 100k samples.

2. ==========================================
Run validate_cost_grad_func.m to check that the cost function and gradient computations are correct.

3. ==========================
With 400 iterations, the run time is ~900 s (~15 min).

4. ========================================
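The usual way to validate a cost/gradient pair, as in validate_cost_grad_func.m, is to compare the analytic gradient against a centered finite-difference approximation. A minimal sketch of that idea is below; the function name `check_gradient` and the handle `costFunc` are illustrative, not the repo's actual API.

```matlab
% Sketch of a numerical gradient check (names here are hypothetical,
% not the signatures used in validate_cost_grad_func.m).
function diff = check_gradient(costFunc, theta)
  % costFunc: handle returning [cost, grad] at a parameter vector theta
  epsilon = 1e-4;
  numgrad = zeros(size(theta));
  for i = 1:numel(theta)
    e = zeros(size(theta));
    e(i) = epsilon;
    % centered difference: (J(theta+e) - J(theta-e)) / (2*eps)
    numgrad(i) = (costFunc(theta + e) - costFunc(theta - e)) / (2 * epsilon);
  end
  [~, grad] = costFunc(theta);
  % relative difference; a correct gradient typically gives ~1e-9 or smaller
  diff = norm(numgrad - grad) / norm(numgrad + grad);
end
```

For a quick sanity check, pass in a simple cost with a known gradient, e.g. `costFunc = @(t) deal(0.5 * (t' * t), t);` the reported difference should be near machine precision.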