Torch7 implementation of "Unsupervised object learning from dense equivariant image labelling"
Note: I am still working on a cleaner version of the regressor training/test code,
but pretraining the network for the latent-space mapping works perfectly now. (don't worry :))
- Torch7
- thinplatespline
- python 2.7
- other torch packages (xlua, display, hdf5, image ...)
luarocks install display
luarocks install hdf5
luarocks install image
luarocks install xlua
First, download the CelebA dataset (here).
<data_path>
|-- image 1
|-- image 2
|-- image 3 ...
To train the feature extractor (CNN):
1. Change the options in `script/opts.lua` and `data/gen_tps.py`.
2. Run `th pretrain.lua`.
>> The pretrained model will be saved in `repo/pretrain/`.
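The random TPS warps produced by `data/gen_tps.py` are what drive pretraining: the network is asked to assign corresponding points of an image and its warped copy the same latent coordinate. As a rough illustration (my own NumPy sketch, not the repo's code; the actual control-grid size and jitter range in `gen_tps.py` may differ), a thin-plate-spline warp can be fitted and applied like this:

```python
import numpy as np

def tps_params(src, dst):
    """Fit thin-plate-spline coefficients mapping control points src -> dst."""
    n = src.shape[0]
    d2 = np.sum((src[:, None] - src[None, :]) ** 2, axis=-1)
    K = d2 * np.log(np.where(d2 == 0.0, 1.0, d2))   # kernel U = r^2 log r^2, 0 at r = 0
    P = np.hstack([np.ones((n, 1)), src])           # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                    # n warp + 3 affine coefficients

def tps_apply(coeffs, src, pts):
    """Warp arbitrary points pts (m, 2) with the fitted spline."""
    d2 = np.sum((pts[:, None] - src[None, :]) ** 2, axis=-1)
    U = d2 * np.log(np.where(d2 == 0.0, 1.0, d2))
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ coeffs[:-3] + P @ coeffs[-3:]

# Random smooth warp from a jittered 3x3 control grid over the unit square.
rng = np.random.RandomState(0)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 3),
                            np.linspace(0, 1, 3)), -1).reshape(-1, 2)
dst = grid + rng.uniform(-0.1, 0.1, grid.shape)
coeffs = tps_params(grid, dst)
warped = tps_apply(coeffs, grid, grid)  # TPS interpolates: warped == dst at control points
```

The same `tps_apply` can warp a dense pixel grid to resample the image, or warp sampled point pairs for the correspondence loss.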
To train the regressor (MLP):
1. Change the options in `script/opts.lua` and `data/gen_reg.lua`.
2. Run `th regtrain.lua`.
>> The trained regressor will be saved in `repo/regressor/`.
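The regressor's job is to map the unsupervised landmark detections to annotated keypoints. The repo trains an MLP; purely as an illustrative sketch (the soft-argmax read-out and the linear least-squares fit below are my own simplification, not the repo's code), the pipeline looks like:

```python
import numpy as np

def soft_argmax(heatmaps):
    """Read (x, y) coordinates out of (K, H, W) response maps
    via a softmax-weighted average of pixel locations."""
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)
    p = np.exp(flat - flat.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ys, xs = np.divmod(np.arange(H * W), W)
    return np.stack([p @ xs, p @ ys], axis=1)       # (K, 2) in pixel units

def fit_regressor(X, Y):
    """Least-squares map from detected landmarks X (N, 2K)
    to annotated landmarks Y (N, 2M), with a bias column.
    (A linear stand-in for the repo's MLP.)"""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

# A sharply peaked heatmap decodes to (approximately) its peak location.
hm = np.zeros((1, 16, 16))
hm[0, 5, 9] = 50.0                                  # peak at x = 9, y = 5
coords = soft_argmax(hm)
```

A differentiable read-out like soft-argmax is what lets the landmark coordinates be supervised end-to-end; the linear fit is only the cheapest possible regressor on top of it.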
To test the regressor (MLP):
1. Change the options in `script/opts.lua`.
2. Run `th regtest.lua`.
>> Test images with landmarks will be saved in `repo/test/`.
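The IOD error reported in the tables below is the mean landmark error normalized by the inter-ocular distance, in percent. A minimal sketch of the metric (the eye indices here are an assumption about the landmark ordering, loosely following the colour legend below):

```python
import numpy as np

def iod_error(pred, gt, left_eye=3, right_eye=4):
    """Mean Euclidean landmark error as a percentage of the ground-truth
    inter-ocular distance. pred, gt: (K, 2) arrays of (x, y) landmarks.
    The eye indices are an assumed ordering, not taken from the repo."""
    iod = np.linalg.norm(gt[left_eye] - gt[right_eye])
    return 100.0 * np.mean(np.linalg.norm(pred - gt, axis=1)) / iod

# Example: every prediction off by 1 px with eyes 10 px apart -> 10% IOD error.
gt = np.array([[2.0, 8.0], [8.0, 8.0], [5.0, 6.0], [3.0, 3.0], [13.0, 3.0]])
err = iod_error(gt + np.array([1.0, 0.0]), gt)
```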
- Red : left-mouth
- Purple : right-mouth
- Green : nose
- Blue : left-eye
- Orange : right-eye
(https://plot.ly/~stellastra666/156/)
(https://plot.ly/~stellastra666/162/)
- good case (red: prediction / green: ground truth)
- bad case
1. Original paper
| nLandmark | Regressor training set | IOD error (%) |
|---|---|---|
| 10 | CelebA | 6.32 |
| 30 | CelebA | 5.76 |
| 50 | CelebA | 5.33 |
2. My code
| nLandmark | Regressor training set | Iter (reg) | MSE | IOD error (%) |
|---|---|---|---|---|
| 100 | CelebA | 5K | 3.15 | 5.71 |
| 100 | CelebA | 50K | 3.31 | 5.67 |
| Training images | Learning iter | Training loss | MSE | IOD error (%) |
|---|---|---|---|---|
| 10 | 1K | 0.04 | 5.67 | 9.97 |
| 50 | 1K | 0.09 | 4.73 | 8.07 |
| 100 | 1K | 0.13 | 4.42 | 8.13 |
| 2000 | 2K | 0.18 | 3.38 | 6.28 |
| 5000 | 3K | 0.20 | 3.36 | 5.84 |
| 15000 | 5K | 0.21 | 3.15 | 5.71 |
| 15000 | 50K | 0.21 | 3.31 | 5.67 |
Thanks to James for kindly answering my inquiries and providing pieces of MATLAB code :)
MinchulShin / @nashory