DmitryUlyanov / texture_nets

Code for "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images" paper.

Modified models/johnson.lua's network structure and got wrong results

luweishuang opened this issue · comments

commented

I tried to modify models/johnson.lua's network structure, aiming to reduce memory usage and shorten running time, as follows:

model:add(pad(4, 4, 4, 4))
model:add(backend.SpatialConvolution(3, 32, 9, 9, 2, 2, 0, 0)) -- stride 2 to downsample early; the original uses stride 1
model:add(normalization(32))
model:add(nn.ReLU(true))
..........
..........
model:add(nn.SpatialFullConvolution(32, 3, 3, 3, 2, 2, 1, 1, 1, 1)) -- upsamples back to full resolution; the original layer here is a SpatialConvolution
model:add(normalization(3))
model:add(nn.ReLU(true))

return model:add(nn.TVLoss(params.tv_weight))

The training dataset is COCO, with image_size=512, style_size=512, content_weight=10, style_weight=10, tv_weight=0.001. I also tried other content_weight and style_weight values, but it made no significant difference. Here is my result image for tubingen_kandinsky.jpg:
[result image: tubingen512_kandinsky_512_512_w5010_9000_norgrad]

commented

I found the error: the last two lines in models/johnson.lua, "model:add(normalization(3))" and "model:add(nn.ReLU(true))", need to be deleted. But I still want to know how to reduce memory usage and shorten running time. The net occupies nearly 2.1 GB of memory on a 640*480 image, so I think it can't run on an iOS or Android phone.
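That deletion makes sense: normalization(3) re-centers each color channel and nn.ReLU(true) zeroes every negative value, so applying them after the last layer corrupts the RGB output. A minimal sketch of the corrected tail, assuming the layers shown above:

-- corrected ending (sketch): the last learned layer already emits the 3-channel image
model:add(nn.SpatialFullConvolution(32, 3, 3, 3, 2, 2, 1, 1, 1, 1))
-- no normalization(3) / nn.ReLU(true) here: they would re-center and clamp the output

return model:add(nn.TVLoss(params.tv_weight))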

If you erase intermediate results as you go through the net during the forward pass, it will use much less memory.
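A rough sketch of that idea, using a hypothetical helper (lean_forward is not part of the repo), assuming an nn.Sequential model and inference only, since backward would need the freed buffers:

local function lean_forward(model, input)
  local current = input
  for i = 1, #model.modules do
    current = model.modules[i]:updateOutput(current)
    if i > 1 then
      -- drop the previous layer's cached output; `current` still references
      -- the live tensor, so in-place modules like nn.ReLU(true) stay safe
      model.modules[i - 1].output = model.modules[i - 1].output.new()
      collectgarbage() -- release the storage now, to lower peak memory
    end
  end
  return current
end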

commented

I'm not very familiar with Torch. My understanding is that after one "out = model:forward(input)", every layer's output has been saved, just like "pool3_feat = model.blobs('pool3').get_data()" in Caffe. So to erase intermediate results, I would have to modify Torch's source code and release them manually. I saw this in MXNet: "In symbolic programming, user declares the need by f=compiled([D]) instead. It also declares the boundary of computation, telling the system I only want to compute the forward pass. As a result, the system can free the memory of previous results, and share the memory between inputs and outputs." Does Torch have something like "f=compiled([D])" to clear the intermediate state for me?
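Classic torch/nn has no compile step, but two existing tools come close: the built-in nn.Module:clearState() frees cached buffers after a forward pass, and the third-party optnet package (fmassa/optimize-net) rewrites a net to share buffers between layers. A sketch, assuming optnet is installed (luarocks install optnet):

-- option 1: free cached buffers after forward
local out = model:forward(input):clone() -- clone first: clearState() empties the buffers in place
model:clearState()                       -- drops every module's cached output/gradInput
collectgarbage()

-- option 2: let optnet share buffers between layers before running
local optnet = require 'optnet'
optnet.optimizeMemory(model, input, {inplace = true, mode = 'inference'})

Note that clearState() only reclaims memory after the forward pass, while optnet's buffer sharing also reduces the peak usage during it.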