wojzaremba / lstm

parametersNoGrad

nicholas-leonard opened this issue · comments

I don't see why it's there or what it is. That cloning code can be reduced: you can just use Torch7's clone(), like the following. BTW, Core is just core_network (i.e. LSTM plus).

     local core             = Core(opt)
     local param, gradParam = core:getParameters()
     local p, gradP         = core:parameters()
     rnn.core = {}
     for i = 1, opt.seqLength do
        local clone = core:clone()
        local cloneP, cloneGradP = clone:parameters()
        -- Point each clone's parameter and gradient tensors at the same
        -- storage as the master, so all clones share weights and
        -- accumulate gradients together.
        for j = 1, #p do
           cloneP[j]:set(p[j])
           cloneGradP[j]:set(gradP[j])
        end
        rnn.core[#rnn.core + 1] = clone
        collectgarbage()
     end
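The key idea in the snippet above is that `set()` makes every clone's parameter tensors views of the same underlying storage, so a single in-place update is seen by all timestep clones. A minimal sketch of that sharing pattern in plain Python (an analogy only, not Torch; the `Clone` class is hypothetical):

```python
# Master parameter storage, analogous to the tensors from core:parameters().
params = [0.0, 0.0, 0.0]

class Clone:
    """Hypothetical stand-in for a per-timestep network clone."""
    def __init__(self, shared):
        # Like cloneP[j]:set(p[j]): keep a reference to the shared
        # storage instead of copying it.
        self.p = shared

# One clone per timestep, all viewing the same parameter storage.
clones = [Clone(params) for _ in range(3)]

# An in-place write (e.g. a gradient step on the master parameters)
# is immediately visible through every clone.
params[0] = 1.5
print(all(c.p[0] == 1.5 for c in clones))  # True
```

The same reasoning explains why `collectgarbage()` is called per clone: only the clone's non-parameter state is new memory, since the weights themselves are never duplicated.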

That is what I thought. Thank you for the explanation.

@mszlazak Thank you very much for posting your insight 👍

It's really neat :)

Trying to get it to work right now :)

To be fair, I don't think Wojciech wrote the g_cloneManyTimes function, so it's not really his problem?

NP

Cleaned up according to your suggestion.