jonathantompson / optim

A numeric optimization package for Torch.

Optimization package

This package contains several optimization routines for Torch. Each optimization algorithm is based on the same interface:

x*, {f}, ... = optim.method(func, x, state)

where:

  • func: a user-defined closure that respects this API: f, df/dx = func(x)
  • x: the current parameter vector (a 1D torch.Tensor)
  • state: a table of parameters and state variables, which depend on the algorithm
  • x*: the new parameter vector that minimizes f, x* = argmin_x f(x)
  • {f}: a table of all f values, in the order they've been evaluated (for some simple algorithms, like SGD, #f == 1)
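As a minimal sketch of this interface, here is a closure minimizing f(x) = ||x||^2 (so df/dx = 2x) with optim.sgd; the starting point, learning rate, and iteration count are arbitrary choices for illustration:

```lua
require 'torch'
require 'optim'

-- f(x) = ||x||^2, with gradient df/dx = 2x
local func = function(x)
   local f = x:dot(x)
   local df_dx = x * 2
   return f, df_dx
end

local x = torch.Tensor{3, -4}      -- arbitrary starting point
local state = {learningRate = 0.1}

for i = 1, 100 do
   x = optim.sgd(func, x, state)   -- x is driven toward the minimum at 0
end
```

Note that the same state table is passed on every call, as described in the note below.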

Important Note

The state table holds the state of the algorithm. It is usually initialized once, by the user, and then passed to the optim function as a black box. Example:

state = {
   learningRate = 1e-3,
   momentum = 0.5
}

for i, sample in ipairs(training_samples) do
   local func = function(x)
      -- evaluate f(x) and df/dx for this sample
      return f, df_dx
   end
   optim.sgd(func, x, state)
end
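Because algorithms like SGD with momentum store their internal state (e.g. the velocity buffer) back into this table, the same table must be reused across calls; creating a fresh table on each iteration would silently disable momentum. A hedged, self-contained sketch (the quadratic objective and the hyperparameter values are illustrative assumptions, not part of the package):

```lua
require 'torch'
require 'optim'

-- toy objective: f(x) = ||x||^2, df/dx = 2x
local func = function(x)
   return x:dot(x), x * 2
end

local x = torch.Tensor{1.0}
local state = {learningRate = 1e-2, momentum = 0.5}

for i = 1, 10 do
   local xNew, fs = optim.sgd(func, x, state)
   -- fs is the table of f values evaluated during this call;
   -- for a simple algorithm like SGD, #fs == 1
   print(i, fs[1])
   x = xNew
end
```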

License: Other
