Element-Research / dpnn

deep extensions to nn


Interpretation of 'target' in VRClassReward

nabihach opened this issue

I'm a little confused about the meaning of the variable target, which is the argument of the following two functions:

  1. VRClassReward:updateOutput(input, target)
  2. VRClassReward:updateGradInput(inputTable, target)

For reinforcement learning agents, the correct target for a given input is not always available. In fact, a reward is computed based on the model's input and the model's output only. Why, then, do we need this target?

This particular criterion calculates reward based on classification accuracy. If you want a more traditional reinforcement learning criterion, you can easily implement one by subclassing nn.Criterion and passing the reward as the target argument. VRClassReward exists to implement http://torch.ch/blog/2015/09/21/rmva.html.
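
If it helps, here is a minimal sketch of that subclassing approach. Everything below is illustrative: the class name RewardCriterion is hypothetical, and it assumes the agent contains dpnn's stochastic modules (e.g. nn.ReinforceNormal) that implement :reinforce(reward).

require 'dpnn'

local RewardCriterion, parent = torch.class('nn.RewardCriterion', 'nn.Criterion')

-- module: the agent whose REINFORCE modules receive the reward
-- scale: optional reward scaling factor
function RewardCriterion:__init(module, scale)
   parent.__init(self)
   self.module = module
   self.scale = scale or 1
end

-- here the target argument is simply the externally computed reward,
-- one value per sample
function RewardCriterion:updateOutput(input, reward)
   self.reward = reward * self.scale
   -- broadcast the reward to the agent's stochastic modules
   self.module:reinforce(self.reward)
   -- report the negative mean reward as the loss
   self.output = -self.reward:sum() / input:size(1)
   return self.output
end

function RewardCriterion:updateGradInput(input, reward)
   -- the REINFORCE gradient is injected via :reinforce() above;
   -- the criterion itself contributes no gradient w.r.t. its input
   self.gradInput = self.gradInput or input.new()
   self.gradInput:resizeAs(input):zero()
   return self.gradInput
end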

@nicholas-leonard: thanks for the explanation. I have one more related question.

In VRClassReward:updateOutput(input, target), the input is a table {y,b} where y is the model's predicted class probabilities and b is the baseline reward. How do we compute b at each step in order to pass it into the criterion? I don't quite understand how this is being done in the RMVA example... what is the following piece of code meant to do?

-- add the baseline reward predictor
seq = nn.Sequential()
seq:add(nn.Constant(1,1)) -- ignores its input; always outputs the constant 1
seq:add(nn.Add(1)) -- adds a single learnable scalar bias to that 1
-- {input, baseline}
concat = nn.ConcatTable():add(nn.Identity()):add(seq)
-- {input, {input, baseline}}
concat2 = nn.ConcatTable():add(nn.Identity()):add(concat)

-- output will be : {classpred, {classpred, basereward}}
agent:add(concat2)

Would appreciate some pointers! Thanks.
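
For the record, the trick is that the baseline isn't computed from the input at all. nn.Constant(1,1) always outputs 1, and nn.Add(1) adds one learnable bias to it, so seq collapses to a single trainable scalar: the baseline b. VRClassReward then trains that scalar toward the observed reward, which is what reduces the variance of the REINFORCE gradient. Below is a sketch of how the resulting output table is consumed in training, loosely following the rmva script; agent, input, and target are assumed to be set up as in that example, and this is an illustration rather than the exact code.

require 'dpnn'

-- classpred feeds a classification loss; {classpred, basereward}
-- feeds VRClassReward, which derives the reward from classpred
-- vs. target and trains the baseline toward that reward
loss = nn.ParallelCriterion(true) -- true: repeat the same target for both
   :add(nn.ClassNLLCriterion())
   :add(nn.VRClassReward(agent, 1)) -- 1 is the reward scale

output = agent:forward(input) -- {classpred, {classpred, basereward}}
err = loss:forward(output, target) -- target: class indices
gradOutput = loss:backward(output, target)
agent:zeroGradParameters()
agent:backward(input, gradOutput)
agent:updateParameters(0.01) -- plain SGD step, learning rate 0.01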