Possible bug in tldTrainNN
KrzysztofMadejski opened this issue · comments
I have found a possible bug that resets all the positive examples of the model. I guess it doesn't happen often, but have a look at this:
function [conf1,conf2,isin] = tldNN(x,tld)
isin = nan(3,size(x,2));
if isempty(tld.nex) % IF negative examples in the model are not defined THEN everything is positive
conf1 = ones(1,size(x,2));
conf2 = ones(1,size(x,2));
return;
end
Then, in tldTrainNN:
[conf1,~,isin] = tldNN(x(:,i),tld); % measure Relative similarity
% Positive
if y(i) == 1 && conf1 <= tld.model.thr_nn % 0.65
if isnan(isin(2))
tld.pex = x(:,i);
continue;
end
If there are no negative examples in the model (tld.nex), the list of positive examples (tld.pex), which may hold many entries, will be reset to just a single example!
Is this the desired behaviour?
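The path described above can be sketched in Python (a simplification: the real code compares image patches with normalized correlation, and all names here are stand-ins for the MATLAB variables):

```python
import math

THR_NN = 0.65  # tld.model.thr_nn in the MATLAB snippet above

def tld_nn(x, pex, nex):
    """Sketch of tldNN for a single example x.

    Returns (conf1, isin); isin[1] mirrors MATLAB's isin(2) and stays
    NaN whenever no comparison against the model was performed.
    """
    isin = [math.nan, math.nan, math.nan]
    if not nex:
        # no negative examples in the model -> everything is "positive"
        return 1.0, isin
    # The real code measures relative similarity against pex and nex
    # here; it is omitted because only the early return above matters
    # for this issue.
    raise NotImplementedError

def train_step(x, y, pex, nex):
    """Sketch of the positive branch of tldTrainNN for one example."""
    conf1, isin = tld_nn(x, pex, nex)
    if y == 1 and conf1 <= THR_NN:
        if math.isnan(isin[1]):
            # pex is overwritten with the single example x here,
            # discarding whatever it held before
            return [x], nex
    return pex, nex
```

Running train_step with a populated pex and an empty nex exercises the early return: tldNN reports conf1 = 1 and an all-NaN isin, so whether the overwrite is actually reachable hinges on the conf1 <= thr_nn test.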
Dear Krzysztof,
Take a look at the code at the beginning. You'll see that when the lists of positive and negative examples are initialized, the first positive example is deliberately placed at the front of the list, followed by a random permutation of the remaining positive and negative examples. This ordering is chosen so that the model is trained properly.
i.e.
x = [pEx(:,1) x(:,idx)];
y = [1 y(:,idx)];
You will then see that on the first iteration of the loop there is always one positive example added. After that, the positive and negative example sets keep growing in size as new examples pass certain tests.
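The ordering Ray describes can be sketched as follows (labels and names are illustrative stand-ins, not the original MATLAB; negatives are labeled 0 here):

```python
import random

def build_training_order(p_ex, n_ex):
    """Sketch of the initialization above: the first positive example
    is pinned to the front, and the rest are shuffled.

    Mirrors x = [pEx(:,1) x(:,idx)]; y = [1 y(:,idx)].
    """
    rest = [(e, 1) for e in p_ex[1:]] + [(e, 0) for e in n_ex]
    random.shuffle(rest)  # idx = randperm(...) in the MATLAB code
    ordered = [(p_ex[0], 1)] + rest
    x = [e for e, _ in ordered]
    y = [label for _, label in ordered]
    return x, y
```

Because y[0] is always 1 and isin starts out NaN, the first pass through the training loop hits the isnan branch and seeds pex with exactly one positive example, as intended.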
Hope this helps,
- Ray.