nnstreamer / nntrainer

NNtrainer is a software framework for training neural network models on devices.

Normalization Layer: FP32/FP16 should NOT be determined at compile-time

myungjoo opened this issue · comments

Whether calculations use FP16 or FP32 should be determined at run-time, based on the model and the user's intention, not by compiler options.

```cpp
#ifndef ENABLE_FP16
  deviation.pow(2.0f, temp_full_size);
  temp_full_size.average(normalize_axes, variance);
  variance.add_i(epsilon);
  variance.pow(-0.5f, inv_std_dev);
#else
  unsigned int axis_dim = deviation.getDim()[normalize_axes[0]];
  for (unsigned int i = 0; i < deviation.getDim()[normalize_axes[0] - 1]; ++i) {
    float sum = 0.0;
    _FP16 *data = deviation.getAddress<_FP16>(0, 0, i, 0);
    for (unsigned int j = 0; j < axis_dim; ++j) {
      sum += powf(static_cast<float>(data[j]), 2.0f);
    }
    // Note: the FP32 path above adds epsilon to the variance, while this
    // path subtracts it inside the square root.
    inv_std_dev.setValue(0, 0, i, 0, 1.0 / sqrt(sum / axis_dim - epsilon));
  }
#endif
```

:octocat: cibot: Thank you for posting issue #2408. The person in charge will reply soon.

People recommend using FP32 computation for normalization layers such as batch normalization, layer normalization, RMS normalization, etc. We will check once again.

Hello, could you let us know the progress on this issue? We (from ASU) are also working on this and facing the same problem.

> Hello could you let us know the progress on this issue.. Since we (from ASU) are also working and facing the same issue.

It has not been refactored yet; for this version you still need to choose the precision with a build option. I expect we will update this after the next product release.

> Hello could you let us know the progress on this issue.. Since we (from ASU) are also working and facing the same issue.

#2549 may be interesting to you.