nnstreamer / nntrainer

NNtrainer is a Software Framework for Training Neural Network Models on Devices.

[ layer ] Erasing `_FP16` code block in each layer

skykongkong8 opened this issue · comments

For a quick implementation of the half-precision layers, I could not avoid adding `_FP16` code blocks, as in #2355, #2280, and many other PRs.
However, this approach is not helpful for clean code maintenance.

To avoid this coding style, I believe we should use only Tensor ops inside layer functions and let the Tensor do the work automatically for the given tensor_type.
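For illustration, this is the general shape of the per-precision branching being referred to. The macro and accessor names (`ENABLE_FP16`, `_FP16`, `Tensor::getData<T>()`, `TensorDim::DataType`) follow nntrainer conventions, but the layer body itself is a simplified, hypothetical sketch rather than code from an actual layer:

```cpp
#include <tensor.h>

// Hypothetical layer body: the same arithmetic is written twice, once per
// data type, and the FP16 branch is hidden behind an #ifdef guard.
void forwarding_sketch(nntrainer::Tensor &input, nntrainer::Tensor &output) {
  if (input.getDataType() == ml::train::TensorDim::DataType::FP32) {
    const float *in = input.getData<float>();
    float *out = output.getData<float>();
    for (unsigned int i = 0; i < input.size(); ++i)
      out[i] = in[i] * 2.0f; // placeholder element-wise work
#ifdef ENABLE_FP16
  } else if (input.getDataType() == ml::train::TensorDim::DataType::FP16) {
    const _FP16 *in = input.getData<_FP16>();
    _FP16 *out = output.getData<_FP16>();
    for (unsigned int i = 0; i < input.size(); ++i)
      out[i] = in[i] * static_cast<_FP16>(2.0); // identical work, duplicated
#endif
  }
}
```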

Thus from now on:

  1. When implementing new layers, it is highly recommended to avoid using Tensor member function templates inside layer member functions (see the sketch after this list).
  2. We should discuss how to remove the `#ifdef` code blocks from the existing layer member function implementations.
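As a rough sketch of the intended direction, assuming a Tensor operation such as `Tensor::dot()` that already dispatches on the tensor's data type internally, the layer code would then contain no precision-specific branches at all:

```cpp
#include <tensor.h>

// The layer only expresses what to compute with Tensor operations; which
// kernel runs for FP16 vs. FP32 is decided inside the Tensor implementation.
void forwarding_sketch(const nntrainer::Tensor &input,
                       const nntrainer::Tensor &weight,
                       nntrainer::Tensor &output) {
  input.dot(weight, output); // no #ifdef, no getData<T>() in the layer
}
```

This keeps precision handling in one place (the Tensor implementation and its kernels), so supporting another data type later would not require touching every layer.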

:octocat: cibot: Thank you for posting issue #2362. The person in charge will reply soon.

We are going to use a different approach to get rid of the if/else code blocks for data types. Closing this issue accordingly.