nnstreamer / nntrainer

NNtrainer is a software framework for training neural network models on devices.

[tflite] Support Tensorflow Lite Selected Ops in nntrainer

DonghakPark opened this issue

We will update the TensorFlow Lite package to support TensorFlow Lite Selected Ops.

Below is a temporary workaround for using them.

Build the TensorFlow Lite .so file with Selected Ops

Requirements
gcc, g++, glibc

  1. Download TensorFlow and FlatBuffers
$ wget https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.11.0.tar.gz
$ wget https://github.com/google/flatbuffers/archive/refs/tags/v22.11.23.tar.gz
  2. Unzip the tar.gz files
$ tar -zxvf v2.11.0.tar.gz
$ tar -zxvf v22.11.23.tar.gz
  3. Edit the tflite_cc_shared_object target in tensorflow/lite/BUILD: remove this line
"//tensorflow/lite/kernels:builtin_ops_all_linked",
and add these dependencies instead:
"//tensorflow/lite/kernels:builtin_ops",
"//tensorflow/lite/delegates/flex:delegate",
"//tensorflow/lite/delegates/flex:exported_symbols.lds",
"//tensorflow/lite/delegates/flex:version_script.lds",
  4. Install the build dependencies
$ sudo apt install python3-dev python3-pip
$ pip install pip numpy wheel packaging requests opt_einsum
$ pip install keras_preprocessing --no-deps
$ sudo apt-get install gcc-aarch64-linux-gnu
  5. Install Bazelisk
$ sudo apt install npm
$ sudo npm install -g @bazel/bazelisk 
$ bazelisk   # the first run downloads the Bazel version pinned in TensorFlow's .bazelversion

For x86_64:

$ bazelisk build --config=monolithic --config=noaws --config=nogcp --config=nohdfs --config=nonccl  --fat_apk_cpu=x86,x86_64  --experimental_ui_max_stdouterr_bytes=1073741819 -c opt //tensorflow/lite:libtensorflowlite.so
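
When the build succeeds, the library lands in the Bazel output tree. An optional sanity check is below; the paths assume Bazel's default bazel-bin symlink, and a zero count does not necessarily mean the Flex delegate is missing, since the version script may hide its symbols:

$ ls -lh bazel-bin/tensorflow/lite/libtensorflowlite.so
# count dynamic symbols mentioning "flex" as a rough heuristic
$ nm -D bazel-bin/tensorflow/lite/libtensorflowlite.so | grep -ci flex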

For aarch64:

You need to hide the system's OpenSSL headers: go to the /usr/include directory and run sudo mv openssl openssl.original.

$ bazelisk build --config=monolithic --config=noaws --config=nogcp --config=nohdfs --config=nonccl --config=elinux_aarch64 --experimental_ui_max_stdouterr_bytes=1073741819 -c opt //tensorflow/lite:libtensorflowlite.so
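
To confirm that the cross-compile really produced an aarch64 binary (same default output path assumed), and to restore the OpenSSL headers once you are done:

$ file bazel-bin/tensorflow/lite/libtensorflowlite.so
# expected output mentions: ELF 64-bit LSB shared object, ARM aarch64
$ sudo mv /usr/include/openssl.original /usr/include/openssl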

Apply to the local environment

  1. update "/usr/lib/pkgconfig/tensorflow2-lite.pc" file -- update like below
    Name: tensorflow lite
    Description: tensorflow lite static library
    Version: 2.11.0
    Libs: -L/usr/lib -ltensorflowlite
    Cflags: -I/usr/include/tensorflowlite
  1. move libtensorflowlite.so file to /usr/lib

  2. make dir in /usr/include as name "tensorflowlite"

  3. in /usr/include/tensorflowlite put flatbuffer and tensorflow (it come from github)

  4. in nntrainer meson build -> ninja install again

  5. in your app (you can run)
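
Put together, the apply sequence might look like the sketch below. The source-tree paths are assumptions based on the v2.11.0 and v22.11.23 archives extracted earlier (the FlatBuffers headers live under include/ in that repository):

$ sudo cp bazel-bin/tensorflow/lite/libtensorflowlite.so /usr/lib/
$ sudo mkdir -p /usr/include/tensorflowlite
$ sudo cp -r tensorflow-2.11.0/tensorflow /usr/include/tensorflowlite/
$ sudo cp -r flatbuffers-22.11.23/include/flatbuffers /usr/include/tensorflowlite/
# verify what compilers will pick up from the .pc file
$ pkg-config --cflags --libs tensorflow2-lite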

Summary: just by changing the pkg-config file so that the new .so file is used, we can run the SimpleShot nntrainer application.

I tested this method on my local PC (Ubuntu 22.04, x86_64).

:octocat: cibot: Thank you for posting issue #2193. The person in charge will reply soon.

@DonghakPark Thanks for the instructions! I'm following them on an Ubuntu 20.04 machine. I rebuilt libtensorflowlite.so with bazelisk. However, I encounter an error while running ninja install:

$ ninja -C build install
ninja: Entering directory `build'
[254/443] Linking target nntrainer/libnntrainer.so.
FAILED: nntrainer/libnntrainer.so 
c++  -o nntrainer/libnntrainer.so 'nntrainer/d48ed23@@nntrainer@sha/ini_interpreter.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/activation_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/flatten_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/recurrent_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/remap_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/slice_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/input_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/previous_input_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/multiout_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/bn_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/loss_realizer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/tflite_interpreter.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/tflite_opnode.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/flatbuffer_interpreter.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/iteration_queue.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/databuffer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/data_iteration.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/databuffer_factory.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/random_data_producers.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/func_data_producer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/raw_file_data_producer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/dir_data_producers.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/loss_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/kld_loss_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/mse_loss_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/cross_entropy_sigmoid_loss_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/cross_entropy_softmax_loss_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/constant_derivative_loss_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/activation_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/addition_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/attention_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/mol_attention_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/multi_head_attention_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/concat_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/bn_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/layer_normalization_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/conv2d_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/conv1d_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/fc_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/flatten_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/input_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/multiout_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/layer_node.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/pooling2d_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/preprocess_flip_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/preprocess_translate_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/preprocess_l2norm_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/embedding.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/rnn.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/rnncell.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/acti_func.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lstm.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lstmcell.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lstmcell_core.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/zoneout_lstmcell.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/time_dist.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/common_properties.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/split_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/permute_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/layer_impl.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/gru.cpp.o' 
'nntrainer/d48ed23@@nntrainer@sha/grucell.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/dropout.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/centroid_knn.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/layer_context.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/reshape_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/reduce_mean_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/positional_encoding_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/identity_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/tflite_layer.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/model_loader.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/neuralnet.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/model_common_properties.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/dynamic_training_optimization.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/adam.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/optimizer_devel.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/sgd.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/optimizer_context.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lr_scheduler_constant.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lr_scheduler_exponential.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lr_scheduler_step.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/optimizer_wrapped.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/blas_interface.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/cache_elem.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/cache_loader.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/cache_pool.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/lazy_tensor.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/manager.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/tensor.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/tensor_dim.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/var_grad.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/weight.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/basic_planner.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/memory_pool.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/swap_device.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/tensor_pool.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/optimized_v1_planner.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/optimized_v2_planner.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/optimized_v3_planner.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/task_executor.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/util_func.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/profiler.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/ini_wrapper.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/node_exporter.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/base_properties.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/nntr_threads.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/network_graph.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/graph_core.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/connection.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/nntrainer_logger.cpp.o' 'nntrainer/d48ed23@@nntrainer@sha/app_context.cpp.o' -Wl,--as-needed -Wl,--no-undefined -Wl,-O1 -shared -fPIC -Wl,--start-group -Wl,-soname,libnntrainer.so /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so -liniparser /usr/lib/x86_64-linux-gnu/libcapi-ml-common.so -lm -ldl -pthread -fopenmp /usr/lib/libtensorflowlite.so -Wl,--end-group -Wl,-rpath,/usr/lib/x86_64-linux-gnu/openblas-pthread -Wl,-rpath-link,/usr/lib/x86_64-linux-gnu/openblas-pthread
/usr/bin/ld: nntrainer/d48ed23@@nntrainer@sha/tflite_layer.cpp.o: in function `nntrainer::TfLiteLayer::~TfLiteLayer()':
tflite_layer.cpp:(.text+0xe6): undefined reference to `tflite::impl::FlatBufferModel::~FlatBufferModel()'
/usr/bin/ld: tflite_layer.cpp:(.text+0x104): undefined reference to `tflite::impl::Interpreter::~Interpreter()'
/usr/bin/ld: nntrainer/d48ed23@@nntrainer@sha/tflite_layer.cpp.o: in function `nntrainer::TfLiteLayer::forwarding(nntrainer::RunLayerContext&, bool)':
tflite_layer.cpp:(.text+0x41a): undefined reference to `tflite::impl::Interpreter::Invoke()'
/usr/bin/ld: tflite_layer.cpp:(.text+0x4c4): undefined reference to `tflite::impl::Interpreter::Invoke()'
/usr/bin/ld: nntrainer/d48ed23@@nntrainer@sha/tflite_layer.cpp.o: in function `nntrainer::TfLiteLayer::finalize(nntrainer::InitLayerContext&)':
tflite_layer.cpp:(.text+0xb45): undefined reference to `tflite::impl::FlatBufferModel::BuildFromFile(char const*, tflite::ErrorReporter*)'
/usr/bin/ld: tflite_layer.cpp:(.text+0xb6a): undefined reference to `tflite::impl::FlatBufferModel::~FlatBufferModel()'
/usr/bin/ld: tflite_layer.cpp:(.text+0xb89): undefined reference to `tflite::impl::FlatBufferModel::~FlatBufferModel()'
/usr/bin/ld: tflite_layer.cpp:(.text+0xbc2): undefined reference to `tflite::impl::InterpreterBuilder::InterpreterBuilder(tflite::impl::FlatBufferModel const&, tflite::OpResolver const&, tflite::InterpreterOptions const*)'
/usr/bin/ld: tflite_layer.cpp:(.text+0xbcf): undefined reference to `tflite::impl::InterpreterBuilder::operator()(std::unique_ptr<tflite::impl::Interpreter, std::default_delete<tflite::impl::Interpreter> >*)'
/usr/bin/ld: tflite_layer.cpp:(.text+0xbd7): undefined reference to `tflite::impl::InterpreterBuilder::~InterpreterBuilder()'
/usr/bin/ld: tflite_layer.cpp:(.text+0xbea): undefined reference to `tflite::impl::Interpreter::AllocateTensors()'
/usr/bin/ld: nntrainer/d48ed23@@nntrainer@sha/tflite_layer.cpp.o: in function `nntrainer::TfLiteLayer::finalize(nntrainer::InitLayerContext&) [clone .cold]':
tflite_layer.cpp:(.text.unlikely+0x2a6): undefined reference to `tflite::impl::InterpreterBuilder::~InterpreterBuilder()'
collect2: error: ld returned 1 exit status
[319/443] Compiling C++ object 'test/unittest/f284a9c@@unittest_nntrainer_tensor@exe/unittest_nntrainer_tensor.cpp.o'.
ninja: build stopped: subcommand failed.

I guess this is an issue with linking nntrainer's libnntrainer.so against the newly generated libtensorflowlite.so?

@KirillP2323
If you rebuild the .so file, you also need to put the matching tensorflow headers in the include directory: the headers you compile against and the library you link must come from the same TensorFlow version.

Great, it runs successfully, thanks!

@KirillP2323

As mentioned, you can build for aarch64 with the command below.
If the build fails, I recommend building in a Docker container (with the image provided by TensorFlow).

For aarch64:

You need to hide the system's OpenSSL headers: go to the /usr/include directory and run sudo mv openssl openssl.original.
$ bazelisk build --config=monolithic --config=noaws --config=nogcp --config=nohdfs --config=nonccl --config=elinux_aarch64 --experimental_ui_max_stdouterr_bytes=1073741819 -c opt //tensorflow/lite:libtensorflowlite.so

After building this .so file, you should replace the TensorFlow Lite library that the Android build uses at
~~~/nntrainer/builddir/tensorflow-2.11.0/tensorflow-lite/lib/arm64/libtensorflow-lite.a

Then run the package_android.sh script in the tools directory, and do the same remaining steps as for x86_64.
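
As a sketch of those two steps (the checkout root is elided above, so ~/nntrainer here is only an assumption):

# after swapping in the rebuilt TensorFlow Lite library at the path above
$ cd ~/nntrainer
$ ./tools/package_android.sh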

Related Issues
tensorflow/tensorflow#54517
tensorflow/tensorflow#48401

ref: Install TensorFlow Lite 2.x on Raspberry Pi 4 (https://qengineering.eu/install-tensorflow-2-lite-on-raspberry-pi-4.html)
guide: https://www.tensorflow.org/lite/guide/build_arm (for the ARM build, just follow this doc)

If it doesn't work for Android Selected Ops, please let me know.

I will close this issue.