tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.

Home Page: https://js.tensorflow.org

Question: is the Windows TensorFlow binary/DLL used in tfjs-node compiled with support for zenDNN optimisations?

Mattk70 opened this issue

Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template

System information

  • TensorFlow.js version (you are using): 4.17.0
  • Are you willing to contribute it (Yes/No): N/A

Describe the feature and the current behavior/state.
I am only able to test my tfjs-node Electron app on Windows with an Intel chip. If I enable oneDNN using this line:

process.env['TF_ENABLE_ONEDNN_OPTS'] = "1";

I see a significant performance boost. I am wondering if the following

process.env['TF_ENABLE_ZENDNN_OPTS'] = "1";

will also result in performance improvements for Windows users with an AMD chip.
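
For reference, here's a rough sketch of how I wire the flag into my Electron main process. The model path and input shape are placeholders, the commented-out zenDNN line is the part I'm asking about, and as far as I understand the variable has to be set before @tensorflow/tfjs-node is required so the native library picks it up when it loads:

// Simplified sketch of the Electron main process setup (placeholder path and shape).
process.env['TF_ENABLE_ONEDNN_OPTS'] = '1';
// Would the analogous line below do anything on AMD hardware?
// process.env['TF_ENABLE_ZENDNN_OPTS'] = '1';

const tf = require('@tensorflow/tfjs-node');

async function runInference() {
  // Placeholder model path and dummy input, just to show where inference happens.
  const model = await tf.loadGraphModel('file://path/to/model.json');
  const input = tf.zeros([1, 224, 224, 3]);
  const output = model.predict(input);
  console.log(await output.data());
}

runInference();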

Will this change the current api? How?
Nope, I'm just asking

Who will benefit with this feature?
If it does: all the AMD users.

Any Other info.

Yes / No will suffice as an answer. If it's no and you'd like to expand on that, I'd appreciate any guidance on whether it's possible and how to build a binary with zenDNN support.

Thanks!

Hi, @Mattk70

As far as I know, TensorFlow.js supports oneDNN optimizations for faster inference on some hardware, but there is currently no support for zenDNN optimizations in the TensorFlow binary/DLL used for Node.js (including tfjs-node).

oneDNN is an open-source deep learning performance library developed by Intel. zenDNN is a similar library from AMD, but it is not currently supported in TensorFlow.js.

TensorFlow.js can leverage oneDNN for inference on compatible hardware through the TF_ENABLE_ONEDNN_OPTS environment variable you mentioned. This can improve performance on Intel CPUs.
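
If it helps to verify this on a particular machine, here is a minimal timing sketch (not an official benchmark; the matrix size and iteration count are arbitrary). Running it once with TF_ENABLE_ONEDNN_OPTS=1 set in the shell and once without should show whether oneDNN makes a measurable difference:

// Rough timing sketch; compare runs with and without TF_ENABLE_ONEDNN_OPTS=1.
const tf = require('@tensorflow/tfjs-node');

async function benchmark() {
  const a = tf.randomNormal([1024, 1024]);
  const b = tf.randomNormal([1024, 1024]);

  // Warm-up so one-time initialization cost is excluded from the measurement.
  const warm = tf.matMul(a, b);
  await warm.data();
  warm.dispose();

  const start = process.hrtime.bigint();
  for (let i = 0; i < 20; i++) {
    const c = tf.matMul(a, b);
    await c.data(); // forces the computation to complete
    c.dispose();
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`20 matMuls: ${elapsedMs.toFixed(1)} ms`);

  a.dispose();
  b.dispose();
}

benchmark();

On Windows, the variable can be set for the current shell with "set TF_ENABLE_ONEDNN_OPTS=1" in cmd (or $env:TF_ENABLE_ONEDNN_OPTS = "1" in PowerShell) before launching Node.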

Unfortunately, TensorFlow.js does not currently use zenDNN for AMD optimizations. Although zenDNN support isn't available in the pre-built TensorFlow.js binaries, it might be possible to build TensorFlow from source with zenDNN enabled. However, that process can be complex and requires experience building TensorFlow from source. It's also worth noting that building with zenDNN support may be experimental and is not officially supported.

For now, enabling oneDNN optimizations using TF_ENABLE_ONEDNN_OPTS is the recommended approach for potential performance gains on Windows with Intel CPUs in your tfjs-node Electron app.
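
As one possible pattern (just a sketch, not an official recommendation), the flag could be set conditionally based on the reported CPU model so that Intel machines opt in automatically. Note that os.cpus()[0].model is only a human-readable heuristic, and oneDNN can also run on non-Intel x86 CPUs, so enabling the flag unconditionally and measuring is equally reasonable:

const os = require('os');

// Heuristic vendor check; the model string is not a guaranteed vendor identifier.
const cpuModel = (os.cpus()[0] || {}).model || '';

if (/intel/i.test(cpuModel)) {
  // Set before @tensorflow/tfjs-node is required so the native library
  // sees the variable when it initializes.
  process.env['TF_ENABLE_ONEDNN_OPTS'] = '1';
}

const tf = require('@tensorflow/tfjs-node');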

Please refer to these resources: building TensorFlow from source on Windows, the @tensorflow/tfjs-node guide on building an optimal TensorFlow from source, and the zenDNN documentation.

If I have missed something here please let me know. Thank you for your understanding and patience.

Thank you @gaikwadrahul8 - that's very clear. Thanks also for the link to the build instructions!