tfjs-converter

Convert TensorFlow SavedModel and Keras models to TensorFlow.js

Home Page: https://js.tensorflow.org/

Getting started

TensorFlow.js converter is an open source library to load a pretrained TensorFlow SavedModel into the browser and run inference through TensorFlow.js.

A 2-step process to import your model:

  1. A Python pip package to convert a TensorFlow SavedModel to a web-friendly format. If you already have a converted model, or are using an already-hosted model (e.g. MobileNet), skip this step.
  2. A JavaScript API for loading and running inference.

Step 1: Converting a SavedModel to a web-friendly format

  1. Install the TensorFlow.js pip package:
  $ pip install tensorflowjs
  2. Run the converter script provided by the package.

Usage:

$ tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='MobilenetV1/Predictions/Reshape_1' \
    --saved_model_tags=serve \
    /mobilenet/saved_model \
    /mobilenet/web_model
Positional Arguments    Description
input_path              Full path of the saved model directory.
output_dir              Path for all output artifacts.

Options                 Description
--input_format          The format of the input model; use tf_saved_model for SavedModel.
--output_node_names     The names of the output nodes, separated by commas.
--saved_model_tags      Tags of the MetaGraphDef to load, in comma-separated format. Defaults to serve.

Web-friendly format

The conversion script above produces 3 types of files:

  • web_model.pb (the dataflow graph)
  • weights_manifest.json (weight manifest file)
  • group1-shard*of* (collection of binary weight files)

For example, here is the MobileNet model converted and served at the following locations:

  https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/optimized_model.pb
  https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/weights_manifest.json
  https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/group1-shard1of5
  ...
  https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/group1-shard5of5
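
The manifest maps each weight variable to its shard files. As a rough illustration, here is a hedged TypeScript sketch of the structure a weights_manifest.json entry typically takes; the field names and dtypes below are assumptions for illustration, not an authoritative schema:

// Hedged sketch: the approximate shape of a weights_manifest.json entry.
// Field names and dtypes are illustrative assumptions only.
interface WeightsGroup {
  paths: string[];            // shard file names, e.g. 'group1-shard1of5'
  weights: Array<{
    name: string;             // variable name as it appears in the graph
    shape: number[];          // tensor shape, e.g. [3, 3, 3, 32]
    dtype: 'float32' | 'int32';
  }>;
}

// The manifest file itself would then be an array of such groups.
type WeightsManifest = WeightsGroup[];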

Step 2: Loading and running in the browser

  1. Install the tfjs-converter npm package

yarn add @tensorflow/tfjs-converter or npm install @tensorflow/tfjs-converter

  2. Instantiate the FrozenModel class and run inference.
import * as tfc from '@tensorflow/tfjs-core';
import {loadFrozenModel} from '@tensorflow/tfjs-converter';

const MODEL_URL = 'https://.../mobilenet/web_model.pb';
const WEIGHTS_URL = 'https://.../mobilenet/weights_manifest.json';

const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
const cat = document.getElementById('cat');
model.execute({input: tfc.fromPixels(cat)});

Check out our working MobileNet demo.
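
Note that model.execute returns a tensor whose data may still live on the GPU. As a short follow-up sketch (not part of the original example; the cast and disposal pattern are typical-usage assumptions), you can download the predictions and release the memory like this:

const logits = model.execute({input: tfc.fromPixels(cat)}) as tfc.Tensor;
const values = await logits.data();  // downloads the values as a Float32Array
logits.dispose();                    // frees the memory backing the tensor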

Supported operations

Currently TensorFlow.js only supports a limited set of TensorFlow ops. See the full list. If your model uses any unsupported ops, the tensorflowjs_converter script will fail and produce a list of the unsupported ops in your model. Please file issues to let us know which ops you need support for.

FAQ

  1. What TensorFlow models does the converter currently support?

Image-based models (e.g. MobileNet, SqueezeNet) are the best supported. Models with control flow ops (e.g. RNNs) are not yet supported. The tensorflowjs_converter script will validate your model and show a list of any unsupported ops it contains. See this list for which ops are currently supported.

  2. Will models with large weights work?

While the browser supports loading 100-500MB models, the page load time, the inference time, and the user experience would not be great. We recommend using models designed for edge devices (e.g. phones); these models are usually smaller than 30MB.

  3. Will the model and weight files be cached in the browser?

Yes, we split the weights into 4MB shard files, which enables the browser to cache them automatically. If the model architecture file is smaller than 4MB (most models are), it will also be cached.

  4. Does it support models with quantization?

Not yet. We are planning to add quantization support soon.

  5. Why is the first call to the predict() method so much slower than subsequent calls?

The first call also includes the compilation time of the WebGL shader programs for the model. After the first call the shader programs are cached, which makes subsequent calls much faster. You can warm up the cache by calling the predict method with an all-zero input right after the model finishes loading.
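
A minimal warm-up sketch, assuming a MobileNet-style model whose input tensor is named input and takes a [height, width, 3] image (the name and shape are assumptions, not part of this README; execute is used for consistency with the example above):

// Run one throwaway inference on zeros to trigger shader compilation.
const warmup = model.execute({input: tfc.zeros([224, 224, 3])}) as tfc.Tensor;
await warmup.data();  // wait for the GPU work to actually finish
warmup.dispose();     // discard the throwaway result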

Development

To build TensorFlow.js converter from source, we need to clone the project and prepare the dev environment:

$ git clone https://github.com/tensorflow/tfjs-converter.git
$ cd tfjs-converter
$ yarn # Installs dependencies.

We recommend using Visual Studio Code for development. Make sure to install the TSLint VSCode extension and npm clang-format 1.2.2 or later with the Clang-Format VSCode extension for auto-formatting.

Before submitting a pull request, make sure the code passes all the tests and is clean of lint errors:

$ yarn test
$ yarn lint

To run a subset of tests and/or on a specific browser:

$ yarn test --browsers=Chrome --grep='execute'
 
> ...
> Chrome 64.0.3282 (Linux 0.0.0): Executed 39 of 39 SUCCESS (0.129 secs / 0 secs)

To run the tests once and exit the karma process (helpful on Windows):

$ yarn test --single-run
