bhaney / tensorflow-cpu

A module of the Viam mlmodel service that allows inference on a TensorFlow model in the SavedModel format.

tensorflow-cpu

Viam provides a tensorflow-cpu model of the ML model service that allows CPU-based inference on a TensorFlow model in the SavedModel format.

Configure this ML model service as a modular resource on your robot to take advantage of TensorFlow on the Viam platform, including previously existing or even user-trained models.

Getting started

The first step is to prepare a valid TensorFlow model. A valid TensorFlow model is a directory (which can be named anything) that contains at least a saved_model.pb file and a subdirectory named variables, which itself contains two files: variables.index and variables.data-00000-of-00001. The model directory may also include other files (such as keras_metadata.pb), but those are not needed here. Note the path to the model directory; you will need it during configuration.
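The required layout above can be checked before configuring the service. This is a minimal stdlib-only sketch; the helper name `is_valid_saved_model_dir` is hypothetical and not part of this module:

```python
import os


def is_valid_saved_model_dir(model_dir: str) -> bool:
    """Return True if model_dir contains the files this module expects
    in a TensorFlow SavedModel directory."""
    required = [
        os.path.join(model_dir, "saved_model.pb"),
        os.path.join(model_dir, "variables", "variables.index"),
        os.path.join(model_dir, "variables", "variables.data-00000-of-00001"),
    ]
    # Extra files (e.g. keras_metadata.pb) are allowed; only these must exist.
    return all(os.path.isfile(p) for p in required)
```

If this returns False for your model directory, the service will not be able to load the model.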

Note

Before adding or configuring your module, you must create a robot.

Configuration

Navigate to the Config tab of your robot’s page in the Viam app. Click on the Services subtab and click Create service. Select the mlmodel type, then select the tensorflow-cpu model. Enter a name for your service and click Create.

Example Configuration

{
  "modules": [
    {
      "type": "registry",
      "name": "viam_tensorflow-cpu",
      "module_id": "viam:tensorflow-cpu",
      "version": "latest"
    }
  ],
  "services": [
    {
      "model": "viam:mlmodel:tensorflow-cpu",
      "attributes": {
        "package_reference": null,
        "model_path": "/home/kj/Resnet50/",
        "label_path": "/home/kj/imagenetlabels.txt"
      },
      "name": "myTFModel",
      "type": "mlmodel",
      "namespace": "rdk"
    }
  ]
}

Note

For more information, see Configure a Robot.

Attributes

The following attributes are available for viam:mlmodel:tensorflow-cpu services:

| Name | Type | Inclusion | Description |
| ---- | ---- | --------- | ----------- |
| `model_path` | string | **Required** | The full path (on robot) to a valid TensorFlow model directory. |
| `label_path` | string | Optional | The full path (on robot) to a text file with class labels. |

Usage

This module implements the following methods of the ML model service API:

- `Metadata()`: returns information about the shape, type, and size of the model's input and output tensors.
- `Infer()`: accepts a struct of numpy arrays representing input tensors and returns a struct of numpy arrays representing output tensors. The number and dimensionality of the input tensors depend on the included TensorFlow model.
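As a sketch of the input struct that `Infer()` accepts, the snippet below builds a dict mapping tensor names to numpy arrays. The tensor name `"input_1"` and the 224x224 RGB shape are assumptions for a ResNet50-style model like the one in the example configuration; call `Metadata()` on your own model to find the real names, shapes, and dtypes:

```python
import numpy as np

# One batch of a single 224x224 RGB image, as a ResNet50-style model
# typically expects. The key "input_1" is a hypothetical tensor name.
input_tensors = {
    "input_1": np.zeros((1, 224, 224, 3), dtype=np.float32)
}
```

The struct returned by `Infer()` has the same shape: a dict of output-tensor names to numpy arrays.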

License: Apache License 2.0


Languages

Python 94.4%, Shell 5.6%