BentoML

Model Serving made easy: from ML model to production API endpoint with a few lines of code.

Home Page: https://docs.bentoml.org

BentoML makes it easy to serve and deploy machine learning models in the cloud.

It is an open source framework for building cloud-native model serving services. BentoML supports most popular ML training frameworks and deployment platforms, including major cloud providers and Docker/Kubernetes.

👉 Join BentoML Slack community to hear about the latest development updates.


Getting Started

Installing BentoML with pip:

pip install bentoml
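
To verify the installation, you can print the installed package version (a quick sanity check; bentoml.__version__ is the package's version string):

python -c "import bentoml; print(bentoml.__version__)"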

Defining a prediction service with BentoML:

import bentoml
from bentoml.handlers import DataframeHandler
from bentoml.artifact import SklearnModelArtifact

@bentoml.env(pip_dependencies=["scikit-learn"]) # defining pip/conda dependencies to be packed
@bentoml.artifacts([SklearnModelArtifact('model')]) # defining required artifacts, typically trained models
class IrisClassifier(bentoml.BentoService):

    @bentoml.api(DataframeHandler) # defining prediction service endpoint and expected input format
    def predict(self, df):
        # Pre-processing logic and access to trained model artifacts in API function
        return self.artifacts.model.predict(df)

Train a classifier model on the Iris dataset and pack the trained model with the IrisClassifier BentoService defined above:

from sklearn import svm
from sklearn import datasets

if __name__ == "__main__":
    clf = svm.SVC(gamma='scale')
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    clf.fit(X, y)

    # Create an IrisClassifier service instance
    iris_classifier_service = IrisClassifier()

    # Pack it with the newly trained model artifact
    iris_classifier_service.pack('model', clf)

    # Save the prediction service to a BentoService bundle
    saved_path = iris_classifier_service.save()

A BentoService bundle is a versioned file archive, containing the BentoService you defined, along with trained model artifacts, dependencies and configurations.
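
Because the bundle is self-contained, it can also be loaded back into Python for testing. A minimal sketch, assuming the saved_path returned by the training script above:

import bentoml
import pandas as pd

# Load the saved BentoService bundle back into a live service object
iris_classifier_service = bentoml.load(saved_path)

# Call the prediction API function directly as a quick sanity check
print(iris_classifier_service.predict(pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])))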

Now you can start a REST API server based on the saved BentoService bundle from the command line:

bentoml serve {saved_path}

If you are running this on your local machine, visit http://127.0.0.1:5000 in your browser to play around with the API server's web UI for debugging and sending test requests. You can also send a prediction request with curl from the command line:

curl -i \
  --header "Content-Type: application/json" \
  --request POST \
  --data '[[5.1, 3.5, 1.4, 0.2]]' \
  http://localhost:5000/predict
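
The same request can be sent from Python with the requests library (plain HTTP, not a BentoML API):

import requests

# POST the same JSON payload as the curl example above
response = requests.post(
    "http://127.0.0.1:5000/predict",
    json=[[5.1, 3.5, 1.4, 0.2]],
)
print(response.json())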

The saved BentoService bundle is also structured to work as a docker build context, so it can be used directly to build a docker image for deployment:

docker build -t my_api_server {saved_path}
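
Once built, the image runs the same API server inside a container; a sketch that maps the server's default port 5000 to the host:

docker run -p 5000:5000 my_api_server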

You can also deploy your BentoService directly to cloud services such as AWS Lambda with the bentoml CLI, and get back an API endpoint hosting your model that is ready for production use:

bentoml deployment create my-iris-classifier --bento IrisClassifier:{VERSION} --platform=aws-lambda
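
Deployments created this way can then be listed, inspected, and removed from the same CLI; a sketch assuming the deployment subcommands in this release (see bentoml deployment --help):

bentoml deployment list
bentoml deployment describe my-iris-classifier
bentoml deployment delete my-iris-classifier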

Try out the full quickstart notebook: Source, Google Colab, nbviewer

Documentation

Full documentation and API references can be found at https://docs.bentoml.org/

Examples

FastAI

Scikit-Learn

PyTorch

Tensorflow Keras

Tensorflow 2.0

XGBoost

LightGBM

H2O

Visit bentoml/gallery repository for more example projects demonstrating how to use BentoML.

Deployment guides for specific platforms, including AWS Lambda and AWS SageMaker, can be found in the BentoML documentation.

Contributing

Have questions or feedback? Post a new GitHub issue or discuss in our Slack channel: join BentoML Slack

Want to help build BentoML? Check out our contributing guide and the development guide.

Releases

BentoML is under active development and is evolving rapidly. It is currently a beta release; APIs may change in future releases.

Read more about the latest features and changes in BentoML from the releases page.

Usage Tracking

BentoML collects anonymous usage data by default, using Amplitude. It only collects the BentoML library's own actions and parameters; no user or model data is collected. Here is the code that does it.

This helps the BentoML team understand how the community is using this tool and what to build next. You can easily opt out of usage tracking by running one of the following:

# From terminal:
bentoml config set usage_tracking=false
# From python:
import bentoml
bentoml.config().set('core', 'usage_tracking', 'False')

License

Apache License 2.0
