
Model Serving on PyTorch


TorchServe

TorchServe is a flexible and easy-to-use tool for serving and scaling PyTorch models in production.

Requires Python 3.8 or later.

For example, once a BERT model is being served, it can be queried over the inference API:

curl http://127.0.0.1:8080/predictions/bert -T input.txt

πŸš€ Quick start with TorchServe

# Install dependencies
# cuda is optional
python ./ts_scripts/install_dependencies.py --cuda=cu111

# Latest release
pip install torchserve torch-model-archiver torch-workflow-archiver

# Nightly build
pip install torchserve-nightly torch-model-archiver-nightly torch-workflow-archiver-nightly
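With the packages installed, the typical workflow is to package a trained model into a model archive (`.mar`) with `torch-model-archiver` and then start the server. A minimal sketch, where the model name, file paths, and handler are illustrative placeholders for your own model:

```shell
# Package a trained model into a .mar archive.
# model.py, densenet161.pth, and the handler name are placeholders.
mkdir -p model_store
torch-model-archiver \
  --model-name densenet161 \
  --version 1.0 \
  --model-file model.py \
  --serialized-file densenet161.pth \
  --handler image_classifier \
  --export-path model_store

# Start TorchServe and load the archive from the model store.
torchserve --start --model-store model_store --models densenet161.mar
```

After startup, the model is reachable at `http://127.0.0.1:8080/predictions/densenet161`; `torchserve --stop` shuts the server down.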

Getting started guide

🐳 Quick Start with Docker

docker pull pytorch/torchserve
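Once the image is pulled, a container can be started with the server's default ports published. A sketch, assuming a local `model_store` directory holding your `.mar` archives:

```shell
# Run TorchServe, exposing the default ports:
# 8080 (inference), 8081 (management), 8082 (metrics).
# The host model_store directory is mounted into the container.
docker run --rm -it \
  -p 8080:8080 -p 8081:8081 -p 8082:8082 \
  -v "$(pwd)/model_store:/home/model-server/model-store" \
  pytorch/torchserve
```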

Refer to the TorchServe Docker documentation for details.

⚑ Why TorchServe

πŸ€” How does TorchServe work

πŸ† Highlighted Examples

For more examples, see the examples directory in this repository.

πŸ€“ Learn More

https://pytorch.org/serve

πŸ«‚ Contributing

We welcome all contributions!

To learn more about how to contribute, see the contributor guide here.

To file a bug or request a feature, please file a GitHub issue. For filing pull requests, please use the template here.

πŸ“° News

πŸ’– All Contributors

Made with contrib.rocks.

βš–οΈ Disclaimer

This repository is jointly operated and maintained by Amazon, Meta and a number of individual contributors listed in the CONTRIBUTORS file. For questions directed at Meta, please send an email to opensource@fb.com. For questions directed at Amazon, please send an email to torchserve@amazon.com. For all other questions, please open an issue in this repository.

TorchServe acknowledges the Multi Model Server (MMS) project, from which it was derived.

About

Model Serving on PyTorch

License: Apache License 2.0


Languages

Java 54.2% · Python 43.3% · Shell 2.0% · Dockerfile 0.4% · Mustache 0.1%