openfoundry-ai / model_manager

Model Manager is a Python package that simplifies the process of deploying an open source AI model to your own cloud.

Home page: https://www.openfoundry.ai



Model Manager v0.1, by OpenFoundry

Deploy open source AI models to AWS in minutes.

Table of Contents
  1. About Model Manager
  2. Getting Started
  3. Using Model Manager
  4. What we're working on next
  5. Known issues
  6. Contributing
  7. License
  8. Contact

About Model Manager

Model Manager is a Python tool that simplifies the process of deploying an open source AI model to your own cloud. Instead of spending hours digging through documentation to figure out how to get AWS working, Model Manager lets you deploy open source AI models directly from the command line.

Choose a model from Hugging Face or SageMaker, and Model Manager will spin up a SageMaker instance with a ready-to-query endpoint in minutes.

Here we’re deploying Microsoft’s Phi-2. Larger models such as this one take about 10 minutes to spin up.

[Demo video: deploying Phi-2 (phi2.mov)]

Once the model is running, you can query it to get a response.

[Screenshot: querying a deployed model]

(back to top)


Getting Started

Model Manager works with AWS; Azure and GCP support are coming soon! To get a local copy up and running, follow these simple steps.

Prerequisites

  • Python
  • An AWS account
  • Quota for AWS SageMaker instances (by default, you get 2 instances of ml.m5.xlarge for free)
  • Certain Hugging Face models (e.g. Llama 2) require an access token (see the Hugging Face docs)

Installation

Step 1: Set up AWS and SageMaker

To get started, you’ll need an AWS account, which you can create at https://aws.amazon.com/. Then you’ll need to create access keys for SageMaker.

We made a walkthrough video to show you how to get set up with your SageMaker access keys in 2 minutes.

[Video: Setting up AWS for Model Manager deployment]

If you prefer a written doc, we wrote up the steps in a Google Doc as well.

Step 2: Set up Model Manager

You should now have your Access Key and Secret from SageMaker. Now you can set up Model Manager! Clone the repo to your local machine, and then run the setup script in the repo:

   bash setup.sh

This will configure the AWS client so you’re ready to start deploying models. You’ll be prompted to enter your Access Key and Secret here. You can also specify your AWS region. The default is us-east-1. You only need to change this if your SageMaker instance quota is in a different region.
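
If you want to confirm the credentials you entered are valid, here is a quick check using boto3 directly; this is not part of Model Manager itself, just an optional sanity check:

    # Optional sanity check (not part of Model Manager): confirm the AWS
    # credentials you just entered actually work. Assumes boto3 is installed.
    import boto3

    identity = boto3.client("sts").get_caller_identity()
    print(identity["Account"], identity["Arn"])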

Optional: If you have a Hugging Face Hub token, you can add it to the .env file generated by the setup script, under the key:

HUGGING_FACE_HUB_KEY="KeyValueHere"

This will allow you to use models with access restrictions, such as Llama 2, as long as your Hugging Face account has permission to do so.
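
For reference, values in a .env file are typically loaded into the process environment at startup. Here is a minimal sketch using python-dotenv that illustrates the general pattern (not necessarily Model Manager’s exact loading code):

    # Minimal sketch of reading the token from a .env file with python-dotenv.
    # Illustrates the general pattern, not Model Manager's internals.
    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads key=value pairs from .env into the environment
    token = os.environ.get("HUGGING_FACE_HUB_KEY")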

(back to top)


Using Model Manager

After you’ve set up AWS and Model Manager per the above, run Model Manager using python or python3:

python3 model_manager.py

[Screenshot: Model Manager home screen]

Now you’re ready to start shipping models onto your cloud!

Deploying models

There are three sources you can deploy models from: Hugging Face, SageMaker, or your own custom model. Use whichever works for you!

  • Hugging Face: copy/paste the full model name from Hugging Face, for example google-bert/bert-base-uncased. Note that you’ll need larger, more expensive instance types to run bigger models, and it takes anywhere from 2 minutes (for smaller models) to 10+ minutes (for large models) to spin up the instance with your model.
  • SageMaker: select a framework and search for a model.
  • Custom: provide either a valid S3 path or a local path (the tool will automatically upload it for you).

Once deployed, we will generate a YAML file with the deployment and model under /configs.

Deploying with a YAML file

For future deploys, we recommend deploying through a YAML file for reproducibility and infrastructure as code (IaC). From the CLI, you can deploy a model without going through all the menus; you can even integrate Model Manager with your GitHub Actions to deploy on PR merge. Deploy via a YAML file simply by passing the --deploy option with a local path, like so:

python model_manager.py --deploy ./example_configs/llama7b.yaml


If you’re using the ml.m5.xlarge instance type, here are some small Hugging Face models that work great:

Model: google-bert/bert-base-uncased

  • Type: Fill mask: tries to complete your sentence, like Mad Libs

  • Query format: a text string with [MASK] somewhere in it that you wish for the transformer to fill, e.g. "Paris is the [MASK] of France."

    [Screenshot: fill-mask query with BERT]



Model: sentence-transformers/all-MiniLM-L6-v2

  • Type: Feature extraction: turns text into a 384-dimensional vector embedding for semantic search / clustering

  • Query format: "type out a sentence like this one."

    [Screenshot: sentence-transformers query]



Model: deepset/roberta-base-squad2

  • Type: Question answering; provide a question and some context from which the transformer will answer the question.

  • Query format: a dict with two keys: question and context. Our tool will prompt you a second time to provide the context (e.g. question: "Who wrote Hamlet?", context: "Hamlet is a tragedy written by William Shakespeare.").

    [Screenshot: extractive question-answering query with RoBERTa]



Querying models

There are three ways to query a model you’ve deployed: you can query it using the Model Manager script, spin up a FastAPI server, or call it directly from your code using SageMaker’s API.

To spin up a FastAPI server, run

uvicorn server:app --reload

This will create a server running at 0.0.0.0 on port 8000, which you can query from your app. There are two endpoints:

  1. GET /endpoint/{endpoint_name}: Get information about a deployed endpoint
  2. POST /endpoint/{endpoint_name}/query: Query a model for inference. The request expects a JSON body; only the query key is required. context is required for some model types (such as question answering), and parameters can be passed for text-generation/LLM models to further control the model’s output.
{
  "query": "string",
  "context": "string",
  "parameters": {
    "max_length": 0,
    "max_new_tokens": 0,
    "repetition_penalty": 0,
    "temperature": 0,
    "top_k": 0,
    "top_p": 0
  }
}
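
For example, you could call these endpoints with the requests library. The endpoint name and payload below are illustrative placeholders; substitute a model you actually deployed:

    # Hypothetical example of calling the FastAPI server with requests.
    # "my-llm-endpoint" is a placeholder endpoint name.
    import requests

    base = "http://0.0.0.0:8000"

    # 1. Get information about a deployed endpoint
    info = requests.get(f"{base}/endpoint/my-llm-endpoint")
    print(info.json())

    # 2. Query a text-generation model, with optional generation parameters
    resp = requests.post(
        f"{base}/endpoint/my-llm-endpoint/query",
        json={
            "query": "Write a haiku about clouds.",
            "parameters": {"max_new_tokens": 50, "temperature": 0.7},
        },
    )
    print(resp.json())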

Querying within Model Manager currently works for text-based models. Image generation, multi-modal, etc. models are not yet supported.

You can query all deployed models using the SageMaker API; see the SageMaker documentation for details on how to do this.
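
As a rough sketch, a direct call through boto3 might look like the following. The payload shape depends on the model’s serving container, so treat the request body below as an assumption rather than a guarantee:

    # Rough sketch of querying a deployed endpoint directly via the
    # SageMaker runtime API with boto3. The endpoint name and payload
    # shape are illustrative assumptions.
    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="my-endpoint-name",      # your endpoint's name
        ContentType="application/json",
        Body=json.dumps({"inputs": "Paris is the [MASK] of France."}),
    )
    print(json.loads(response["Body"].read()))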

Deactivating models

Any model endpoints you spin up will run continuously unless you deactivate them! Make sure to delete endpoints you’re no longer using so you don’t keep getting charged for your SageMaker instance.
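
If you ever need to clean up outside the tool, here is a minimal boto3 sketch; all resource names below are placeholders:

    # Minimal sketch of tearing down a SageMaker endpoint with boto3 so it
    # stops accruing charges. All resource names are placeholders.
    import boto3

    sm = boto3.client("sagemaker")
    sm.delete_endpoint(EndpointName="my-endpoint-name")
    sm.delete_endpoint_config(EndpointConfigName="my-endpoint-config")
    sm.delete_model(ModelName="my-model-name")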

(back to top)


What we're working on next

  • More robust error handling for various edge cases
  • Verbose logging
  • Enabling / disabling autoscaling
  • Deployment to Azure and GCP

(back to top)


Known issues

  • Querying within Model Manager currently only works with text-based models; it doesn’t work with multimodal, image generation, etc. models yet
  • Model versions are static.
  • Deleting a model is not instant; it may briefly show up after being queued for deletion
  • Deploying the same model twice within the same minute will fail

See open issues for a full list of known issues and proposed features.

(back to top)


Contributing

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".

If you found this useful, please give us a star! Thanks again!

(back to top)


License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)


Contact

You can reach us, Arthur & Tyler, at hello@openfoundry.ai.

We’d love to hear from you! We’re excited to learn how we can make this more valuable for the community and welcome any and all feedback and suggestions.
