# FastAI PyTorch Serverless API (w/ AWS Lambda)
## Setup

- Install the Serverless Framework via npm

  ```shell
  npm i -g serverless@1.38.0
  ```

- Install the python requirements plugin

  ```shell
  sls plugin install -n serverless-python-requirements
  ```
- Set up your model in `lib/models.py` so that it can be imported as a method by the handler in `api/predict.py`
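As a sketch, `lib/models.py` might expose a factory function like the one below. The tiny architecture and the two-class output are illustrative assumptions, not the repo's actual code; use whatever architecture your uploaded state dict was saved from.

```python
# lib/models.py -- illustrative sketch only; these layers are a stand-in
# and must mirror the architecture the state dict was trained on.
import torch.nn as nn

def image_classifier(n_classes=2):
    """Build the model that STATE_DICT_NAME's weights will be loaded into."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, n_classes),
    )
```

The handler can then instantiate this and call `load_state_dict()` on the weights it pulls down from S3.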
- Set up an AWS CLI profile if you don't have one already
- Create an S3 bucket that your profile can access and upload your state dictionary
- Configure the `serverless.yml`

  ```yaml
  ### Change service name to whatever you please
  service: eminem-fastai-serverless

  provider:
    ...
    ### set this to your deployment stage
    stage: dev
    ### set this to your aws region
    region: us-west-2
    ### set this to your aws profile
    profile: slsadmin
    ### set this as needed between 128 - 3008, in 64mb intervals
    memorySize: 2048
    ### set this as needed (max 300)
    timeout: 120
    ...
    environment:
      ### set this to your S3 bucket name
      BUCKET_NAME: pytorch-serverless
      ### set this to your state dict filename
      STATE_DICT_NAME: dogscats-resnext50.h5

  variables:
    ### set this to your api version
    api_version: v0.0.1
  ```
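At runtime the handler reads `BUCKET_NAME` and `STATE_DICT_NAME` from the Lambda environment. A minimal sketch of how it might resolve where the weights live (the function name is hypothetical, and the boto3 download line is commented out so the snippet stands alone):

```python
# api/predict.py (sketch) -- locate the weights configured in serverless.yml.
import os

def state_dict_location():
    bucket = os.environ["BUCKET_NAME"]
    key = os.environ["STATE_DICT_NAME"]
    # /tmp is the only writable filesystem inside a Lambda container
    local_path = os.path.join("/tmp", key)
    # boto3.client("s3").download_file(bucket, key, local_path)
    return bucket, key, local_path
```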
- Run the function locally with params defined in `tests/predict_event.json`

  ```shell
  AWS_PROFILE=yourProfile sls invoke local -f predict -p tests/predict_event.json
  ```
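The event file mimics the API Gateway request shape. The exact keys the handler reads are an assumption here, so match them to what `api/predict.py` actually parses; `tests/predict_event.json` might look like:

```json
{
  "headers": {
    "X-API-KEY": "your-api-key"
  },
  "queryStringParameters": {
    "predict_text": "https://example.com/images/dog.jpg"
  }
}
```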
## Deployment

- Make sure Docker is running
- Deploy to AWS Lambda

  ```shell
  sls deploy -v
  ```
## API

Returns the prediction for a single image.

- Headers (required)

  ```
  X-API-KEY=[string] ### Your generated API Key
  ```

- URL Parameters (required)

  ```
  predict_text=[string] ### URL of image to classify
  ```
- Success Response (200)

  ```json
  {
    "predictions": [
      {
        "label": "dog",
        "log": -0.00004426980376592837,
        "prob": 0.9999557137489319
      },
      {
        "label": "cat",
        "log": -10.025229454040527,
        "prob": 0.0000442688433395233
      }
    ]
  }
  ```
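Client code can pick the top class straight from this payload. A minimal sketch, with the success body from above inlined in place of a real HTTP call:

```python
import json

# The example success payload, hard-coded for illustration
body = """
{
  "predictions": [
    {"label": "dog", "log": -0.00004426980376592837, "prob": 0.9999557137489319},
    {"label": "cat", "log": -10.025229454040527, "prob": 0.0000442688433395233}
  ]
}
"""

# Take the prediction with the highest probability
top = max(json.loads(body)["predictions"], key=lambda p: p["prob"])
print(top["label"])  # -> dog
```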
- Error Response (500)

  ```json
  {
    "error": "Something went wrong...",
    "traceback": "..."
  }
  ```
## Logs

Tail logs to the console

```shell
sls logs -f predict -t
```