Efficient Serverless deployment of PyTorch models on Azure

By PyTorch


Serverless Deployment


A step-by-step walkthrough

Step 1: Export model

dummy_input = torch.randn(1, 3, 224, 224, device='cuda')
onnx_path = "./model.onnx"
torch.onnx.export(learn.model, dummy_input, onnx_path, verbose=False)
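Because `torch.onnx.export` traces the model with the shape of `dummy_input`, the exported graph expects inputs of shape (1, 3, 224, 224) unless a `dynamic_axes` argument is supplied. A small sketch of a validation helper (the function name is hypothetical, not part of the sample repo) that a request handler could use before running inference:

```python
# Shape the ONNX graph was traced with (taken from the dummy_input above).
EXPECTED_SHAPE = (1, 3, 224, 224)

def validate_input_shape(shape, expected=EXPECTED_SHAPE):
    """Return True if a client-supplied tensor shape matches the traced shape.

    Hypothetical helper for illustration; adapt to your request format.
    """
    return tuple(shape) == tuple(expected)
```

If you need to accept variable batch sizes instead, `torch.onnx.export` accepts a `dynamic_axes` argument that marks chosen dimensions as variable.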

Step 2: Test model deployment locally

mkdir <<Your project name>>
cd <<Your project name>>
mkdir start
cd start
func init --worker-runtime python
func new --name classify --template "HTTP trigger"
# Get the deployment sample
git clone https://github.com/Azure-Samples/functions-deploy-pytorch-onnx.git /tmp/deploy-onnx-template
# Copy the deployment sample to the function app
cp -r /tmp/deploy-onnx-template/start ..
python -m venv .venv
source .venv/bin/activate
pip install --no-cache-dir -r requirements.txt
func start
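Once `func start` is serving the function locally, the classifier's raw output (logits) is typically converted to probabilities with a softmax before picking a label. A minimal pure-Python sketch of that post-processing step (function and label names here are illustrative, not from the sample repo):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_prediction(logits, labels):
    """Return the most likely label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]
```

In the deployed function, `logits` would come from the ONNX Runtime session's output and `labels` from the model's class list.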

Step 3: Deploy Model to the Azure Functions

# Create a resource group (this walkthrough reuses the Function App name for it)
az group create --name [[YOUR Function App name]] --location westus2
# Create a storage account for the Function App
az storage account create --name [[Your Storage Account Name]] -l westus2 --sku Standard_LRS -g [[YOUR Function App name]]
# Create the Function App on a Linux consumption plan
az functionapp create --name [[YOUR Function App name]] -g [[YOUR Function App name]] --consumption-plan-location westus2 --storage-account [[Your Storage Account Name]] --runtime python --runtime-version 3.7 --functions-version 3 --disable-app-insights --os-type Linux
# Install dependencies into the package folder for remote deployment
pip install --target="./.python_packages/lib/site-packages" -r requirements.txt
# Publish the Azure Function to the cloud
func azure functionapp publish [[YOUR Function App name]] --no-build
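After publishing, the function can be exercised over HTTP. A hedged sketch, using only the standard library, of building a request to the deployed endpoint (the URL and the `img` payload key are assumptions; check the URL that `func azure functionapp publish` prints and the parameter name the sample's `classify` function actually reads):

```python
import json
import urllib.request

# Hypothetical endpoint; replace with the URL printed at publish time.
FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/classify"

def build_classify_request(image_url, function_url=FUNCTION_URL):
    """Build a POST request carrying the image URL as a JSON payload."""
    payload = json.dumps({"img": image_url}).encode("utf-8")
    return urllib.request.Request(
        function_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it would then be: urllib.request.urlopen(build_classify_request(...))
```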

Optimizing the runtime footprint
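One lever for shrinking the deployment is the dependency list: since inference runs through the exported ONNX graph, the published function can depend on the ONNX Runtime rather than the full PyTorch stack. A sketch of what a trimmed requirements.txt might look like (package choices are assumptions; keep whatever the sample's classify function actually imports):

```
azure-functions
numpy
onnxruntime
```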

Advanced Deployment Considerations


Learn More
