Heroku is another platform your team might ask you to deploy on. So, if your team lead says “Hey you anon, I want you to deploy your model on Heroku”, it's time to read this post and learn how.
Table of Contents
Intro to Heroku
The Heroku Pipeline
Deployment with Heroku
1 - Intro to Heroku
Heroku stands out as a cloud platform that simplifies the deployment of machine learning models. It's an accessible, efficient, and scalable solution, making it a popular choice in the realm of Machine Learning Operations (MLOps).
Heroku is a Platform as a Service (PaaS) that enables developers to build, run, and operate applications in the cloud. It abstracts away the infrastructure management complexities, allowing developers to focus solely on their application. With support for a variety of programming languages and frameworks, Heroku is particularly advantageous for deploying machine learning models.
1.1 Why Heroku
Heroku offers several compelling features for machine learning deployments:
Ease of Use: Straightforward setup and deployment process, ideal for teams without extensive DevOps experience.
Scalability: Heroku can easily scale applications to accommodate varying levels of traffic and computational needs.
Rich Ecosystem: It offers a range of add-ons for data stores, monitoring tools, and more, which are crucial for ML applications.
Language Flexibility: Heroku supports multiple programming languages, including Python, which is extensively used in machine learning.
1.2 Getting started with Heroku
Setting up your ML model deployment on Heroku involves a few key steps:
Heroku Account Creation:
Begin by signing up for a free account on Heroku.
Follow the account creation process.
Verify your email.
Set up your account credentials.
Heroku CLI Installation:
Heroku provides a powerful CLI tool for managing Heroku applications.
Install the Heroku CLI by following the instructions on the Heroku website.
After installation, open a terminal or command prompt and log in to your Heroku account using heroku login.
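To sanity-check the setup, you can verify the CLI and your login straight from the terminal. A minimal sketch:

# Check that the Heroku CLI is installed and on your PATH
heroku --version

# Opens a browser window to authenticate your account
heroku login

# Confirms which account you are logged in as
heroku auth:whoami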
1.3 Deployment with Heroku Git
For a hands-on deployment process, Heroku Git is a reliable method:
Prepare Your ML Project:
Your ML project should have a requirements.txt file listing all Python dependencies. Create a Procfile, a crucial file that tells Heroku what command to run to start your app. For example, for a FastAPI app, your Procfile might look like this:

web: uvicorn main:app --host=0.0.0.0 --port=${PORT:-5000}
Heroku App Creation:
In your project directory, initialize a Heroku app with heroku create. This command sets up a new app on Heroku and adds a remote to your local git repository.
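As a quick sketch (the app name here is just a placeholder), you can pass an explicit name to heroku create and then confirm that the Heroku remote was added:

# Create a Heroku app with an explicit name (omit the name to get a random one)
heroku create my-ml-app

# Verify that a "heroku" remote now points at the new app's git URL
git remote -v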
Deployment:
Deploy your application by pushing it to the Heroku remote using Git:
git add .
git commit -m "Deploy to Heroku"
git push heroku master
Heroku detects your language, installs dependencies, and starts your app using the Procfile.
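Once the push finishes, it's worth confirming that a dyno is up and the app actually responds. A minimal sketch, run from the same project directory so the commands target the app behind the heroku remote:

# Check that a web dyno is running
heroku ps

# Open the deployed app in your browser
heroku open

# Stream the application logs to watch startup and incoming requests
heroku logs --tail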
Alternatively, you can integrate your GitHub repository for continuous deployment:
Connect to GitHub:
In your Heroku app dashboard, under the “Deploy” tab, choose GitHub as your deployment method.
Connect your GitHub account and select the repository you wish to deploy.
Automate Deployments:
Enable automatic deployments to deploy your app automatically every time you push changes to your selected branch on GitHub.
2 - The Heroku Pipeline
In the context of MLOps, the deployment of machine learning models needs to be smooth, efficient, and reliable. Heroku Pipelines provide a structured workflow for moving applications from development through staging to production, ensuring a robust deployment process. Let’s delve into the stages of a Heroku Pipeline and how they apply to deploying ML models.
A Heroku Pipeline is a sequence of stages that represent the lifecycle of an application. It's particularly useful in MLOps for managing and automating the deployment of machine learning models, providing a clear path from initial review to final release.
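Pipelines can be created from the Heroku dashboard or from the CLI. A minimal sketch, where the pipeline and app names (mlops-pipeline, myapp-staging, myapp-prod) are placeholders for your own:

# Create a pipeline and attach an existing app as its staging stage
heroku pipelines:create mlops-pipeline --app myapp-staging --stage staging

# Attach another app as the production stage of the same pipeline
heroku pipelines:add mlops-pipeline --app myapp-prod --stage production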
2.1 The Review Apps Stage
The Review Apps stage is the initial phase of the pipeline, primarily used for reviewing and testing new features, bug fixes, or updates.
Purpose: This stage allows developers and project stakeholders to review changes in a live application environment before merging them into the main codebase.
Functionality: When a new pull request is made in the connected GitHub repository, Heroku automatically creates a temporary app (Review App) based on the pull request. This app includes the changes made in the feature branch, providing a realistic environment for review and testing.
Usage in MLOps: For ML models, the Review Apps stage can be used to test new model versions, experiment with different parameters, or try out new features.
Setting Up Review Apps
Connect to GitHub: In your Heroku pipeline, connect to your GitHub repository containing the ML project.
Configure App Creation: Choose to create Review Apps automatically for pull requests or manually select which ones to create.
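Review Apps are normally configured from the pipeline's dashboard, but the CLI exposes them as well. A hedged sketch, assuming the placeholder pipeline name from before, that your repository contains an app.json describing how review apps should be built, and that the exact flags may vary with your CLI version:

# Enable review apps for the pipeline, creating one per pull request
# and destroying it automatically when the branch goes stale
heroku reviewapps:enable --pipeline mlops-pipeline --autodeploy --autodestroy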
2.2 The Staging Stage
After review, the next stage in the pipeline is Staging, which is essentially a pre-production environment that mimics production.
Purpose: The Staging stage is for final testing after merging the pull requests and before deploying to Production. It serves as a last checkpoint to catch any potential issues.
Testing and Validation: In this stage, more rigorous testing and validation of the ML model are performed, ensuring that it behaves as expected in a production-like environment.
Deploying to Staging
Promote Code from Review Apps: Once the code in the Review App is approved, promote it to the Staging stage.
Test Rigorously: Run extensive tests to validate model performance and application functionality.
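For example, a simple smoke test can hit the staging app's /predict endpoint directly. The URL and payload below are assumptions based on the FastAPI app and the myapp-staging name used later in this post:

# Send a sample payload to the staging deployment and inspect the prediction
curl -X POST "https://myapp-staging.herokuapp.com/predict" \
  -H "Content-Type: application/json" \
  -d '{"feature1": 1.0, "feature2": 2.0}'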
2.3 The Production Stage
The final stage of the Heroku Pipeline is Production, where the application is live and accessible to users.
Purpose: The Production stage is where the fully tested and vetted application resides. For ML models, this means the model is now serving predictions to end-users or other systems.
Monitoring and Scaling: Continuous monitoring for performance and accuracy is crucial. Heroku offers easy scaling options to manage traffic and computational loads.
Deploying to Production
Promote from Staging: Once the application in the Staging stage is thoroughly tested and confirmed stable, promote it to Production.
Monitor Performance: Use Heroku's monitoring tools to track the application’s performance, uptime, and resource usage.
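A minimal sketch of that promotion and the follow-up checks, using the same placeholder app names as before:

# Promote what is currently running on staging to the downstream production app
heroku pipelines:promote --app myapp-staging

# Watch the production logs and dyno status after the promotion
heroku logs --tail --app myapp-prod
heroku ps --app myapp-prod

# Scale out the web process if traffic grows
heroku ps:scale web=2 --app myapp-prod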
3 - Deployment With Heroku
Deploying machine learning models on Heroku with GitHub Actions offers a seamless, automated path from code updates to production. This method leverages CI/CD pipelines to ensure that your ML models are consistently tested, built, and deployed with minimal manual intervention.
3.1 GitHub Actions & Container Registry
GitHub Actions is a CI/CD tool integrated within GitHub that enables the automation of build, test, and deployment processes directly from a GitHub repository.
Container Registry on Heroku stores Docker images, which can be deployed as web applications. It allows you to manage and deploy container-based applications, crucial for ML models that often require specific environments.
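Before wiring this into GitHub Actions, it can help to see the equivalent manual flow. A minimal sketch, assuming Docker is installed, a Dockerfile sits in the current directory, and the app is named myapp-staging:

# Authenticate the Docker client against Heroku's Container Registry
heroku container:login

# Build the image from the local Dockerfile and push it as the web process
heroku container:push web --app myapp-staging

# Release the pushed image so it starts serving traffic
heroku container:release web --app myapp-staging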
3.2 Deployment with Heroku
You can follow the steps below on how to do deployment with Heroku, or, if you prefer a video format, there’s a step-by-step guide you can watch instead.
1. Preparation of ML Application Components
main.py (FastAPI Application): This Python file hosts your FastAPI application, including a /predict endpoint for ML model predictions.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionInput(BaseModel):
    feature1: float
    feature2: float

@app.post("/predict")
def make_prediction(input_data: PredictionInput):
    # Logic for ML model prediction
    return {"prediction": "result"}
requirements.txt (Dependencies): Lists the necessary Python packages.

fastapi==0.65.1
uvicorn==0.13.3
scikit-learn==0.24.1
Dockerfile (Container Configuration): Specifies how to build the Docker container for the application.

FROM python:3.8
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
docker-compose.yml (Docker Compose Configuration): Used for defining and running multi-container Docker applications.

version: '3'
services:
  web:
    build: .
    ports:
      - "80:80"
pytest.ini (Test Configuration): Configuration for running tests with pytest.

[pytest]
minversion = 6.0
addopts = -ra -q
testpaths = tests
runtime.txt (Runtime Specification): Specifies the Python runtime version for Heroku.

python-3.8.10
start.sh (Startup Script): A script that starts the FastAPI app, typically used in container environments.

#!/bin/sh
uvicorn main:app --host=0.0.0.0 --port=${PORT:-5000}
test.py (Test Script): Contains tests for the application.

from fastapi.testclient import TestClient
from main import app

client = TestClient(app)

def test_read_predict():
    response = client.post("/predict", json={"feature1": 1.0, "feature2": 2.0})
    assert response.status_code == 200
    assert "prediction" in response.json()
2. Setting Up GitHub Actions for Continuous Deployment
workflow.yml (Workflow for Staging): Defines the GitHub Actions workflow for deploying to a staging environment.

name: Deploy to Heroku Staging
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to Heroku Container Registry
        run: heroku container:login
      - name: Build and Push Docker Image
        run: |
          docker build -t registry.heroku.com/myapp-staging/web .
          docker push registry.heroku.com/myapp-staging/web
      - name: Deploy to Heroku
        run: heroku container:release web --app myapp-staging
production.yml (Workflow for Production): Similar to workflow.yml, but configured for production deployment. Trigger this manually or after a successful staging deployment.
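How you trigger it depends on how the workflow is defined. As one hedged sketch: if production.yml declares a workflow_dispatch trigger, you could start it from the GitHub CLI (gh), which is an assumption on top of this post rather than something Heroku requires:

# Manually start the production workflow (requires a workflow_dispatch trigger in production.yml)
gh workflow run production.yml

# Follow the run that was just started
gh run watch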
3. Deploying to Staging and Production
Staging Deployment:
Triggered by a push to the main branch in your GitHub repository. The GitHub Actions workflow defined in workflow.yml will build your Docker image, push it to Heroku's Container Registry, and release it to your staging app on Heroku.
Production Deployment:
After verifying the staging app, trigger the production deployment.
The production.yml workflow will push the image to a separate production app on Heroku.