How do you deploy Lambda functions as Docker containers through CI/CD?
CloudFormation gives us two options for Lambda deployments:
- Zip the code, copy it to S3, and pass the S3 path into the CloudFormation template
- Containerize the code, push it to Elastic Container Registry (ECR), and pass the ECR image URI into the CloudFormation template
Zip deployments are the most straightforward and are where we all start. So why would we choose to deploy Lambda functions as Docker containers?
If you import external libraries in your Lambda functions, you may have run into the Lambda layer size limit (250 MB unzipped). This is especially common in machine learning pipeline components, where we may need to import several heavy libraries such as Pandas, Scikit-learn, and TensorFlow. Given that certain dependencies are mandatory, Docker containers provide an excellent solution: Lambda container images can be up to 10 GB.
Within SageMaker Studio, we start by creating a requirements.txt file with all the required libraries for the given Lambda function:
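A minimal requirements.txt might look like the following (the specific libraries are just examples taken from the ML use case above; list whatever your function actually imports):

```text
pandas
scikit-learn
tensorflow
```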
Specifying versions is optional: when the image is built, pip will resolve a compatible combination of versions for all the libraries.
Next, create a Dockerfile for the Lambda function:
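A minimal sketch, assuming the function code lives in a lambda/ folder with its entry point in handler.py (both names are illustrative, as is the Python version):

```dockerfile
# AWS-provided Lambda base image for Python (pick the runtime you target)
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the image instead of a Lambda layer
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the function code into the Lambda task root
COPY lambda/ ${LAMBDA_TASK_ROOT}

# Entry point in file.function form: handler() inside handler.py
CMD ["handler.handler"]
```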
The lambda folder in this same directory contains all the Lambda code. If you are transitioning from zip deployments to container deployments, you don’t need to modify your Lambda code at all because it is deployment agnostic.
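For illustration, a hypothetical handler like the one below runs unchanged under either packaging option; nothing in it references zip archives or container images:

```python
import json

def handler(event, context):
    # An ordinary Lambda handler: the same code runs whether the function
    # is deployed from a zip archive or from a container image.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```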
Pull, commit, and push the changes to CodeCommit. Once code review is complete and the CI/CD pipeline in CodePipeline is triggered, CodeBuild performs the deployment:
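A buildspec sketch along these lines, where the repository, bucket, stack, and parameter names are placeholder assumptions:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against the account's ECR registry
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
  build:
    commands:
      # Container deployment: build the image and push it to ECR
      - docker build -t container-lambda .
      - docker tag container-lambda:latest $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/container-lambda:latest
      - docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/container-lambda:latest
      # Zip deployment for another function: package the code and copy it to S3
      - zip -r zip-lambda.zip zip_lambda/
      - aws s3 cp zip-lambda.zip s3://$ARTIFACT_BUCKET/zip-lambda.zip
  post_build:
    commands:
      # Deploy the stack, passing the artifact locations as parameter overrides
      - aws cloudformation deploy --template-file template.yml --stack-name ml-pipeline-stack --capabilities CAPABILITY_IAM --parameter-overrides ContainerImageUri=$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/container-lambda:latest ZipS3Key=zip-lambda.zip
```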
The buildspec.yml file covers both the Docker container deployment and the traditional zip deployment. You can mix and match, because the deployment decision (zip vs. image) is made on a per-Lambda basis.
When running aws cloudformation deploy from the CLI, we pass the S3 path or ECR image URI through --parameter-overrides to populate the CloudFormation template's Parameters section. The parameter is then referenced by the Lambda function in the Resources section of the template:
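A template fragment for the container case might look like this (the parameter and resource names are illustrative, and LambdaExecutionRole is assumed to be defined elsewhere in the template; note that image-based functions set PackageType to Image and omit Handler and Runtime):

```yaml
Parameters:
  ContainerImageUri:
    Type: String

Resources:
  ContainerLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: container-lambda
      PackageType: Image
      Code:
        # Populated at deploy time via --parameter-overrides
        ImageUri: !Ref ContainerImageUri
      Role: !GetAtt LambdaExecutionRole.Arn
      MemorySize: 1024
      Timeout: 900
```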
You can go to the console and verify that the Lambda container has been deployed successfully: you will see the ECR image URI and a message saying your function is deployed as a container image.
However, you will not be able to see the actual code in the console. My team likes this because it forces us to modify the code through CI/CD rather than manually in the console. CloudFormation already instills this habit, and Lambda container deployments enforce it further.
Deploying Lambda functions as Docker containers also has the added benefit of an easy transition to ECS Fargate. This becomes necessary if a function needs more than 15 minutes to complete execution (the Lambda timeout limit), or if it requires more memory or cores than Lambda allows.
How do you deploy your Lambda functions? Comment below!
If you need help implementing cloud-native MLOps, building Well-Architected production ML software solutions or training/inference pipelines, or monetizing your ML models in production, or if you have specific solution architecture questions or would like us to review your architecture and provide feedback based on your goals, contact us or send me a message and we will be happy to help you.
Subscribe to my blog at: https://gradientgroup.ai/blog/
Follow me on LinkedIn: https://linkedin.com/in/carloslaraai