Have you released a machine learning solution to production, only to find yourself pulling KPI metrics manually every day to keep updating stakeholders on results? Or, have you found yourself manually updating Lambda code in the AWS console to quickly fix a production bug a few hours into the release? Both of these common scenarios… Continue reading “3 Degrees of Automation for Production Machine Learning Solutions”
“Should we use Kubernetes or go serverless first for new software solutions?” This is a common question among technology teams across the world. Based on a recent LinkedIn survey, the answer seems to be an even split between the two approaches, with most people flexible based on the project. Common arguments in favor of Kubernetes include… Continue reading “How To Deploy Serverless Containers For ML Pipelines Using ECS Fargate”
Creating a machine learning software system is like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building. MLOps considerations, such as systematically building, training, deploying, and monitoring machine learning models, are only a subset of all the elements required for end-to-end production software solutions. Continue reading “5 Pillars of Architecture Design for Production ML Software Solutions”
What does it mean to deploy a machine learning model to production? As technology leaders, we invest in data science and machine learning engineering to improve the performance of the organization. Fundamentally, we are solving business problems systematically through data-driven technology solutions. This is especially true when the problem is recurring at scale and must… Continue reading “Lifecycle of ML Model Deployments to Production”
My team and I built a cloud-native recommender system that matches open jobs and people who are looking for work. We trained machine learning models to power the system, following the tried-and-true process: set up an end-to-end data science workflow in a Jupyter notebook; use domain knowledge to create the feature space through feature engineering… Continue reading “Custom ML Model Evaluation For Production Deployments”
Why adopt a microservice strategy when building production machine learning solutions? Suppose your data science team produced an end-to-end Jupyter notebook, culminating in a trained machine learning model. This model meets performance KPIs in a development environment, and the next logical step is to deploy it in a production environment to maximize its business value. Continue reading “Microservice Architecture for Machine Learning Solutions in AWS”
From a business leadership standpoint, it always feels risky to deploy a new machine learning model within a production application. “What if the model makes wrong predictions, thereby affecting stable business operations?” “Will our users be negatively impacted by inaccurate model predictions?” “How do we minimize the revenue impact of false positives or false… Continue reading “Shadow Deployments of Machine Learning Models in AWS”
We went live with a new machine learning product that texts the “job of the day” to our associates. This solution leverages our serverless recommendation engine powered by machine learning.
My team and I released a new machine learning solution for our users this week. There is nothing more exciting than seeing all our business KPIs exceed targets. After all, business value is the reason we build, deploy, and scale ML solutions.
When deploying code changes to production, how do you avoid re-building the entire solution and instead build, test, and deploy only the specific component(s) that changed?