“Should we use Kubernetes or go serverless first for new software solutions?”
This is a common question among technology teams across the world. Based on a recent LinkedIn survey, the answer seems to be an even split between the two approaches, with most people flexible based on the project.
Common arguments in favor of Kubernetes include portability, scalability, low latency, low cost, open-source support, and DevOps maturity.
Common arguments in favor of serverless include simplicity, maintainability, shorter lead times, developer experience, talent / skill set availability, native integration with other cloud services, and existing commitment to the cloud.
Is there a way to combine the best of both worlds and create cloud-native, serverless container-based solutions?
Creating a machine learning software system is like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building.
MLOps considerations, such as systematically building, training, deploying, and monitoring machine learning models, are only a subset of all the elements required for end-to-end production software solutions.
This is because a machine learning model is not deployed to production in a vacuum. It is integrated within a larger software application, which itself is integrated within a larger business process with the goal of achieving specific business outcomes.
What does it mean to deploy a machine learning model to production?
As technology leaders, we invest in data science and machine learning engineering to improve the performance of the organization.
Fundamentally, we are solving business problems systematically through data-driven technology solutions. This is especially true when the problem is recurring at scale and must be addressed continuously.
Why adopt a microservice strategy when building production machine learning solutions?
Suppose your data science team produced an end-to-end Jupyter notebook, culminating in a trained machine learning model. This model meets performance KPIs in a development environment, and the next logical step is to deploy it in a production environment to maximize its business value.
We all go through this transition as we take machine learning projects from research to production. It is typically a hand-off from a data scientist to a machine learning engineer, although on my team the same properly trained, full-stack ML engineer handles both roles.
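To make the hand-off concrete, the step from "trained model in a notebook" to "deployable microservice" often amounts to wrapping the model's predict call behind a small HTTP endpoint that can be containerized. The sketch below is illustrative only: `DummyModel` stands in for a real trained model (which would normally be loaded from a serialized artifact), and the `/predict` route and payload shape are assumptions, not a prescribed API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a trained model; in practice this would be
# deserialized from an artifact produced by the training notebook.
class DummyModel:
    def predict(self, features):
        # Toy scoring rule, for illustration only
        return [sum(f) for f in features]

model = DummyModel()

def score(payload: dict) -> dict:
    """Translate a JSON request body into model input and back."""
    features = payload["features"]
    return {"predictions": model.predict(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(score(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Containerize this entry point and it can run on Kubernetes
    # or a serverless container platform alike.
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

Because the service is a plain container with one HTTP entry point, the same image can target either a Kubernetes cluster or a serverless container runtime, which is precisely the flexibility this article explores.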