DIY Kubernetes-based platform building – part 3

In this series of three posts, we take a look at how Kubernetes fits into the broader technology landscape, and how an enterprise container platform is crucial for digital transformation and the adoption of cloud-native.

Looking back at part 2

In the second part of this series, we discussed what to expect from a Container Platform, and pointed out that Kubernetes only scratches the surface of what a Container Platform needs.

We also noted that Kubernetes is merely one building block in a collection of building blocks needed to create digital services and products. It has no intrinsic business value in and of itself. Even an Enterprise Container Platform has no intrinsic business value. It’s the time and toil saved that has value.

In this third part, we’ll discuss Container Platform deployment models and we’ll take a closer look at The Do It Yourself (DIY) platform building approach, Managed Services, and using off-the-shelf products (COTS).

Enterprise Container Platform Deployment Models

Choosing a suitable deployment model requires you to think holistically. Not just about Kubernetes or the Enterprise Container Platform, but also about infrastructure (compute nodes, storage, networking), bandwidth costs to and from applications already in the cloud (or in SaaS), and even latency to and from your on-prem applications. Additionally, lock-in, flexibility and the cost of migrating to another platform ultimately impact the best solution for you.

Remember also that the Enterprise Container Platform and Kubernetes do not have much intrinsic business value. It’s what you do with them (and at what cost and effort) that determines their value. Does it make sense to invest a lot of time and effort into building your own? Perhaps at a certain scale (and the complexity that comes with it). Does it make sense to minimize cost and effort? Probably. Does it make sense to unburden your staff and reduce their cognitive load? Absolutely. With that in mind, there are a couple of popular delivery and deployment models for Kubernetes and Enterprise Container Platforms. Let’s take a moment to set the stage and fully understand the what and how of each option.

Do-it-yourself

In the DIY approach, an organization designs, builds and operates the entire Enterprise Container Platform itself, including lifecycle management, integration of the platform’s different components, support, and break/fix. This is the most flexible option, with the freedom to design a solution exactly to spec, but it comes with the highest operational burden, cognitive load and complexity.

You are entirely responsible for keeping the lights on, fixing issues, supporting the platform’s users and changing the architecture and design as requirements evolve. As DIY platforms often have only a single, internal customer (albeit consisting of multiple teams or business units), the economies of scale are not great. Support and quality of features probably lag behind what commercial Enterprise Container Platform vendors can offer in terms of turn-key experience.

The biggest challenge when going down the DIY path may not be the initial setup, but rather the ongoing effort of maintaining the integrations between all of the tools in the platform, which are fragile and full of custom ‘glue’ code unique to your organization. These integrations tend to break during system upgrades and configuration changes, causing production outages and disrupting developer workflows. As the value of an Enterprise Container Platform lies in using it to create and run modern applications, not in the platform itself, any issue with the platform impacts business value directly.

This challenge translates directly into the cost of the DIY approach. While you may save on licensing costs initially, the meter starts running when you factor in the hours of highly-paid experts for the initial deployment, upgrades, and the creation and maintenance of glue code and integrations for security, authentication, single sign-on, monitoring and metrics, distributed tracing, the service mesh, load balancers, et cetera. You get the picture. And none of these costs scale particularly well: the platform team has only a single internal customer, so there are no economies of scale to benefit from.
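To make that ‘glue’ concrete: even a single integration, such as wiring single sign-on into the Kubernetes API server via OIDC, is hand-maintained configuration along the lines of the sketch below (using kubeadm’s ClusterConfiguration format; the issuer URL, client ID and claim names are placeholders). Every change to the identity provider or the cluster risks breaking it, and a DIY team owns keeping it all in sync.

```yaml
# Illustrative sketch: OIDC single sign-on glue for the Kubernetes API server.
# The issuer URL, client ID and claim names are placeholders — each is
# environment-specific configuration a DIY platform team must maintain by hand.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://sso.example.com/realms/platform"  # placeholder IdP
    oidc-client-id: "kubernetes"                                # placeholder client
    oidc-username-claim: "email"
    oidc-groups-claim: "groups"
```

And this is only authentication; similar hand-rolled configuration exists for monitoring, tracing, the service mesh, and every other component in the stack.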

As we can see in the figure above, the operational complexity drives up the personnel cost for SRE and Cloud Ops experts, never reaching the break-even point.

Managed Services

In the Managed (or Hosted) Service approach, a company makes it its core business to offer a commercial service for running Kubernetes or an Enterprise Container Platform. It is the easiest way to get started, and quick onboarding is a major upside. It is also the least flexible option of the three, as it relies heavily on standardization, which may or may not fit your requirements and use case right now, and will almost certainly drift away from them the longer you use the service.

It shifts control over lifecycle management to the service provider, meaning you have little control over when scheduled downtime for maintenance or updates happens. It also means you have little control over versioning: versions of Kubernetes can lag significantly, or version upgrades may break backward compatibility, leaving you scrambling to prevent issues. Simply put: the service provider’s lifecycle management planning is not aligned with your business planning.

There are advantages in economies of scale, though. Managed services tend to have many customers, so deeper and broader investments in feature quantity, feature quality and quality of service (like self-service portals, on-demand configuration and provisioning) make sense. That means you get more and better features, at a lower cost, and with a better user experience. Simply put: they can deliver a quality of service that is hard to match for internal platform teams DIY’ing an Enterprise Container Platform. A managed service also removes most, if not all, operational work from the customer’s plate, reducing cognitive load so teams can focus on business-related projects instead of infrastructural plumbing.

Don’t forget that managed services often come with additional friction in the form of lock-in. In many (but not all) cases, using a managed service also means using the service provider’s hosted infrastructure, locking you into their services for cluster nodes, storage, and networking. Especially in the public cloud realm with AKS, EKS, and similar services, this lock-in leads to additional, hidden costs and a loss of freedom of choice and flexibility for adjacent services like compute, storage and networking.

The biggest downside, however, is that managed Enterprise Container Platforms still require varying levels of DIY to make them work; it’s almost as if having some part of the infrastructure chronically broken is considered normal. From identity providers and single sign-on, to distributed tracing sidecars; from monitoring integrations to complex CI/CD pipelines: most managed services focus on offering a single service (like Kubernetes), not a complete Enterprise Container Platform.

Conversely, while PaaS products like Red Hat OpenShift, VMware Tanzu and Rancher are Enterprise Container Platforms, they are prescriptive in what tools to use, locking you into their ecosystem and making it hard to leverage the power of open source and the CNCF’s ecosystem of complementary products.

Off-the-shelf products

Does the current state of Kubernetes as an enterprise container platform leave us with either DIY (resulting in a bespoke, hard-to-manage and expensive but tailored solution) or a one-size-fits-all managed and hosted service that’s nowhere near complete but still locks you into a cloud?

No. A third option is that of a product, platform, or distribution. These options package up the complexity into an engineered solution and balance the freedom and flexibility of DIY with the simplicity of using a service. It is the best of both worlds, so to speak.

Red Kubes’ Otomi Container Platform is a good example of this approach. It is a complete Enterprise Container Platform, packaged up as a ‘distribution’ for self-hosted deployment. Its core value is in the pre-integration of all the open-source components that make up an Enterprise Container Platform, removing the downside of DIY and cloud lock-in, while keeping the upsides.

It’s a single, turn-key deployable that helps you get started with a feature-complete Enterprise Container Platform quickly. It removes the complexity of design and implementation: a single deployment package takes care of the entire installation of components, with sane, secure defaults and enterprise-grade scaling and functionality. Simply add a Kubernetes environment, like a managed service from a cloud vendor or an on-prem distribution, and you’re off to the races.

Otomi reduces the operational burden of keeping the lights on and upgrading components: Red Kubes takes care of lifecycle management by providing updates and new versions of the entire stack as a whole, so you don’t have to manage each individual component. Because the custom ‘glue’ that integrates each component is now a fully supported part of the product, the operational fragility of DIY disappears: Otomi has the economies of scale to invest engineering time in maintaining the integrations and improving the glue code across its entire customer base. That means personnel costs can be reduced over time even as the cloud infrastructure estate keeps growing, further optimizing the total cost of ownership while keeping complexity down.

Because Otomi is self-hosted (it’s deployed on top of a customer’s cluster), you’re still in control of when to upgrade, if at all. This removes the operational downside of being at the mercy of a service provider for lifecycle management. Where managed services often lock you into additional hosting services, Otomi gives you the freedom to self-host, as well as mix and match clusters across on-prem, hosted environments and the public cloud. This reduces dependency on third parties, reduces cost and increases deployment options for specific use cases: you can deploy the Enterprise Container Platform on the most cost-efficient cloud, closest to your SaaS data, or even close to your existing on-prem data center assets.

What is the best option for you?

The opportunity cost of choosing the wrong deployment model can be significant: crucial deals lost, market share eroded, or customers walking away due to bad user experience and non-competitive pricing. And isn’t that the point of developing modern applications? To digitally transform your organization? To create digital products and services, and gain market share before your competitors do? Prioritize adding differentiated value instead of adding infrastructure-related engineering work.

First-mover advantage is crucial for commercial success, so quick time-to-value, like managed solutions deliver, is key. But lock-in lurks, preventing flexibility and optimization (for cost, features or performance), so choosing the right Enterprise Container Platform is crucial to get the balance between speed, flexibility, features and cost right.

For long-term success, distributions like Otomi are the most flexible and able to bend with your changing requirements, while minimizing operational work and cost; helping you to get the most out of Kubernetes and the Enterprise Container Platform.

With this innate ability to breathe with, instead of against, the organization, while providing the benefits of a managed solution, technical debt does not build up and teams can fully use Kubernetes’ potential to drive their application development needs.

The success of adopting a container platform completely depends on your development teams, and whether they are happy to work with the platform. Happy means many things, ranging from the ability to spin up new applications or development pipelines quickly, reduction of operational work, platform resilience, ease of creating new clusters and changing existing ones on-demand, and self-service capabilities. The question remains: how can you best deliver these Enterprise Container Platform features that make your engineers happy?

With the developer self-service capabilities in Otomi, you create a Service, point to the container image you would like to deploy, and add a hostname for the public URL. Otomi then deploys your app and provides a URL where you can directly access your application. The entire deployment, from scaling and storage to configuring load balancing, SSL termination, and ingress, is handled automatically.
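For comparison, here is a minimal sketch of what that self-service flow replaces: the plain Kubernetes manifests a team would otherwise write and maintain by hand for a single HTTP application. All names, the image, and the hostname below are illustrative placeholders, not Otomi-generated output.

```yaml
# Illustrative hand-written equivalent of one self-service deployment:
# a Deployment, a Service, and a TLS-terminating Ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels: { app: hello }
  template:
    metadata:
      labels: { app: hello }
    spec:
      containers:
        - name: hello
          image: registry.example.com/hello:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector: { app: hello }
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  tls:
    - hosts: [hello.example.com]
      secretName: hello-tls            # assumes a TLS certificate secret exists
  rules:
    - host: hello.example.com          # placeholder public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port: { number: 80 }
```

Multiply this by every application and team, plus load balancer and certificate management, and the value of automating it away becomes clear.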

The DIY patchwork that makes up an Enterprise Container Platform is almost certainly not the best way to move forward, because creating simple platforms that remove complexity is hard. And considering Kubernetes’ complexity in particular, simplicity is very hard to achieve in the Enterprise Container Platform space.

Wrapping up

Kubernetes is the de facto platform for deploying modern applications to production. However, it’s only a small part of an Enterprise Container Platform. Getting started with a container platform is a daunting task, and there are many options to choose from, from do-it-yourself to fully managed services.

Our recommendation is to look at a commodity, off-the-shelf solution like Otomi Container Platform: a complete Enterprise Container Platform that prevents cloud lock-in and removes the complexity of DIY, while still reaping the benefits of easy onboarding and seamless upgrades in the future. With Otomi, your developers and cloud engineers can work on high-value business projects instead of keeping the lights on for Kubernetes and the Enterprise Container Platform, and its developer self-service capabilities let teams fully use Kubernetes’ potential to drive their application development needs.
