How Otomi supports SRE when using Kubernetes

What is SRE?

SRE focuses on providing a stable, operational platform that is both maintainable and scalable. The long-term aim is to optimize service operations through automation, provide developers with self-service, and free up operations to focus on the next issue.

To implement SRE successfully, an organization needs the right culture, collaboration between development and operations (and security), well-implemented processes, skilled people, and of course a good technical architecture. This is where Otomi comes in when working with Kubernetes.

How Otomi supports SRE

Otomi provides a reference configuration (Otomi Values) that can be used as a quick start to install and configure a complete suite of integrated open source applications, an advanced ingress architecture, multi-tenancy, developer self-service, and security best practices. The reference configuration can be modified through the Otomi Console and the Otomi API, based on a predefined values schema. SRE can change and optimize the reference configuration when needed. There are two supported options:

  • Standard: modify the configuration using the Otomi values schema
  • Advanced: customize the configuration using overrides

Let’s take a closer look at both options.

Standard

Out of the box, Otomi comes with an extensive values schema. Most of the standard values (platform configuration) can be modified using the Otomi Console. Changes made through the Console are translated into configuration code (based on the values schema). Schema-supported values that cannot be changed using the Otomi Console can be modified in the Otomi Values repository (by default Gitea is installed, but an external repository like GitHub is also supported). Otomi supports Visual Studio Code integration for autocompletion based on the Otomi schema. The Otomi values schema covers the most common use cases when working with Kubernetes.
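
As an impression, a schema-supported fragment in the values repository could look like the sketch below. The keys shown (cluster, alerts) are illustrative assumptions and should be checked against the actual Otomi values schema:

    # Illustrative sketch of schema-supported Otomi values
    # (key names are assumptions, not an exact copy of the schema)
    cluster:
      name: demo
      provider: azure          # cloud provider the cluster runs on
    alerts:
      receivers:
        - slack                # send platform alerts to Slack

Because the schema is known to the editor, Visual Studio Code can autocomplete and validate these keys while editing.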

Advanced

For advanced use cases, the configuration values of all integrated open source applications can be customized. Together with the fully integrated observability suite, SRE can proactively monitor the resource usage of the integrated open source applications (like Istio and Nginx Ingress) and optimize the configuration when needed.

In this case, the Otomi values schema can be overridden with custom configuration values: any value supported by the charts of the integrated open source applications in Otomi Core can be used.

SRE can use the Otomi Console to change configuration settings (like security policies), but can also edit the Otomi Values directly, using the Otomi values schema and overrides. In all cases, the configuration is stored as code (in the otomi-values repository).

The following figure shows the configuration values of the nginx-ingress chart.

Lines 1-7 are configuration options supported by the Otomi values schema. Lines 8-11 add specific (not schema-supported) configuration values using overrides (rawValues).
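
In spirit, such a fragment combines schema-supported settings with a rawValues block, as in the sketch below. The keys under nginx-ingress are illustrative assumptions rather than an exact copy of the Otomi values schema; rawValues carries the settings the schema does not cover:

    # Illustrative nginx-ingress fragment (key names are assumptions)
    charts:
      nginx-ingress:
        autoscaling:
          enabled: true
          minReplicas: 2
          maxReplicas: 10
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
        # Anything not covered by the Otomi values schema goes here
        # and is passed to the chart as-is
        rawValues:
          controller:
            config:
              proxy-body-size: 8m
              use-gzip: "true"

Because these overrides live in the otomi-values repository like any other change, they remain versioned and reviewable.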

Summary

Otomi provides SRE with a fully implemented reference architecture for Kubernetes. By default, the architecture supports the most common use cases when working with Kubernetes and allows SRE to fine-tune and optimize the configuration when needed.

Besides offering a complete reference architecture, Otomi also supports developer self-service. SRE can onboard new development teams in minutes and provide them with a shared space on a cluster, together with all the tools they need.
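
As an impression of what onboarding looks like in the configuration, adding a new team boils down to a small block of values. The teamConfig keys below are illustrative assumptions, not an exact schema reference:

    # Illustrative sketch of onboarding a new team
    # (teamConfig keys are assumptions, check the Otomi values schema)
    teamConfig:
      teams:
        demo:
          oidc:
            groupMapping: demo   # hypothetical mapping to an IdP group
          alerts:
            receivers:
              - slack            # team-scoped alert routing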

The following figure shows how SRE and developers both use Otomi.

SRE monitors and optimizes the platform, while development teams can take advantage of all Otomi features like Services, Jobs, and Secrets, and of all integrated team-aware applications like HashiCorp Vault, Harbor, Prometheus, Loki, Grafana, Alertmanager, and more.
