Developer self-service for Kubernetes with Otomi

In this blog post, we’ll explain why you would need developer self-service and how you can get started.

Otomi offers developer self-service on top of any Kubernetes cluster. With Otomi, developers can deploy and manage applications, create Kubernetes jobs and cron jobs, create and manage secrets, and publicly expose services with just a couple of clicks, all without writing any Kubernetes YAML manifests.

Why developer self-service for Kubernetes

The ultimate goal of developer self-service is to have less friction in the development process and ensure that developers can deliver customer value faster. This can be achieved by enabling the separation of concerns for both dev and ops teams. The ops team manages the stack and enforces governance and compliance with security policies and best practices. Dev teams can create new environments on demand, create and expose services using best practices, use ready-made templatized options, and get direct access to all the tools they need for visibility. Think of it as paving the road toward fast delivery while minimizing risk through safeguards and standards. Developers can do what they need to do, when they like to, and yes, sometimes not exactly how they would like to do it. The only challenge: building a platform like this takes a lot of time, and not all organizations have the resources to do so.

The goal behind the Otomi open-source project was to offer a single deployable package that offers all of this out-of-the-box. Let’s take a closer look at the concepts behind Otomi, and then explain how you can install Otomi to explore it yourself.

Otomi architecture

Otomi consists of multiple open-source projects and provides a reference configuration that can be used as a quick-start to install and configure a complete suite of integrated open-source applications, an advanced ingress architecture, multi-tenancy, developer self-service, and implemented security best practices. The reference configuration can be modified using Otomi Console, Otomi API, and Otomi CLI, based on a pre-defined configuration schema (in Git). For advanced use cases, the configuration of all integrated open-source applications can be customized.

The Otomi values schema can be overridden with custom configuration values. Custom configuration can be based on all values supported by the open-source integrated charts included in Otomi.

Otomi offers a full set of integrated applications that are ready to use after installing Otomi, to provide developer self-service, observability, build and deploy features, security, connectivity, and application configuration management.

Installing Otomi

Otomi can be installed using Helm. See here for full instructions. For experimentation and evaluation purposes, you can install Otomi with minimal values (just provide the name of your cluster). When installing using minimal values, Otomi uses nip.io for name resolution and automatically generates a CA for certificate creation.

Installing with minimal values is recommended for experimentation purposes only. In all other cases, we recommend using KMS (for encryption of sensitive configuration values) and an external (cloud) DNS service in combination with Let's Encrypt production certificates, or a BYO CA.

The only requirement for Otomi is a running Kubernetes cluster. Quickstarts are also available for AWS, Azure, and GCP; these use Terraform to provision a managed Kubernetes service (EKS, AKS, or GKE) in your cloud of choice and install Otomi with minimal values.

Let’s assume you have the admin credentials of an AKS cluster in Azure running Kubernetes version 1.20.9, with a node pool of 3 Standard_D3_v2 instances, autoscaling enabled (min 3, max 5), Azure CNI configured, RBAC enabled (required), and Azure Policy and Azure Monitor disabled. This is the setup we’ll be using.
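If you still need to create a cluster like this, the setup can be sketched with the Azure CLI. This is a rough sketch, not part of the Otomi docs; the resource group name and region are assumptions, so adjust them to your environment:

```shell
# Sketch: provision an AKS cluster matching the setup described above.
# Resource group name and region are assumptions; adjust as needed.
RG=otomi-demo
CLUSTER=my-cluster
LOCATION=westeurope

# Only run when the Azure CLI is available
if command -v az >/dev/null 2>&1; then
  az group create --name "$RG" --location "$LOCATION"
  az aks create \
    --resource-group "$RG" \
    --name "$CLUSTER" \
    --kubernetes-version 1.20.9 \
    --node-count 3 \
    --node-vm-size Standard_D3_v2 \
    --enable-cluster-autoscaler --min-count 3 --max-count 5 \
    --network-plugin azure
  # Fetch admin credentials so kubectl and helm can reach the cluster
  az aks get-credentials --resource-group "$RG" --name "$CLUSTER" --admin
fi
```

RBAC is enabled by default on AKS, and Azure Policy and Azure Monitor are add-ons that stay disabled unless explicitly enabled.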

In this example, I’ll be installing Otomi with minimal values. Note (again) that this is recommended for experimentation purposes only.

First, create the following values YAML file (values.yaml). You can change the owner and name properties if you like.


  cluster:
    owner: myself
    k8sVersion: '1.20'
    name: my-cluster
    provider: azure
  otomi:
    adminPassword: '' # Will be automatically generated if not filled in

Then install the chart:


  helm repo add otomi https://otomi.io/otomi-core
  helm repo update
  helm install -f values.yaml otomi otomi/otomi

The installer job will now install Otomi on your cluster. You can follow the progress of the installer by looking at the log output of the installer job:


kubectl logs jobs/otomi -n default -f
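If you prefer to block until the job finishes rather than tail the logs, `kubectl wait` can do that. A small sketch; the 35-minute timeout is an assumption based on typical install times, so pick what suits your environment:

```shell
# Block until the Otomi installer job completes (or the timeout expires).
# The 35-minute timeout is an assumption; adjust to your environment.
NS=default
JOB=otomi
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=complete "job/$JOB" -n "$NS" --timeout=35m
fi
```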

When the installer has finished (which can take around 20 to 30 minutes), copy the URL and the generated password from the bottom of the logs.


2021-11-12T09:26:11.129Z otomi:gen-drone:log gen-drone is finished and the pipeline configuration is written to: /home/app/stack/env/.drone.yml
2021-11-12T09:26:11.129Z otomi:encrypt:debug Skipping encryption
2021-11-12T09:26:11.130Z otomi:commit:info Committing values
2021-11-12T09:26:11.913Z otomi:gitPush:info Starting git push.
2021-11-12T09:26:12.414Z otomi:gitPush:log Otomi values have been pushed to git.
2021-11-12T09:26:12.414Z otomi:commitAndPush:log Successfully pushed the updated values
2021-11-12T09:26:12.713Z otomi:commit:info
    ########################################################################################################################################
    #
    #  To start using Otomi, first follow the post installation steps: https://otomi.io/docs/installation/post-install/
    #  The URL to access Otomi Console is: https://otomi.20.81.68.159.nip.io
    #  The URL to access Keycloak is: https://keycloak.20.81.68.159.nip.io
    #  When no external IDP was configured, please log into Keycloak first to create one or more users and add them either to the 'team-admin' or 'admin' group.
    #  The password of the Keycloak admin user is: ZOdO4Fk8bmYnnb34PedZ
    #
    ########################################################################################################################################
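Instead of copying the URL and password by hand, you can also grep them out of the log output. A small sketch using the excerpt above as sample text; against a live cluster you would pipe `kubectl logs jobs/otomi -n default` through the same commands:

```shell
# Extract the console URL and admin password from installer log output.
# The sample text below is the log excerpt above; with a real cluster,
# pipe `kubectl logs jobs/otomi -n default` through the same grep/sed.
log='#  The URL to access Otomi Console is: https://otomi.20.81.68.159.nip.io
#  The password of the Keycloak admin user is: ZOdO4Fk8bmYnnb34PedZ'

console_url=$(printf '%s\n' "$log" | grep -o 'https://otomi[^ ]*')
password=$(printf '%s\n' "$log" | sed -n 's/.*admin user is: //p')
echo "Console: $console_url"
echo "Password: $password"
```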
    

The first thing you’ll need to do now is create a new user in Keycloak and add the user to the admin group (as indicated in the installer output). Go to the provided Keycloak URL and sign in with the user “admin” and the generated password provided in the logs.

Check here for complete instructions on how to create users in Keycloak.

Now you can sign in to the console. Go to the provided URL and sign in with your newly created user.

As you may have noticed, the browser reports that the connection to this site is not secure. Because we did not use DNS with Let's Encrypt and did not provide our own CA, Otomi has automatically generated a CA for you. But no worries, you can add the generated CA to your keychain. In the left pane of the console, click on Download CA and add the CA to your keychain (the command below is for macOS):


sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/Downloads/ca.crt
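On Linux there is no macOS keychain, but a rough equivalent for Debian/Ubuntu-based systems is sketched below. The download path and target filename are assumptions; adjust them to where you saved the CA:

```shell
# Linux (Debian/Ubuntu) rough equivalent of the macOS keychain command.
# Source path and target filename are assumptions; adjust as needed.
CA_SRC="$HOME/Downloads/ca.crt"
CA_DST=/usr/local/share/ca-certificates/otomi-ca.crt
if command -v update-ca-certificates >/dev/null 2>&1 && [ -f "$CA_SRC" ]; then
  sudo cp "$CA_SRC" "$CA_DST"
  sudo update-ca-certificates
fi
```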
    

To start using Otomi, you’ll also need to activate Drone. To enable Drone, open the Drone app (using the shortcut in Otomi Console), and sign in with OpenID Connect using the newly created user. In Drone, you’ll see the repository of the Otomi values created by Otomi.

Now click Activate, then Activate Repository, and then Save. See here for complete post-installation instructions.

Now you’re ready to create teams, services, secrets, and jobs, and use all the integrated tools for logging, security, metrics, tracing, and much more.

Wrapping up

With Otomi, you can turn any Kubernetes cluster into a complete container platform in minutes instead of months, and at the same time provide developers with self-service. Because of Otomi’s low barrier to entry for developers, they can start today and learn as they go. Go to otomi.io to get started.
