In this blog post we’ll zoom in on a few best practices for running Kubernetes in production, and show how these best practices are supported by Otomi. The best practices covered here are only a handful of examples; Otomi supports dozens of Kubernetes (security) best practices out-of-the-box. But more on that in a later post 😉
Limit container capabilities
Securely running workloads in Kubernetes can be difficult. Many different settings impact a cluster’s security posture, and implementing them correctly requires significant knowledge. A read-only root file system, for example, prevents any attack that depends on installing software or writing to the file system. One of the most powerful tools Kubernetes provides in this area is the security context, which every Pod and container manifest can leverage.
Otomi enforces policies to limit container capabilities. Policies can be turned on or off by the platform administrator, and it’s also possible to set default parameters to be used by the policies. The ‘psp-capabilities’ policies can be used to prevent containers from obtaining escalated privileges. See here for a full list of all policies.
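To illustrate, here is a minimal sketch of the kind of securityContext such policies enforce. The image name is illustrative, and the exact fields a given policy requires may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: harbor.example.com/team/app:1.0.0   # illustrative image reference
      securityContext:
        readOnlyRootFilesystem: true      # block writes to the root file system
        allowPrivilegeEscalation: false   # no process may gain more privileges than its parent
        runAsNonRoot: true                # refuse to start the container as root
        capabilities:
          drop: ["ALL"]                   # drop every Linux capability by default
```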
Don't deploy code from unknown sources
If you don’t know the provenance of an image, you can’t trust it. To prevent untrusted images from being pulled and deployed, make sure images can only come from trusted repositories.
This Kubernetes security best practice is implemented in Otomi by the ‘psp-allowed-repos’ policy. After installing Otomi on your Kubernetes cluster, sign in to Otomi Console and go to Settings and then Policies. Here you’ll see the Allowed repositories policy. Add the allowed repositories to the list and enable the policy. From then on, only images from allowed repositories can be deployed.
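Under the hood, an allowed-repositories policy typically corresponds to an OPA/Gatekeeper-style constraint. The sketch below uses the K8sAllowedRepos constraint from the Gatekeeper policy library; the registry prefix is an assumption, and this is not necessarily Otomi’s exact configuration format:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-repos
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "harbor.example.com/"   # only images whose reference starts with this prefix may run
```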
Scan images for vulnerabilities
One way for attackers to target applications is to exploit known vulnerabilities in common dependency code. That’s why you’ll need tools to spot these vulnerable dependencies. Scanning a container image before it is deployed into Kubernetes helps reduce the potential attack surface and stops attackers from stealing data or tampering with your application.
It is advised to automate vulnerability scanning on all images. Otomi incorporates Harbor. In Harbor, you can automatically scan all images for vulnerabilities and also prevent images with vulnerabilities from being deployed.
Specify resource requests and limits
Resource limits define the maximum amount of resources a container can use, while resource requests define the amount of resources that is reserved for a container and cannot be claimed by any other container. It’s best practice to always specify resource requests and limits. Not defining them can lead to resource contention with other containers and unoptimized consumption of computing resources.
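In plain Kubernetes manifests, requests and limits are set per container. A minimal sketch (image and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app
spec:
  containers:
    - name: app
      image: nginx:1.25        # illustrative image
      resources:
        requests:
          cpu: 100m            # reserved for this container at scheduling time
          memory: 128Mi
        limits:
          cpu: 500m            # hard ceiling; usage above this is throttled
          memory: 256Mi        # exceeding this gets the container OOM-killed
```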
When deploying containers using Otomi Console, resource requests and limits are required. When deploying manually, Otomi enforces the use of resource requests and limits. Otomi, in other words, lays down the guard rails to always comply with best practices.
Monitor service availability
Of course, one of the benefits of using Kubernetes is self-healing. But when problems arise, sending alerts to the appropriate team significantly speeds up identifying the root cause of an issue, allowing teams to resolve incidents quickly.
When you deploy workloads using Otomi Services, Otomi not only creates the Knative service, it also creates a Prometheus Blackbox exporter probe for each service. Prometheus in Otomi is configured to collect metrics about probing requests and will create alerts based on those metrics using Alertmanager. Alerts can be sent to Slack, Microsoft Teams, or email. You could say Otomi offers ‘monitoring as code’. Talking about code, let’s move on to the last best practice:
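With the Prometheus Operator, such a blackbox probe can be declared as a Probe resource. The sketch below is illustrative: the exporter address and target URL are assumptions, not what Otomi generates verbatim:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: my-service-probe
spec:
  jobName: blackbox
  prober:
    url: prometheus-blackbox-exporter:9115   # address of the blackbox exporter (assumed)
  module: http_2xx                           # probe succeeds only on HTTP 2xx responses
  targets:
    staticConfig:
      static:
        - https://my-service.team.example.com/healthz   # illustrative endpoint to probe
```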
Store configuration in version control
It’s recommended to store configuration inside source control for easy version-tracking, rollback, and deployment. There are multiple ways to approach this; the most popular is a GitOps workflow.
Otomi works out-of-the-box based on GitOps workflows, using Git as a single source of truth for everything that is managed by Otomi. Any divergence between Git and what’s running on the cluster is automatically reconciled after each commit. When you create teams, change platform configuration (like security policies), or create services and jobs using Otomi Console, Otomi API generates the configuration code and commits it to Git.
In this blog post, we only touched on a few Kubernetes best practices. The goal, however, is to show that Otomi helps you adopt all of these (and many more) best practices. If you would like to try out Otomi yourself, go to https://otomi.io to get started.
Why Otomi: Kubernetes is becoming more and more popular, but it is not necessarily easy to work with. To get the most out of Kubernetes (and to minimize complexity), we have developed Otomi. Otomi offers a full platform experience on top of Kubernetes and comes with a complete suite of pre-configured, integrated, and ready-to-use applications and add-ons. With Otomi, you’ll get both speed and maturity.