Zero trust networking in Kubernetes

Written by Jehoszafat Zimnowoda, Engineering Manager @ Red Kubes

Introduction

Controlling network traffic plays a key role in today’s distributed and dynamically changing systems, including Kubernetes. You can, of course, live without strict control of your network traffic, but that is like driving a car without a seatbelt: everything is fine until an accident happens. In this article, I’ll explain some of the design decisions my team made when implementing a zero-trust network architecture for Kubernetes.

Zero trust

Zero trust means: don’t trust any network, including your own. Translated into the Kubernetes realm: by default, a Pod cannot access any public or private endpoint, and a Pod cannot be accessed by any public or private actor.

A lack of “seatbelts” in a Kubernetes cluster can lead to attacks such as malicious software being downloaded into your containers, data leakage, and DDoS attacks inside the cluster. This is why denying all network traffic by default and applying the principle of least privilege is crucial.
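
As a concrete starting point, a minimal default-deny NetworkPolicy looks like the following sketch (the team-a1 namespace name is an assumption for the example). The empty podSelector matches every Pod in the namespace, and listing both policy types without any allow rules blocks all traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: team-a1   # hypothetical team namespace
spec:
  podSelector: {}      # empty selector matches every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress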

There are many ways to implement zero trust in Kubernetes, depending on the technologies used. In Otomi, we use Kubernetes Network Policies and the Istio service mesh to control ingress and egress network traffic.

In Otomi, each application in each team namespace is accompanied by two sets of rules: one for internal ingress filtering and one for external egress filtering. Each set is composed of trusted endpoints that are either provided by Otomi or user-defined. When I say application, I mean a group of Pods that share the same app label.

I would like to emphasize that leveraging Pod labels is crucial when employing network policies in a Kubernetes cluster, because labels allow you to target a set of Pods that are dynamically spawned and torn down by the Kubernetes scheduler.
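
For illustration, a minimal label-based policy could look like this sketch (the app: database and app: frontend labels are assumptions for the example). Because selection is by label rather than by Pod name, the rule keeps applying no matter how often matching Pods are replaced:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-frontend
spec:
  # Target Pods by label, not by name: every current and future Pod
  # labeled app=database is covered, regardless of rescheduling.
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only Pods with this label may connect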

Internal ingress filtering with Kubernetes Network Policies

Otomi defines a set of trusted platform applications for each Pod in the team namespace. These are:

  • The Istio Ingress Gateway, so a team application can be accessed from outside the service mesh. This, of course, happens only if a user decides to make the application public
  • The Knative activator, to proxy the first request to a Knative service deployed in a team namespace
  • Prometheus, so it can scrape metrics from Pods

A user can also define trusted applications that belong to the same team namespace, or even to another team’s namespace. It is also possible to trust all applications from a given team. Some practical examples are presented in a later section; a sketch of an ingress policy admitting the platform applications follows below.
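
To make the list above concrete, here is a minimal sketch of an allow-only ingress policy admitting such platform applications. The namespace names and label values are assumptions for illustration; Otomi’s actual generated manifests live in the otomi-core repository linked at the end of this article:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: team-app-platform-ingress
spec:
  podSelector:
    matchLabels:
      app: my-app   # hypothetical team application label
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Istio ingress gateway Pods in the istio-system namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: istio-system
          podSelector:
            matchLabels:
              app: istio-ingressgateway
        # Prometheus Pods in an assumed monitoring namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app: prometheus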

External egress filtering with Istio Service Mesh

Network Policies may be limited by the underlying CNI implementation. For example, Calico does not allow defining egress policies for domain names, whereas Cilium does. Since all applications in all team namespaces are automatically deployed with an Istio proxy sidecar, we decided to tackle external egress filtering by leveraging Istio constructs. First, we define a strict outbound traffic policy for each team namespace; then we employ ServiceEntries to define trusted public IP addresses and public domain names. A ServiceEntry can be bound to a namespace or to the entire service mesh, but not to a specific workload. Therefore, AuthorizationPolicy rules can be used to make external egress filtering more fine-grained (coming soon in Otomi).
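
As a minimal sketch, assuming a team-a1 namespace and httpbin.org as a trusted domain, these two constructs could look as follows. The REGISTRY_ONLY mode of Istio’s Sidecar resource blocks any outbound traffic that is not covered by a ServiceEntry:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: team-a1   # hypothetical team namespace
spec:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY   # block egress to anything outside the mesh registry
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin-ext
  namespace: team-a1
spec:
  hosts:
    - httpbin.org   # trusted external domain
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: tls
      protocol: TLS   # TLS passthrough for HTTPS traffic
  resolution: DNS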

Last but not least, Otomi also allows external egress policies for trusted platform public domains, for example to allow access to a public OIDC URL so that applications can verify incoming JWT tokens (if needed).
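
A platform-level ServiceEntry of this kind could be exported mesh-wide instead of to a single namespace; the issuer domain and namespace below are purely illustrative assumptions:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: oidc-issuer
  namespace: istio-system   # hypothetical platform namespace
spec:
  exportTo:
    - '*'   # visible to every namespace in the mesh
  hosts:
    - login.example.com     # hypothetical OIDC issuer domain
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: tls
      protocol: TLS
  resolution: DNS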

Abstracting away the complexity

Kubernetes Network Policies and Istio service mesh concepts may be difficult to understand for users who just want to deploy an app and whitelist trusted endpoints. We decided to let users operate on high-level concepts like Kubernetes services, public domain names, and IP addresses. Otomi translates these into manifests that represent Kubernetes resources.

Teams can use the self-service feature in Otomi to configure ingress and egress network filtering per service.

Configuration as code

All network policies, ServiceEntries, and Sidecar configurations are rendered from configuration parameters stored in Git. This prevents dangling policies in the cluster and lets you follow the just-in-time principle. It also ensures strict network policies for new services registered in Otomi, so that only explicit rules enable ingress or egress traffic.

Practical examples

Note that the configuration code in the following snippets is generated automatically when configuring ingress and egress policies through Otomi’s self-service feature. It is also possible to write this configuration manually by cloning the configuration repository and using the Otomi CLI to validate and apply changes.

An application that disallows all internal ingress and external egress traffic (the default, since no networkPolicy is defined):


teamConfig:
  teams:
    a1:
      services:
        - name: database

A service that allows internal ingress traffic only from service c1 in team a1:


teamConfig:
  teams:
    a1:
      services:
        - name: database
          networkPolicy:
            ingressPrivate:
              mode: allowOnly
              allow:
                - team: a1
                  service: c1

A service that needs to access the https://httpbin.org domain and the public IP 116.203.255.68 on port 443:


teamConfig:
  teams:
    a1:
      services:
        - name: c1
          networkPolicy:
            egressPublic:
              - domain: 'httpbin.org'
                ports:
                  - protocol: HTTPS
                    number: 443
              - domain: '116.203.255.68'
                ports:
                  - protocol: TCP
                    number: 443

Takeaways

If your organization is planning to employ strict network traffic policy enforcement, give development teams time to adapt. Invest in automating the deployment of network policies, service entries, and sidecar configurations. Hide technology details from users and let them operate on high-level concepts like trusted public endpoints and trusted cluster applications.

You may wonder about the implementation of particular pieces. For your convenience, I have prepared direct links to the otomi-core GitHub repository.

Internal ingress filtering:

https://github.com/redkubes/otomi-core/blob/master/charts/team-ns/templates/networkpolicy.yaml

External egress filtering:

https://github.com/redkubes/otomi-core/blob/master/charts/team-ns/templates/istio-serviceentry.yaml

https://github.com/redkubes/otomi-core/blob/master/charts/team-ns/templates/istio-sidecar.yaml

