What is Kubernetes, session 2: An overview of cyber security

In this blog we dive into the evolution of security in Kubernetes, emphasizing the shift from human-centric to machine-centric security measures. We discuss the challenges posed by containerization and cluster computing, and the role of Kubernetes in providing comprehensive security controls.

In the previous article we set out to identify the areas of concern when it comes to utilizing the tools Kubernetes gives us. The first thing on our mind should be security. Even though security in (networked) computing is a very broad subject covering many aspects, we think it is important to at least have an overview of its evolution, because security should not be seen as a “specialty” left to others. To make security part of every step in design and implementation, it helps to see and understand it at the most fundamental level: any actor should be verified and given the least privileges needed, for the duration of the task to be performed. If these requirements are made part of the software design, security is very likely to come out strong, especially when it is monitored and fine-tuned in a feedback loop. But first, let us dive into some historical background.

Humans don’t like straitjackets

We all know stories of organizations that take security very seriously, yet have policies requiring humans to ask other humans for temporary access to resources, resulting in permanent workarounds like credential sharing or network bypasses. If it were possible to get permissions on the fly (because we were vetted already and can thus be trusted to perform the task given to us, until proven otherwise), there would be less need to create workarounds. Identifying the requester of permissions also allows us to oust a person when they go outside their intended scope of operation. Security used to be mostly about humans and about catching social engineering ploys, in which intruders impersonated somebody else to use their (elevated) permissions.

But machines can be hijacked

When the internet became popular and firewalls were invented to block malicious payloads, the domain of monolithic applications was mostly seen as a black box from a security perspective; apps mostly ran on one big computer. When hacking evolved into a lucrative and disruptive business, and 0-day vulnerabilities started to be sold on dark markets, we quickly realized that cyber security should be part of operations. We learned to mitigate intrusions by hardening the machine and minimizing the number of vulnerable processes, and thus the “attack surface”. We tailored the exact permissions app processes would need and set up sensors on the host system to signal malicious behavior. To limit the blast radius of an intrusion event, we started segmenting processes as much as possible.

The innovations of centralized cloud control

Cloud computing brought many capabilities under one roof, enabling centralized role-based access control that was easy to audit. And with composable roles it finally became possible to build comprehensive permission schemes. Machine actors became first-class citizens, and key management systems arrived that provision short-lived (preferably one-time) access tokens just in time. Compromised credentials could thus be prevented from being reused outside the context in which they were issued.
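
To make that concrete, here is a minimal sketch of what just-in-time, short-lived credentials can look like in practice, using AWS STS purely as an illustration; the role ARN, session name and duration below are made-up examples, not something from this series.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Exchange the caller's own identity for a short-lived, scoped credential.
	// The role ARN and session name below are hypothetical.
	sess := session.Must(session.NewSession())
	svc := sts.New(sess)

	out, err := svc.AssumeRole(&sts.AssumeRoleInput{
		RoleArn:         aws.String("arn:aws:iam::123456789012:role/ci-deployer"),
		RoleSessionName: aws.String("pipeline-run-42"),
		DurationSeconds: aws.Int64(900), // 15 minutes, the minimum STS allows
	})
	if err != nil {
		log.Fatal(err)
	}

	// The returned credentials expire on their own; nothing to revoke by hand.
	fmt.Println("temporary access key:", aws.StringValue(out.Credentials.AccessKeyId))
	fmt.Println("expires at:", out.Credentials.Expiration)
}
```

The particular provider does not matter; the point is that the credential is bound to one session and expires on its own, so there is nothing long-lived to steal or to revoke manually.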

Cloud vendors offered tight provisioning APIs, allowing pre-validation of declarative configurations encompassing their entire domain. This Software Defined Everything (SDx) approach started changing the way we think and operate. Software Defined Networking controls were unfortunately Layer 4 oriented, forcing us to manage lists of static IPs.

Containerization brought network multiplication and proliferation

These lessons were not lost when containerization entered the fray. But when cluster computing became popular, it introduced a whole new playing field for operators and hackers alike. On a single cluster many apps are deployed simultaneously, each comprising multiple containers working together. Overlay networks and service discovery made it possible for these containers to talk to one another, but the blast radius of a vulnerability was suddenly multiplied by the number of containers open to others (with more risk of lateral movement) or to the outside world. We needed a new form of network segmentation.

Containerization also brought Functions as a Service, where a REST request spins up a containerized workload on the fly for the duration of that request, in essence creating a short-lived, isolated service handler that makes it harder for attackers to gain a foothold. Moving away from long-lived services forces attackers to replay and thus script out their attacks in order to continue their explorations, which in theory should make malicious activity easier to detect.

Kubernetes captures it all (or most of it)

After some years in which orchestration platforms emerged (mostly catering to a world of legacy systems, and tying together point solutions the “unix way”), Kubernetes rose to the top. It laid out a comprehensive SDx approach to managing containerized workloads, capturing the entire lifecycle from ingress to egress, and in a very elegant way from an architecture perspective. Thanks to workload labeling, networking was no longer tied to lists of static IPs, and it brought with it much-needed security controls such as label-based network policies (with Layer 7 policies within reach of CNI plugins and service meshes). Its extensibility through Custom Resource Definitions allowed third parties to build new constructs, like service meshes. Such meshes already made federation of interconnected geo-located clusters a reality, bringing edge computing as close as possible to the end user. (And increasing the attack surface by a factor of n, as you can imagine. But I will leave the implications to the reader’s imagination ;)
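
As a small sketch of what label-based segmentation looks like, the snippet below builds a plain NetworkPolicy object in Go and prints it as a manifest. The namespace and app labels are invented for the example, and true Layer 7 rules would come from a CNI plugin or service mesh rather than from this core resource.

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// An ingress policy for pods labeled app=backend: only pods labeled
	// app=frontend in the same namespace may reach them. All other ingress
	// traffic to the selected pods is dropped. Names and labels are illustrative.
	policy := networkingv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "backend-allow-frontend", Namespace: "shop"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "backend"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "frontend"}},
				}},
			}},
		},
	}

	// Print the manifest; in a real setup you would apply it with kubectl
	// or create it through the API using a client library.
	out, err := yaml.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Because the selectors match labels instead of IP addresses, the policy keeps working as pods come and go, which is exactly what frees us from the static IP lists that plagued the earlier SDN era.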

Conclusion: intrusion is inevitable

The bottom line is this: we should expect malicious takeovers and thus minimize the opportunities for escalation. Only by putting ourselves in the mind of an attacker can we configure fitting straitjackets for the actors in our landscape. Every role should be given a least-permissions baseline profile, and any capabilities or access it needs should be designed, reviewed and governed from the start. Kubernetes gives us the handles to do so, as we will see later on in the series. We also need a security-aware workforce to minimize and monitor attack surfaces; whether this work is done by humans or by AI is a matter of time and shifting costs. Intelligent threat detection and response technology is already widely used by large corporations that can afford such products. The rapid pace at which these solutions are being brought to market is an indicator that they will become cheaper in the short term, inevitably leading to free open source solutions at some point. On the other hand, malicious (state) actors might benefit from targeting your organization for many reasons, and they already have the capabilities to do so.

Either way, having laid a secure foundation is key.
