
KUBERNETES SECURITY
As organizations transition to cloud native technologies like containers and Kubernetes, the core business challenge remains the same: accelerating development velocity while maintaining security. Even in the world of Kubernetes and containers, these two objectives are still in tension.
Kubernetes is becoming a mainstream solution for managing how stateless microservices run in a cluster because the technology enables teams to strike a balance between velocity and resilience. It abstracts away just enough of the infrastructure layer to enable developers to deploy freely without sacrificing
important governance and risk controls. But all too often, those controls go underutilized. Since everything is working, it’s easy to think there aren’t any problems. It’s not until you get hit with a DoS attack or a security breach that you realize a Kubernetes deployment was misconfigured or that access control wasn’t properly scoped. Running Kubernetes securely is quite complicated, which can cause headaches for development, security, and operations folks.
KUBERNETES SECURITY CHALLENGES AND BENEFITS
Development teams new to Kubernetes may neglect some critical pieces of deployment configuration. For example, deployments may seem to work just fine without readiness and liveness probes in place or without resource requests and limits, but neglecting these pieces will almost certainly cause headaches down the line. And from a security perspective, it’s not always obvious when a Kubernetes deployment is over-permissioned—often the easiest way to get something working is to give it root access.
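For illustration, here is a minimal sketch of a deployment that covers those basics: readiness and liveness probes, resource requests and limits, and a non-root security context. The name, image, port, and health-check path are placeholders, not part of any real application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                          # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
        # Probes let Kubernetes hold traffic until the app is ready
        # and restart it when it stops responding.
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        # Requests and limits keep one workload from starving the rest of the cluster.
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
        # Avoid the "just run it as root" shortcut.
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false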
Security will always make life a bit harder before it makes it easier. Organizations tend to do things in an insecure way by default, because they don’t know what they don’t know, and Kubernetes is full of these unknown unknowns. It’s easy to think your job is done because the site is up and working. But if you haven’t
tightened up the security posture in a way that adheres to best practices, it’s only a matter of time before you start learning lessons the hard way.
Fortunately, Kubernetes comes with some great built-in security tooling, as well as a robust ecosystem of open source and commercial solutions for hardening your clusters. A well-thought-out security strategy can enable
development teams to move fast while maintaining a strong security profile. Getting this strategy right is why DevSecOps is so important for cloud native application development.
Furthermore, Kubernetes helps security teams formulate a coherent strategy by putting many pieces of computing infrastructure in one place. This makes it much easier for security teams to conceptualize and address potential attack vectors. The pre-Kubernetes attack surface—the number of different ways to break into your infrastructure—is substantially larger than the Kubernetes attack surface. With Kubernetes, everything is under one hood.
Optimizing Kubernetes security, however, is no easy feat; there is no single way to handle security in Kubernetes. While it’s best to keep people out of the cluster altogether, that goal is hard to achieve, since your engineers need to be able to interact with the cluster itself, and your customers need to be able to interact with the applications the cluster is running.
Kubernetes can’t secure your application code. It won’t prevent your developers from introducing bugs that result in code injection or a leaked secret. But Kubernetes can limit the blast radius of an attack: proper security controls will restrict how far someone can get once they’re inside your cluster. For instance, say an outside attacker has found a vulnerability in your application and gained shell access to its container. If you have a tight security policy, they’ll be stuck—unable to access other containers, applications, or the cluster at large. But if the container is running as root, has access to the host’s filesystem, or has some other security flaw, the attack will quickly spread throughout the cluster. In essence, a well-configured Kubernetes deployment provides an extra layer of security.
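As a rough sketch of what a tight policy can mean in practice (the name and image below are hypothetical), a pod spec can refuse root, drop Linux capabilities, mount the root filesystem read-only, and skip the service account token, so a shell inside the container has very little to work with:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                  # hypothetical name
spec:
  automountServiceAccountToken: false # don't hand the app cluster credentials it doesn't need
  containers:
  - name: app
    image: example.com/app:1.0        # placeholder image
    securityContext:
      runAsNonRoot: true              # a shell in this container won't be root
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true    # an attacker can't write to the container filesystem
      capabilities:
        drop: ["ALL"]
  # Note: no hostPath volumes and no hostNetwork, so the container has no
  # direct access to the node's filesystem or network.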
BELOW, WE HIGHLIGHT THE FOLLOWING KEY KUBERNETES SECURITY BEST PRACTICES:
• DoS protection
• Updates and patches
• Role-based access control (RBAC)
• Network policy
• Workload identity
• Secrets
DoS Protection
With Kubernetes, you can make sure your applications respond well to bursts in traffic, both legitimate and nefarious. The easiest way to take down a site is to flood it with more traffic than it can handle, an attack known as denial of service (DoS). Of course, if you see a giant burst of traffic coming from one user, you could just shut off their access. But in a distributed denial-of-service (DDoS) attack, an attacker with access to many different machines (which they’ve probably broken into) can bombard a website with seemingly legitimate traffic.
Sometimes these “attacks” aren’t even nefarious—it might just be one of your customers trying to use your API with a buggy script.
Kubernetes allows applications to scale up and down in response to changes in traffic. That’s a huge benefit: spikes in traffic (legitimate or nefarious) won’t degrade performance for end users. But if you are attacked, your application will consume more resources in your cluster, and you’ll get the bill.
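One way to balance those two concerns is a HorizontalPodAutoscaler with a deliberate ceiling, so the application can absorb bursts but an attack can’t consume the whole cluster (or the whole budget). The names and thresholds below are illustrative assumptions, not recommendations:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                 # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                   # the Deployment to scale (placeholder)
  minReplicas: 2
  maxReplicas: 10               # cap growth so abusive traffic can't run up the bill unbounded
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70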
While services like Cloudflare and CloudFront serve as a good first line of defense against DoS attacks, a well-designed Kubernetes ingress policy can add a second layer of protection. To help mitigate a DDoS threat, you can configure an ingress policy that sets limits on how much traffic a particular user can consume before they get shut off. You can set limits on the number of concurrent connections; the number of requests per second, minute, or hour; and the size of request bodies; you can even tune these limits for particular hostnames or paths.
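What that looks like depends on your ingress controller. As one example, assuming the widely used ingress-nginx controller (the hostname, service name, and numbers below are placeholders), per-client rate limits can be expressed as annotations on the Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                   # hypothetical name
  annotations:
    # ingress-nginx rate-limiting annotations, applied per client IP
    nginx.ingress.kubernetes.io/limit-connections: "20"   # concurrent connections
    nginx.ingress.kubernetes.io/limit-rps: "10"           # requests per second
    nginx.ingress.kubernetes.io/limit-rpm: "300"          # requests per minute
    nginx.ingress.kubernetes.io/proxy-body-size: "1m"     # cap request body size
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com             # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                 # placeholder Service
            port:
              number: 80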