As cloud and Kubernetes have become the standard, security remains one of the top inhibitors to modern application development. To reduce security risk, organizations can’t afford to manage access control on a cluster-by-cluster basis: without a scalable approach, they end up with misconfigurations, vulnerabilities, and failed compliance audits.
Let us travel back in time and picture a fort. Forts were huge, with massively thick walls, gates, watchtowers, and a moat to protect them from attacks. There were several layers of defense to keep attackers at bay: an attacker might swim across the moat but still had to climb the high walls before entering the fort. An attacker might compromise a single layer, but having several layers makes it difficult to breach the fort.
If you observe closely, every layer of defense did one thing: it prevented attackers from getting in. That’s exactly what you need to protect your applications: several layers of defense that prevent unauthorized access. When it comes to Kubernetes access control, there are many different components to manage, and Kubernetes clusters are complex and dynamic in nature, which makes them vulnerable and prone to attacks.
This blog explores fundamental considerations when managing access to multiple Kubernetes clusters, which should help you plan better for overall Kubernetes security.
Isolating Your Kubernetes API Server
In a Kubernetes cluster, the control plane controls nodes, nodes control pods, pods control containers, and containers control applications. But what controls the control plane? Kubernetes exposes APIs that let you configure the entire Kubernetes cluster, so securing access to the Kubernetes API is one of the most critical considerations when it comes to Kubernetes security. With Kubernetes being entirely API-driven, controlling and limiting who can access clusters and what actions they are allowed to perform is the first line of defense.
Let’s examine the three steps of Kubernetes access control. Before the authentication process even starts, ensuring that network access controls and TLS connections are appropriately configured should be your first priority.
1. API Authentication
The first step in access control is authenticating a request. Using an external authentication service is recommended whenever possible: for example, if your organization already manages user accounts in a corporate identity provider (IdP) such as Okta, GSuite, or Azure AD, use that IdP to authenticate users. The Kubernetes API server does not guarantee the order in which authenticators run, so it’s important to ensure that each user is tied to a single authentication method. It’s also important to periodically review previously used auth methods and tokens and decommission any that are no longer in use.
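To see why the ordering caveat matters, here is a minimal sketch of a first-match authenticator chain. This is illustrative Python, not the API server’s actual code; the authenticator names and request shape are invented for the example:

```python
# Illustrative sketch: the API server tries each configured authenticator in an
# unspecified order and accepts the first identity returned. If a user matches
# several methods, which identity "wins" is unpredictable, hence the advice to
# tie each user to a single authentication method.

def authenticate(request, authenticators):
    """Return the identity from the first authenticator that recognizes the request."""
    for auth in authenticators:
        identity = auth(request)
        if identity is not None:
            return identity
    return None  # no authenticator matched: 401 Unauthorized

# Hypothetical authenticators for illustration:
def oidc_auth(request):
    return request.get("oidc_user")          # identity from a corporate IdP

def legacy_token_auth(request):
    return request.get("static_token_user")  # an old token that was never revoked

# A stale token still grants access even though the user moved to OIDC,
# which is why unused auth methods and tokens should be decommissioned.
print(authenticate({"static_token_user": "alice-old-token"},
                   [oidc_auth, legacy_token_auth]))  # prints alice-old-token
```

The stale-token example shows how a forgotten credential quietly remains a valid path into the cluster until it is explicitly removed.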
2. API Authorization
Once a request is authenticated, Kubernetes checks whether it is authorized. Role-based access control (RBAC) is the preferred way to authorize API access. Kubernetes ships with four default user-facing roles you should be aware of: cluster-admin, admin, edit, and view. ClusterRoles set permissions on cluster-scoped resources (e.g., nodes), whereas Roles set permissions on namespaced resources (e.g., pods). RBAC in Kubernetes comes with a certain amount of complexity and manual effort; more on RBAC in the next section.
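As a rough sketch of how deny-by-default RBAC evaluation works, here is a heavily simplified model in Python. Real Roles carry apiGroups, resources, and verbs, and bindings can reference users, groups, or service accounts; the role and user names below are illustrative:

```python
# Simplified RBAC model: a request is allowed only if some binding attaches a
# role to the user, in the right namespace, that grants the verb on the resource.

ROLES = {
    # namespaced Role: read-only access to pods
    "pod-reader": {"resources": {"pods"}, "verbs": {"get", "list", "watch"}},
    # a reduced stand-in for the built-in "edit" role
    "edit": {"resources": {"pods", "deployments"},
             "verbs": {"get", "list", "create", "update", "delete"}},
}

BINDINGS = [  # (user, role, namespace)
    ("alice", "pod-reader", "dev"),
    ("bob", "edit", "dev"),
]

def is_allowed(user, verb, resource, namespace):
    """RBAC is deny-by-default: allow only if some binding's role grants the verb."""
    for bound_user, role_name, ns in BINDINGS:
        if bound_user != user or ns != namespace:
            continue
        role = ROLES.get(role_name)
        if role and resource in role["resources"] and verb in role["verbs"]:
            return True
    return False

print(is_allowed("alice", "list", "pods", "dev"))    # True
print(is_allowed("alice", "delete", "pods", "dev"))  # False
```

The deny-by-default behavior is the key property: any verb, resource, or namespace not explicitly granted by a binding is refused.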
3. Admission Control
After a request is authenticated and authorized, the final step is admission control, which can modify or validate the request. Kubernetes ships with several admission-controller modules that help you define and customize what is allowed to run on your cluster, such as enforcing resource request limits and pod security policies. Admission controllers can also extend the Kubernetes API server via webhooks for advanced security measures such as image scanning.
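The kind of check a validating admission controller performs can be sketched as a small validation function. This is an illustrative simplification, assuming a stripped-down pod spec; a real webhook receives a full AdmissionReview object and returns an allow/deny response:

```python
# Sketch of a validating admission check: reject privileged containers and
# require resource limits on every container. The pod structure here is a
# reduced stand-in for a real pod spec.

def validate_pod(pod):
    """Return (allowed, reason) for a simplified pod spec."""
    for container in pod.get("containers", []):
        if container.get("securityContext", {}).get("privileged"):
            return False, f"container {container['name']!r} must not be privileged"
        if "limits" not in container.get("resources", {}):
            return False, f"container {container['name']!r} must set resource limits"
    return True, "allowed"

ok, reason = validate_pod({"containers": [
    {"name": "app",
     "securityContext": {"privileged": True},
     "resources": {"limits": {"cpu": "1"}}},
]})
print(ok, reason)  # False container 'app' must not be privileged
```

Because admission runs after authentication and authorization, even a fully authorized user cannot create an object that violates these policies.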
Role-Based Access Control (RBAC)
One of the reasons Kubernetes is adopted at such a large scale is its thriving community and regular updates. One key feature, introduced in Kubernetes 1.6, is role-based access control, or RBAC. While RBAC takes care of basic authorization, creating and maintaining roles becomes crucial in multi-cluster environments. If you grant the built-in cluster-admin role to any user, they can do virtually anything in the cluster. Managing and keeping track of roles and access is a challenge.
In organizations with large, multi-cluster environments, resources are constantly created and deleted, increasing the risk of unused or dangling role bindings being left unattended. Role bindings can refer to roles that no longer exist, and if a role with the same name is later recreated, those stale bindings can unexpectedly grant privileges that were never intended.
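An audit for such dangling bindings can be sketched in a few lines; the role and binding names below are hypothetical:

```python
# Flag role bindings that reference roles which no longer exist. These are
# exactly the bindings that could silently regain effect if a role with the
# same name is ever recreated, so they should be removed, not left unattended.

roles = {"pod-reader", "edit"}  # roles currently defined in the cluster

role_bindings = [
    ("alice", "pod-reader"),
    ("old-ci-bot", "deploy-admin"),  # "deploy-admin" was deleted long ago
]

dangling = [(user, role) for user, role in role_bindings if role not in roles]
print(dangling)  # [('old-ci-bot', 'deploy-admin')]
```

Running a check like this periodically, across every cluster, is the kind of housekeeping that is easy to do once and hard to sustain manually at scale.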
Complex and Dynamic Nature of Clusters
As the number of clusters, roles, and users increases, ensuring control requires proper visibility into users, groups, roles, and permissions. Every new role means additional rules to configure; in large organizations, this can mean hundreds or even thousands of rules to manage. Without a centralized system to manage all the roles across clusters, it’s an administrator’s worst nightmare.
One of the reasons Kubernetes is popular is that it is inherently scalable: out of the box, it allows both applications and infrastructure to scale based on demand. This means Kubernetes clusters can be short-lived, created and destroyed on demand. Every time a cluster is created or destroyed, access must be configured for the right users. If access to clusters is not managed properly, this can give rise to security vulnerabilities, potentially granting unauthorized access to an entire cluster.
Most teams today are spread across different business units within an organization. Oftentimes developers, testers, business analysts, and consultants are all working on the same application, each requiring access either to different clusters or to different components of the same cluster. It’s important to provide the right level of access to each of your users and to revoke that access when necessary.
Kubernetes is a well-coordinated system of many components: nodes, clusters, pods, containers, volumes, and much more. At scale, you could have hundreds of these components spread over multiple clusters across the world, and identifying who needs what access to which resource becomes challenging. It’s only then that you realize the need for a Kubernetes security tool that not only integrates seamlessly with your infrastructure but also gives you a secure, unified way of managing access to multiple clusters.