Network policies are not the right abstraction (for developers)
We explore the limitations of relying solely on Kubernetes network policies to achieve zero-trust between pods, identifying multiple flaws that hinder their effectiveness in real-world use cases, particularly when developer experience is a priority in a Kubernetes-based platform.
Written by Ori Shoshan
Published Feb 12, 2024
10 minute read
You're a platform engineer working on building a Kubernetes-based platform that achieves zero-trust between pods. Developers have to be able to get work done quickly, which means you're putting a high priority on developer experience alongside zero-trust.
Are Kubernetes network policies good enough? I think there are multiple flaws that prevent network policies, on their own, from being an effective solution for a real-world use case.
Before pointing out the problems, I'd like to walk you through what I mean when I say zero-trust, as well as a couple of details about how network policies work.
Zero-trust means preventing access from unidentified or unauthorized sources
Network policies can prevent incoming traffic to a destination (a server), or prevent outgoing traffic from a source (a client).
Zero trust inherently means you don't trust any of the sources just because they're in your network perimeter, so the only blocking relevant for achieving zero-trust is blocking incoming traffic ("ingress") from unauthorized sources.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-policy
spec:
  podSelector: {} # required field; an empty selector applies the policy to all pods in the namespace
  ingress:
    - {} # ingress rules
  policyTypes:
    - Ingress # policy refers to ingress, but it could also have egress
Let's talk about network policies
Theyâre namespaced resources and refer to pods by label
Network policies are namespaced resources, and refer to pods by label. Logically, they must live alongside the pods they apply to â in our case, since weâre using ingress policies, that means alongside the servers they protect.
They don't refer directly to specific pods, of course, because pods are ephemeral, but they refer logically to pods by label. This is common in Kubernetes, but introduces problems for network policies. Keep this detail in mind, as we'll get back to it later.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: protect-backend
spec:
  podSelector:
    matchLabels:
      app: my-backend # policy will apply to pods labeled app=my-backend, in the same namespace as the policy
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-client # and allow ingress access from pods labeled app=my-client
  policyTypes:
    - Ingress
They hold information about multiple sets of pods
The contents of the network policies are effectively an allowlist specifying which other pods can access the pods the policy protects. But there's one big problem: while the network policy must live with the protected pods, and is updated as part of the protected pods' lifecycle, it won't naturally be updated as part of the lifecycle of the client pods accessing the protected pods.
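To make that concrete, here is a hedged sketch (the service and label names are hypothetical) of what such an allowlist tends to grow into: one policy living next to the server that enumerates the pod labels of every client team that needs access. Every new client means another entry here, and it is the server team that has to edit it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: protect-checkout # lives in the checkout server's namespace, owned by the server team
spec:
  podSelector:
    matchLabels:
      app: checkout # the server pods this policy protects
  ingress:
    - from:
        # each entry below describes a different client team's pods;
        # the server team maintains all of them
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: storefront
          podSelector:
            matchLabels:
              app: storefront
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: mobile
          podSelector:
            matchLabels:
              app: mobile-bff
  policyTypes:
    - Ingress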
Friction when using network policies
Enabling access between two pods
Whenever a developer for a client pod needs access to a server, they need to get their client pod into the server's network policy so it's allowed to call the server. The developer often cannot manage that network policy themselves, as it usually exists in a namespace they are not responsible for, and is deployed with a service they don't own.
The result is that the client developer is dependent on the server team for access that should have been self-service, and the server team is now distracted enabling a client developer even though nothing has really changed from their point of view: a new client is connecting to their server, that's it! There should not be server-side changes required simply to enable another client.
What if you need to roll back the server?
There are also a myriad of second-order problems, which the team at Monzo learned about while solving this problem (it's a super well-written blog post; I recommend having a read). For example, rolling back a server would affect whether clients could connect, since the rollback also rolled back its network policy.
When a server is rolled back due to an unrelated problem, its network policy may also be rolled back if it is part of the same deployment (e.g. part of the same Helm chart), breaking the clients that relied on that version of the network policy! It's a reflection of the unhealthy dependency between the client and server teams: while it would make sense for a server-side change that breaks functionality to affect the client, it does not make sense for an unrelated, functionally non-breaking rollback of the server to affect the client.
How do you know the policy is correct?
Because network policies refer to pod labels, they are difficult to validate statically. Pods are generally not created directly, but instead created by other resources, such as Deployments.
Can you tell whether a network policy will allow access for your service without deploying it and trying it out? In fact, just asking the question "which services have effective access to service A?" becomes super hard.
Developers don't think of services as pod labels; they tend to have a developer-friendly name they use. For example, checkoutservice is a friendly name, whereas checkoutservice-dj3847-e120 is not. The friendly name may in fact be the value of some label, but there's no standard way to discover which.
So then, how do you take the concept of a service, with its developer-friendly name, and map it to the labels referred to by the network policies and, say, its Deployment, so you can check whether it will have access once its new labels are deployed? You could do that manually, as a developer in a single team that understands all the moving parts. However, this is very error-prone and, of course, doesn't scale to a solution a platform engineer could deploy: as a platform engineer, you'd need something automated you could make available to every developer in your organization.
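As a hedged illustration of that mapping problem (the manifest below is hypothetical), consider a Deployment whose friendly name is checkoutservice but whose pods carry a templated, release-generated label value; nothing standard ties the friendly name to the label a network policy would actually have to match:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkoutservice # the developer-friendly name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkoutservice-dj3847-e120 # e.g. a value generated by a Helm release or CI pipeline
  template:
    metadata:
      labels:
        app: checkoutservice-dj3847-e120 # this is what a NetworkPolicy podSelector must match
    spec:
      containers:
        - name: checkout
          image: example.org/checkout:1.2.3 # hypothetical image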
This problem is one that the team at Monzo worked hard at. I recommend giving that blog a read as it is very well-written and also covers other factors of the problem.
How do you refer to pods within network policies?
Earlier, I mentioned that network policies don't refer to pods directly, as they're ephemeral, but refer to them by labels. This is common practice in Kubernetes. However, network policies are unique in that they use labels to refer to two (or more) sets of pods that are often owned by different teams in the organization.
This presents unique challenges because, for the network policy to function, the labels referenced by the policy and the labels attached to the pods must be kept in sync, with destructive consequences if you fail to do so: communication will be blocked! The pod labels for the client pods are managed by the client team, while the network policy that refers to them is managed by the server team, so you can see where things can get out of sync.
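Here's a hedged sketch of how that drift plays out (all names are hypothetical): the client team renames its pod label in a routine refactor, the server team's policy keeps selecting the old label, and the client's traffic is silently blocked.
# Owned by the server team, in the server's namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: protect-backend
  namespace: backend-team
spec:
  podSelector:
    matchLabels:
      app: my-backend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: client-team
          podSelector:
            matchLabels:
              app: my-client # still expects the old client label
  policyTypes:
    - Ingress
---
# Owned by the client team, in their own namespace; the label was renamed:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client
  namespace: client-team
spec:
  selector:
    matchLabels:
      app: client-api # new label, no longer matched by the policy above
  template:
    metadata:
      labels:
        app: client-api
    spec:
      containers:
        - name: client
          image: example.org/client:2.0.0 # hypothetical image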
Network policies are effectively owned by multiple teams
This means that you need coordination between the teams, not only when the network policy is first deployed, but also over time as clients and servers evolve.
What if you have a network policy that allows multiple clients to connect to one server? Now you've got the server team coordinating with two teams.
For each change a client team proposes, the server team needs to not only change the network policy rules referring to that client, but also make sure they don't inadvertently affect other clients. This can be a cognitively difficult task, as the server team members normally don't work with pod labels belonging to other teams, so it may not be immediately clear which labels belong to which team.
This reduces the ability of teams to set internal standards and work independently, and slows down development. If you don't get this right, there can be painful points in the development cycle where changes are fragile and their pace slows to a crawl. The pain may lead to bikeshedding and inter-team politics, as teams argue over how things should be done, and to growing frustration as client deployments are delayed because server network policies have not yet been updated.
Is everyone in your organization proficient with how network policies work?
In many organizations, this is not the case. Network policies are already error-prone, with destructive consequences for even small mistakes. Asking every developer whose service calls another service to be familiar with network policies may be a tall order, with potential for failed deployments or failed calls that are hard to debug.
What would a good abstraction look like?
A good solution for zero trust should be optimized for that specific outcome, whereas network policies are a bit of a Swiss Army knife: they aren't just for pod-to-pod traffic, so they're not optimized for this use case.
The following 3 attributes are key for a good zero-trust abstraction that actually gets adopted:
Single team ownership: Each resource should only be managed by one team so that client teams can get access independently, and server teams donât need to be involved if no changes are required on their end.
Static analysis should be possible: It should be possible to statically check if a service will have access without first deploying it.
Universal service identities: Services should be referred to using a standard name that is close to or identical to their developer-friendly names, rather than pod labels.
Enter client intents
At Otterize, we believe that client intents satisfy these requirements. Let me explain briefly what they are, and then examine whether they satisfy the above attributes.
A client intents file is simply a list of calls to servers which a given client intends to make. Coupled with a mechanism for resolving service names, the list of client intents can be translated to different authorization mechanisms, such as network policies.
In other words, developers declare what their service intends to access, and that can then be converted to a network policy and the associated set of pod labels.
Here's an example of a client intents file (as a Kubernetes custom resource YAML) for a service named client calling another service named server:
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client-intents
spec:
  service:
    name: client
  calls:
    - name: server
Let's see if this is a good abstraction
Now let's go back and review our criteria for a good zero-trust abstraction:
Does a team own all of, and only, the resources it should be managing?
Client intents files are deployed and managed together with the client, so only the client team owns them. You would deploy the ClientIntents for this client along with the client, e.g. alongside its Deployment resource.
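For example, the client team might keep everything in a single file (or Helm chart) that they fully own; a minimal sketch, with hypothetical names for the Deployment and image:
# client/deploy.yaml -- every resource here is owned by the client team
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: example.org/client:1.0.0 # hypothetical image
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client-intents
spec:
  service:
    name: client
  calls:
    - name: server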
Can access be checked statically?
Since services are first-class identities in client intents (rather than indirectly represented by pod labels), it is trivially possible to query which clients have access to a server, and whether a specific client has access to a server. As an added bonus, all the information for a single client is collected in a single resource in one namespace, instead of being split up across multiple namespaces where the servers are deployed.
Are service identities universal and natural?
Service names are resolved in the same manner across the entire organization, making it easy to reason about whether a specific service has a specific name.
How would a Kubernetes operator that manages these intents work?
When intents are created for a client, the intents operator should automatically create, update and delete network policies, and automatically label client and server pods, to reflect precisely the client-to-server calls declared in client intents files. A single network policy is created per server, and pod labels are dynamically updated for clients when their intents update.
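As a rough, hedged sketch of the result (the actual label keys and policy names are implementation details of the operator; the ones below are made up for illustration), the generated per-server policy could look something like this, with the operator also stamping the matching access label onto authorized client pods:
# Generated and owned by the intents operator -- illustrative only, label keys are hypothetical
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-to-server
  namespace: server-namespace
spec:
  podSelector:
    matchLabels:
      intents.example.com/server: server # label the operator places on the server's pods
  ingress:
    - from:
        - namespaceSelector: {} # authorized clients may live in any namespace
          podSelector:
            matchLabels:
              intents.example.com/access-server: "true" # label the operator places on authorized client pods
  policyTypes:
    - Ingress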
Service names are resolved by recursively getting the owner of a pod until the original owner is found, usually a Deployment, StatefulSet, or another such resource. The name of that resource is used, unless the pod has a service-name annotation, in which case the annotation's value is used instead.
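For instance (a hedged illustration with abbreviated metadata), a pod created by a Deployment carries an ownerReferences chain that can be walked Pod → ReplicaSet → Deployment; the Deployment's name becomes the service name unless the override annotation is present (the exact annotation key is defined by the operator):
apiVersion: v1
kind: Pod
metadata:
  name: checkoutservice-6f7c9d5b8-xk2lp
  # annotations:
  #   <operator-defined service-name annotation>: checkout # would override the resolved name
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: checkoutservice-6f7c9d5b8
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: checkoutservice-6f7c9d5b8
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: checkoutservice # resolution stops here: the service name is "checkoutservice"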
Try out the intents operator!
It won't surprise you that we in fact built such an open source implementation: it's called the Otterize intents operator. Give it a shot and see if it makes managing network policies easier for you.