AWS releases built-in network policy enforcement for AWS EKS
AWS has announced built-in support for enforcing Kubernetes network policies with the native VPC CNI. This was one of the most requested features on the AWS containers roadmap. By default, Kubernetes allows all pods to communicate with no restrictions. With Kubernetes network policies, you can restrict traffic and achieve zero trust between workloads in your cluster.
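For reference, here is what a minimal network policy looks like, using the standard `networking.k8s.io/v1` API; the namespace name is illustrative. This one denies all ingress traffic to every pod in its namespace:

```yaml
# Illustrative sketch: a default-deny ingress policy.
# Selects every pod in its namespace and allows no ingress traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace   # illustrative namespace
spec:
  podSelector: {}           # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress               # no ingress rules listed, so all ingress is denied
```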
Previously, you had to deploy a third-party network policy controller or replace the CNI entirely, which can be very complicated for an existing cluster. You probably do want to keep the VPC CNI so that your Kubernetes pods can communicate directly with services and other workloads in the VPC network.
However, network policies are difficult to implement.
- It’s an all-or-nothing endeavor: allowing one client blocks every other client, unless you explicitly allow them too.
- You place policies on servers, but really only clients know which servers they are supposed to connect to.
- It’s difficult to keep labels for many different services in sync so that the network policies allow the correct services. Ownership of services is often split between different teams in the organization, yet you must get it right on the first try or access will be blocked.
- Having many different network policies on a single node can have performance implications, as each packet must be evaluated against many rules.
- It’s nearly impossible to know ahead of time, from analysis of the policies alone, whether applying a network policy will result in workloads being blocked.
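To make the first few points concrete, here is a sketch of a server-side policy that allows a single client; the names and labels are hypothetical:

```yaml
# Illustrative sketch: once this policy selects the server pods,
# ONLY pods matching the ingress rule may connect; every other
# client is blocked until someone edits this server-owned resource.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a        # hypothetical names and labels
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client-a   # adding client-b means editing this policy again
```

The policy lives with the server, but the knowledge of who should connect lives with each client's team, which is exactly the ownership mismatch described above.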
Try out the open source Otterize intents operator and network mapper to solve these problems, as well as manage other kinds of access controls, such as Kafka ACLs, Istio authorization policies, and (coming soon!) AWS RDS PostgreSQL and AWS IAM policies:
- Clear ownership: Declare `ClientIntents` in the same namespace as the client your team is managing, instead of adding your client to network policies protecting a server owned by another team in another namespace altogether. This allows each client to declare which servers it needs to call. The intents operator then aggregates client intents per server and creates a single network policy on the server, so only one resource needs to change when one client changes.
```yaml
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client
spec:
  service:
    name: client
  calls:
    - name: server
```
- Automatically generate intents: Use the network mapper to autogenerate client intents based on existing traffic. The network mapper captures DNS traffic in your cluster and generates `ClientIntents` resources for each client, which you can then push to Git and deploy to your cluster.
```shell
> otterize network-mapper export
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client
spec:
  service:
    name: client
  calls:
    - name: server
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
# [...]
# and more! For all clients in your cluster, or all clients in a namespace.
```
- See what’s happening: Optionally, connect the intents operator and network mapper to Otterize Cloud to display a visual map of access in your cluster and see which connections are allowed and which are blocked. This information is also available through an API for automation.
- Do it gradually, not all-or-nothing: Enable shadow mode for the intents operator: it will not create network policies yet, but Otterize Cloud will show you what would happen with the current intents and live traffic.
The green line indicates that intents are declared and access would be allowed if the server were protected. The yellow line indicates that access would be blocked if the server were protected, but is not blocked right now.
- Enable enforcement when ready, service-by-service or for the entire cluster: When you’re ready to protect a single server, create a `ProtectedService` resource for that server. This creates a default-deny network policy for the service while allowing access from clients which have declared `ClientIntents`. If you’re ready to protect your entire cluster, switch the intents operator to active mode, which will create network policies for all clients with declared `ClientIntents`.
```yaml
apiVersion: k8s.otterize.com/v1alpha2
kind: ProtectedService
metadata:
  name: server-protectedservice
spec:
  name: server
```
Become one of the platform teams who have deployed this to staging in 15 minutes, and to production in days. Zero configuration is required.
Deploy the intents operator and network mapper.
```shell
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace
```
Install the CLI and autogenerate intents:
```shell
brew install otterize/otterize/otterize-cli
```
```shell
$ otterize mapper export
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client
  calls:
    - name: server
```
And apply your intents:
```shell
$ otterize mapper export | kubectl apply -f -
# or commit into your Helm chart for a real deployment
```
Want to see it in action? Check out a mini-tutorial that walks you through setting up an EKS cluster and trying out managing network policies with Otterize.
Tomer Greenwald, Apr 18, 2023