
    Kubernetes egress vs ingress policies: common pitfalls and how to avoid them

    Written By
    Nic Vermandé
    Published Date
    Sep 26 2024
    Read Time
    7 minutes

    In Kubernetes, network policies are essential for controlling communication between pods. However, several common mistakes in configuring ingress and egress policies lead to frustrating communication failures. One of the most frequent is applying the same logic to egress as to ingress, without accounting for external dependencies such as DNS or load balancers.


    Solution Workflow for Common Errors

    DNS Resolution Failures Due to Missing Egress Rules

    • Identify the Issue: If you encounter errors like Could not resolve host or lookup <service-name> on 10.96.0.10:53: no such host, it’s likely that your egress policy is missing DNS rules.

    • Test DNS Connectivity: Run nslookup example.com from within the affected pod. If DNS fails, review your egress policies.

    • Update Network Policy: Modify the egress policy to include DNS access:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-dns-egress
      spec:
        podSelector:
          matchLabels:
            app: my-app
        policyTypes:
        - Egress
        egress:
        - to:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
          ports:
          - protocol: UDP
            port: 53
          - protocol: TCP
            port: 53


    • Verify the Fix: Retry the DNS query using nslookup or curl to confirm resolution.

    External Traffic and Load Balancer IPs

    • Identify the Issue: Errors like dial tcp <IP>:443: connect: no route to host indicate that egress is being blocked due to specific IP restrictions.

    • Test Connectivity: Use ping or curl against the domain name of the external service. If it fails, check whether the destination IPs behind that domain have changed.

    • Update Network Policy: Avoid hardcoding IPs. Instead, configure a dynamic policy or use Otterize to observe domain access and translate it into IP-based rules.
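    If you do need a manual rule for an external endpoint, ipBlock is the native mechanism. The minimal sketch below (the CIDR, label, and port are illustrative) also shows why this approach is brittle: the allowed range must track the provider's actual IPs, which can rotate at any time.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-external-api
    spec:
      podSelector:
        matchLabels:
          app: my-app
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # illustrative range; fails silently if the provider's IPs change
        ports:
        - protocol: TCP
          port: 443
    ```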

    Intra-Cluster Communication and Missing DNS

    • Identify the Issue: Errors such as lookup <internal-service> on 10.96.0.10:53: server misbehaving point to missing egress rules for DNS or other services.

    • Test Internal Service Communication: Run curl or ping against the internal service name. If it fails, DNS egress is likely being blocked.

    • Update Policy: Ensure egress policies allow access to CoreDNS and the required internal services.

    Over-Complicating by Enabling Both Ingress and Egress Policies

    • Identify the Issue: Errors like connection timed out or connection refused suggest mismatches between ingress and egress policies.

    • Review Both Policies: Make sure ingress and egress are correctly configured for both the client and the target pods.

    • Update and Synchronize: Adjust both policies so they align properly, ensuring traffic can flow as expected. Use tools like Otterize to automate policy consistency.

    DNS Resolution Failures Due to Missing Egress Rules


    When configuring egress network policies, it’s easy to forget that services often need DNS resolution. Pods need access to CoreDNS, which runs on port 53, for internal or external name resolution. If your egress policy doesn’t allow this, you’ll see errors such as Could not resolve host: example.com. Or in application logs:

    Error: Get "http://<service-name>": dial tcp: lookup <service-name> on 10.96.0.10:53: no such host

       

    To fix this, ensure that your egress policies allow UDP and TCP traffic on port 53 to the CoreDNS service in the kube-system namespace. These errors are very common in Kubernetes setups where DNS traffic is inadvertently blocked due to restrictive egress policies.
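    A DNS-egress rule like the one above can also be generated programmatically, which helps keep it consistent across workloads. A minimal sketch in Python (the function name and default policy name are illustrative, not an Otterize or Kubernetes API):

    ```python
    def dns_egress_policy(app_label: str, name: str = "allow-dns-egress") -> dict:
        """Build a NetworkPolicy manifest allowing egress to CoreDNS on port 53.

        Targets pods labeled app=<app_label> and permits UDP/TCP 53 to the
        kube-system namespace, where CoreDNS runs.
        """
        return {
            "apiVersion": "networking.k8s.io/v1",
            "kind": "NetworkPolicy",
            "metadata": {"name": name},
            "spec": {
                "podSelector": {"matchLabels": {"app": app_label}},
                "policyTypes": ["Egress"],
                "egress": [
                    {
                        "to": [
                            {
                                "namespaceSelector": {
                                    "matchLabels": {
                                        "kubernetes.io/metadata.name": "kube-system"
                                    }
                                }
                            }
                        ],
                        "ports": [
                            {"protocol": "UDP", "port": 53},
                            {"protocol": "TCP", "port": 53},
                        ],
                    }
                ],
            },
        }
    ```

    The resulting dict can be serialized to YAML and applied with kubectl, or submitted through a Kubernetes client library.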


    External Traffic and Load Balancer IPs

    • Unlike ingress traffic, which usually comes through services with fixed DNS names, egress traffic can be complex when leaving the cluster. When making requests to external services, the destination IP may change due to load balancers. If your egress policy is configured only to specific IP addresses, this can lead to inconsistent failures when those IPs change.

    • Error messages in such cases might look like: Error: dial tcp <IP>:443: connect: no route to host

    It’s key to ensure that egress rules are not overly specific for external services. Instead of hardcoding IP addresses, Otterize can help by observing the domain names accessed by your pods and automatically translating them into the required IPs, updating network policies dynamically as those IPs change.
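    The translation step can be sketched roughly as follows: given the IPs a domain currently resolves to, emit per-IP ipBlock rules. The helper below is a hypothetical illustration (the resolution itself, e.g. via socket.getaddrinfo, is left out, and port 443 is assumed); a real controller such as Otterize must also refresh the rules whenever the IPs rotate.

    ```python
    def ip_egress_rules(ips: list[str], port: int = 443) -> list[dict]:
        """Build an egress rule allowing TCP to each resolved IP as a /32 ipBlock."""
        return [
            {
                "to": [{"ipBlock": {"cidr": f"{ip}/32"}} for ip in ips],
                "ports": [{"protocol": "TCP", "port": port}],
            }
        ]
    ```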


    Intra-Cluster Communication and Missing DNS

    • Pods often need to communicate with other services inside the cluster. A typical mistake is to assume that setting an ingress policy for a service is enough, while forgetting to set a corresponding egress policy for the clients. Without proper egress rules, DNS traffic may be blocked, and services will fail to discover each other.

    • You may see errors such as: error: lookup <internal-service> on 10.96.0.10:53: server misbehaving

    The solution is to ensure that egress policies allow traffic to CoreDNS and also to the intended services within the cluster.
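    A combined egress policy for a client pod might look like the sketch below; the frontend/backend labels and port 8080 are placeholders for your own services.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-egress
    spec:
      podSelector:
        matchLabels:
          app: frontend
      policyTypes:
      - Egress
      egress:
      - to:                    # CoreDNS, so the service name resolves
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
        ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
      - to:                    # the internal service's pods
        - podSelector:
            matchLabels:
              app: backend
        ports:
        - protocol: TCP
          port: 8080
    ```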


    Over-Complicating by Enabling Both Ingress and Egress Policies

    • Enabling both ingress and egress policies can make communication rules more difficult to manage, as they need to be aligned perfectly to allow intended traffic. If ingress allows traffic to a pod, but the egress from the client is blocked, the connection will still fail.

    • A common pitfall is that people do not synchronize ingress and egress rules properly, leading to mismatches that result in silent failures. You might encounter errors like: error: connection timed out or error: connection refused



      Otterize can simplify this by automatically detecting necessary flows and ensuring that both ingress and egress rules are updated and consistent.
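    For a concrete picture, here is a matched pair of policies, assuming client and server run in the same namespace (labels and port are illustrative): the client's egress rule and the server's ingress rule must mirror each other, or the connection fails even though one side allows it.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: client-egress
    spec:
      podSelector:
        matchLabels:
          app: client
      policyTypes:
      - Egress
      egress:
      - to:
        - podSelector:
            matchLabels:
              app: server
        ports:
        - protocol: TCP
          port: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: server-ingress
    spec:
      podSelector:
        matchLabels:
          app: server
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: client
        ports:
        - protocol: TCP
          port: 8080
    ```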

    How Otterize Can Help


    Managing ingress and egress policies can become complicated, especially when dealing with dynamic environments that have external dependencies. Otterize’s Intents Operator and Network Mapper automate the entire process, ensuring that:

    • Egress Connections to External Internet: Otterize detects egress connections and translates domain names into IP addresses, automatically updating your network policies as external IPs change. This means fewer dial tcp: no route to host errors.

    • Internal DNS Access: Otterize makes sure that egress policies always include access to CoreDNS, so DNS resolution works smoothly without manual intervention. Say goodbye to could not resolve host errors.

    • Synchronization of Ingress and Egress: Otterize keeps ingress and egress policies in sync to prevent mismatches and connection issues, giving you a consistent security model without the manual hassle.

    Frequently Searched Error Messages

    • curl: (6) Could not resolve host

    • dial tcp <IP>: connect: no route to host

    • lookup <service-name> on 10.96.0.10:53: no such host

    • connection timed out

    • connection refused

    These error messages are common when dealing with network policies in Kubernetes. Otterize can help ensure your network policies are correctly configured to avoid these problems altogether.


    Automate Network Policies with Otterize

    Manually managing network policies can be challenging, especially in dynamic environments. 

    Otterize's Intents Operator and Network Mapper can help automate the creation and management of these policies, ensuring your pods have the connectivity they need without manual tweaking.

    Here’s a quick tutorial on how to deploy Otterize and start creating network policies, and a longer, detailed guide on mastering cloud-native packet flows in your cluster.


    Ready to make network policies a breeze?


    Stop stressing over network policy details. Let Otterize handle the heavy lifting with ClientIntents, and get back to focusing on your app’s real business.

    Stay connected and learn more


    If you found this article helpful, we'd love to hear from you! Here are some ways to stay connected and dive deeper into Otterize:

    🏠 Join our community


    Stay ahead of the curve by joining our growing Slack community. Get the latest Otterize updates and discuss real-world use cases with peers.


    🌐 Explore our resources


    Take a sneak peek at our datasheets, continue your exploration journey with our blogs and tutorials, or try our self-paced labs.

    Don't be a stranger – your insights and questions are valuable to our community!
