Kubernetes Network Traffic Flow

Understanding network communication in CSP and On-premise environments


Overview

Understanding Kubernetes network traffic flow is crucial for proper cluster architecture, security, and troubleshooting. This guide explores networking in both Cloud Service Provider (CSP) and on-premise environments, detailing how traffic moves through different components of a Kubernetes cluster.

Kubernetes Networking Principles

Kubernetes networking is built on several key principles:

  • Every Pod gets its own IP address
  • Pods can communicate with all other pods without NAT
  • Agents on a node can communicate with all pods on that node
  • The Pod network is flat - all pods can reach each other directly


Network Traffic Paths

Kubernetes networking involves several distinct traffic paths, each with its own characteristics and components.

```mermaid
graph TD
    A[External Traffic] --> B[Load Balancer / Ingress]
    B --> C[Services]
    C --> D[Pods]
    D --> E[Containers]
    F[Pod-to-Pod Traffic] --> D
    G[Pod-to-Service Traffic] --> C
    H[External Egress Traffic] --> I[Node Network]
    D --> H
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style E fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#ffa,stroke:#333,stroke-width:2px
```


Network Environments and Traffic Flow


CSP Environment Traffic Flow

In cloud service provider environments (AWS, GCP, Azure, etc.), Kubernetes leverages cloud infrastructure for external connectivity.

```mermaid
graph LR
    A[Internet] --> B[Cloud Load Balancer]
    B --> C[Ingress Controller or<br/>Service LoadBalancer]
    C --> D{Routing}
    D --> E[Pod 1]
    D --> F[Pod 2]
    D --> G[Pod 3]
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#ffa,stroke:#333,stroke-width:2px
```

Key Components in CSP Environments

  1. External Load Balancer:
    • Provided by the cloud provider (e.g., AWS ELB/ALB, GCP Load Balancer, Azure Load Balancer)
    • Located outside the Kubernetes cluster in the cloud provider’s infrastructure
    • Automatically provisioned when a Service of type LoadBalancer is created
    • Routes external traffic to nodes in the cluster
  2. Ingress Controller/Service:
    • Ingress Controllers handle HTTP/HTTPS traffic routing based on hostnames and paths
    • Services of type LoadBalancer expose non-HTTP protocols
    • Both receive traffic from the cloud load balancer and route it to the appropriate pods
  3. Pods:
    • End destinations for network traffic
    • Run application containers
    • Each has a unique IP address within the cluster
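
The flow above is typically triggered by a plain Service of type LoadBalancer. The manifest below is a minimal sketch; the name `web`, labels, and port numbers are illustrative assumptions:

```yaml
# Minimal Service of type LoadBalancer; in a CSP cluster the cloud
# controller manager provisions an external load balancer for it.
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: web           # pods labeled app=web receive the traffic
  ports:
  - port: 80           # port exposed on the external load balancer
    targetPort: 8080   # container port on the backing pods
```

Once applied, the cloud provider assigns an external IP (or hostname), which appears in the Service's `EXTERNAL-IP` column.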


On-premise Environment Traffic Flow

On-premise Kubernetes deployments require different networking approaches since they don’t have cloud provider integration.

```mermaid
graph LR
    A[Internet] --> B[On-premise Load Balancer<br/>or Router]
    B --> C[Ingress Controller<br/>NodePort or MetalLB]
    C --> D{Routing}
    D --> E[Pod 1]
    D --> F[Pod 2]
    D --> G[Pod 3]
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#ffa,stroke:#333,stroke-width:2px
```

Key Components in On-premise Environments

  1. External Load Balancer/Router:
    • Self-managed hardware or software load balancer (e.g., F5, NGINX, HAProxy)
    • Located within the on-premise data center
    • Manually configured to route traffic to Kubernetes nodes
  2. Ingress Controller/Service:
    • Similar role as in CSP environments
    • Often exposed via NodePort services or with MetalLB
    • Both receive traffic from the external load balancer and route it to the appropriate pods
  3. Pods:
    • Functionally identical to pods in CSP environments
    • Network connectivity may be implemented differently depending on CNI plugin choice


Ingress Controllers in Different Environments

Ingress Controllers in CSP Environments

In cloud environments, ingress controllers are tightly integrated with cloud provider load balancing services.

CSP Ingress Controller Integration

When you deploy an Ingress Controller in a CSP environment:

  1. The Ingress Controller pod(s) are deployed in the cluster
  2. A Service of type LoadBalancer is created for the Ingress Controller
  3. The CSP automatically provisions an external load balancer and assigns it a public IP
  4. The CSP configures the load balancer to route traffic to the Ingress Controller
  5. The Ingress Controller handles routing based on Ingress resources

```yaml
# Example AWS ALB Ingress Controller configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```

Ingress Controllers in On-premise Environments

In on-premise environments, the LoadBalancer service type doesn’t automatically receive an external IP without additional components.

On-premise Challenges

In on-premise environments, several challenges must be addressed:

  • No automatic external IP assignment for LoadBalancer services
  • Manual configuration of external load balancers
  • IP address management
  • Network routing between external networks and the Kubernetes cluster

On-premise deployments typically use one of these approaches:

  1. NodePort Services:
    • Expose the Ingress Controller on a port on each node (30000-32767 range)
    • Configure external load balancer to distribute traffic to these ports across nodes
  2. MetalLB or similar solutions:
    • Enables LoadBalancer service type in on-premise environments
    • Provides automatic IP assignment from a configured pool
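
The NodePort approach can be sketched as a manifest like the one below; the controller name, labels, and the `nodePort` value are illustrative assumptions:

```yaml
# Exposes an ingress controller on port 30080 of every node; an external
# load balancer is then pointed at <node-ip>:30080 across the nodes.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx                       # illustrative name
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080                         # must fall in the 30000-32767 range
```

If `nodePort` is omitted, Kubernetes picks a free port from the range automatically; pinning it makes the external load balancer configuration predictable.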



MetalLB for On-premise Load Balancing

MetalLB is a popular solution for implementing LoadBalancer services in on-premise Kubernetes clusters.

```mermaid
graph LR
    A[Client] --> B[MetalLB Managed<br/>IP Address]
    B --> C[MetalLB Speaker<br/>Layer 2 or BGP]
    C --> D[Kubernetes<br/>Service LoadBalancer]
    D --> E[Target Pods]
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#ffa,stroke:#333,stroke-width:2px
```

MetalLB Configuration

MetalLB works in two modes:

  1. Layer 2 Mode: Uses ARP (IPv4) or NDP (IPv6) to claim IP addresses on the local network
  2. BGP Mode: Establishes BGP peering sessions with network routers for more advanced configurations

```yaml
# Example MetalLB configuration
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100-192.168.1.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - production-pool
```

When a LoadBalancer service is deployed with MetalLB:

  1. MetalLB assigns an IP from the configured pool
  2. The speaker component advertises the IP on the local network
  3. Traffic sent to that IP is routed to the appropriate service
  4. The service distributes traffic to the pods
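
For the BGP mode mentioned earlier, the L2Advertisement is replaced by peering and advertisement resources. The sketch below is a hedged example: the peer address and the ASN values are assumptions that must match your router's configuration:

```yaml
# BGP peering with an upstream router.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: router-peer
  namespace: metallb-system
spec:
  myASN: 64500            # ASN used by the MetalLB speakers (assumed)
  peerASN: 64501          # upstream router's ASN (assumed)
  peerAddress: 10.0.0.1   # upstream router's address (assumed)
---
# Advertise the address pool over BGP instead of Layer 2.
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgp-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - production-pool
```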


Component Communication in Kubernetes

Kubernetes networking enables several types of communication between different components in a cluster.


Container-to-Container Communication

Containers within the same Pod share the same network namespace, enabling localhost communication.

```mermaid
graph TD
    subgraph "Pod"
        A[Container A<br/>Port 8080] --- B[Container B<br/>Port 9090]
        A --> |localhost:9090| B
        B --> |localhost:8080| A
    end
    style A fill:#bbf,stroke:#333,stroke-width:2px
    style B fill:#ffa,stroke:#333,stroke-width:2px
```

Key Characteristics:

  • Containers in a Pod share one network namespace and one IP address
  • They reach each other over localhost with no network hop
  • Ports are shared, so two containers in the same Pod cannot bind the same port

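A minimal two-container Pod illustrating shared localhost networking; the image names and ports are illustrative assumptions:

```yaml
# Both containers share one network namespace: "app" can reach the
# sidecar at localhost:9090, and the sidecar can reach it at localhost:8080.
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
  - name: app
    image: nginx:1.25          # illustrative image
    ports:
    - containerPort: 8080
  - name: sidecar
    image: busybox:1.36        # illustrative image
    command: ["sh", "-c", "sleep infinity"]
```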
Pod-to-Pod Communication

Kubernetes ensures all pods can communicate directly with each other regardless of their node location.

```mermaid
graph TD
    subgraph "Node 1"
        A[Pod A<br/>10.244.1.2]
        B[Pod B<br/>10.244.1.3]
    end
    subgraph "Node 2"
        C[Pod C<br/>10.244.2.2]
        D[Pod D<br/>10.244.2.3]
    end
    A <--> B
    A <--> C
    A <--> D
    B <--> C
    B <--> D
    C <--> D
    style A fill:#bbf,stroke:#333,stroke-width:2px
    style D fill:#ffa,stroke:#333,stroke-width:2px
```

Key Characteristics:

  • Every pod receives a cluster-unique IP address from the CNI plugin
  • Pods communicate directly, without NAT, even across nodes
  • Cross-node routing is handled by the CNI plugin (e.g., an overlay network or BGP routes)

Pod-to-Service Communication

Services provide stable endpoints for pods, handling internal load balancing and service discovery.

```mermaid
graph TD
    A[Client Pod<br/>10.244.1.2] --> |ClusterIP<br/>10.96.0.10| B[Kubernetes Service]
    B --> C[Endpoint 1<br/>10.244.2.2]
    B --> D[Endpoint 2<br/>10.244.3.3]
    B --> E[Endpoint 3<br/>10.244.1.4]
    style A fill:#bbf,stroke:#333,stroke-width:2px
    style B fill:#ffa,stroke:#333,stroke-width:2px
```

Key Components:

  • ClusterIP: a stable virtual IP fronting a changing set of pods
  • kube-proxy: programs iptables or IPVS rules that forward service traffic to endpoints
  • EndpointSlices: track the ready pod IPs behind each service
  • CoreDNS: resolves service names to ClusterIPs

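
The diagram above corresponds to a plain ClusterIP Service. A minimal sketch, with the name, labels, and ports as illustrative assumptions:

```yaml
# ClusterIP Service: clients inside the cluster connect to a stable virtual
# IP, and kube-proxy load-balances connections across the matching pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: backend          # illustrative name
spec:
  type: ClusterIP        # the default; shown explicitly for clarity
  selector:
    app: backend         # pods labeled app=backend become endpoints
  ports:
  - port: 80             # port on the ClusterIP
    targetPort: 8080     # port on the pods
```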
Internet-to-Service Communication

External traffic enters the cluster through several different mechanisms.

```mermaid
graph TD
    A[Internet<br/>Client] --> B{Access Type}
    B --> |LoadBalancer| C[Cloud<br/>Load Balancer]
    C --> G[Service<br/>LoadBalancer]
    B --> |NodePort| D[Any Node<br/>Port 30000-32767]
    D --> H[Service<br/>NodePort]
    B --> |Ingress| E[Ingress<br/>Controller]
    E --> I[Ingress<br/>Resource]
    I --> J[Service<br/>ClusterIP]
    G --> K[Target Pods]
    H --> K
    J --> K
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style K fill:#bbf,stroke:#333,stroke-width:2px
```

Access Types:

  1. LoadBalancer Service:
    • Automatically provisions a cloud load balancer in CSP environments
    • Requires MetalLB or similar solution in on-premise deployments
    • Provides stable external IP address
  2. NodePort Service:
    • Opens a specific port on all nodes
    • Allows access via any node’s IP address
    • Port range is limited to 30000-32767 by default
  3. Ingress:
    • HTTP/HTTPS-based routing layer
    • Enables hostname and path-based routing
    • Can provide SSL termination, authentication, and more sophisticated traffic management


Network Components and Features

Network Components Table

| Component | Purpose | Communication Type | Implementation Examples |
|---|---|---|---|
| 🛠️ Container | Application runtime environment | localhost (127.0.0.1) | Docker, containerd, CRI-O |
| 📦 Pod | Basic scheduling unit, shared network namespace | Pod IP (e.g., 10.244.1.2) | kubelet, container runtime |
| 🖥️ Service | Stable endpoint for pod sets | ClusterIP (e.g., 10.96.0.10) | kube-proxy, CoreDNS |
| 🌐 Ingress | HTTP/HTTPS routing to services | HTTP(S) host/path based | NGINX, Traefik, Contour |
| 🔌 CNI Plugin | Pod networking implementation | Pod-to-pod networking | Calico, Flannel, Cilium |
| ⚖️ Load Balancer | External traffic distribution | External IP to Service | Cloud LB, MetalLB, HAProxy |

Key Network Features

Kubernetes Network Features by Layer
  • Pod Networking (Layer 3)
    - Flat network space
    - No NAT between pods
    - Unique IP per pod
    - Implemented by CNI plugins
  • Service Networking (Layer 4)
    - Stable endpoints via ClusterIP
    - Load balancing across pods
    - Service discovery via DNS
    - Implemented by kube-proxy and CoreDNS
  • Application Routing (Layer 7)
    - HTTP/HTTPS routing via Ingress
    - Path and hostname based routing
    - SSL termination
    - Authentication options


Advanced Networking Concepts

Network Policies

Network Policies provide firewall-like rules for pod-to-pod communication, enabling micro-segmentation.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
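
A common companion to a targeted allow rule like the one above is a namespace-wide default-deny policy, so that pods only accept the traffic explicitly allowed elsewhere:

```yaml
# Selects every pod in the namespace (empty podSelector) and, by listing
# Ingress with no rules, denies all inbound traffic by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Note that NetworkPolicy enforcement depends on the CNI plugin; plugins without policy support silently ignore these resources.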

Service Mesh

Service meshes like Istio, Linkerd, and Consul provide advanced networking capabilities for microservices:

| Feature | Description |
|---|---|
| Traffic Management | Fine-grained routing, traffic splitting, A/B testing, canary deployments |
| Security | Mutual TLS, certificate management, authentication, authorization |
| Observability | Distributed tracing, metrics collection, traffic visualization |
| Reliability | Circuit breaking, retries, timeouts, fault injection |

DNS in Kubernetes

CoreDNS provides service discovery within Kubernetes clusters:

  • Every Service gets a DNS name of the form <service>.<namespace>.svc.cluster.local
  • Pods can use the short name (e.g., example-service) for services in their own namespace
  • Headless Services (clusterIP: None) resolve to the individual pod IPs instead of a virtual IP


Troubleshooting Kubernetes Networking

Common networking issues and diagnostic commands:

```shell
# Check if pods can reach each other
kubectl exec -it <pod-name> -- ping <other-pod-ip>

# Check DNS resolution from a pod
kubectl exec -it <pod-name> -- nslookup <service-name>

# Examine service configuration
kubectl describe service <service-name>

# Check endpoints for a service
kubectl get endpoints <service-name>

# Test connectivity to a service
kubectl exec -it <pod-name> -- curl <service-name>:<port>

# Check ingress configuration
kubectl describe ingress <ingress-name>

# View network policies
kubectl get networkpolicies
```


Key Points

Kubernetes Networking Summary
  • Architecture Differences
    - CSP environments leverage cloud load balancers
    - On-premise requires solutions like MetalLB
    - Both share common internal networking principles
  • Traffic Flow
    - External → Ingress/Service → Pods → Containers
    - Pod-to-Pod communication across nodes
    - Service abstraction for stable endpoints
  • Implementation Considerations
    - CNI plugin selection affects performance and features
    - NetworkPolicy support varies by CNI plugin
    - Service mesh adds overhead but provides advanced features
