Kubernetes Ingress Controllers
A Kubernetes Ingress is a set of rules that exposes cluster services to external traffic. For those rules to take effect, the cluster must run an Ingress controller that implements them. Unlike the controllers that ship as part of the kube-controller-manager binary, an Ingress controller is not started automatically with a cluster; administrators must choose and deploy one or more Ingress controllers themselves.
While the Kubernetes project maintains controllers such as the community NGINX Ingress controller, the AWS Load Balancer Controller, and the GCE Ingress controller, administrators are equally free to choose a more advanced third-party controller based on their specific use cases.
What is an Ingress in Kubernetes?
Through an Ingress, cluster administrators set up traffic routing rules without exposing node services or creating load balancers.
Ingress in Kubernetes is primarily comprised of two components:
- Ingress API object: The Kubernetes API object that describes the desired state for exposing cluster services;
- Ingress controller: A component deployed within the cluster that actually implements the rules specified by the Ingress API object.
The Ingress controller reads and processes information from the Ingress object and implements the configurations within the cluster.
What Does a Kubernetes Ingress Configuration Look Like?
Just like any other Kubernetes resource, an Ingress object includes fields for apiVersion, kind, and metadata. The object's name must be a valid DNS subdomain name, and annotations are often used to configure advanced options specific to the chosen Ingress controller. The spec contains the information needed to configure a proxy server or load balancer.
The configuration specifications for a minimal Kubernetes Ingress resource would look similar to this:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: darwin-minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```
Ingress enables cluster administrators to direct HTTP(S) traffic by matching incoming requests against specific rules. Each Ingress rule contains the following specifications:
- An optional host to which the rules apply. If no host is specified, the rules apply to all incoming HTTP traffic;
- A list of paths associated with the backend services;
- A backend, described as the combination of a service name and service port to which matching traffic is sent.
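As an illustration, the following Ingress routes two paths of a single host to different backend services (the host and service names are placeholders for this sketch):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: darwin-host-ingress
spec:
  rules:
  - host: app.example.com        # only requests for this host match the rule
    http:
      paths:
      - path: /api               # requests for app.example.com/api ...
        pathType: Prefix
        backend:
          service:
            name: api-service    # ... are sent to the api-service backend
            port:
              number: 8080
      - path: /                  # all other requests for the host
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

Because `pathType: Prefix` matches on path segments, a request for `/api/v1/users` goes to `api-service`, while `/index.html` falls through to `web-service`.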
Advanced Kubernetes Ingress Controllers
Though the standard Kubernetes Ingress resource defines basic load balancing and traffic routing capabilities, it is often insufficient for production workloads. Advanced Ingress controllers fill this gap with scalable traffic management capabilities, resilient load balancing, and support for seamless release cycles.
While there is a plethora of advanced Ingress controllers that offer useful features and support different use cases, the list below considers the following factors: protocol support, API gateway features, enterprise support, and advanced traffic management.
1. NGINX Ingress Controller
On account of its proven reputation for technical innovation and ease of use, the NGINX Ingress controller remains one of the most popular traffic management solutions for Kubernetes and containerized applications. NGINX provides load balancing, caching, a web application firewall (WAF), and an API gateway for Kubernetes clusters, and is most often used as a reverse proxy for dynamic workloads.
It is also important to note that there are two separate NGINX Ingress controller projects: one maintained by the Kubernetes community and one owned and managed by NGINX. Both are widely used; the latter comes in both free and commercial editions.
The NGINX Ingress object includes several features for production-grade Kubernetes environments, such as:
- Performance monitoring and visibility: Helps detect unusual activities, enabling quick and efficient troubleshooting;
- Ingress resources: Includes a number of features to simplify header manipulation, sophisticated routing, TLS authentication, and circuit breaking for Kubernetes traffic management;
- Secured configurations: Leverages RBAC to simplify identity and access management, allowing existing configurations to be adapted seamlessly to new integrations; it also enforces guardrails through self-service security settings.
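To show how controller-specific annotations shape behavior, here is a sketch of an Ingress tuned for the community NGINX controller; the host, service name, and annotation values are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: darwin-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"    # force HTTP -> HTTPS
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"   # cap request body size
spec:
  ingressClassName: nginx        # hand this Ingress to the NGINX controller
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront     # placeholder backend service
            port:
              number: 80
```

The `ingressClassName` field is how a cluster running multiple controllers decides which one should reconcile a given Ingress object.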
NGINX Ingress Controller: Use Cases
- Centralized traffic routing: The NGINX Plus controller integrates seamlessly with the NGINX service mesh to simplify the management of both Ingress and Egress traffic from a single dashboard. The service mesh operates at layer 7, allowing for simple, intelligent management of cluster and application traffic.
- Load balancing for multiple environments: NGINX Plus enables Kubernetes administrators to utilize health checks, session persistence, and global server load balancing to balance network traffic across hybrid environments. The advanced controller balances HTTP, UDP, and TCP traffic, while allowing dynamic infrastructure reconfiguration without the need for a restart.
- Zonal isolation: As resource consumption is bound to a local environment, the NGINX controller processes requests on each node separately, making it easy to impose cluster limits.
2. Istio Ingress Gateway
The Istio Ingress Gateway is built on Envoy, which proxies data-plane traffic for fine-grained flow control. The gateway operates at the edge of the Istio service mesh, receiving incoming HTTP/TCP requests and forwarding them to cluster services according to custom routing rules. These rules enable easy control of API calls and HTTP traffic between cluster services, while simplifying fundamental traffic routing configurations, such as timeouts, circuit breakers, retries, and advanced deployment rollouts.
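In Istio, the edge listener and the routing rules live in separate resources: a Gateway describes which ports and hosts the ingress gateway accepts, and a VirtualService binds routes to it. A minimal sketch (hostname and service name are placeholders):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: darwin-gateway
spec:
  selector:
    istio: ingressgateway      # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: darwin-routes
spec:
  hosts:
  - "app.example.com"
  gateways:
  - darwin-gateway             # attach these routes to the Gateway above
  http:
  - route:
    - destination:
        host: test             # cluster service that receives the traffic
        port:
          number: 80
```

Keeping the two resources separate lets platform teams own the Gateway while application teams manage their own VirtualServices.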
Istio Ingress Gateway: Use Cases
- Dynamic Ingress control: Istio includes load balancing and service discovery capabilities that offer fine-grained control over cluster-bound traffic. Istio’s traffic management API makes it possible to apply special rules for Ingress traffic, enabling dynamic distribution of traffic across the load balancing pool. This also makes it possible to apply a custom load balancing policy for traffic destined toward a particular set of services in the Istio service mesh.
- Observability: The Istio Ingress Gateway generates telemetry for every service within the cluster. Cluster administrators can gain visibility into how each service behaves, making it easy to troubleshoot and optimize application performance.
- API access control: Istio Ingress includes security integrations for transparent TLS encryption, security policies, and identity management. Istio ships with strong security controls by default, eliminating the need to modify code or adopt additional components to improve security posture.
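The dynamic traffic distribution described above can be sketched with a weighted VirtualService route; the service name, subsets, and weights below are illustrative, and the subsets would be defined in a corresponding DestinationRule:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: darwin-canary
spec:
  hosts:
  - reviews                    # in-mesh service being split
  http:
  - route:
    - destination:
        host: reviews
        subset: v1             # stable version (subset from a DestinationRule)
      weight: 90               # 90% of traffic stays on the stable version
    - destination:
        host: reviews
        subset: v2             # canary version
      weight: 10               # 10% is shifted to the canary
```

Shifting the weights over time (90/10, then 50/50, then 0/100) is how canary and A/B rollouts are typically driven without redeploying the application.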
3. Emissary
Formerly known as Ambassador, Emissary-ingress is an open-source Kubernetes API gateway that relies on a declarative self-service deployment model. The gateway is built on the Envoy proxy to provide advanced traffic routing functions, such as automatic retries, rate limiting, circuit breakers, and load balancing. Emissary integrates with popular service mesh, distributed tracing, and observability solutions, so administrators can stay on top of Kubernetes application performance.
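Emissary's self-service model centers on its Mapping custom resource, which ties a host and URL prefix to a backend service. A minimal sketch, with a placeholder hostname and service:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: darwin-backend
spec:
  hostname: "app.example.com"    # which host the rule applies to
  prefix: /backend/              # URL path prefix to match at the edge
  service: backend-service:8080  # cluster service (and port) to route to
```

Because each Mapping is an independent object, individual teams can publish and update their own routes without touching a shared, monolithic Ingress definition.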
Emissary-ingress Controller: Use Cases
- Service mesh management: The Emissary-ingress controller connects multiple service meshes, such as Linkerd, Istio, and Consul, and enables seamless management from a single interface.
- Routing edge traffic: Emissary uses route rules specified in its custom resources to create Mapping objects. These routes enable administrators to redirect hosts and URL paths to cluster services from the edge.
- Live diagnostics: Emissary includes the K8s Initializer to offer a preconfigured application stack that enables distributed tracing for all services. The initializer helps administrators understand the topology and interaction between cluster services, enabling rapid troubleshooting and diagnostics.
4. Traefik Ingress Controller
Traefik is an HTTP reverse proxy and load balancing platform that configures itself dynamically for Kubernetes service networking. To do this, the Ingress controller listens to the Kubernetes API, automatically generating routes that connect external requests to services and updating configurations without requiring restarts. Along with leveraging Let's Encrypt to offer TLS security for incoming requests, Traefik provides performance metrics through major observability platforms, such as StatsD, Prometheus, InfluxDB, and Datadog, for easier monitoring.
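Besides the standard Ingress resource, Traefik exposes its routing model through an IngressRoute custom resource. A sketch of one is shown below; the host, service, and resolver names are placeholders, and the apiVersion assumes a recent Traefik release (older releases use the `traefik.containo.us/v1alpha1` group):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: darwin-route
spec:
  entryPoints:
  - websecure                    # Traefik's conventional HTTPS entry point
  routes:
  - match: Host(`app.example.com`) && PathPrefix(`/api`)
    kind: Rule
    services:
    - name: api-service          # backend Kubernetes service
      port: 8080
  tls:
    certResolver: letsencrypt    # ACME resolver defined in static config
```

The rule syntax (`Host(...) && PathPrefix(...)`) allows matching conditions to be combined in ways the standard Ingress spec cannot express directly.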
Traefik Ingress Controller: Use Cases
- TLS certificate generation/renewal: Traefik integrates with certificate authorities that support the Automatic Certificate Management Environment (ACME) protocol, such as Let's Encrypt, to automatically generate and renew TLS certificates for secure HTTP(S) connections.
- Monitoring metrics, logs, and traces: Traefik enables comprehensive observability for microservices architectures by offering insights into logs and metrics. It also enables audit tracing and correlation for enhanced observability/troubleshooting.
- Configuration and service discovery: Traefik connects to Providers, infrastructure components such as the Kubernetes API, from which it collects traffic routing data. The platform dynamically updates these routes whenever the configuration changes, providing automatic configuration discovery.
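The certificate automation described above is wired up in Traefik's static configuration. A sketch, with a hypothetical contact email, storage path, and resolver name:

```yaml
# Traefik static configuration (e.g. traefik.yml) -- illustrative values
entryPoints:
  web:
    address: ":80"               # serves the HTTP-01 ACME challenge
  websecure:
    address: ":443"
certificatesResolvers:
  letsencrypt:                   # resolver name referenced by routers
    acme:
      email: admin@example.com   # placeholder contact for Let's Encrypt
      storage: /data/acme.json   # where issued certificates are persisted
      httpChallenge:
        entryPoint: web          # answer challenges on the HTTP entry point
```

Any router that references the `letsencrypt` resolver then gets its certificate requested, stored, and renewed without manual intervention.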
Advanced Ingress Controllers: When to Use Them
| | NGINX | Istio | Emissary | Traefik |
|---|---|---|---|---|
| When to use* | When there's a need for central load balancing and a traffic routing solution across multiple environments | During A/B testing and canary deployments that require dynamic traffic routing | In clusters spanning multiple service meshes that should be managed from a single interface | For traffic shadowing and canary deployments |
*Quick note: In most cases, the Ingress controllers covered above can be used across many environments and workloads. The use cases discussed in this post highlight the most suitable applications for each controller, but are not exhaustive.
Conclusion
The Kubernetes Ingress object allows cluster administrators to define routing rules that govern access to cluster services. These rules outline the specifications used to expose containerized applications outside the cluster. While there are multiple ways of directing incoming HTTP(S) traffic to applications in the cluster, Ingress is often the most efficient, since it eliminates the need to create a separate load balancer for each service.
This article explored some of the advanced Ingress solutions for Kubernetes that support various use cases. A production-grade Kubernetes ecosystem relies on several external and internal services working in tandem. Though the community NGINX Ingress controller is a popular option, you should diligently assess your requirements and choose the controller that best supports your use case.