Are All Kubernetes Ingresses the Same?

The short answer is both yes and no, but the real answer is more complicated. A lot has been written on this topic, and this is my attempt at making the area more understandable.

Before getting started, it’s important that I point out a key fact: upstream k8s does not provide an Ingress. As with components such as service load balancers and storage, it provides only the API that a controller consumes to create the functionality described in the k8s resource. An Ingress consists of a controller watching the k8s APIs and a proxy engine that the controller programs to effect forwarding.

The k8s Access Pattern

The original access pattern consisted of the Service LoadBalancer and the Ingress. The LoadBalancer attracts traffic to the cluster by adding IP addresses, and the Ingress distributes requests to PODs over the CNI network. While they were designed to be used together, they can be used independently.

  • Service LoadBalancer-only access. Traffic is attracted to the cluster nodes and sent to PODs either directly or via kube-proxy, depending on configuration. It’s best to think of the Service LoadBalancer as an L3/L4 function: it forwards traffic based on IP, and when traffic arrives at a node where a targeted POD is not present, it depends on kube-proxy to reach the other nodes. Even so, a LoadBalancer-only solution can provide uniform access to a k8s cluster.
  • Ingress-only access. An Ingress depends on another mechanism to get traffic to its proxy POD. The Ingress distributes traffic within the cluster based on its HTTP routing rules. These rules program a proxy engine in the Ingress POD, which, because it runs in the cluster, has direct access to the PODs via the CNI. The Service API offers mechanisms other than the LoadBalancer to get traffic to the Ingress, but they are all specific to, or require configuration of, external infrastructure.
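As a concrete reference point for the LoadBalancer half of the pattern, here is a minimal sketch of a Service of type LoadBalancer. The name, selector, and port are placeholders reused from the examples later in this post.

```yaml
# Minimal Service of type LoadBalancer. The LoadBalancer controller
# watching the Service API allocates an external IP address; kube-proxy
# (or the CNI) then carries traffic from the receiving node to the PODs.
apiVersion: v1
kind: Service
metadata:
  name: ingress-test            # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: ingress-test-server    # placeholder POD label
  ports:
  - port: 8080
    protocol: TCP
```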

Both the LoadBalancer and the Ingress depend on the Service API: the LoadBalancer is a Service type, and the Ingress uses Services to define request Endpoints.

Cloud Provider Ingresses

To make things more confusing, the cloud providers did not follow this design pattern. Cloud providers integrate their infrastructure resources with Kubernetes using a Cloud Controller, not via independent LoadBalancer, Ingress, or Storage controllers.

The cloud providers already had load balancers, which they call NLBs, operating at L3/L4, so these mapped well onto the Service LoadBalancer API. They also had a product called the Application Load Balancer, or ALB. One of the providers decided to map the ALB onto the Ingress resource, and the others followed suit. However, unlike other k8s Ingresses, the ALB sits outside the cluster, because it’s a reuse of the networking tools built for virtualization.

Most importantly, the cloud provider Ingress implementation allocates IP addresses, a task normally handled by the Service API, often via the LoadBalancer type. In a cloud provider, therefore, each is an independent entity: a cloud LoadBalancer is not paired with a cloud Ingress, although a cloud LoadBalancer can be paired with an in-cluster Ingress.

Are Ingresses API Gateways?

Not really. An API gateway includes functionality that the Ingress API does not support, so strictly speaking an Ingress is not an API gateway. There are quite a few missing functions; a great example is header matching, often used in development A/B testing. The Ingress resource does not support this basic function.
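To make the gap concrete, here is a sketch of the Ingress API’s entire matching surface (the names are placeholders): a rule can match a host and a path, and nothing in the spec can express a match on a request header.

```yaml
# A core Ingress rule can match only on host and path. There is no
# field for matching a header such as "x-canary: true", which is why
# A/B-style routing requires vendor-specific CRDs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: no-header-matching     # placeholder name
spec:
  rules:
  - host: example.com          # host match: supported
    http:
      paths:
      - path: /pre             # path match: supported
        pathType: Prefix
        backend:
          service:
            name: ingress-test
            port:
              number: 8080
```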

Are All Ingresses the Same?

Yes and no… Any k8s Ingress controllers configured using the Ingress API are the same, in that they all provide the same functionality. They may be implemented with different proxy engines, but the Ingress API defines the available functionality.

The Truth About Ingresses

With the exception of the CNCF NGINX Ingress, which only supports the Ingress API, all Ingresses are different.

Because the Ingress API lacks key functionality, every Ingress has a unique configuration model. You cannot apply a configuration written for a Solo Ingress to an Ambassador Ingress; each uses configuration defined in Custom Resource Definitions (CRDs) unique to its implementation.

The expanded functionality required for an Ingress controller to operate as an API gateway is well understood and uniform across the “Ingresses.” However, each Ingress developer has a different view of how an Ingress should be configured, each attempting to make it simple according to their own definition of simple and their target audience.

Examining Ingress Configuration Differences

There are lots of Ingress controllers, and I have looked at many of them; there are far more than I could accurately compare. I picked three popular Ingresses — Solo.io’s GlooEdge, Traefik’s Proxy, and the Kong Kubernetes Ingress — and looked at the Custom Resource Definition-based configuration that qualifies each of them as an API gateway. Before I go on: they are all great products, none of them has got this wrong, they are all just different.

To show the difference, I’ll use a simple configuration, not possible with the k8s Ingress API, that includes URL rewriting. The URL presented by the Ingress is /pre, and it is remapped to /backend, the target URL in the POD. Each Custom Resource is easily identified by its apiVersion.

GlooEdge

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - exact: /pre
      options:
        prefixRewrite: /backend
      routeAction:
        single:
          upstream:
            name: ingress-test-8080
            namespace: gloo-system

Solo’s engineers have clearly put a lot of thought into their configuration. Resources can be created using their CLI, glooctl. The controllers dynamically discover k8s Services and create resources called Upstreams that are used as route targets. The VirtualService Custom Resource (CR) contains the routes and, in prefixRewrite, the function we require that is not available in the Ingress API.

There is clearly a lot of focus on application frontend and backend developers at Solo. While it’s not shown in the simple configuration above, they have done some clever stuff: their product can take OpenAPI references as configuration objects that both configure the API gateway and document/publish the API.

Gloo Edge Proxy Engine. Envoy is the proxy engine used by Gloo Edge. It’s a modern, well-maintained, open-source proxy engine.

Traefik Proxy

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
spec:
  entryPoints:
    - web
  routes:
  - match: PathPrefix(`/pre`)
    kind: Rule
    services:
    - name: ingress-test
      port: 8080
    middlewares:
    - name: replace-path
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: replace-path
spec:
  replacePath:
    path: /backend

The team here seems much more focused on Traefik Proxy functionality. Their product existed before Kubernetes, and they have translated their existing configuration model into CRDs. The rewrite function is contained in the Middleware CR, which is referenced from the IngressRoute CR. I’m sure it’s great for those who use Traefik elsewhere, but reading their documentation is a chore.

If your focus is URL routing and you have lots of other non-Kubernetes environments and are looking for something that will run in all of them, Traefik may be a good choice.

Traefik Proxy Engine. The proxy engine is written in Go and is specific to Traefik.

Kong Kubernetes Ingress

apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/override: demo-customization
  name: ingress-test
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: ingress-test-server
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/override: demo-customization
  name: demo
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /pre
        pathType: ImplementationSpecific
        backend:
          service:
            name: ingress-test
            port:
              number: 8080
---
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-customization
route:
  methods:
  - GET
  strip_path: true
proxy:
  path: /backend/

Kong is the big guy in API gateways (couldn’t resist). Their gateway platform has been around longer than Kubernetes, and this is an integration of that platform with Kubernetes. The team here has taken a very different path to configuration: they do have CRDs, but instead of replacing the Ingress object with a CR, they modify the Ingress and Service objects with annotations referencing their CR.

Both the Service object and the Ingress object are annotated to reference the KongIngress CR. The Ingress is annotated so it can be modified with the strip_path parameter, and the Service is annotated so it can reference the proxy parameter. Each of these could have been placed in its own KongIngress CR.
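To illustrate that alternative, here is a sketch of the same customization split into two KongIngress CRs, one per annotated object. The CR names are placeholders; each annotation would reference the corresponding name.

```yaml
# Hypothetical split of the single KongIngress above into two CRs:
# the Ingress annotation would reference the route customization,
# and the Service annotation the proxy customization.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-route-customization   # referenced from the Ingress annotation
route:
  methods:
  - GET
  strip_path: true
---
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-proxy-customization   # referenced from the Service annotation
proxy:
  path: /backend/
```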

I applaud the Kong engineers for attempting this configuration model. It arguably makes the Kong Kubernetes Ingress more compatible with the rest of a k8s configuration. However, the execution is more difficult than the concept and could be considered confusing.

Kong Proxy Engine. Kong uses a fork of the somewhat aging, open-source NGINX proxy engine.

Is Anyone Trying to Simplify This?

Yes. The GatewayAPI Special Interest Group in Kubernetes has been working on a new API. The GatewayAPI will provide all of the traffic- and request-management functionality needed to implement API gateways both inside and outside the cluster.

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: http-filter-rewrite
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: uswest-gtwapi
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: /pre
      filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplacePrefixMatch
            replacePrefixMatch: /backend
      backendRefs:
      - name: ingress-test
        weight: 1
        port: 8080

All three of the in-cluster Ingress providers above have experimental support for the GatewayAPI today, and in the near term the GatewayAPI will be promoted to beta, making it available by default from upstream k8s.
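For completeness, the HTTPRoute above binds to a Gateway through its parentRefs. Here is a sketch of what that Gateway might look like; the gatewayClassName is an assumption, supplied in practice by whichever implementation is installed.

```yaml
# Hypothetical Gateway matching the HTTPRoute's parentRefs
# (name: uswest-gtwapi). The gatewayClassName selects the
# implementation (Gloo, Traefik, Kong, a cloud provider, etc.).
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: uswest-gtwapi
spec:
  gatewayClassName: example-gateway-class  # assumption: provided by the implementation
  listeners:
  - name: web
    protocol: HTTP
    port: 80
```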

Cloud Providers

The GatewayAPI also addresses the confusion caused by cloud providers implementing the Ingress with a proxy external to the cluster: it supports external gateways, creating per-namespace gateways on demand. There are currently two implementations.

You can learn more about the GatewayAPI at https://gateway-api.sigs.k8s.io/ and see the current implementation status at https://gateway-api.sigs.k8s.io/implementations.

