ADR-005: Use ingress-nginx as the Ingress Controller

Status

Accepted

Date

2026-03-07

Context

A Kubernetes Ingress controller is needed to route HTTP/S traffic to services within the cluster. The controller must be CNCF-compatible, well-documented, and representative of production environments.

Decision

Use ingress-nginx (CNCF project) deployed via its official Helm chart with NodePort service type on Minikube.

Service type configuration:

controller:
  service:
    type: NodePort     # No 'minikube tunnel' required
    nodePorts:
      http: 30080
      https: 30443
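With these values saved to a file, the chart can be installed from its official repository. A sketch of the install steps; the values filename, release name, and namespace are assumptions, and the helm commands require a running cluster:

```shell
# Save the values above to a file (filename is an assumption).
cat > ingress-nginx-values.yaml <<'EOF'
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
EOF

# Install from the official chart repo (requires Helm and a running cluster):
# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm repo update
# helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
#   --namespace ingress-nginx --create-namespace \
#   -f ingress-nginx-values.yaml
```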

To switch to LoadBalancer (more realistic):

# Change service.type to LoadBalancer in ingress-nginx.yaml, then:
minikube tunnel  # run in a separate terminal and keep it open

Alternatives Considered

| Tool | Reason Not Chosen |
| --- | --- |
| Traefik | Default in k3s with an excellent UI, but less commonly used in enterprise Kubernetes; ingress-nginx is more representative |
| NGINX (NGINX Inc.) | Commercial variant; ingress-nginx is the community/CNCF version and better suited to learning open-source patterns |
| Contour (Envoy-based) | CNCF Incubating; excellent for learning Envoy concepts (Phase 3+ candidate) |
| HAProxy Ingress | Less ecosystem integration; not a CNCF project |
| Gateway API (built-in) | The future standard, but ingress-nginx has a simpler learning path and wider real-world adoption today |

Consequences

Positive

  • Most widely used ingress controller — patterns learned here apply to the majority of production clusters
  • Simple annotation-based configuration compatible with cert-manager
  • Prometheus metrics endpoint built in (useful for Phase 2 observability)
  • NodePort mode requires no extra tooling on Minikube
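The built-in metrics endpoint noted above is a chart toggle. A minimal sketch of the values; the ServiceMonitor option assumes the Prometheus Operator CRDs are installed in the cluster:

```yaml
controller:
  metrics:
    enabled: true        # exposes a Prometheus /metrics endpoint on the controller
    serviceMonitor:
      enabled: true      # assumes Prometheus Operator CRDs are present
```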

Negative

  • NodePort exposes fixed high-numbered ports (30080/30443) — less realistic than cloud LoadBalancer
  • Does not implement the new Kubernetes Gateway API (considered for Phase 3)

Trade-offs

Ecosystem breadth and learning transferability are prioritised over learning cutting-edge Gateway API patterns (which will be added in Phase 3).


Amendment — 2026-03-21: Switched to LoadBalancer (macOS Docker Driver)

Change: Service type changed from NodePort to LoadBalancer.

Reason: On macOS with the Docker driver, Minikube runs inside Docker's internal bridge network (192.168.49.x). This network is not routable from the macOS host — even with the correct /etc/hosts entry, connections to <minikube-ip>:30080 time out. The NodePort approach documented above does not work on macOS Docker driver.

Fix: With the service type set to LoadBalancer and minikube tunnel running, the tunnel assigns 127.0.0.1 as the external IP and routes traffic into the cluster. Services are now reachable on port 80 with no port suffix.

Updated configuration (infrastructure/controllers/ingress-nginx.yaml):

controller:
  service:
    type: LoadBalancer   # was NodePort
    # nodePorts removed — tunnel handles port mapping
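Rolling out the amendment is a helm upgrade against the existing release. A sketch; the release name and namespace are assumptions, and the upgrade command itself requires Helm and a running cluster:

```shell
# Reproduce the amended values file at the path named in this ADR.
mkdir -p infrastructure/controllers
cat > infrastructure/controllers/ingress-nginx.yaml <<'EOF'
controller:
  service:
    type: LoadBalancer
EOF

# Apply to the existing release (release/namespace names are assumptions):
# helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
#   --namespace ingress-nginx \
#   -f infrastructure/controllers/ingress-nginx.yaml
```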

Updated /etc/hosts — point to 127.0.0.1, not $(minikube ip):

127.0.0.1  grafana.local keycloak.local

minikube tunnel must be running (keep in a dedicated terminal):

sudo minikube tunnel

Impact on URLs: All service URLs no longer include :30080. http://grafana.local and http://keycloak.local work directly on port 80.
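A quick smoke test for the amended setup; a sketch that assumes minikube tunnel is running and uses the hostnames from this ADR. Sending the Host header explicitly also works without the /etc/hosts entries:

```shell
# Probe a hostname through the tunnel on 127.0.0.1:80.
# Returns 0 if the ingress answers for that Host header.
probe() {
  curl -sf --max-time 3 -H "Host: $1" -o /dev/null "http://127.0.0.1/"
}

# With 'minikube tunnel' running:
#   probe grafana.local  && echo "grafana reachable on port 80"
#   probe keycloak.local && echo "keycloak reachable on port 80"
```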

See ingress-nginx how-to for full setup instructions.