Kubernetes Resource Requests Are a Massive Footgun

If you have Kubernetes workloads that configure Resource Requests on Pods or Containers, there’s a footgun “hidden” in a sentence in the documentation (kudos if you spot it immediately):

[…] The kubelet also reserves at least the request amount of that system resource specifically for that container to use. […]

This means Resource Requests actually reserve the requested amount of resources exclusively for that container. To emphasize: this is not a fairness measure in case of over-provisioning! The scheduler sums the requests of all Pods on a node and checks them against the node’s allocatable capacity, not its actual utilization. So, if there are Resource Requests you can’t “over-provision” your node/cluster … hell, the new pod won’t even be scheduled although your node is sitting idle. 😵😓
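As a concrete sketch (names, image, and numbers are illustrative): on a node with 4 allocatable CPUs, four replicas of the Pod below already claim the whole node as far as the scheduler is concerned, and a fifth replica stays Pending no matter how idle the CPUs actually are.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: greedy              # hypothetical name
spec:
  containers:
  - name: app
    image: nginx            # placeholder image
    resources:
      requests:
        cpu: "1"            # the scheduler subtracts this from the node's
        memory: 512Mi       # allocatable capacity, whether it's used or not
```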

By the time you find out why and have patched the offending resources you’ll be swearing up and down. 🤬
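The tell-tale sign is a FailedScheduling event on the stuck Pod; something along these lines (pod name is a placeholder, the exact event wording depends on your cluster):

```
kubectl get pods --field-selector=status.phase=Pending
kubectl describe pod my-pod
# look for a FailedScheduling event, e.g.:
#   0/3 nodes are available: 3 Insufficient cpu.
```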

Oh … and wait till you see what the Internet has to say about Resource Limits. 😰

Configuring Custom Ingress Ports With Cilium

This is just a note for anyone looking for a solution to this problem.

While this is extremely easy with Kubernetes’ newer Gateway API via listeners on Gateway resources, Ingress resources seem to have always been meant for (global?) default ports … mainly 80 and 443 for HTTP and HTTPS respectively. So every Ingress Controller seems to have its own “side-channel solution” that leverages some resource metadata to convey this information. For Cilium this happens to be the sparsely documented ingress.cilium.io/host-listener-port annotation.
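For comparison, here’s what the Gateway API equivalent looks like, where the port is a first-class field on the listener (resource names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway          # hypothetical name
spec:
  gatewayClassName: cilium
  listeners:
  - name: custom-http
    protocol: HTTP
    port: 1234              # arbitrary port, declared directly on the listener
```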

So your Ingress definition should look something like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ...
  namespace: ...
  annotations:
    ingress.cilium.io/host-listener-port: "1234"  # annotation values must be strings
spec:
  ingressClassName: cilium
  rules:
  - http: ...