Creator of this small website
Apr 13, 2019 3 min read

Kubernetes, nginx-ingress, EKS and public IPs

This little post shows how to get the public IP of your client/visitor when using nginx-ingress on EKS. I’ve been reading a lot on this topic in the last few days, with many different things said. My problem was simple: how to filter incoming traffic by IP for some of my Kubernetes-exposed services. For example, I wanted to restrict access to the development version of an API to a handful of trusted IPs.

The different approaches

I’ve dug through several approaches, and here is why I ended up with this one. First, let’s list the constraints:

* being able to filter on the public IP of the visitor
* being able to use Let’s Encrypt certificates via cert-manager (even if I tried another approach)
* being able to use external-dns for DNS records (this is the easiest one, since it does not rely on the ingress)

I’ve looked at traefik-ingress, alb-ingress and nginx-ingress. I usually install things through helm, with a file holding all customized values for install/upgrade. In-house apps also have their own chart, deployed through helm.

The road blockers


alb-ingress

If you put aside the need to attach a rather big IAM policy to your worker nodes (thanks to the terraform EKS module for easing that) and to tag some resources (like subnets, which is not convenient when they are created separately from application deployment), alb-ingress had two flaws in my plan: no integration with Let’s Encrypt (see the almost-workaround below), and since I create my ingresses alongside my apps, I ended up with as many ALBs as I had services. My wallet would not accept that.

For the non-integration of alb-ingress with Let’s Encrypt, you can work around this by creating a wildcard certificate in ACM and serving a “flat” DNS zone/namespace for your apps. That’s not an elegant solution, though.
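The reason the wildcard forces a “flat” zone: a certificate for `*.example.com` only covers a single DNS label, so a nested name like `dev.api.example.com` falls outside it. Here is a quick illustrative sketch of that matching rule (`wildcard_covers` is a made-up helper, not a real X.509 verification routine):

```python
def wildcard_covers(cert_name: str, hostname: str) -> bool:
    """Check whether a single-label wildcard like *.example.com covers hostname."""
    if not cert_name.startswith("*."):
        return cert_name == hostname
    base = cert_name[2:]
    # The wildcard matches exactly one leading label, never deeper nesting.
    return (
        len(hostname.split(".")) == len(base.split(".")) + 1
        and hostname.endswith("." + base)
    )

print(wildcard_covers("*.example.com", "api.example.com"))      # True
print(wildcard_covers("*.example.com", "dev.api.example.com"))  # False
```

This is why every app ends up directly under the apex when you rely on a single ACM wildcard.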


traefik-ingress

Following the documentation, I haven’t been able to get anything other than the default 404 backend of this ingress, so nothing was working, not even the http-01 challenge for Let’s Encrypt. I fiddled around to debug why but did not find the solution :(


nginx-ingress

This ingress is probably the most common one out there. It’s super easy to find articles about it, but many of them are outdated and some are just copy/paste of others with the names changed. The most advertised solution to get the client’s real IP (instead of the load balancer’s) is to change externalTrafficPolicy, but I was not able to get that working on my EKS cluster and found it quite hacky. So I searched and enabled the proxy protocol on the ELB and in nginx-ingress, which gives the same functionality without relying on that setting. Here are the excerpts of configuration used to get it working.
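To see why this works: with the proxy protocol enabled, the ELB prepends a short text header carrying the original client address in front of the TCP payload, and nginx reads that header instead of the socket’s peer address. A rough Python sketch of parsing a PROXY protocol v1 header (illustrative only — `parse_proxy_v1` is a made-up helper, not something nginx or the chart exposes):

```python
def parse_proxy_v1(data: bytes):
    """Parse a PROXY protocol v1 header, returning (client_ip, client_port)."""
    # The ELB prepends a line like this before the actual HTTP bytes:
    #   PROXY TCP4 203.0.113.7 10.0.0.42 56324 80\r\n
    line = data.split(b"\r\n", 1)[0].decode("ascii")
    parts = line.split(" ")
    if parts[0] != "PROXY" or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("not a PROXY protocol v1 header")
    # Fields: source IP, destination IP, source port, destination port.
    return parts[2], int(parts[4])

ip, port = parse_proxy_v1(b"PROXY TCP4 203.0.113.7 10.0.0.42 56324 80\r\nGET / HTTP/1.1\r\n")
print(ip)  # the visitor's public IP, not the ELB's
```

This is exactly the information the whitelist filtering needs, and it survives the load balancer hop without touching externalTrafficPolicy.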

In the helm values for nginx-ingress, I used the following:

    controller:
      config:
        use-proxy-protocol: "true"
      service:
        enabled: true
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      # enable prometheus metrics
      metrics:
        enabled: true
      stats:
        enabled: true
    defaultBackend:
      enabled: true

And in my application’s ingress definition:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: {{ .Release.Name }}-ingress
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/whitelist-source-range: a.b.c.d/mask
        certmanager.k8s.io/cluster-issuer: letsencrypt-prod
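For reference, the whitelist filtering on `a.b.c.d/mask` boils down to a CIDR membership test on the recovered client IP. A minimal Python sketch of the same check (illustrative — the `is_allowed` helper and the CIDRs are made up):

```python
import ipaddress

# Example allowed ranges, standing in for the a.b.c.d/mask of the annotation.
ALLOWED = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.7/32")]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any whitelisted CIDR."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED)

print(is_allowed("203.0.113.9"))  # True
print(is_allowed("192.0.2.1"))    # False
```

Without the proxy protocol, nginx would run this test against the ELB’s internal address and either allow everyone or no one.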

And there you go!

    - [] - - [14/Apr/2019:13:05:05 +0000] "GET .... "

(This is a script kiddie’s IP, no one cares.)