K8s with Traefik and HAProxy not getting real client IP on pods

I have an external LB using HAProxy in order to have an HA k8s cluster. My cluster is K3s from Rancher, and it uses Traefik as the internal LB.
I'm currently facing an issue where my pods see the Traefik IP instead of the real client IP.
HAP Configuration:
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5000
    timeout client 2000000
    timeout server 2000000

frontend k8s
    bind *:6443
    bind *:80
    bind *:443
    mode tcp
    option tcplog
    use_backend masters-k8s

backend masters-k8s
    mode tcp
    balance roundrobin
    server master01 master01.k8s.int.ntw
    server master02 master02.k8s.int.ntw
# end Ansible managed
Traefik Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  labels:
    app: traefik
    app.kubernetes.io/managed-by: Helm
    chart: traefik-1.81.0
    heritage: Helm
    release: traefik
spec:
  clusterIP: 10.43.250.142
  clusterIPs:
  - 10.43.250.142
  externalTrafficPolicy: Local
  ports:
  - name: http
    nodePort: 32232
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30955
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: traefik
    release: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.0.1.1
    - ip: 10.0.1.11
    - ip: 10.0.1.12
    - ip: 10.0.1.2
With this configuration I never get my real IP in the pods. During some research I saw that people recommend using send-proxy in the HAProxy config, like this:
server master01 master01.k8s.int.ntw check send-proxy-v2
server master02 master02.k8s.int.ntw check send-proxy-v2
But when I do so, all my cluster communication returns ERR_CONNECTION_CLOSED.
If I'm reading it correctly, this means the traffic goes from HAProxy to the cluster, and the cluster rejects it somewhere.
Any clues what I'm missing here?
Thanks

Well, you have two options:
use the proxy protocol
use the X-Forwarded-For header
Option 1: proxy protocol
This option requires that both sides, HAProxy and Traefik, use the proxy protocol; that's the reason why people recommend send-proxy-v2.
It also requires that every other client that wants to connect to Traefik MUST use the proxy protocol; if a client does not use it, you get exactly what you are getting: a communication error.
As you configured HAProxy in TCP mode, this is the only option to get the client IP to Traefik.
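On the Traefik side that means enabling the proxy protocol on the entrypoints and trusting the HAProxy address. A minimal sketch in Traefik v2 static-configuration style (the 10.0.1.0/24 range is a placeholder for your HAProxy hosts; the Traefik 1.x bundled with older K3s takes the equivalent setting as ProxyProtocol.TrustedIPs entrypoint flags instead):

entryPoints:
  websecure:
    address: ":443"
    proxyProtocol:
      trustedIPs:
        - "10.0.1.0/24"  # placeholder: addresses HAProxy connects from

If Traefik is not expecting the proxy protocol, the PROXY header that send-proxy-v2 puts at the start of the stream looks like garbage and the connection is dropped, which matches the ERR_CONNECTION_CLOSED above.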
Option 2: X-Forwarded-For
I personally would use this option because it makes it possible to connect to Traefik with any HTTP(S) client.
It requires that you use HTTP mode in HAProxy, plus some more parameters like option forwardfor.
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5s
    timeout client 200s
    timeout server 200s
    # send client ip in the x-forwarded-for header
    option forwardfor

frontend k8s
    bind *:6443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:80 v4v6  # plain HTTP; no ssl here, TLS stays on :443 and :6443
    bind *:443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    use_backend masters-k8s

backend masters-k8s
    balance roundrobin
    server master01 master01.k8s.int.ntw check
    server master02 master02.k8s.int.ntw check
# end Ansible managed
The file "/etc/haproxy/letsencryptauthorityx3.pem" contains the CA certificates, and the directory "/etc/ssl/haproxy/" contains the certificates for the frontends.
Please take a look at the documentation for the crt keyword.
You also have to configure Traefik to accept the forwarded headers from HAProxy (forwarded-headers).
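With Traefik v2 this is the forwardedHeaders option on the entrypoint; a minimal sketch (the 10.0.1.0/24 range is a placeholder for the address HAProxy connects from, and how you pass this depends on the chart/version you deploy Traefik with):

entryPoints:
  web:
    address: ":80"
    forwardedHeaders:
      trustedIPs:
        - "10.0.1.0/24"  # placeholder: only trust X-Forwarded-* sent from HAProxy

If the trust list does not match the proxy's source address, Traefik ignores the incoming X-Forwarded-For header and you are back to seeing the proxy's IP.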

Related

HAProxy peering in kubernetes

Background
Because our application needs to use stick tables for a custom header, we decided to use HAProxy. Our layout looks as follows:
Nginx Ingress -> HAProxy service -> headless services of stateful application
So far stickiness works fine, but if a request is handled by the other HAProxy replica, it fails. We are trying to use peers to address this problem.
I use the Bitnami Helm chart to deploy it; this is my values file:
metadata:
  chartName: bitnami/haproxy
  chartVersion: 0.3.7
service:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: peers
      protocol: TCP
      port: 10000
      targetPort: 10000
containerPorts:
  - name: http
    containerPort: 8080
  - name: https
    containerPort: 8080
  - name: peers
    containerPort: 10000
configuration: |
  global
    log stdout format raw local0 debug
  defaults
    mode http
    option httplog
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s
    log global
  resolvers default
    nameserver dns1 172.20.0.10:53
    hold timeout 30s
    hold refused 30s
    hold valid 10s
    resolve_retries 3
    timeout retry 3s
  peers hapeers
    peer $(MY_POD_IP):10000    # I attempted to do something like this
    peer $(REPLICA_2_IP):10000 #
  frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s
  frontend myfrontend
    mode http
    option httplog
    bind *:8080
    default_backend webservers
  backend webservers
    mode http
    log stdout local0 debug
    stick-table type string len 64 size 1m expire 1d peers hapeers
    stick on req.hdr(MyHeader)
    server s1 headless-service-1:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
    server s2 headless-service-2:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
    server s3 headless-service-3:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
replicaCount: 2
extraEnvVars:
  - name: LOG_LEVEL
    value: debug
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
From what I read in the HAProxy documentation, peering requires the peers' IPs, which in this case are the replicas' IPs. However, the ConfigMap does not allow injecting the IPs of the HAProxy replicas.
I also thought of using an initContainer to modify the haproxy.cfg at deployment time with the correct IPs, but the volume is read-only, and I would have to maintain a fork of the chart to customize it.
If anyone has an idea of a different approach or workaround, I would appreciate the comments. Thanks!
...the configmap does not allow injecting IPs from the HAProxy replicas.
HAProxy's configuration supports environment variables; note the syntax is ${VAR}, not $(VAR). E.g. peer $(MY_POD_IP):10000 => peer ${MY_POD_IP}:10000
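A minimal sketch of what the peers section could then look like (HOSTNAME is set to the pod name by default in Kubernetes, MY_POD_IP comes from the Downward API entry already in the values file; PEER2_NAME and PEER2_IP are hypothetical variables you would still have to provide for the other replica):

peers hapeers
  peer ${HOSTNAME} ${MY_POD_IP}:10000
  peer ${PEER2_NAME} ${PEER2_IP}:10000  # hypothetical: name/IP of the second replica

HAProxy recognizes its own line in the peers section by matching the peer name against the local host name, so naming the peers after the pod host names is the usual approach.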

HAProxy Not Working with Kubernetes NodePort for Backend (Bare Metal)

I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # Cluster or Local
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully ( curl 10.0.0.20:30080 ), and I get "Welcome to nginx!..." However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified it is not running. SELinux was disabled when I installed the cluster, and I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
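To verify which ports the policy allows before and after the change (semanage is part of the policycoreutils tooling):

# list the ports currently allowed for http_port_t
sudo semanage port --list | grep http_port_t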

Trying to understand the purpose of HAProxy with Kubernetes in this guide

Can someone please skim over this guide and tell me the use case of HAProxy in it?
Install and configure a multi-master Kubernetes cluster with kubeadm
I've gone through the guide and set this up. Everything is working properly between my Kubernetes cluster and HAProxy, from what I can tell.
HAProxy has been set up on a VM separate from my Kubernetes cluster. The HAProxy IP is 10.1.160.170.
I was hoping to visit my HAProxy IP and be redirected to one of my Kubernetes nodes being load balanced. This isn't the case.
I can set up an Nginx deployment with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Then create the service:
kubectl expose deployment my-nginx --port=80 --type=NodePort
user#KUBENODE01:~$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        93d
my-nginx     NodePort    10.108.33.134   <none>        80:30438/TCP   46s
If I now try and visit my HAProxy IP 10.1.160.170, I'm not redirected to my Kubernetes node on port 30438.
user#computer:~/nginx_testing$ curl http://10.1.160.170
curl: (7) Failed to connect to 10.1.160.170 port 80: Connection refused
user#computer:~/nginx_testing$ curl https://10.1.160.170
curl: (7) Failed to connect to 10.1.160.170 port 443: Connection refused
user#computer:~/nginx_testing$ curl 10.1.160.170:30438
curl: (7) Failed to connect to 10.1.160.170 port 30438: Connection refused
Is HAProxy not meant to forward connection requests to the actual Kubernetes nodes in this article?
I've also tried this with the service type LoadBalancer.
Here is my haproxy.cfg:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:EC>
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes
    # bind 10.1.160.170:80
    bind 10.1.160.170:6443
    # http-request redirect scheme https unless { ssl_fc }
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server kubenode01 10.1.160.79:6443 check fall 3 rise 2
    server kubenode02 10.1.160.80:6443 check fall 3 rise 2
    server kubenode03 10.1.160.81:6443 check fall 3 rise 2
Port 6443 is the k8s API server port; kubectl talks to that API server to do its work.
In a k8s scenario with one master, you can access the k8s API with that master node's IP.
But in a scenario with 3 masters, which is considered an HA setup, you should go through load balancing, even though you can still reach any master directly; that's the whole point.
For example, in an HA setup you should set the server address in your kubeconfig file to the HAProxy IP, so your kubectl commands get redirected by HAProxy to one of the healthy masters.
Note that the haproxy.cfg above only binds 10.1.160.170:6443, so nothing listens on ports 80, 443 or 30438 of the HAProxy VM; that is why your curl attempts are refused. To load balance application traffic as well, you would add a separate frontend and backend for the NodePort, as sketched below.
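A minimal sketch of what that extra frontend and backend could look like, reusing the node IPs from your haproxy.cfg and the NodePort from your service (this is an addition, not something from the guide):

frontend nginx-http
    bind 10.1.160.170:80
    mode tcp
    option tcplog
    default_backend nginx-nodeport

backend nginx-nodeport
    mode tcp
    balance roundrobin
    server kubenode01 10.1.160.79:30438 check
    server kubenode02 10.1.160.80:30438 check
    server kubenode03 10.1.160.81:30438 check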

traefik v2 redirect TCP requests

We have the following situation.
We are using Kerberos on a Tomcat running in a Docker swarm; as reverse proxy we use Traefik, also hosted in the Docker swarm.
Kerberos needs the SSL request passed through via a TCP router. This works without problems. But now we also want to redirect requests on port 80 to the TCP router on port 443.
We tried a lot, but no success at all :-(
Here is the traefik configuration
labels:
  traefik.tcp.routers.example.entrypoints: https
  traefik.tcp.services.example.loadbalancer.server.port: '8443'
  traefik.tcp.routers.example.service: example
  traefik.tcp.routers.example.tls.passthrough: 'true'
  traefik.constraint-label: traefik-public
  traefik.tcp.routers.example.rule: HostSNI(`example.com`)
  traefik.docker.network: traefik-public
  traefik.enable: 'true'
  traefik.tcp.routers.example.tls: 'true'
regards
Meex
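For reference, the usual Traefik v2 approach (v2.2 and later) is an entrypoint-level redirection in the static configuration: port 80 answers plain HTTP and redirects to the TLS entrypoint before any TCP routing happens. A minimal sketch, assuming the entrypoints are named http and https (https matches the labels above; the http name is an assumption):

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"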

kubernetes expose services with Traefik 2.x as ingress with CRD

What do I have working
I have a Kubernetes cluster as follows:
Single control plane (but I plan to extend to 3 control planes for HA)
2 worker nodes
On this cluster I deployed (following this doc from Traefik: https://docs.traefik.io/user-guides/crd-acme/):
A deployment that creates two pods:
traefik itself: in charge of routing, with ports 80 and 8080 exposed
whoami: a simple HTTP server that responds to HTTP requests
two services:
traefik service:
whoami service:
One traefik IngressRoute:
What do I want
I have multiple services running in the cluster, and I want to expose them to the outside using Ingress.
More precisely, I want to use the new Traefik 2.x CRD ingress methods.
My ultimate goal is to use the new Traefik 2.x CRDs to expose resources on ports 80, 443 and 8080 using IngressRoute custom resource definitions.
What's the problem
If I understand well, classic Ingress controllers allow exposing any port we want to the outside world (including 80, 8080 and 443).
But the new Traefik CRD ingress approach on its own does not expose anything at all.
One solution is to define the Traefik service as a LoadBalancer-typed service and then expose some ports. But you are forced to use the 30000-32767 port range (same as NodePort), and I don't want to add a reverse proxy in front of the reverse proxy just to be able to expose ports 80 and 443...
Also, I've seen from the doc of the new ingress CRD (https://docs.traefik.io/user-guides/crd-acme/) that:
kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default
is required, and I understand that now. You need to map the host port to the service port.
But mapping the ports that way feels clunky and counter-intuitive. I don't want to have part of the service description in a YAML file and at the same time have to remember that I need to map ports with kubectl.
I'm pretty sure there is a neat and simple solution to this problem, but I can't figure out how to keep things simple. Do you have experience with the new Traefik 2.x CRD config in Kubernetes?
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: 8000
    - protocol: TCP
      name: admin
      port: 8080
      targetPort: 8080
    - protocol: TCP
      name: websecure
      port: 443
      targetPort: 4443
  selector:
    app: traefik
Have you tried to use targetPort, so that every request coming in on 80 is redirected to 8000? Also, when you use port-forward, you always need to target the service instead of the pod.
You can try to use the LoadBalancer service type to expose the Traefik service on ports 80, 443 and 8080. I've tested the yaml from the link you provided in GKE, and it works.
You need to change the ports on the 'traefik' service and add 'LoadBalancer' as the service type:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80        # <== Port to receive HTTP connections
    - protocol: TCP
      name: admin
      port: 8080      # <== Administration port
    - protocol: TCP
      name: websecure
      port: 443       # <== Port to receive HTTPS connections
  selector:
    app: traefik
  type: LoadBalancer  # <== Define the load balancer type
Kubernetes will create a load balancer for your service, and you can access your application using ports 80 and 443:
$ curl https://35.111.XXX.XX/tls -k
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /tls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
$ curl http://35.111.XXX.XX/notls
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /notls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
Well, after some time I decided to put an HAProxy in front of the Kubernetes cluster. It seems to be the only solution ATM.
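A minimal sketch of what such an HAProxy in front of the cluster can look like: plain TCP forwarding of ports 80 and 443 to Traefik's NodePorts on the workers (the node IPs and NodePort numbers below are placeholders, not values from the question):

frontend http
    bind *:80
    mode tcp
    default_backend traefik-web

backend traefik-web
    mode tcp
    balance roundrobin
    server worker1 192.168.0.21:30080 check  # placeholder node IP / NodePort
    server worker2 192.168.0.22:30080 check

frontend https
    bind *:443
    mode tcp
    default_backend traefik-websecure

backend traefik-websecure
    mode tcp
    balance roundrobin
    server worker1 192.168.0.21:30443 check
    server worker2 192.168.0.22:30443 check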