Trying to understand the purpose of HAProxy with Kubernetes in this guide

Can someone please skim over this guide and tell me what HAProxy is used for in it? The guide is "Install and configure a multi-master Kubernetes cluster with kubeadm".
I've gone through the guide and set this up. Everything is working properly between my Kubernetes cluster and HAProxy, from what I can tell.
HAProxy has been set up on a VM separate from my Kubernetes cluster. The HAProxy IP is 10.1.160.170.
I was hoping to visit my HAProxy IP and be redirected to one of my Kubernetes nodes that is being load balanced. This isn't the case.
I can set up an Nginx deployment with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Then create the service:
kubectl expose deployment my-nginx --port=80 --type=NodePort
user@KUBENODE01:~$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        93d
my-nginx     NodePort    10.108.33.134   <none>        80:30438/TCP   46s
If I now try and visit my HAProxy IP 10.1.160.170, I'm not redirected to my Kubernetes node on port 30438.
user@computer:~/nginx_testing$ curl http://10.1.160.170
curl: (7) Failed to connect to 10.1.160.170 port 80: Connection refused
user@computer:~/nginx_testing$ curl https://10.1.160.170
curl: (7) Failed to connect to 10.1.160.170 port 443: Connection refused
user@computer:~/nginx_testing$ curl 10.1.160.170:30438
curl: (7) Failed to connect to 10.1.160.170 port 30438: Connection refused
Is HAProxy not meant to forward connection requests to the actual Kubernetes nodes in this article?
I've also tried this with the service type LoadBalancer.
Here is my haproxy.cfg:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:EC>
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes
    # bind 10.1.160.170:80
    bind 10.1.160.170:6443
    # http-request redirect scheme https unless { ssl_fc }
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server kubenode01 10.1.160.79:6443 check fall 3 rise 2
    server kubenode02 10.1.160.80:6443 check fall 3 rise 2
    server kubenode03 10.1.160.81:6443 check fall 3 rise 2

Port 6443 is the Kubernetes API server port. kubectl talks to this API server to do its work.
In a cluster with one master, you can access the Kubernetes API using that master node's IP.
But in a cluster with three masters, which is considered an HA setup, you should go through a load balancer, even though you can still reach any master directly; that is the whole point of the setup.
For example, in an HA setup you should set the server address in your kubeconfig file to the HAProxy IP, so that your kubectl commands are forwarded by HAProxy to whichever master is healthy.
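A minimal kubeconfig excerpt with the server pointed at HAProxy might look like this (the cluster name and certificate-authority path here are placeholders, not taken from the guide):
apiVersion: v1
kind: Config
clusters:
- name: ha-cluster                                      # placeholder cluster name
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt   # adjust to your CA location
    server: https://10.1.160.170:6443                   # the HAProxy IP, not an individual master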

Related

port forwarding on microk8s on mac m1

I'm new to microk8s, and I'm trying things out by deploying a simple apache2 to see it working on my Mac M1:
◼ ~ $ microk8s kubectl run apache --image=ubuntu/apache2:2.4-22.04_beta --port=80
pod/apache created
◼ ~ $ microk8s kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
apache   1/1     Running   0          5m37s
◼ ~ $ microk8s kubectl port-forward pod/apache 3000:80
Forwarding from 127.0.0.1:3000 -> 80
but:
◼ ~ $ curl http://localhost:3000
curl: (7) Failed to connect to localhost port 3000 after 5 ms: Connection refused
I've also tried to use a service:
◼ ~ $ microk8s kubectl expose pod apache --type=NodePort --port=4000 --target-port=80
service/apache exposed
◼ ~ $ curl http://localhost:4000
curl: (7) Failed to connect to localhost port 4000 after 3 ms: Connection refused
I guess I'm doing something wrong?
For some reason I haven't figured out yet, if I port-forward from within the VM (by opening a shell via multipass), it does work. Then you simply have to point to the VM's IP:
within a VM's shell:
ubuntu@microk8s-vm:~$ sudo microk8s kubectl port-forward service/hellopg 8080:8080 --address="0.0.0.0"
Forwarding from 0.0.0.0:8080 -> 8080
Handling connection for 8080
ubuntu@microk8s-vm:~$ ifconfig enp0s1
enp0s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.64.2  netmask 255.255.255.0  broadcast 192.168.64.255
        inet6 fde3:1a04:ba31:1209:5054:ff:fea9:9cf4  prefixlen 64  scopeid 0x0<global>
from the host:
curl http://192.168.64.2:8080/hello
{"status": "how you doing?", "env_var":"¡hola mundo!"}
it works. I guess the command run via microk8s is not being executed properly within the machine? If anybody can explain this I'll update the question.
MicroK8s acts the same as Kubernetes. So it's better to create a Service of type NodePort. This would expose your apache.
apiVersion: v1
kind: Service
metadata:
  name: my-apache
spec:
  type: NodePort
  selector:
    app: apache
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004
Change the selector as per your requirement. For more detailed information to create NodePort service refer to this official document
You can use ingress as well. But in your case only for testing you can go with NodePort
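With that Service in place, a quick test from the Mac host might look like this (192.168.64.2 is the VM address from the ifconfig output above, and 30004 is the nodePort from the manifest; adjust both to your own values):
curl http://192.168.64.2:30004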
I think the easiest way for you to test it would be adding externalIPs to your service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 192.168.56.100 # your cluster node IP
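Assuming 192.168.56.100 really is an address of one of your nodes, you could then test it with a plain curl against port 80:
curl http://192.168.56.100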
Happy coding!

HAProxy peering in kubernetes

Background
Because our application needs to use stick tables keyed on a custom header, we decided to use HAProxy. Our layout looks as follows:
Nginx Ingress -> HAProxy service -> headless services of stateful application
So far stickiness works fine, but there is a scenario where a request handled by the other HAProxy replica fails. We are trying to use peers to address this problem.
I use the Bitnami Helm chart to deploy it; this is my values file:
metadata:
  chartName: bitnami/haproxy
  chartVersion: 0.3.7
service:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8080
  - name: peers
    protocol: TCP
    port: 10000
    targetPort: 10000
containerPorts:
- name: http
  containerPort: 8080
- name: https
  containerPort: 8080
- name: peers
  containerPort: 10000
configuration: |
  global
    log stdout format raw local0 debug
  defaults
    mode http
    option httplog
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s
    log global
  resolvers default
    nameserver dns1 172.20.0.10:53
    hold timeout 30s
    hold refused 30s
    hold valid 10s
    resolve_retries 3
    timeout retry 3s
  peers hapeers
    peer $(MY_POD_IP):10000    # I attempted to do something like this
    peer $(REPLICA_2_IP):10000 #
  frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s
  frontend myfrontend
    mode http
    option httplog
    bind *:8080
    default_backend webservers
  backend webservers
    mode http
    log stdout local0 debug
    stick-table type string len 64 size 1m expire 1d peers hapeers
    stick on req.hdr(MyHeader)
    server s1 headless-service-1:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
    server s2 headless-service-2:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
    server s3 headless-service-3:8080 resolvers default check port 8080 inter 5s rise 2 fall 20
replicaCount: 2
extraEnvVars:
- name: LOG_LEVEL
  value: debug
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
From what I read in the HAProxy documentation, it requires the peers' IPs, which in this case are the replica IPs. However, the ConfigMap does not allow injecting the IPs of the HAProxy replicas.
I also thought of using an initContainer to modify the haproxy.cfg at deployment time with the correct IPs, but the volume is read-only and I would have to maintain a fork of the chart to customize it.
If anyone has an idea of a different approach or workaround, I would appreciate the comments. Thanks!
...the configmap does not allow injecting IPs from the HAProxy replicas.
HAProxy's configuration supports environment variables, e.g. peer $(MY_POD_IP):10000 => peer ${MY_POD_IP}:10000 (note the curly braces instead of parentheses).
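A minimal sketch of how the peers section could then look (the peer names and the REPLICA_2_IP variable are placeholders; you would still need some way to hand the other replica's address to each pod, e.g. another env var or a lookup against a headless service):
peers hapeers
    # ${MY_POD_IP} is injected via the extraEnvVars fieldRef shown in the values file
    peer haproxy-0 ${MY_POD_IP}:10000
    # ${REPLICA_2_IP} is hypothetical and must still be provided by you
    peer haproxy-1 ${REPLICA_2_IP}:10000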

HAProxy Not Working with Kubernetes NodePort for Backend (Bare Metal)

I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local # Cluster or Local
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully ( curl 10.0.0.20:30080 ), and I get "Welcome to nginx!..." However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
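To confirm the change took effect, you could list the ports the http_port_t type now covers (this uses the same semanage tooling as the command above):
sudo semanage port -l | grep http_port_t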

K8S with Traefik and HAP not getting real client IP on pods

I have an external LB using HAProxy in order to have an HA k8s cluster. My cluster is K3s from Rancher and it uses the Traefik LB internally.
I'm currently facing an issue where in my pods I'm getting the Traefik IP instead of the real client IP.
HAP Configuration:
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5000
    timeout client 2000000
    timeout server 2000000

frontend k8s
    bind *:6443
    bind *:80
    bind *:443
    mode tcp
    option tcplog
    use_backend masters-k8s

backend masters-k8s
    mode tcp
    balance roundrobin
    server master01 master01.k8s.int.ntw
    server master02 master02.k8s.int.ntw
    # end Ansible managed
Traefik Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  labels:
    app: traefik
    app.kubernetes.io/managed-by: Helm
    chart: traefik-1.81.0
    heritage: Helm
    release: traefik
spec:
  clusterIP: 10.43.250.142
  clusterIPs:
  - 10.43.250.142
  externalTrafficPolicy: Local
  ports:
  - name: http
    nodePort: 32232
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30955
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: traefik
    release: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.0.1.1
    - ip: 10.0.1.11
    - ip: 10.0.1.12
    - ip: 10.0.1.2
With this configuration I never get the real client IP in my pods. During some research I saw that people recommend using send-proxy in the HAProxy config, like this:
server master01 master01.k8s.int.ntw check send-proxy-v2
server master02 master02.k8s.int.ntw check send-proxy-v2
But when I do so, all my cluster communication returns ERR_CONNECTION_CLOSED.
If I'm reading it correctly, this means the traffic is going from HAProxy to the cluster and the cluster is rejecting it somewhere.
Any clues what I'm missing here?
Thanks
Well you have two options.
use proxy protocol
use X-Forwarded-For header
Option 1: proxy protocol
This option requires that both sides, HAProxy and Traefik, use the proxy protocol; that's why people recommend send-proxy-v2.
It also requires that every other client that wants to connect to Traefik MUST use the proxy protocol; if a client does not use it, you get exactly what you are seeing: a communication error.
Since you configured HAProxy in TCP mode, this is the only option to get the client IP through to Traefik.
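A rough sketch of what that would mean on both sides, assuming a Traefik v2-style static configuration (10.0.0.5 is a placeholder for your HAProxy address):
# haproxy.cfg - announce the original client to the backends via PROXY protocol
backend masters-k8s
    mode tcp
    balance roundrobin
    server master01 master01.k8s.int.ntw check send-proxy-v2
    server master02 master02.k8s.int.ntw check send-proxy-v2

# traefik static configuration (CLI flags) - only trust PROXY protocol headers from HAProxy
--entryPoints.web.proxyProtocol.trustedIPs=10.0.0.5/32
--entryPoints.websecure.proxyProtocol.trustedIPs=10.0.0.5/32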
Option 2: X-Forwarded-For
I personally would use this option because it makes it possible to connect to Traefik with any HTTP(S) client.
It requires that you use HTTP mode in HAProxy plus a few more parameters, such as option forwardfor.
# Ansible managed
defaults
    maxconn 1000
    mode http
    log global
    option dontlognull
    log stdout local0 debug
    option httplog
    timeout http-request 5s
    timeout connect 5s
    timeout client 200s
    timeout server 200s

    # send client ip in the x-forwarded-for header
    option forwardfor

frontend k8s
    bind *:6443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:80 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    bind *:443 v4v6 alpn h2,http/1.1 ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/
    use_backend masters-k8s

backend masters-k8s
    balance roundrobin
    server master01 master01.k8s.int.ntw check
    server master02 master02.k8s.int.ntw check
    # end Ansible managed
The file /etc/haproxy/letsencryptauthorityx3.pem contains the CAs for the backends; the directory /etc/ssl/haproxy/ contains the certificates for the frontends.
Please take a look at the documentation for the crt keyword.
You also have to configure Traefik to trust the forwarded headers coming from HAProxy (forwarded-headers).
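A minimal sketch of that Traefik side, again assuming Traefik v2 static configuration via CLI flags (the address is a placeholder for your HAProxy host):
# trust X-Forwarded-* headers only when they come from HAProxy
--entryPoints.web.forwardedHeaders.trustedIPs=10.0.0.5/32
--entryPoints.websecure.forwardedHeaders.trustedIPs=10.0.0.5/32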

Connection Refused between Kubernetes pods in the same cluster

I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the running service has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue, rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not to Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
So in fact you're querying the cluster DNS for the FQDN of the Service named testpod, not the FQDN of the Pod. Judging by the fact that it's being resolved successfully, such a Service already exists in your cluster but is most probably misconfigured. The fact that you're getting the error message connection refused can mean the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved
(otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've reached successfully your testpod Service
(otherwise, i.e. if the Service existed but wasn't listening on the 8080 port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the testpod Service)
but once you've reached the Pod, you're trying to connect to an incorrect port and that's why the connection is being refused by the server
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via the ClusterIP Service by specifying only the --port value, e.g. by:
kubectl expose pod testpod --port=8080
In such a case both --port (port of the Service) and --target-port (port of the Pod) get the same value. In other words you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
And you probably should've exposed it either this way:
kubectl expose pod testpod --port=8080 --target-port=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Of course your targetPort may be different from 80, but connection refused in such a case can mean only one thing: the target HTTP server (running in the Pod) refuses the connection on port 8080, most probably because it isn't listening on it. You didn't specify what image you're using, whether it's a standard nginx webserver or something based on a custom image, but if it's nginx and wasn't configured differently, it listens on port 80.
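As a quick, non-intrusive check you could compare what the Service forwards to with what the container actually declares (a sketch using the names from this question, assuming the Pod is also named testpod):
# port the Service forwards traffic to
kubectl get service testpod -n mynamespace -o jsonpath='{.spec.ports[0].targetPort}'
# ports the container declares (may be empty if no containerPort is set)
kubectl get pod testpod -n mynamespace -o jsonpath='{.spec.containers[0].ports}'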
For further debug, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario), run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container listens on.
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.