HAProxy Ingress Controller - what is the process of adding a pod? - kubernetes

On a Kubernetes cluster using HAProxy as an ingress controller, how will HAProxy add a new pod when the old pod has died?
Can it make sure that the pod is ready to receive traffic before sending any to it?
Right now I am using a readiness probe and a liveness probe. I know that the order in Kubernetes for bringing up a new pod is: liveness probe --> readiness probe --> 6/6 --> pod is ready.
So will the HAProxy Ingress Controller use the same Kubernetes mechanism?

Short answer: yes, it does.
From the documentation:
The most demanding part is syncing the status of pods, since the environment is highly dynamic and pods can be created or destroyed at any time. The controller feeds those changes directly to HAProxy via the HAProxy Data Plane API, which reloads HAProxy as needed.
The HAProxy Ingress Controller does not take care of pod health itself; it is responsible for receiving external traffic and forwarding it to the correct Kubernetes Services.
The kubelet uses liveness probes to know when to restart a container and readiness probes to know when a container can accept traffic, which means you must define liveness and readiness probes in the pod definition.
See more about container probes in the Pod lifecycle documentation.
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
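For illustration, a Deployment with both probes might look like the sketch below (the image, port, and the /healthz and /ready paths are placeholders for your application); the ingress controller only routes to a pod once its readiness probe passes and it appears in the Service's endpoints:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0        # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz       # assumed liveness endpoint; failure restarts the container
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready         # assumed readiness endpoint; failure removes the pod from Service endpoints
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5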

Related

Kubernetes scaling pods by number of active connections

I have a Kubernetes cluster that runs some legacy containers (Windows containers).
To simplify, let's say that a container can handle at most 5 requests at a time, something like:
handleRequest() {
    requestLock(semaphore_Of_5)   // at most 5 concurrent requests per container
    sleep(2s)                     // simulate slow, blocking work
    return "result"
}
So the CPU does not spike. I need to scale based on the number of active connections.
I can see from the documentation https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables:
You can use Pod readiness probes to verify that backend Pods are working OK, so that kube-proxy in iptables mode only sees backends that test out as healthy. Doing this means you avoid having traffic sent via kube-proxy to a Pod that’s known to have failed.
So there is a mechanism to make pods available for routing new requests, but it is the livenessProbe that actually marks the pod as unhealthy and subject to the restart policy. My pods are just busy; they don't need restarting.
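A readiness probe that fails while all 5 slots are in use would only take the busy pod out of Service routing; it would not add replicas. A hypothetical sketch of such a probe in the container spec (check-capacity is a made-up command that exits non-zero when the semaphore is full):

readinessProbe:
  exec:
    command: ["check-capacity"]   # hypothetical check: non-zero exit when all 5 request slots are busy
  periodSeconds: 5
  failureThreshold: 1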
How can I increase the number of pods in this case?
You can enable HPA (Horizontal Pod Autoscaler) for the deployment.
You can expose the number of active requests as a custom metric and autoscale on that metric:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
I would also recommend configuring the liveness probe's failureThreshold and timeoutSeconds, and checking whether that helps.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
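Assuming a custom metrics adapter (for example prometheus-adapter) already publishes a per-pod metric - here called active_connections, a made-up name - an HPA for the deployment might look like this sketch:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: legacy-app                   # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: legacy-app                 # the deployment running the Windows containers
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: active_connections   # hypothetical custom metric exposed by the adapter
        target:
          type: AverageValue
          averageValue: "4"          # scale out before the 5-request limit is reached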

traefik kubernetes crd health check

I am using Traefik 2.1 with Kubernetes CRDs. My setup is very similar to the user guide. In my application, I have defined a livenessProbe and readinessProbe on the deployment. I assumed that Traefik would route requests to the Kubernetes load balancer, and Kubernetes would know whether the pod was ready or not. Kubernetes would also restart the container if the livenessProbe failed.
Is there a default healthCheck for Kubernetes CRDs?
Does Traefik use the load balancer provided by the Kubernetes service, or does it get the IPs of the pods underneath the service and route directly to them?
Is it recommended to use a healthCheck with Traefik CRDs?
Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck?
Thank you
Is there a default healthCheck for Kubernetes CRDs?
No.
Does Traefik use the load balancer provided by the Kubernetes service, or does it get the IPs of the pods underneath the service and route directly to them?
No. It gets the pod IPs directly from the Endpoints object.
Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck?
Traefik will update its configuration when it sees that the Endpoints object no longer contains a pod's IP, which happens when the pod's liveness/readiness probe fails. So you can configure readiness and liveness probes on your pods and expect Traefik to honour that.
Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck?
The benefit of the CRD approach is that it is dynamic in nature. Even if you use the CRD along with the health check mechanism it provides, the liveness and readiness probes on the pods are still necessary so that Kubernetes can restart the pods and stop sending them traffic from other pods that reach them through the corresponding Kubernetes Service.
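For reference, a minimal IngressRoute for Traefik 2.1 might look like the sketch below (the hostname, Service name, and port are placeholders); since Traefik resolves the Service to its Endpoints, pods that fail their readiness probe are dropped from routing automatically, without a separate healthCheck in the CRD:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: myapp
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`myapp.example.com`)   # placeholder hostname
      kind: Rule
      services:
        - name: myapp-svc                # placeholder Service; Traefik routes to its ready endpoints
          port: 80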

Load Balancing between PODS

Is there a way to do active/passive load balancing between 2 pods of a microservice? Say I have 2 instances (pods) of a microservice running, exposed through a Kubernetes Service object. Is there a way to configure the load balancing in such a way that one pod always gets the requests, and when that pod is down, the other pod starts receiving them?
I also have an Ingress object on top of that Service.
This is what the Kubernetes Service object does, which you already mentioned you are using. Make sure you set up a readiness probe in your pod template so that the system can tell when your app is healthy.
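For example, a minimal Service selecting those pods might look like this sketch (the name, label, and ports are placeholders); only pods that pass their readiness probe stay in the Service's endpoints, so a failed pod stops receiving requests:

apiVersion: v1
kind: Service
metadata:
  name: my-microservice        # placeholder name
spec:
  selector:
    app: my-microservice       # must match the pod labels
  ports:
    - port: 80                 # port exposed by the Service
      targetPort: 8080         # container port, placeholder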

Traefik health checks via kubernetes annotation

I want to set up a Traefik backend health check via a Kubernetes annotation, but it looks like Kubernetes Ingress does not support that functionality according to the official documentation.
Is there any particular reason why Traefik does not support that functionality for Kubernetes Ingress? I'm wondering because Mesos supports health checks for a backend.
I know that in Kubernetes you can configure readiness/liveness probes for the pods, but I have a leader/follower style service, so Traefik should route the traffic only to the leader.
Update:
Only the leader can accept connections from Traefik; a follower will refuse the connection.
I have two readiness checks in mind:
Service is up and running, and ready to be elected as leader (Kubernetes readiness probe)
Service is up and running and promoted to leader (Traefik health check)
Traefik relies on Kubernetes to provide an indication of the health of the underlying pods to ascertain whether they are ready to provide service. Kubernetes exposes two mechanisms in a pod to communicate information to the orchestration layer:
Liveness checks to provide an indication to Kubernetes when the process(es) running in the pod have transitioned to a broken state. A failing liveness check will cause Kubernetes to destroy the pod and recreate it.
Readiness checks to determine when a pod is ready to provide service. A failing readiness check will cause the Endpoint Controller to remove the pod from the list of endpoints of any services it provides. However, it will remain running.
In this instance, you would expose information to Traefik via a readiness check. Configure your pods with a readiness check which fails if they are in a state in which they should not receive any traffic. When the readiness state changes, Kubernetes will update the list of endpoints against any services which route traffic to the pod to add or remove the pod. Traefik will accordingly update its view of the world to add or remove the pod from the list of endpoints backing the Ingress.
There is no reason for this model to be incompatible with your master/follower architecture, provided each pod can ascertain whether it is the master or follower and provide an appropriate indication in its readiness check. However, without taking special care, there will be races between the master/follower state changing and Kubernetes updating its endpoints, as readiness probes are only made periodically. I recommend assuming this will be the case and building in logic to reject requests received by non-master pods.
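As an illustration, the container spec's readiness check could hit an endpoint that only returns success while this pod holds leadership (the /leader and /healthz paths and the port are assumptions about your application):

readinessProbe:
  httpGet:
    path: /leader          # hypothetical endpoint: returns 200 only while this pod is the leader
    port: 8080             # placeholder port
  periodSeconds: 5         # a shorter period narrows the leadership-change race window
livenessProbe:
  httpGet:
    path: /healthz         # hypothetical basic health endpoint; a not-ready follower keeps running and is not restarted
    port: 8080
  periodSeconds: 10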
As a future consideration to increase robustness, you might split the ingress layer of your service from the business logic implementing the master/follower system, allowing all instances to communicate with Traefik and enqueue work for consideration by whatever is the "master" node at this point.

Kubernetes - which pod receives request from load balancer?

I have a LoadBalancer Service for a deployment with 3 pods. When I do a rolling update (changing the image) with the following command:
kubectl set image deployment/<deployment-name> contname=<image-name>
and hit the service continuously, I get a few connection-refused errors in between. I want to check which pods they are related to. In other words, is it possible to see which request is served by which pod (without going inside the pods and checking their logs)? Also, is this because of a race condition, as in a pod might have received a request and been terminated just before responding (almost simultaneously, resulting in no response)?
Have you configured liveness and readiness probes for your Pods? The Service will not serve traffic to a Pod unless it thinks it is healthy, but without health checks it won't know for certain whether it is ready.
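As a sketch (the port is a placeholder), a TCP readiness probe in the pod template keeps a newly created pod out of the Service's endpoints during the rolling update until its port actually accepts connections, which is the kind of gap that shows up as connection refused:

readinessProbe:
  tcpSocket:
    port: 8080             # placeholder: the container port the Service targets
  initialDelaySeconds: 5
  periodSeconds: 5
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20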