Traefik Kubernetes CRD health check

I am using Traefik 2.1 with Kubernetes CRDs. My setup is very similar to the user guide. In my application, I have defined a livenessProbe and a readinessProbe on the deployment. I assumed that Traefik would route requests to the Kubernetes load balancer, and that Kubernetes would know whether the pod was ready or not; Kubernetes would also restart the container if the livenessProbe failed. Is there a default healthCheck for Kubernetes CRDs? Does Traefik use the load balancer provided by the Kubernetes Service, or does it get the IPs of the pods underneath the Service and route directly to them? Is it recommended to use a healthCheck with Traefik CRDs? Is there a way to avoid repeating the config for the readinessProbe and the Traefik CRD healthCheck? Thank you

Is there a default healthCheck for Kubernetes CRDs?
No.
Does Traefik use the load balancer provided by the Kubernetes Service, or does it get the IPs of the pods underneath the Service and route directly to them?
It does not go through the Service's load balancing; it gets the pod IPs directly from the Endpoints object.
Is it recommended to use a healthCheck with Traefik CRDs?
Traefik updates its configuration when it sees that the Endpoints object has no IPs, which happens when the liveness/readiness probes fail. So you can configure readiness and liveness probes on your pods and expect Traefik to honour that.
Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck?
The benefit of the CRD approach is that it is dynamic in nature. Even if you use the health-check mechanism provided by the CRD, the liveness and readiness probes on your pods are still necessary: Kubernetes needs them to restart failing containers and to stop other pods that use the corresponding Kubernetes Service from sending traffic to them.
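For illustration, here is a minimal sketch of the probe side, assuming a hypothetical whoami Deployment that exposes a /health endpoint (the names, image and path are placeholders, not from the question). A failing readinessProbe removes the pod's IP from the Endpoints object that Traefik watches, and a failing livenessProbe makes the kubelet restart the container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami                       # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami        # example image
        ports:
        - containerPort: 80
        livenessProbe:               # kubelet restarts the container if this fails
          httpGet:
            path: /health            # assumed health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:              # a failing check removes the pod IP from the Endpoints object
          httpGet:
            path: /health
            port: 80
          periodSeconds: 5

You can verify what Traefik sees with kubectl get endpoints <service-name>: when a pod becomes unready, its IP disappears from the list.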

Related

How to solve "Ingress Error: Some backend services are in UNHEALTHY state"?

I am working on deploying a certain pod to GKE, but my backend services are in an unhealthy state.
The deployment went through via the helm install process, but the ingress reports a warning that says Some backend services are in UNHEALTHY state. I have tried to access the logs but do not know exactly what to look out for. Also, I already have liveness and readiness probes running.
What could I do to make the ingress come back to a healthy state? Thanks
[Screenshot of the warning on the GKE UI]
Without more details it is hard to determine the exact cause.
As a first point, I want to mention that your error message is Some backend services are in UNHEALTHY state, not All backend services are in UNHEALTHY state. This indicates that only some of your backends are affected.
There might be many possible reasons: whether you are using GCP Ingress or NGINX Ingress, your externalTrafficPolicy configuration, whether you are using preemptible nodes, your livenessProbe and readinessProbe, your health checks, etc.
Since only some backends are affected and the information is limited, all I can suggest are some debugging steps:
Using $ kubectl get po -n <namespace>, check that all your pods are working correctly, that all containers within the pods are Ready, and that the pod status is Running. Then check the logs of any suspicious pod with $ kubectl logs <podname> -c <containerName>. In general you should check all the pods the load balancer is pointing to.
Confirm that your livenessProbe and readinessProbe are configured properly and return a 200 response.
Describe your ingress with $ kubectl describe ingress <yourIngressName> and check the backends.
Check that you have configured your health checks properly according to the GKE Ingress for HTTP(S) Load Balancing - Health Checks guide.
If you still cannot solve the issue with the debugging steps above, please provide more details about your environment, including logs (without private information).
Useful links:
kubernetes unhealthy ingress backend
GKE Ingress shows unhealthy backend services
In GKE you can define a BackendConfig to set up custom health checks. You can configure this as described in the link below to bring the ingress backend into a HEALTHY state.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health
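As a sketch of what that looks like (the names, port and path below are placeholders, not taken from the question), you define a BackendConfig with a custom health check and attach it to the Service behind the Ingress via an annotation:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig             # hypothetical name
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz            # assumed health endpoint served by the app
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service                   # hypothetical name, referenced by the Ingress
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'   # attach the BackendConfig
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080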
If you have kubectl access to your pods, you can run kubectl get pod and then kubectl logs -f <pod-name>.
Review the logs and find the error(s).

Ingress gateway on each node

Each node runs the same pods and all the nodes do the same thing. I am using the Istio ingress gateway with a NodePort. I need traffic that enters the NodePort to be routed to pods on the same node, without leaving it.
I am unable to run istio-ingressgateway on each node to do that. Is it possible for each node to route its own traffic?
Bare-metal, k8s 1.19.4, Istio 1.8
Issue
As @Jonas mentioned in the comments:
The problem is that there is just one istio-ingressgateway pod on node1, and all the traffic from node2 has to come to node1.
Solution
You can use kubectl scale to scale your ingress gateway replicas. The command below will run 3 ingress gateway pods instead of just one.
kubectl scale --replicas=3 deployment/istio-ingressgateway -n istio-system
Additionally, you can set this with the Istio Operator replicaCount value.
Note that if you run in a cloud environment there may be an HPA configured, and it might immediately scale the pods back. There is a GitHub issue about that. You can also set the HPA min and max replicas with Istio.
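A minimal IstioOperator sketch combining both settings (the replica numbers are just an example for a three-node cluster):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        replicaCount: 3              # one gateway pod per node in this example
        hpaSpec:                     # keep the HPA from scaling back down
          minReplicas: 3
          maxReplicas: 5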

Accessing application in kubernetes cluster through ingress

I have a cluster set up locally. I have configured an ingress controller with Traefik v2.2. I have deployed my application and configured the ingress. The ingress service queries the ClusterIP. I have configured my DNS with an A record pointing to the master node.
Now the problem is that I am unable to access the application through the ingress when the A record is set to the master node. I have accessed the shell of the ingress controller pod on all the nodes and tried to curl the ClusterIP. I get no response from the pod on the master node, but the pods on the worker nodes give me the response I want. Also, I can access my application when the A record is set to any of the worker nodes.
I have tried disabling the firewalld service, but the result is the same.
Did I miss anything while configuring?
Note: I have spin off my cluster with kubeadm.
Thank You.

How does k8s traffic flow internally?

I have an ingress and a service with a LB. When traffic comes from outside, it hits the ingress first; does it then go directly to the pods via the ingress LB, or does it go to the service, get the pod IP via the selector, and then go to the pods? If it's the first way, what is the use of services? And which of the two, services or ingress, uses the readinessProbe in the deployment?
All the setup is in GCP.
I am new to Kubernetes networking.
A Service of type LoadBalancer is an external resource provided by your cloud and is NOT part of the Kubernetes cluster. It can forward requests to your pods using a selector, but you can't, for example, define path rules, redirects or rewrites, because those are provided by an Ingress.
Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector (see below for why you might want a Service without a selector).
Internet
|
[ LoadBalancer ]
--|-----|--
[ Services ]
--| |--
[ Pod1 ] [ Pod2 ]
When you use an Ingress, it is a resource handled by an ingress controller, which is basically a pod configured to apply the rules you defined.
To use an Ingress you need to configure a Service for your application, and that Service reaches the pods through its configured selectors. You can define rules based on path or hostname and then route to the Service you want, like this:
Internet
|
[ Ingress ]
--|-----|--
[ Services ]
--| |--
[ Pod1 ] [ Pod2 ]
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
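As a sketch, an Ingress rule that routes a hypothetical hostname and path to such a Service (all names here are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress              # hypothetical name
spec:
  rules:
  - host: app.example.com            # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service         # ClusterIP Service that selects the pods
            port:
              number: 80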
This article has a good explanation of all the ways to expose your service.
The readinessProbe is configured in your pod/deployment spec, and the kubelet is responsible for evaluating your container's health.
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
kube-proxy is responsible for forwarding the requests to the pods.
For example, if you have 2 pods in different nodes, kube-proxy will handle the firewall rules (iptables) and distribute the traffic between your nodes. Each node in your cluster has a kube-proxy running.
kube-proxy can be configured in 3 ways: userspace mode, iptables mode and ipvs mode.
If kube-proxy is running in iptables mode and the first Pod that’s selected does not respond, the connection fails. This is different from userspace mode: in that scenario, kube-proxy would detect that the connection to the first Pod had failed and would automatically retry with a different backend Pod.
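The mode is set in the kube-proxy configuration; a minimal sketch of that setting (how it is applied depends on your installer, for example the kube-proxy ConfigMap when using kubeadm):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"                         # alternatives: "iptables" (default on Linux) or "userspace"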
References:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/services-networking/ingress/
It depends on whether your LoadBalancer Service exposes the Ingress controller or your application Pods (the first is the correct approach).
The usual way to use Services and Ingresses is like this:
LoadBalancer Service -> Ingress -> ClusterIP Service -> Pods
In that case, the traffic from the Internet first hits the load balancer of your cloud provider (created by the LoadBalancer Service), which forwards it to the Ingress controller (one or multiple Pods running NGINX in your cluster), which in turn forwards it to your application Pods (by getting the Pods' IP addresses from the ClusterIP Service).
I'm not sure whether you currently have this setup:
Ingress -> LoadBalancer Service -> Pods
In that case, you don't need a LoadBalancer Service there. You only need a ClusterIP Service behind the Ingress, and then you typically expose the Ingress controller itself with a LoadBalancer Service.
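A sketch of the two Services in that chain (the names are assumptions): the ingress controller is exposed by a LoadBalancer Service, while the application sits behind a plain ClusterIP Service that the Ingress points at.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller     # typical controller Service name, may differ in your install
  namespace: ingress-nginx
spec:
  type: LoadBalancer                 # the cloud provider creates the external load balancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service                   # hypothetical application Service, referenced by the Ingress
spec:
  type: ClusterIP                    # no external exposure needed here
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080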

HaProxy Ingress Controller - what is the process of add a pod?

On a Kubernetes cluster using HAProxy as the ingress controller, how will HAProxy pick up a new pod when the old pod has died?
Can it make sure that the pod is ready to receive traffic?
Right now I am using a readiness probe and a liveness probe. I know that the order in Kubernetes for bringing up a new pod is: liveness probe --> readiness probe --> 6/6 --> pod is ready.
So will the HAProxy Ingress Controller use the same Kubernetes mechanism?
Short answer: yes, it does!
From documentation:
The most demanding part is syncing the status of pods, since the environment is highly dynamic and pods can be created or destroyed at any time. The controller feeds those changes directly to HAProxy via the HAProxy Data Plane API, which reloads HAProxy as needed.
The HAProxy ingress does not take care of pod health itself; it is responsible for receiving the external traffic and forwarding it to the correct Kubernetes services.
The kubelet uses liveness probes to know when to restart a container, which means you must define liveness and readiness probes in the pod definition.
See more about container probes in pod lifecycle documentation.
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.