Kubernetes - Sending message to all pods from service or from Ingress - kubernetes

Is it possible to send requests to all pods behind a service/ingress controller based on the requests?
My requirement is to send requests to all the pods if the request is /send/all.
Thanks

It's not possible, because ingress controllers can't do this (certainly the nginx and GLBC-based ingress controllers can't, but given the way HTTP works, I assume this is the case for all ingress controllers).
Depending on what your exact case is, you have a few options.
If your case is just monitoring and you can tolerate the probe requests hitting your pods, you can set an HTTP liveness probe on your pods (and a readiness probe, which is what actually removes an unhealthy pod from the Service's endpoints). Then you can be sure that if a pod doesn't return a correct response, Kubernetes won't keep sending traffic to it.
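As a sketch, such probes in the pod spec might look like this (the image name, `/healthz` path, and port are assumptions for your application):

```yaml
# Pod spec fragment: the kubelet probes /healthz periodically.
# A failing livenessProbe restarts the container; a failing
# readinessProbe removes the pod from the Service's endpoints.
containers:
  - name: app
    image: my-app:1.0          # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 2
```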
If you need to trigger some action on all pods, you have a few options:
Use messaging - for example, deploy RabbitMQ (e.g. via its Helm chart) and write an application that publishes a message which every pod subscribes to and handles.
Use a DB - create an app that sets a flag in the DB and add logic to your app to monitor the flag, or create a cron job that monitors the flag and triggers the required actions on the pods (in this case you can use a service account to give your cron job's pod access to the Kubernetes API to list pods).
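For the cron job variant, a minimal RBAC sketch (all names here are placeholders) that lets the job's pod list pods through the Kubernetes API could look like:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-lister              # placeholder name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-pods
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-lister-binding
subjects:
  - kind: ServiceAccount
    name: pod-lister
roleRef:
  kind: Role
  name: list-pods
  apiGroup: rbac.authorization.k8s.io
```

The CronJob's pod template would then set `serviceAccountName: pod-lister` so the pod's mounted token carries these permissions.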

Related

Load Balancing between PODS

Is there a way to do active/passive load balancing between two Pods of a microservice? Say I have two instances (Pods) of a microservice running, exposed using a K8s Service object. Is there a way to configure the load balancing in such a way that one pod always gets the requests, and when that pod is down, the other pod starts receiving them?
I have ingress object also on top of that service.
This is what the Kubernetes Service object does, which you already mentioned you are using. Make sure you set up a readiness probe in your pod template so that the system can tell when your app is healthy.
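A minimal readiness probe in the pod template might look like this (the image, endpoint path, and port are assumptions):

```yaml
# Pod template fragment: while the readinessProbe fails, the
# Endpoints controller removes the pod from the Service, so
# traffic flows only to the pod(s) that report ready.
containers:
  - name: app
    image: my-service:1.0     # placeholder image
    readinessProbe:
      httpGet:
        path: /ready           # assumed readiness endpoint
        port: 8080
      periodSeconds: 5
      failureThreshold: 2
```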

How does the failover mechanism work in kubernetes service?

According to some tech blogs (e.g. Understanding Kubernetes Networking: Services), a k8s Service dispatches all requests through iptables rules.
What if one of the upstream pods crashes just as a request happens to be routed to that pod?
Is there a failover mechanism in the Kubernetes Service?
Will the request be forwarded to the next pod automatically?
How does Kubernetes solve this through iptables?
Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods.
Here are the details: k8s services & endpoints.
So your answer is the Endpoints object:
kubectl get endpoints,services,pods
There are liveness and readiness checks which decide whether a pod is able to process requests. The kubelet, together with the container runtime, controls the life cycle of pods. If a pod is healthy (ready), it is part of the Endpoints object.

Traefik health checks via kubernetes annotation

I want to set up Traefik backend health checks via a Kubernetes annotation, but it looks like Kubernetes Ingress does not support that functionality according to the official documentation.
Is there any particular reason why Traefik does not support that functionality for Kubernetes Ingress? I'm wondering because Mesos supports health checks for a backend.
I know that in Kubernetes you can configure a readiness/liveness probe for the pods, but I have a leader/follower-style service, so Traefik should route the traffic only to the leader.
UPD:
Only the leader can accept the connection from Traefik; a follower will refuse the connection.
I have two readiness checks in mind:
Service is up and running, and ready to be elected as a leader (kubernetes readiness probe)
Service is up and running and promoted as a leader (traefik health check)
Traefik relies on Kubernetes to provide an indication of the health of the underlying pods to ascertain whether they are ready to provide service. Kubernetes exposes two mechanisms in a pod to communicate information to the orchestration layer:
Liveness checks to provide an indication to Kubernetes when the process(es) running in the pod have transitioned to a broken state. A failing liveness check will cause Kubernetes to destroy the pod and recreate it.
Readiness checks to determine when a pod is ready to provide service. A failing readiness check will cause the Endpoint Controller to remove the pod from the list of endpoints of any services it provides. However, it will remain running.
In this instance, you would expose information to Traefik via a readiness check. Configure your pods with a readiness check which fails if they are in a state in which they should not receive any traffic. When the readiness state changes, Kubernetes will update the list of endpoints against any services which route traffic to the pod to add or remove the pod. Traefik will accordingly update its view of the world to add or remove the pod from the list of endpoints backing the Ingress.
There is no reason for this model to be incompatible with your master/follower architecture, provided each pod can ascertain whether it is the master or a follower and provide an appropriate indication in its readiness check. However, without taking special care, there will be races between the master/follower state changing and Kubernetes updating its endpoints, as readiness probes are only made periodically. I recommend assuming this will be the case and building in logic to reject requests received by non-master pods.
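Sketched as a readiness probe that only succeeds on the current leader (the `/leader` endpoint is an assumption: your app would implement it to return 200 only while this instance holds leadership):

```yaml
# Pod spec fragment: followers fail the probe and are removed
# from the Service endpoints, so Traefik routes only to the leader.
readinessProbe:
  httpGet:
    path: /leader          # assumed: returns 200 only on the leader
    port: 8080
  periodSeconds: 2         # probe frequently to shrink the failover window
  failureThreshold: 1
```

Note the window between a leadership change and the next probe is exactly the race described above, which is why the non-leader pods should still reject any requests they receive.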
As a future consideration to increase robustness, you might split the ingress layer of your service from the business logic implementing the master/follower system, allowing all instances to communicate with Traefik and enqueue work for consideration by whatever is the "master" node at this point.

What does Traefik do to connections to deleted Pods?

Imagine you have a k8s cluster set up with Traefik as an Ingress Controller.
An HTTP API application is deployed to the cluster (with an ingress resource) that is able to handle SIGTERM and does not exit until all active requests are handled.
Let's say you deploy the application with 10 replicas, get some traffic to it and scale the deployment down to 5 replicas. Those 5 Pods will be pulled out from the matching Service resource.
For those 5 Pods, the application will receive SIGTERM and start the graceful shutdown.
The question is, what will Traefik do with those active connections to the pulled out 5 Pods?
Will it wait until all the responses are received from the 5 Pods and not send any traffic to them during and after that?
Will it terminate the ongoing connections to those 5 Pods and forget about them?
Traefik will do the first: it will gracefully let those pending, in-flight requests finish but not forward any further requests to the terminating pods.
To add some technical background: once a pod is deemed to terminate from Kubernetes' perspective, the Endpoints controller (also part of the Kubernetes control plane) will remove its IP address from the associated endpoints object. Traefik watches for updates on the endpoints, receives a notification, and updates its forwarding rules accordingly. So technically, it is not able to forward any further traffic while those final requests will continue to be served (by previously established goroutines from Go's http package).
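To give the endpoint removal time to propagate before the process shuts down, the pod spec can pair a termination grace period with a short preStop pause (the values and image are illustrative, and the `sleep` binary is assumed to exist in the container image):

```yaml
# Pod spec fragment: on deletion, Kubernetes runs the preStop hook,
# then sends SIGTERM; the pod is killed after the grace period expires.
spec:
  terminationGracePeriodSeconds: 60   # time allowed for graceful shutdown
  containers:
    - name: api
      image: my-api:1.0               # placeholder image
      lifecycle:
        preStop:
          exec:
            # brief pause so Traefik/kube-proxy observe the endpoint
            # removal before the app begins handling SIGTERM
            command: ["sleep", "5"]
```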

How to get prometheus to monitor kubernetes service?

I'd like to monitor my Kubernetes Service objects to ensure that they have > 0 Pods behind them in "Running" state.
However, to do this I would have to first group the Pods by service and then group them further by status.
I would also like to do this programmatically (e.g. for each service in namespace ...).
There's already some code that does this in the Sensu kubernetes plugin: https://github.com/sensu-plugins/sensu-plugins-kubernetes/blob/master/bin/check-kube-service-available.rb but I haven't seen anything that shows how to do it with Prometheus.
Has anyone set up Kubernetes service-level health checks with Prometheus? If so, how did you group by service and then group by Pod status?
The examples I have seen for Prometheus service checks relied on the blackbox exporter:
The blackbox exporter will try a given URL on the service. If that succeeds, at least one pod is up and running.
See here for an example: https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml in job kubernetes-service-endpoints
The URL to probe might be your liveness probe or something else. If your services don't talk HTTP, you can make the blackbox exporter test other protocols as well.
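A sketch of the two pieces involved (the exporter address, module name, and target URL are assumptions; the linked Prometheus example uses service discovery instead of a static target):

```yaml
# blackbox.yml: module that treats any 2xx HTTP response as success
modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      valid_status_codes: []   # empty defaults to 2xx
---
# prometheus.yml fragment: probe each service URL through the
# blackbox exporter (assumed to run at blackbox-exporter:9115)
scrape_configs:
  - job_name: service-probe
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - http://my-service.default.svc:80/healthz   # assumed URL
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
```

The resulting `probe_success` metric is 1 when at least one pod behind the service answered, which you can alert on per service.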