Aggregate Kubernetes liveness probe responses

My application has a /health HTTP endpoint, configured as the Kubernetes liveness probe. The API returns JSON containing the health indicators.
Kubernetes only cares about the returned HTTP status, but I would like to store the JSON responses in Prometheus for monitoring purposes.
Is it possible to catch the response once Kubernetes calls the API? I do not want to add the feature to the application itself but use an external component.
What is the recommended way of doing it?

Answering what you've asked:
Make a sidecar that calls localhost:port/health every N seconds and stores the most recent reply. For accurate results, N should equal the Prometheus scrape interval.
The sidecar then exposes the most recent reply as a metric on a /metrics endpoint, on a separate port of the pod. You could use https://github.com/prometheus/client_python to implement the sidecar. The Prometheus exporter sidecar is actually a widely used pattern; try searching for it.
Point Prometheus at the service's /metrics endpoint, which is now served by the sidecar on a separate port. You will need an extra port in the Service object that points to your sidecar's port (see the sketch after this list). The scrape interval can be adjusted at this stage to stay in sync with your N, or otherwise just adjust N. For scrape_config details refer to: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#service
If you need automatic Prometheus target discovery (say you have a bunch of deployments like this and their number varies), refer to my recent answer: https://stackoverflow.com/a/64269434/923620
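Below is a minimal sketch of that wiring, assuming the app serves /health on port 8080 and a hypothetical exporter image polls it and re-exposes the result on port 9100; all names, images and ports are placeholders:

```yaml
# Hypothetical Pod: the application plus a health-exporter sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0       # serves /health on 8080
      ports:
        - containerPort: 8080
    - name: health-exporter                        # polls localhost:8080/health every N seconds
      image: registry.example.com/health-exporter:1.0
      ports:
        - containerPort: 9100                      # exposes the result on /metrics
---
# The Service gets a second named port so Prometheus can scrape the sidecar.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: metrics
      port: 9100
      targetPort: 9100
```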
Proposing a simpler solution:
In your app, ensure you log everything you return from /health.
Implement centralized logging (log aggregation): https://kubernetes.io/docs/concepts/cluster-administration/logging/
Use a log processor, e.g. ELK, to query/analyze the results.

Kubernetes-services load balancing

I have read this question, which is very similar to what I am asking, but I still wanted to write a new question since the accepted answer there seems very incomplete and also potentially wrong.
Basically, it seems like there is some missing or contradictory information regarding built-in load balancing for regular Kubernetes Services (I am not talking about LoadBalancer Services). For example, the official Cilium documentation states that "Kubernetes doesn't come with an implementation of Load Balancing". In addition, I couldn't find any information in the official Kubernetes documentation about load balancing for internal Services (there was only a section discussing this under Ingresses).
So my question is - how does load balancing or distribution of requests work when we make a request from within a Kubernetes cluster to the internal address of a Kubernetes service?
I know there's a Kubernetes proxy on each node that creates the DNS records for such services, but what about services that span multiple pods and nodes? There's got to be some form of request distribution or load-balancing, or else this just wouldn't work at all, no?
A standard Kubernetes Service provides basic load balancing. Even for a ClusterIP-type Service, the Service has its own cluster-internal IP address and DNS name, and forwards requests to the collection of Pods matched by its selector.
In normal use, it is enough to create a multiple-replica Deployment, set a Service to point at its Pods, and send requests only to the Service; all of the replicas will receive requests (see the sketch below).
The documentation discusses the implementation of internal load balancing in more detail than an application developer normally needs. Unless your cluster administrator has done extra setup, you'll probably get round-robin request routing: the first Pod will receive the first request, the second Pod the second, and so on.
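For illustration, a minimal sketch of that setup (names, labels, image and ports are placeholders); every request sent to the Service's DNS name is forwarded to one of the three replicas:

```yaml
# Three replicas behind a plain (ClusterIP) Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-api            # reachable in-cluster as my-api.<namespace>.svc
spec:                     # type defaults to ClusterIP
  selector:
    app: my-api           # matches the Deployment's Pods
  ports:
    - port: 80
      targetPort: 8080
```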
... the official Cilium documentation states ...
This is almost certainly a statement about external load balancing. From a cluster administrator's point of view (not a programmer's), a "plain" Kubernetes installation doesn't include an external load-balancer implementation, and a LoadBalancer-type Service behaves identically to a NodePort-type Service.
There are obvious deficiencies to round-robin scheduling, most notably if you wind up having individual network requests that take a long time and a lot of resources to serve. As an application developer, the best way to address this is to make these very-long-running requests asynchronous: return something like an HTTP 201 Created status with a unique per-job URL, and do the actual work in a separate queue-backed worker.

How to supply external metrics into HPA?

Problem setting. Suppose I have 2 pods, A and B. I want to be able to dynamically scale pod A based on some arbitrary number from some arbitrary source. Suppose that pod B is such a source: for example, it can have an HTTP server with an endpoint which responds with the number of desired replicas of pod A at the moment of request. Or maybe it is an ES server or a SQL DB (does not matter).
Question. What Kubernetes objects do I need to define to achieve this (apart from the HPA)? What configuration should the HPA have to know that it needs to query B for the current metric? What should the API of B look like (or are there any constraints)?
Research I have made. Unfortunately, the official documentation does not say much about it, apart from declaring that such a possibility exists. There are also two repositories, one with some Go boilerplate code that I have trouble building, and another one that has no usage instructions whatsoever (though it allegedly does fulfil the "external metrics over HTTP" requirement).
By having a look at the .yaml configs in those repositories, I have reached the conclusion that apart from a Deployment and a Service one needs to define an APIService object that registers the external or custom metric in the Kubernetes API and links it with a normal Service (where you would have your pod), plus a handful of ClusterRole and ClusterRoleBinding objects. But there is no explanation of it. Also, I could not even list existing APIServices with kubectl in my local cluster (version 1.15) like other objects.
The easiest way will be to feed metrics into Prometheus (which is a commonly solved problem), and then set up a Prometheus-based HPA (also a commonly solved problem).
1. Feed own metrics to Prometheus
Start with the Prometheus Operator to get the cluster itself monitored and to get access to ServiceMonitor objects. ServiceMonitors are pointers to services in the cluster; they let your pod's /metrics endpoint be discovered and scraped by a Prometheus server.
Write a pod that reads metrics from your 3rd-party API and exposes them on its own /metrics endpoint. This will be the adapter between your API and the Prometheus format. There are client libraries, of course: https://github.com/prometheus/client_python#exporting
Write a Service of type ClusterIP that represents your pod.
Write a ServiceMonitor that points to the Service (see the sketch after this list).
Query your custom metrics through the Prometheus dashboard to ensure this stage is done.
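A sketch of the Service plus ServiceMonitor pair, assuming the Prometheus Operator CRDs are installed; the adapter's name, labels and port, and the `release: prometheus` label your Prometheus instance selects on, are assumptions that depend on your setup:

```yaml
# ClusterIP Service in front of the adapter pod.
apiVersion: v1
kind: Service
metadata:
  name: metrics-adapter
  labels:
    app: metrics-adapter
spec:
  selector:
    app: metrics-adapter
  ports:
    - name: metrics          # named port referenced by the ServiceMonitor below
      port: 8080
      targetPort: 8080
---
# ServiceMonitor telling the Prometheus Operator to scrape that Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metrics-adapter
  labels:
    release: prometheus      # must match your Prometheus instance's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: metrics-adapter   # selects the Service above
  endpoints:
    - port: metrics          # scrape /metrics on the named Service port
      interval: 30s
```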
2. Set up a Prometheus-based HPA
Set up the Prometheus Adapter and follow the HPA walkthrough; a sketch of the resulting HPA follows below.
Or follow the guide https://github.com/stefanprodan/k8s-prom-hpa
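As a sketch of where this ends up, assuming the adapter pod from step 1 exports a hypothetical desired_replicas_hint metric and the Prometheus Adapter is configured to serve it through the external metrics API:

```yaml
# HPA scaling Deployment "pod-a" on a metric served by the Prometheus Adapter.
# Metric and object names are hypothetical.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: pod-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pod-a
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: desired_replicas_hint   # exported by the adapter pod, scraped in step 1
        target:
          type: AverageValue
          averageValue: "1"             # desired replicas ≈ current metric value
```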
This looks like a huge pile of work just to get an HPA. However, only the adapter pod is a custom part here. Everything else is a standard stack set up in most clusters, and you will get many other use cases for it anyway.

Kubernetes microservices monitoring & alerting

I have a bunch of microservices running in a kubernetes cluster where each microservice implements a basic health check over HTTP.
e.g. for the endpoint /health each service will return an HTTP 200 response if that particular service is currently healthy, or some other HTTP 4xx/5xx code (and possibly additional info) if not healthy.
I see Kubernetes has its own built-in concept of an HTTP health check: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request
Unfortunately it's not quite what I want. I'd like to be able to trigger an alert (and record the state of the health check request) in some database, so I can quickly check what state all my services are in, as well as alert on any services in an unhealthy state.
I'm wondering, are there existing tools or approaches in Kubernetes I should use for this sort of thing? Or will I need to build some custom solution for this?
I was considering having a general "HealthCheck" service which each microservice would register with when started. That way the "HealthCheck" service would monitor the health of each service as well as triggering alerts for any issues it finds.
I would caution against trying to build your own in-house monitoring solution. There are considerable drawbacks to that approach.
If all you need is external HTTP health checks of your services, then many existing monitoring solutions will do fine. You can either install a traditional IT solution like Zabbix or Nagios, or use a SaaS like Datadog and others.
There are also blackbox-probing exporters for Prometheus, which is very popular among K8s users; a scrape-config sketch follows below.
Many of these options come with a fairly steep learning curve.
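If you go the Prometheus route, here is a hedged sketch of a Blackbox Exporter scrape job that probes each service's /health endpoint (the target URLs and the exporter address are placeholders):

```yaml
# prometheus.yml fragment: probe /health endpoints through the Blackbox Exporter.
scrape_configs:
  - job_name: "health-probes"
    metrics_path: /probe
    params:
      module: [http_2xx]                     # succeed on any 2xx response
    static_configs:
      - targets:
          - http://service-a.default.svc:8080/health
          - http://service-b.default.svc:8080/health
    relabel_configs:
      - source_labels: [__address__]         # move the target URL into the ?target= parameter
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115  # scrape the exporter itself
```

The resulting probe_success series can then feed alerting rules and give you the per-service health overview you described.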

Is the Istio metric "istio_requests_total" the number of served requests?

Istio has several default metrics, such as istio_requests_total, istio_request_bytes, and istio_tcp_connections_opened_total. The Istio Envoy proxy computes and exposes these metrics. The Istio website states that istio_requests_total is a COUNTER incremented for every request handled by an Istio proxy.
We ran some experiments in which we sent a lot of requests through the Istio Envoy proxy to a microservice behind it, while monitoring the metric from the Envoy proxy. We found that istio_requests_total does not include requests that have passed through Envoy to the backend microservice but whose responses have not yet arrived back at Envoy. In other words, istio_requests_total only counts served requests and does not include requests in flight.
My question is: is our observation right? Why does istio_requests_total not include requests in flight?
As mentioned here
The default metrics are standard information about HTTP, gRPC and TCP requests and responses. Every request is reported by the source proxy and the destination proxy as well and these can provide a different view on the traffic. Some requests may not be reported by the destination (if the request didn't reach the destination at all), but some labels (like connection_security_policy) are only available on the destination side. Here are some of the most important HTTP metrics:
istio_requests_total is a COUNTER that aggregates request totals between Kubernetes workloads, and groups them by response codes, response flags and security policy.
As mentioned here
When Mixer collects metrics from Envoy, it assigns dimensions that downstream backends can use for grouping and filtering. In Istio’s default configuration, dimensions include attributes that indicate where in your cluster a request is traveling, such as the name of the origin and destination service. This gives you visibility into traffic anywhere in your cluster.
Metric to watch: requests_total
The request count metric indicates the overall throughput of requests between services in your mesh, and increments whenever an Envoy sidecar receives an HTTP or gRPC request. You can track this metric by both origin and destination service. If the count of requests between one service and another has plummeted, either the origin has stopped sending requests or the destination has failed to handle them. In this case, you should check for a misconfiguration in Pilot, the Istio component that routes traffic between services. If there’s a rise in demand, you can correlate this metric with increases in resource metrics like CPU utilization, and ensure that your system resources are scaled correctly.
Maybe it's worth checking the Envoy docs about that, because of what's written here:
The queries above use the istio_requests_total metric, which is a standard Istio metric. You can observe other metrics, in particular, the ones of Envoy (Envoy is the sidecar proxy of Istio). You can see the collected metrics in the insert metric at cursor drop-down menu.
Based on the above docs, I agree with what @Joel mentioned in the comments:
I think you're correct, and I imagine the "why" is because of response flags that are expected to be found on the metric labels. This can be written only when a response is received. If they wanted to do differently, I guess it would mean having 2 different counters, one for request sent and one for response received.
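To make that grouping concrete, here is a hedged sketch of a Prometheus recording rule over the metric (the rule name is made up; the labels are the standard Istio ones). Because response_code and response_flags come from the response, a series can only be recorded once a response has been observed:

```yaml
# Recording rule aggregating istio_requests_total per reporting proxy and response.
groups:
  - name: istio-requests
    rules:
      - record: workload:istio_requests_total:rate5m
        expr: |
          sum by (reporter, destination_workload, response_code, response_flags) (
            rate(istio_requests_total[5m])
          )
```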

liveness probes for manually created Endpoints

Is this a thing?
I have some legacy services which will never run in Kubernetes, and which I currently make available to my cluster by defining a Service and manually uploading an Endpoints object.
However, the service is horizontally sharded and we often need to restart one of the endpoints. My google-fu might be weak, but I can't figure out whether Kubernetes is clever enough to prevent the Service from repeatedly trying the dead endpoint.
The ideal behavior is that the proxy should detect the outage, mark the endpoint as failed, and at some point, when the endpoint comes back, re-admit it into the full list of working endpoints.
BTW, I understand that at present liveness probes are HTTP-only. This would need to be a TCP probe, because it's a replicated database service that doesn't grok HTTP.
I think the design is for whatever manages the endpoint addresses to add/remove them based on liveness. For Services backed by Pods, the Pod IPs are added to the Endpoints object based on the Pod's readiness check. If a Pod's liveness check fails, it is deleted and its IP removed from the Endpoints object.
If you are manually managing endpoint addresses, the burden is currently on you (or your external health checker) to maintain the addresses/notReadyAddresses lists in the Endpoints object (see the sketch below).
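A minimal sketch of what that looks like for a selector-less Service with a manually managed Endpoints object; the names, IPs and port are hypothetical, and your external health checker would move entries between addresses and notReadyAddresses:

```yaml
# Selector-less Service: Kubernetes will not manage its endpoints for you.
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
# Manually managed Endpoints object; must share the Service's name.
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db
subsets:
  - addresses:                # shards currently considered healthy
      - ip: 10.0.0.11
      - ip: 10.0.0.12
    notReadyAddresses:        # shards your checker has marked unhealthy
      - ip: 10.0.0.13
    ports:
      - port: 5432
```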