Kubernetes / Prometheus Metrics Mismatch

I have an application running in Kubernetes (Azure AKS) in which each pod contains two containers. I also have Grafana set up to display various metrics, some of which come from Prometheus. I'm trying to troubleshoot a separate issue, and in doing so I've noticed that some metrics don't appear to match up between data sources.
For example, kube_deployment_status_replicas_available returns the value 30, whereas kubectl -n XXXXXXXX get pod lists 100 pods, all of which are Running, and kube_deployment_status_replicas_unavailable returns a value of 0. Also, if I get the deployment in question using kubectl, I see the expected value.
$ kubectl get deployment XXXXXXXX
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
XXXXXXXX   100       100       100          100         49d
There are other applications (namespaces) in the same cluster where all the values correlate correctly so I'm not sure where the fault may be or if there's any way to know for sure which value is the correct one. Any guidance would be appreciated. Thanks

Based on having the kube_deployment_status_replicas_available metric, I assume that you have Prometheus scraping your metrics from kube-state-metrics. It sounds like there's something quirky about its deployment. It could be:
Cached metric data
Or that it simply can't pull current metrics from the kube-apiserver
I would (a rough sketch of the commands follows this list):
Check the version of kube-state-metrics that you are running and see if it's compatible with your K8s version.
Restart the kube-state-metrics pod.
Check its logs: kubectl logs <kube-state-metrics-pod>
Check the Prometheus logs
If you don't see anything, try starting Prometheus with the --log.level=debug flag.
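As a rough sketch of those checks, assuming kube-state-metrics runs as a Deployment named kube-state-metrics in the kube-system namespace (adjust all names to your setup):
# see which kube-state-metrics image/version is deployed
kubectl -n kube-system get deployment kube-state-metrics -o jsonpath='{.spec.template.spec.containers[0].image}'
# restart it
kubectl -n kube-system rollout restart deployment kube-state-metrics
# check its logs
kubectl -n kube-system logs deployment/kube-state-metrics --tail=100
# check the Prometheus logs (placeholder names; depends on how Prometheus was installed)
kubectl -n <prometheus-namespace> logs <prometheus-pod>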
Hope it helps.

Related

Grafana consolidate pod metrics

I have a Kubernetes Pod which serves metrics for prometheus.
Once in a while I update the release and thus the pod gets restarted.
Prometheus saves the metrics but labels them according to the new pod name:
This is by Prometheus' design, so it's OK.
But if I display this data with Grafana, I'm getting this (the pods have been redeployed twice):
So, for example, the metric "Registered Users" now has 3 different colors because its source comes from 3 different pods.
I have some options. Maybe disregard the pod name in Prometheus, but I consider that bad practice because I don't want to lose data.
So I think I have to consolidate this in Grafana. But how can I tell Grafana that I want to merge all data with container-name api-gateway-narkuma and disregard the pod label?
You can do something like
max(users) without (instance, pod)
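If the series also carries a container label (as the container-name in the question suggests; the exact label name is an assumption, so check what your metric exposes), you could scope it to that one container as well:
max(users{container="api-gateway-narkuma"}) without (instance, pod)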

How can I get correct metric for pod status?

I'm trying to get the pod status in Grafana through Prometheus in a GKE cluster.
kube-state-metrics has been installed together with Prometheus by using the prometheus-community/prometheus and grafana Helm charts.
I tried to get the pod status with kube_pod_status_phase{exported_namespace=~".+-my-namespace", pod=~"my-server-.+"}, but I get only "Running" as a result.
In other words, in the obtained graph I can see only a straight line at the value 1 for the running server. I can't get when the given pod was pending or in another state different from Running.
I am interested in the starting phase, after the pod is created, but before it is running.
Am I using the query correctly? Is there another query, or could it be due to something in the installation?
If you mean the Pending status for the Pod, I think you should instead use kube_pod_status_phase{exported_namespace=~".+-my-namespace", pod=~"my-server-.+", phase="Pending"}. I'm not sure what it does when you don't put the phase in your request, but I suspect it just renders the number of Pods whatever the state is, which in your case is always 1.
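If you want to see every phase the pod passes through (including the short Pending window) on one graph, you could also keep all phases and group by the phase label, for example (matchers copied from the question):
sum by (phase) (kube_pod_status_phase{exported_namespace=~".+-my-namespace", pod=~"my-server-.+"})
Each phase then shows up as its own series, whose value is the number of matching pods currently in that phase.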

How to find metrics about CPU/MEM for the pod running on a Kubernetes cluster on Prometheus

I have Prometheus set up via Helm from Terraform, and it is configured to connect to my Kubernetes cluster. I open Prometheus, but I am not sure which metric to choose from the list to be able to view the CPU/MEM of running pods/jobs.
Here are all the pods running with the command (test1 is the kube namespace):
kubectl -n test1 get pods
[screenshot: pod list output]
When I am on Prometheus, I see many metrics related to CPU, but I'm not sure which one to choose:
[screenshot]
I tried to choose one, but its namespace is prometheus and it comes from prometheus-node-exporter; I don't see my cluster or my namespace test1 anywhere here.
[screenshot]
Could you please help me? Thank you very much in advance.
UPDATE [screenshot]
I need to concentrate on this specific namespace, normally with the command:
kubectl get pods --all-namespaces | grep hermatwin
I see the first line with namespace = jobs; I think this is the namespace.
No result when I set the calendar to last Friday:
UPDATE April 20 [screenshot]
I tried to select 2 days with the start date on last Saturday, 17 April, but I don't see any result:
And if I remove the (namespace="jobs") condition, I don't see any result either:
I tried to rerun the job (simulation jobs) again just now and executed the Prometheus query while the job was still running, but I don't get any result :-( Here you can see my jobs were running.
I don't get any result:
When using a simple filter, just container_cpu_usage_seconds_total, I can see the namespace="jobs".
node_cpu_seconds_total is a metric from node-exporter, the exporter that brings machine statistics, and its metrics are prefixed with node_. You need metrics from cAdvisor; it produces metrics related to containers, and they are prefixed with container_:
container_cpu_usage_seconds_total
container_cpu_load_average_10s
container_memory_usage_bytes
container_memory_rss
Here are some basic queries for you to get started. Be aware that they may require tweaking (you may have different label names):
CPU Utilisation Per Pod
sum(irate(container_cpu_usage_seconds_total{container!="POD", container=~".+"}[2m])) by (pod)
RAM Usage Per Pod
sum(container_memory_usage_bytes{container!="POD", container=~".+"}) by (pod)
In/Out Traffic Rate Per Pod
Beware that pods with host network mode (not isolated) show the traffic rate for the whole node. The * 8 converts bytes to bits for convenience (Mbit/s, Gbit/s, etc.).
# incoming
sum(irate(container_network_receive_bytes_total[2m])) by (pod) * 8
# outgoing
sum(irate(container_network_transmit_bytes_total[2m])) by (pod) * 8
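Given the update that your workload runs in the jobs namespace, a scoped variant of the CPU query might look like this (assuming the cAdvisor series carry a namespace label, as your container_cpu_usage_seconds_total result suggests):
sum(irate(container_cpu_usage_seconds_total{namespace="jobs", container!="POD", container=~".+"}[2m])) by (pod)
Also note that short-lived jobs only produce samples while they are running, so make sure the selected time range actually covers the window in which the job ran.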

How to get the number of running pods in Prometheus

I am scraping the Kubernetes metrics from Prometheus and need to extract the number of running pods.
I can see the container_last_seen metric, but how should I get the number of running pods? Can someone help with this?
If you need to get the number of running pods, you can use a metric from the list of pod metrics https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md (to get info purely on pods, it makes sense to use pod-specific metrics).
For example, if you need to get the number of pods per namespace, it'll be:
count(kube_pod_info{namespace="$namespace_name"}) by (namespace)
To get the number of all pods running on the cluster, then just do:
count(kube_pod_info)
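If you want only pods that are actually in the Running phase (rather than every pod known to kube-state-metrics), a variant using kube_pod_status_phase, which exposes one 0/1 series per pod and phase, would be:
sum(kube_pod_status_phase{phase="Running"})
# or broken down per namespace
sum by (namespace) (kube_pod_status_phase{phase="Running"})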
Assuming you want to display that in Grafana according to your question tags, from this Kubernetes App Metrics dashboard for example:
count(count(container_memory_usage_bytes{container_name="$container", namespace="$namespace"}) by (pod_name))
You can just import the dashboard and play with the queries.
Depending on your configuration/deployment, you can adjust the variables container_name and namespace; grouping by (pod_name) and counting does the trick. Some label other than pod_name can be used, as long as it's shared between the pods you want to count.
If you want to see only the number of "deployed" pods in some namespace, you can use the solutions in previous answers.
My use case was to see the current running pods in some namespace and below is my solution:
min_over_time(sum(group(kube_pod_container_status_ready{namespace="BC_NAME"}) by (pod, uid))[5m:1m]) or on() vector(0)
Please replace BC_NAME with your namespace name.
The [5m:1m] time span gives you fine-grained data.
If no data is found (no pod is currently running), it returns 0.

Fetching Stackdriver Monitoring TimeSeries data for a pod running on a k8s cluster on GKE using the REST API

My objective is to fetch the time series of a metric for a pod running on a kubernetes cluster on GKE using the Stackdriver TimeSeries REST API.
I have ensured that Stackdriver monitoring and logging are enabled on the kubernetes cluster.
Currently, I am able to fetch the time series of all the resources available in a cluster using the following filter:
metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>"
In order to fetch the time series of a given pod id, I am using the following filter:
metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>" AND resource.labels.pod_id="<POD_ID>"
This filter returns an HTTP 200 OK with an empty response body. I have found the pod ID from the metadata.uid field received in the response of the following kubectl command:
kubectl get deploy -n default <SERVICE_NAME> -o yaml
However, when I use the Pod ID of a background container spawned by GKE/Stackdriver, I do get the time series values.
Since I am able to see Stackdriver metrics of my pod on the GKE UI, I believe I should also get the metric values using the REST API.
My doubts/questions are:
Am I fetching the Pod ID of my pod correctly using kubectl?
Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?
Is there some other way in which I can get the time series of my pod using the REST APIs?
I wouldn't rely on kubectl get deploy for pod ids. I would get them with something like kubectl -n default get pods | grep <prefix-for-your-pod> | awk '{print $1}'
I don't think so, but the best way to find out is opening a support ticket with GCP if you have any doubts.
Not that I'm aware of, Stackdriver is the monitoring solution in GCP. Again, you can check with GCP support. There are other tools that you can use to get metrics from Kubernetes like Prometheus. There are multiple guides on the web on how to set it up with Grafana on k8s. This is one for example.
Hope it helps!
Am I fetching the Pod ID of my pod correctly using kubectl?
You could use JSONpath as output with kubectl, in this case iterating over the Pods and fetching the metadata.name and metadata.uid fields:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'
which will output something like this:
nginx-65899c769f-2j775 d4fr5t6-bc2f-11e8-81e8-42010a84011f
nginx2-77b5c9d48c-7qlps 4f5gh6r-bc37-11e8-81e8-42010a84011f
Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?
As #Rico mentioned in his answer, contacting the GCP support could be a way forward if you don't get further with the troubleshooting, see below.
Is there some other way in which I can get the time series of my pod using the REST APIs?
You could use the APIs Explorer or the Metrics Explorer from within the Stackdriver portal. There are some good troubleshooting tips here with a link to the APIs Explorer. In the Stackdriver Metrics Explorer it's fairly easy to reassemble the filter you've used, using dropdown lists to choose e.g. a particular pod_id.
Taken from the Troubleshooting the Monitoring guide (linked above) regarding an empty HTTP 200 response on filtered queries:
If your API call returns status code 200 and an empty response, there are several possibilities:
If your call uses a filter, then the filter might not have matched anything. The filter match is case-sensitive. To resolve filter problems, start by specifying only one filter component, such as metric.type, and see if you get results. Add the other filter components one-by-one.
If you are working with a custom metric, you might not have specified the project where your custom metric is defined.
I found this link when reading through the documentation of the Monitoring API. That link will get you to the APIs Explorer with some pre-filled fields; change these accordingly and add your own filter.
I have not tested the REST API further at the moment, but hopefully this can get you forward.
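For completeness, a minimal sketch of what the projects.timeSeries.list call could look like with curl; PROJECT_ID, POD_ID and the interval timestamps are placeholders, and the filter is the one from the question:
# list CPU usage time series for one pod via the Monitoring REST API
curl -s -G "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  --data-urlencode 'filter=metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>" AND resource.labels.pod_id="POD_ID"' \
  --data-urlencode 'interval.startTime=2019-01-01T00:00:00Z' \
  --data-urlencode 'interval.endTime=2019-01-02T00:00:00Z'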