I run a Kubernetes proxy on my local machine through kubectl proxy.
I also deployed heapster onto my Kubernetes environment, as well as influxdb and grafana.
I can see the filesystem usage metrics in grafana.
However, I cannot get the filesystem usage through the heapster REST API via:
Please help me check whether there is a misconfiguration, a wrong URL, or some other issue.
Thanks.
Some pods simply do not expose a filesystem/usage metric.
Here is an example of the available metrics list for the etcd-minikube pod:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/kube-system/pods/etcd-minikube/metrics/
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/kube-system/pods/etcd-minikube/metrics/
[
"network/rx_errors_rate",
"cpu/usage_rate",
"network/rx_errors",
"memory/request",
"memory/page_faults_rate",
"network/rx_rate",
"network/tx_errors_rate",
"memory/limit",
"network/rx",
"memory/major_page_faults_rate",
"uptime",
"memory/rss",
"memory/working_set",
"restart_count",
"network/tx_errors",
"cpu/request",
"cpu/limit",
"network/tx",
"memory/usage",
"network/tx_rate"
]
In this example, there is no filesystem/usage in the list.
If I try to get it, I get exactly the same result as the one you’ve posted in the question:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/kube-system/pods/etcd-minikube/metrics/filesystem/usage
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/kube-system/pods/etcd-minikube/metrics/filesystem/usage
{
"metrics": [],
"latestTimestamp": "0001-01-01T00:00:00Z"
}
Therefore, check which metrics are available for the pod using a URL similar to the first example.
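As a quick sanity check (a sketch, assuming the same minikube setup as above; memory/usage is simply a metric taken from the list, and the values returned will differ on your cluster), fetching a metric that does appear in the list returns actual data points:
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/kube-system/pods/etcd-minikube/metrics/memory/usage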
I have a problem with the Kubernetes Dashboard.
I am using the managed Kubernetes service AKS and created a Kubernetes cluster with the following setup:
Kubernetes-Version 1.20.9
1 Worker Node with Size Standard_DS2_v2
It starts successfully with the automatic configuration of coredns, corednsautoscaler, omsagent-rs, tunnelfront and the metrics-server.
After that I applied three deployments for my services, all of which are deployed successfully.
Now I want to get access to the Kubernetes Dashboard. I used the instructions described at https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard.
After that I run kubectl proxy to access the dashboard via the URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
After I use my kubeconfig file to sign in to the Kubernetes Dashboard, I get the following output and neither CPU nor memory usage is displayed.
When I execute kubectl describe on the kubernetes-dashboard pod, I get the following:
And the logs from the pod say the following:
Internal error occurred: No metric client provided. Skipping metrics.
2021/12/11 19:23:04 [2021-12-11T19:23:04Z] Outcoming response to 127.0.0.1:43392 with 200 status code
2021/12/11 19:23:04 Internal error occurred: No metric client provided. Skipping metrics.
2021/12/11 19:23:04 [2021-12-11T19:23:04Z] Outcoming response to 127.0.0.1:43392 with 200 status code
2021/12/11 19:23:04 Internal error occurred: No metric client provided. Skipping metrics.
... I used the instructions described at https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard.
The dashboard needs a way to "cache" a small window of metrics collected from the metrics server. The instructions provided there don't have this enabled. You can run the following to install/upgrade kubernetes-dashboard with the metrics scraper enabled:
helm upgrade -i my-release kubernetes-dashboard/kubernetes-dashboard \
--set=service.externalPort=8080,resources.limits.cpu=200m,metricsScraper.enabled=true
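To confirm the scraper actually came up after the upgrade (a rough sketch; the kubernetes-dashboard namespace and the dashboard-metrics-scraper label are assumptions that depend on how the chart was installed):
$ kubectl get pods -n kubernetes-dashboard
$ kubectl logs -n kubernetes-dashboard -l app.kubernetes.io/name=dashboard-metrics-scraper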
I have an application running in Kubernetes (Azure AKS) in which each pod contains two containers. I also have Grafana set up to display various metrics some of which are coming from Prometheus. I'm trying to troubleshoot a separate issue and in doing so I've noticed that some metrics don't appear to match up between data sources.
For example, kube_deployment_status_replicas_available returns the value 30, whereas kubectl -n XXXXXXXX get pod lists 100 pods, all of which are Running, and kube_deployment_status_replicas_unavailable returns a value of 0. Also, if I get the deployment in question using kubectl, I see the expected value.
$ kubectl get deployment XXXXXXXX
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
XXXXXXXX 100 100 100 100 49d
There are other applications (namespaces) in the same cluster where all the values correlate correctly, so I'm not sure where the fault may be or if there's any way to know for sure which value is the correct one. Any guidance would be appreciated. Thanks.
Based on having the kube_deployment_status_replicas_available metric I assume that you have Prometheus scraping your metrics from kube-state-metrics. It sounds like there's something quirky about its deployment. It could be:
Cached metric data
And/or simply it can't pull current metrics from the kube-apiserver
I would:
Check the version that you are running for kube-state-metrics and see if it's compatible with your K8s version.
Restart the kube-state-metrics pod.
Check the logs: kubectl logs <kube-state-metrics-pod>
Check the Prometheus logs
If you don't see anything, try starting Prometheus with the --log.level=debug flag.
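A minimal sketch of those checks (the kube-system namespace, the deployment name kube-state-metrics and the app.kubernetes.io/name label are assumptions; adjust them to wherever kube-state-metrics runs in your cluster):
# check which image/version is running
$ kubectl -n kube-system get deploy kube-state-metrics -o=jsonpath='{.spec.template.spec.containers[0].image}'
# restart it by deleting the pod; the Deployment recreates it
$ kubectl -n kube-system delete pod -l app.kubernetes.io/name=kube-state-metrics
# inspect its logs
$ kubectl -n kube-system logs -l app.kubernetes.io/name=kube-state-metrics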
Hope it helps.
I'm using Kubernetes 1.11 on DigitalOcean. When I try to use kubectl top node I get this error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
But as stated in the docs, heapster is deprecated and no longer required as of Kubernetes 1.10.
If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.
Please note that to install the metrics server on Kubernetes, you should first clone it by typing:
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
Then you should install it, WITHOUT GOING INTO THE CREATED FOLDER AND WITHOUT SPECIFYING A PARTICULAR YAML FILE, only via:
kubectl create -f kubernetes-metrics-server/
In this way all services and components are installed correctly and you can run:
kubectl top nodes
or
kubectl top pods
and get the correct result.
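If you also want to confirm that the metrics API was registered (a sketch; metrics.k8s.io/v1beta1 is the API group version served by metrics-server):
$ kubectl get apiservices | grep metrics.k8s.io
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes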
For kubectl top node/pod to work, you need either heapster or the metrics server installed on your cluster.
As the warning says, heapster is being deprecated, so the recommended choice now is the metrics server.
So follow the directions here to install the metrics server.
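For reference, a common way to install it (a sketch using the upstream metrics-server manifest; on some clusters you may additionally need kubelet TLS options such as --kubelet-insecure-tls):
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml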
My objective is to fetch the time series of a metric for a pod running on a kubernetes cluster on GKE using the Stackdriver TimeSeries REST API.
I have ensured that Stackdriver monitoring and logging are enabled on the kubernetes cluster.
Currently, I am able to fetch the time series of all the resources available in a cluster using the following filter:
metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>"
In order to fetch the time series of a given pod id, I am using the following filter:
metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>" AND resource.labels.pod_id="<POD_ID>"
This filter returns an HTTP 200 OK with an empty response body. I have found the pod ID from the metadata.uid field received in the response of the following kubectl command:
kubectl get deploy -n default <SERVICE_NAME> -o yaml
However, when I use the Pod ID of a background container spawned by GKE/Stackdriver, I do get the time series values.
Since I am able to see Stackdriver metrics of my pod on the GKE UI, I believe I should also get the metric values using the REST API.
My doubts/questions are:
Am I fetching the Pod ID of my pod correctly using kubectl?
Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?
Is there some other way in which I can get the time series of my pod using the REST APIs?
I wouldn't rely on kubectl get deploy for pod IDs. I would get them with something like kubectl -n default get pods | grep <prefix-for-your-pod> | awk '{print $1}'
I don't think so, but the best way to find out is to open a support ticket with GCP if you have any doubts.
Not that I'm aware of; Stackdriver is the monitoring solution in GCP. Again, you can check with GCP support. There are other tools that you can use to get metrics from Kubernetes, like Prometheus. There are multiple guides on the web on how to set it up with Grafana on k8s. This is one, for example.
Hope it helps!
Am I fetching the Pod ID of my pod correctly using kubectl?
You could use JSONpath as output with kubectl, in this case iterating over the Pods and fetching the metadata.name and metadata.uid fields:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'
which will output something like this:
nginx-65899c769f-2j775 d4fr5t6-bc2f-11e8-81e8-42010a84011f
nginx2-77b5c9d48c-7qlps 4f5gh6r-bc37-11e8-81e8-42010a84011f
Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?
As #Rico mentioned in his answer, contacting GCP support could be a way forward if you don't get any further with the troubleshooting; see below.
Is there some other way in which I can get the time series of my pod using the REST APIs?
You could use the APIs Explorer or the Metrics Explorer from within the Stackdriver portal. There are some good troubleshooting tips here with a link to the APIs Explorer. In the Stackdriver Metrics Explorer it's fairly easy to reassemble the filter you've used, using dropdown lists to choose e.g. a particular pod_id.
Taken from the Troubleshooting the Monitoring guide (linked above) regarding an empty HTTP 200 response on filtered queries:
If your API call returns status code 200 and an empty response, there are several possibilities:
If your call uses a filter, then the filter might not have matched anything. The filter match is case-sensitive. To resolve filter problems, start by specifying only one filter component, such as metric.type, and see if you get results. Add the other filter components one-by-one.
If you are working with a custom metric, you might not have specified the project where your custom metric is defined.
I found this link when reading through the documentation of the Monitoring API. That link will take you to the APIs Explorer with some pre-filled fields; change these accordingly and add your own filter.
I have not tested the REST API further at the moment, but hopefully this gets you moving forward.
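If you want to try the same query outside the APIs Explorer, here is a rough sketch of a direct call to the timeSeries.list endpoint (PROJECT_ID, POD_ID and the interval timestamps are placeholders, and it assumes gcloud is authenticated):
$ curl -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  --data-urlencode 'filter=metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.pod_id="POD_ID"' \
  --data-urlencode 'interval.startTime=2018-09-20T00:00:00Z' \
  --data-urlencode 'interval.endTime=2018-09-21T00:00:00Z' \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries"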
Without using Heapster, is there any way to collect CPU or disk metrics about a node within a Kubernetes cluster?
How does Heapster even collect those metrics in the first place?
Kubernetes monitoring is detailed in the documentation here, but that mostly covers tools using heapster.
Node-specific information is exposed through the cAdvisor UI, which can be accessed on port 4194 (see the commands below to access this through the proxy API).
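For example, a sketch of reaching that UI through the same legacy proxy path used below (/containers/ is cAdvisor's default UI path, and the page is HTML, so it is best opened in a browser):
$ kubectl proxy &
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
# open http://localhost:8001/api/v1/proxy/nodes/${NODE}:4194/containers/ in a browser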
Heapster queries the kubelet for stats served at <kubelet address>:10255/stats/ (other endpoints can be found in the code here).
Try this:
$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl -X "POST" -d '{"containerName":"/","subcontainers":true,"num_stats":1}' localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/container
...
Note that these endpoints are not documented as they are intended for internal use (and debugging), and may change in the future (we eventually want to offer a more stable versioned endpoint).
Update:
As of Kubernetes version 1.2, the Kubelet exports a "summary" API that aggregates stats from all Pods:
$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/summary
...
I would recommend using heapster to collect metrics. It's pretty straightforward. However, in order to access those metrics, you need to add "type: NodePort" to the heapster.yml file. I modified the original heapster files and you can find them here. See my readme file for how to access the metrics. More metrics are available here.
Metrics can be accessed in a web browser at http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/cpu/usage_rate. The same result can be seen by executing the following command:
$ curl -L http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/cpu/usage_rate
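Alternatively (a sketch, assuming heapster runs as a service named heapster in kube-system, as in the default deployment), the same model API can be reached through kubectl proxy without exposing a NodePort:
$ kubectl proxy &
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/metrics/cpu/usage_rate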