I have installed kube v1.11. Since heapster is deprecated, I am using metrics-server, and the kubectl top node command works.
The Kubernetes dashboard is still looking for the heapster service. What are the steps to configure the dashboard to use the metrics-server service?
2018/08/09 21:13:43 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
This must be the week for asking that question; it seems that whatever is deploying heapster is omitting the Service, which one can fix as described here -- or the tl;dr is just: create the Service named heapster and point it at your heapster pods.
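A minimal sketch of that Service, assuming the heapster pods run in kube-system with the label k8s-app: heapster (adjust the selector and namespace to match your deployment):

apiVersion: v1
kind: Service
metadata:
  name: heapster
  namespace: kube-system
spec:
  selector:
    k8s-app: heapster   # assumption: match your heapster pods' labels
  ports:
    - port: 80          # the port the dashboard looks for
      targetPort: 8082  # heapster's default listen port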
As of today the Kubernetes dashboard doesn't support metrics-server; support is expected very soon in a new release of the dashboard.
You can follow https://github.com/kubernetes/dashboard/issues/2986
I have installed the Ambassador API gateway on an AWS EKS cluster. It's working as expected.
Now I'd like to integrate Istio service mesh.
I'm following the steps given in the ambassador's official documentation.
https://www.getambassador.io/docs/edge-stack/latest/howtos/istio/#istio-integration.
But after the Istio integration some Ambassador pods keep crashing.
Only one pod out of three shows healthy at a time.
Note: the Istio sidecar is injected successfully into all Ambassador pods, and I have tried Ambassador 2.1.1 and 2.1.2; both have the same issue. I'm not able to keep all Ambassador pods healthy.
My EKS version is v1.19.13-eks
Below is the error:
time="2022-03-02 12:30:17.0687" level=error msg="Post \"http://localhost:8500/_internal/v0/watt?url=http%3A%2F%2Flocalhost%3A9696%2Fsnapshot\": dial tcp 127.0.0.1:8500: connect: connection refused" func=github.com/datawire/ambassador/v2/cmd/entrypoint.notifyWebhookUrl file="/go/cmd/entrypoint/notify.go:124" CMD=entrypoint PID=1 THREAD=/watcher
Please let me know if the above documentation is not sufficient for Istio integration with Ambassador on AWS EKS.
Edit 1: On further investigation I found that the issue appears when I integrate Istio with PeerAuthentication in STRICT mode. There is no such issue with the default (PERMISSIVE) mode.
But another issue appears when STRICT mode is enabled: Ambassador now fails to connect to the Redis service.
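For reference, mesh-wide STRICT mTLS is typically enabled with a PeerAuthentication like this (a sketch; placing it in the istio-system root namespace makes the policy mesh-wide):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT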
After some investigation and testing I found the way to integrate Istio with Ambassador with PeerAuthentication in STRICT mode.
The fix:
update the REDIS_URL env variable to use https,
from:
REDIS_URL: ambassador-redis:6379
to
REDIS_URL: https://ambassador-redis:6379
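A sketch of applying that change, assuming Ambassador runs as a deployment named ambassador in the ambassador namespace (adjust the names to your install):

$ kubectl -n ambassador set env deployment/ambassador REDIS_URL=https://ambassador-redis:6379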
You can find mentions of that resource in the following questions: 1, 2. But I am not able to figure out what the use of this resource is.
Yes, it's true that the link to the documentation provided in the comments might be confusing, so let me try to clarify this for you.
As per the official documentation the apiserver proxy:
is a bastion built into the apiserver
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
runs in the apiserver processes
client to proxy uses HTTPS (or http if apiserver so configured)
proxy to target may use HTTP or HTTPS as chosen by proxy using available information
can be used to reach a Node, Pod, or Service
does load balancing when used to reach a Service
So answering your question: setting the node/proxy resource in a ClusterRole allows Kubernetes services to access kubelet endpoints on a specific node and path.
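A sketch of such a ClusterRole (the name and the verb list are illustrative; grant only what you need):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-proxy-reader   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy"]
    verbs: ["get", "list", "watch"]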
As per the official documentation:
There are two primary communication paths from the control plane (apiserver) to the nodes. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver's proxy functionality.
The connections from the apiserver to the kubelet are used for:
fetching logs for pods
attaching (through kubectl) to running pods
providing the kubelet's port-forwarding functionality
Here are also a few working examples of using the node/proxy resource in a ClusterRole:
How to Setup Prometheus Monitoring On Kubernetes Cluster
Running Prometheus on Kubernetes
It is hard to find any information about this sub-resource in the official Kubernetes documentation.
In the context of RBAC, the format node/proxy can be used to grant access to the sub-resource named proxy of the node resource. The same access can also be granted for pods and services.
We can see this in the output of available resources from the Kubernetes API server (API version: v1.21.0):
===/api/v1===
...
nodes/proxy
...
pods/proxy
...
services/proxy
...
Detailed information about the usage of the proxy sub-resource can be found in The Kubernetes API reference (depending on the version you use), in the Proxy Operations section for each of the mentioned resources: pods, nodes, services.
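As an illustration, once such a role is bound, kubelet endpoints can be reached through the apiserver proxy (the node name is a placeholder):

$ kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics"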
When trying to use HorizontalPodAutoscaling I'm getting failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API. How can I solve this problem?
As far as I understand, before using HPA you have to install metrics-server. More in the docs and links below, with a short install sketch after them.
Before you begin
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. metrics-server monitoring needs to be deployed in the cluster to provide metrics via the resource metrics API, as the Horizontal Pod Autoscaler uses this API to collect metrics. The instructions for deploying it are on the metrics-server GitHub repository. If you followed the getting started on GCE guide, metrics-server monitoring will be turned on by default.
Kubernetes metrics server
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler.
Additional links:
https://www.cloudtechnologyexperts.com/autoscaling-microservices-in-kubernetes-with-horizontal-pod-autoscaler/
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
https://www.youtube.com/watch?v=dALta9zQkfs
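As a minimal sketch of the flow (the components.yaml URL is the upstream metrics-server release manifest; the php-apache deployment name is illustrative):

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top nodes   # verify the resource metrics API is serving
$ kubectl autoscale deployment php-apache --min=1 --max=5 --cpu-percent=80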
I'm using mongodb-exporter to store/query metrics via Prometheus. I have set up a custom metrics server, and it is storing values.
That is evidence that the Prometheus exporter and the custom metrics server work together:
Query:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"
Result:
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]}
In my case, when I create an HPA for this custom metric from the mongo exporter, the HPA returns this error:
failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API
What is the main issue in my case? I have checked all the configs and the flow looks fine, so where is my mistake?
In the comments you wrote that you have enabled external.metrics; however, in the original question you had issues with custom.metrics.
In short:
metrics supports only basic metrics like CPU or memory.
custom.metrics allows you to extend the basic metrics to all Kubernetes objects (http_requests, number of pods, etc.).
external.metrics allows you to gather metrics which are not Kubernetes objects (see the example HPA after the quote below):
External metrics allow you to autoscale your cluster based on any metric available in your monitoring system. Just provide a metric block with a name and selector, as above, and use the External metric type instead of Object.
For a more detailed description, please check this doc.
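A sketch of an HPA using the External metric type (the metric name, selector and deployment name are all illustrative):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker            # illustrative target deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_ready   # illustrative external metric
          selector:
            matchLabels:
              queue: worker_tasks
        target:
          type: AverageValue
          averageValue: "30"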
Minikube
To verify if custom.metrics are enabled, execute the command below and check whether you can see a metrics-server... pod.
$ kubectl get pods -n kube-system
...
metrics-server-587f876775-9qrtc 1/1 Running 4 5d1h
Another way is to check whether minikube has the metrics-server addon enabled:
$ minikube addons list
...
- metrics-server: enabled
If it is disabled, just execute:
$ sudo minikube addons enable metrics-server
✅ metrics-server was successfully enabled
GKE
Currently on GKE, heapster and metrics-server are turned on by default, but custom.metrics are not supported by default.
You have to install the Prometheus adapter or Stackdriver.
Kubeadm
Kubeadm does not include heapster or metrics-server out of the box. For easy installation, you can use this YAML.
Later you have to install the Prometheus adapter.
Apply custom.metrics
The procedure is the same for Minikube, Kubeadm and GKE.
The easiest way to enable custom.metrics is to install the Prometheus adapter via Helm.
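For example (assuming the prometheus-community Helm repository; the release name my-release is illustrative):

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm install my-release prometheus-community/prometheus-adapter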
After the Helm installation you will see a note like:
NOTES:
my-release-prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
As additional information, you can use jq to get more user-friendly output:
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .
I'd like to monitor my Kubernetes Service objects to ensure that they have > 0 Pods behind them in "Running" state.
However, to do this I would have to first group the Pods by service and then group them further by status.
I would also like to do this programmatically (e.g. for each service in namespace ...)
There's already some code that does this in the Sensu kubernetes plugin: https://github.com/sensu-plugins/sensu-plugins-kubernetes/blob/master/bin/check-kube-service-available.rb but I haven't seen anything that shows how to do it with Prometheus.
Has anyone setup kubernetes service level health checks with Prometheus? If so, how did you group by service and then group by Pod status?
The examples I have seen for Prometheus service checks relied on the blackbox exporter:
The blackbox exporter will try a given URL on the service. If that succeeds, at least one pod is up and running.
See here for an example: https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml in job kubernetes-service-endpoints
The URL to probe might be your liveness probe or something else. If your services don't talk HTTP, you can make the blackbox exporter test other protocols as well.
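A sketch of the Prometheus scrape config for such a probe (the target URL and the exporter address blackbox-exporter:9115 are assumptions; adjust them to your setup):

scrape_configs:
  - job_name: 'service-probe'
    metrics_path: /probe
    params:
      module: [http_2xx]   # probe module defined in the blackbox exporter config
    static_configs:
      - targets:
          - http://my-service.default.svc:80/healthz   # assumption: your Service URL
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target      # hand the target URL to the exporter
      - source_labels: [__param_target]
        target_label: instance            # keep the target as the instance label
      - target_label: __address__
        replacement: blackbox-exporter:9115   # assumption: exporter address

The probe_success metric the exporter produces can then be alerted on per service.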