I have Kubernetes v1.12.1 installed on my cluster.
I downloaded the metrics-server from the following repo:
https://github.com/kubernetes-incubator/metrics-server
and then ran the following command:
kubectl create -f metrics-server/deploy/1.8+/
and then I tried autoscaling a deployment using:
kubectl autoscale deployment example-app-tier --min 1 --max 3 --cpu-percent 70 --namespace example
but the TARGETS column shows <unknown>/70%:
kubectl get hpa --namespace example
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example example-app-tier Deployment/example-app-tier <unknown>/70% 1 3 1 3h35m
and when I try running the kubectl top nodes or pods I get an error saying:
error: Metrics not available for pod default/pi-ss8j6, age: 282h48m5.334137739s
So I'm looking for a tutorial that walks me through enabling autoscaling step by step using metrics-server or Prometheus (not Heapster, as it is deprecated and will no longer be supported).
Thank you!
You need to register your metrics server with the API server and make sure they can communicate; a quick check is shown below.
https://github.com/kubernetes/kubernetes/issues/59438
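As a quick check (assuming the standard metrics-server deployment, which registers the v1beta1.metrics.k8s.io APIService), you can verify the registration and query the metrics API directly:
# Check that the metrics APIService is registered and its Available condition is True
kubectl describe apiservice v1beta1.metrics.k8s.io
# Should return a JSON list of node metrics once metrics-server is working
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes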
If that is already done, check the help for the kubectl top command in your version of k8s: the command may default to Heapster, and you may need to point it at the new service instead.
https://github.com/kubernetes/kubernetes/pull/56206
From the help output it looks like it has not yet been ported to the new metrics server and still looks for Heapster by default.
C02W84XMHTD5:tmp iahmad$ kubectl top node --help
Display Resource (CPU/Memory/Storage) usage of nodes.
The top-node command allows you to see the resource consumption of nodes.
Aliases:
node, nodes, no
Examples:
# Show metrics for all nodes
kubectl top node
# Show metrics for a given node
kubectl top node NODE_NAME
Options:
--heapster-namespace='kube-system': Namespace Heapster service is located in
--heapster-port='': Port name in service to use
--heapster-scheme='http': Scheme (http or https) to connect to Heapster as
--heapster-service='heapster': Name of Heapster service
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l
key1=value1,key2=value2)
Usage:
kubectl top node [NAME | -l label] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
Note: I am using 1.10; the options may be different in your version.
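One more thing worth checking (this is an assumption about your setup, not something visible in your output): a very common cause of <unknown> targets is that metrics-server cannot reach the kubelets over TLS or cannot resolve the node names. A frequently used workaround is to add these flags to the metrics-server container args in its deployment; note that --kubelet-insecure-tls skips certificate verification, so treat it as a test/lab workaround:
kubectl -n kube-system edit deployment metrics-server
# then, under the metrics-server container args, add:
#   - --kubelet-insecure-tls
#   - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP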
Related
I have a Kubernetes cluster in GCP with the Docker container runtime. I am trying to change the container runtime to containerd. The following steps show what I did:
Added a new node pool (nodes with containerd)
Drained the old nodes
Once I performed the above steps, I started getting a "Pod is blocking scale down because it has local storage" warning message.
You need to add the safe-to-evict annotation to the Pod so that the cluster autoscaler is allowed to evict it:
"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
The annotation above has to be added to the Pod itself.
You can read more at: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#cluster-not-scalingdown
NoScaleDown example: You found a noScaleDown event that contains a
per-node reason for your node. The message ID is
"no.scale.down.node.pod.has.local.storage" and there is a single
parameter: "test-single-pod". After consulting the list of error
messages, you discover this means that the "Pod is blocking scale down
because it requests local storage". You consult the Kubernetes Cluster
Autoscaler FAQ and find out that the solution is to add a
"cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation to
the Pod. After applying the annotation, cluster autoscaler scales down
the cluster correctly.
For further clarification, you can use this command to update the pod's annotation:
kubectl annotate pod <podname> -n <namespace> "cluster-autoscaler.kubernetes.io/safe-to-evict=true"
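If the pod is managed by a controller such as a Deployment, the annotation belongs in the pod template so it survives pod recreation. A minimal sketch (the deployment name, labels and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"   # lets the autoscaler evict this pod
    spec:
      containers:
      - name: app
        image: nginx           # placeholder image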
I had the same error when using GitLab + Auto DevOps + Google Cloud.
The issue is the cm-acme pods that are spun up to answer the Let's Encrypt challenges.
E.g. we have pods like this
cm-acme-http-solver-d2tak
hanging around in our cluster, so the cluster won't scale down until these pods are destroyed.
A simple
kubectl get pods -A | grep cm-acme
will list all the pods that need to be destroyed; you can then delete each one with
kubectl delete pod -n {namespace} {pod name}
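If there are many of them, you can script the cleanup; a small sketch using standard shell tools (review the grep output first, since this deletes every matching pod):
kubectl get pods -A --no-headers | grep cm-acme-http-solver | \
  while read ns pod rest; do kubectl delete pod -n "$ns" "$pod"; done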
kubectl describe nodes works for nodes.
Likewise, do we have any command like the one below to describe cluster information?
kubectl describe cluster
"Kubectl describe <api-resource_type> <api_resource_name> "command is used to describe a specific resources running in your kubernetes cluster, Actually you need to verify different components separately as a developer to check your pods, nodes services and other tools that you have applied/created.
If you are the cluster administrator and you are asking about useful command to check the actual kube-system configuration it depends on your k8s cluster type for example if you are using "kubeadm" package to initialize k8s cluster on premises you can check and change the default cluster configuration using this command :
kubeadm config print init-defaults
After initializing your cluster, all the main control-plane configuration files, a.k.a. manifests, are located in /etc/kubernetes/manifests (and they are watched in real time: change anything and the cluster will redeploy it automatically).
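For example, on a kubeadm cluster you could inspect both the stored cluster configuration and the static pod manifests like this (paths are the kubeadm defaults):
# kubeadm keeps the ClusterConfiguration it used in a ConfigMap
kubectl -n kube-system get configmap kubeadm-config -o yaml
# static pod manifests for the control plane components live on the master node
ls /etc/kubernetes/manifests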
Useful kubectl commands :
For cluster info (API server and DNS endpoints), run:
kubectl cluster-info
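The closest thing to a kubectl describe cluster is probably kubectl cluster-info dump, which dumps the state of the whole cluster; since it is very verbose, writing it to a directory is usually more practical:
kubectl cluster-info dump --output-directory=/tmp/cluster-state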
Either way, you can list all api-resources and check them one by one using these commands:
kubectl api-resources (list all api-resource names and types)
kubectl get <api_resource_name> (specific to your cluster)
kubectl explain <api_resource_name> (explain the resource object with docs link)
For extra info you can add specific flags, for example:
kubectl get nodes -o wide
kubectl get pods -n <specific-name-space> -o wide
kubectl describe pods <pod_name>
...
For more information about the kubectl command line, check the kubectl cheat sheet.
K8s VERSION = v1.18.6
I have deployed the Kubernetes dashboard using the following command and added a privileged user with which I logged into the dashboard.
but I am not able to see the Pods' CPU and Memory utilization; those graphs are missing from the Kubernetes dashboard.
The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster.
To deploy the Metrics Server, run the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Verify that the metrics-server deployment is running the desired number of pods with the following command.
kubectl get deployment metrics-server -n kube-system
Output
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 6m
You can also validate with the command below:
kubectl top nodes
to see node CPU utilisation; if it works, the graphs should then come up in the Dashboard as well.
Resource usage metrics are only available for K8s clusters once Metrics Server has been installed.
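If kubectl top nodes returns an error instead, the metrics-server logs usually explain why (a sketch, assuming the kube-system deployment created by the manifest above):
kubectl -n kube-system logs deployment/metrics-server --tail=50
kubectl top pods -n kube-system   # per-pod CPU/memory once metrics are flowing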
I have created a Kubernetes cluster and one of the instances in the cluster is inactive.
I want to review the configuration of the Kubernetes Engine cluster and of the inactive instance. Which command should I use to check?
Should I use this "kubectl config get-contexts"?
or
kubectl config use-context and kubectl config view?
I am a beginner to cloud, can anyone please explain?
The kubectl config get-contexts command will not help you debug why the instance is failing. Basically it will just show you the list of contexts. A context is a group of cluster access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl. On the other hand, kubectl config view will just print your kubeconfig settings.
The best place to start is the official Kubernetes documentation. It provides good basic steps for troubleshooting your cluster. Some of the steps can be applied to GKE as well as to Kubeadm or Minikube clusters.
If you're using GKE, you can read the node logs from Stackdriver. This document is an excellent start when you want to check the logs directly in the log viewer.
If one of your instances reports NotReady after listing them with kubectl get nodes, I suggest SSHing into that instance and checking the Kubernetes components (kubelet and kube-proxy). You can view the GKE nodes from the instances page.
Kube Proxy logs:
/var/log/kube-proxy.log
If you want to check the kubelet logs: kubelet runs as a systemd unit on COS, so its logs can be accessed using journalctl.
Kubelet logs:
sudo journalctl -u kubelet
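On the node it can also help to confirm the kubelet unit is actually running and to narrow the logs down, for example:
sudo systemctl status kubelet
sudo journalctl -u kubelet --since "1 hour ago" --no-pager | tail -n 100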
For further debugging, it is worth mentioning that the GKE master is a node inside a Google-managed project, which is different from your cluster project.
For the detailed master logs you will have to open a Google support ticket. Here is more information about how the GKE cluster architecture works, in case there's something related to the api-server.
Let me know if that was helpful.
You can run the commands below to check the status of all the nodes of a Kubernetes cluster. Please note that if you are using the GKE managed service you will not be able to see the status of the master nodes; you will only see the status of the worker nodes.
kubectl get nodes -o wide
kubectl describe node nodename
You can also run the command below to check the status of the control plane components.
kubectl get componentstatus
You can use the command below to get a list of all the nodes in a GKE cluster:
kubectl get nodes -o wide
Once you have the list of nodes, you can describe a node to get its events:
kubectl describe node <Node-Name>
Based on the events you can debug the node.
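To spot a NotReady node quickly across the whole cluster, a jsonpath query over the standard Ready condition can also help (a sketch):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'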
I'm using mongodb-exporter to store/query the metrics via Prometheus. I have set up a custom metrics server and am storing values for it.
Here is evidence that the Prometheus exporter and the custom metrics server work together.
Query:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"
Result:
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]}
In my case, when I create an HPA for this custom metric from the mongo exporter, the HPA returns this error:
failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API
What is the main issue in my case? I have checked all the configs and the flow looks fine, but where is my mistake?
Help
Thanks :)
In the comments you wrote that you have enabled external.metrics; however, in the original question you had issues with custom.metrics.
In short:
metrics supports only basic metrics like CPU or memory.
custom.metrics allows you to extend the basic metrics to all Kubernetes objects (http_requests, number of pods, etc.).
external.metrics allows you to gather metrics which are not Kubernetes objects:
External metrics allow you to autoscale your cluster based on any
metric available in your monitoring system. Just provide a metric
block with a name and selector, as above, and use the External metric
type instead of Object
For more detailed description, please check this doc.
Minikube
To verify if custom.metrics are enabled, you need to execute the command below and check if you can see a metrics-server... pod.
$ kubectl get pods -n kube-system
...
metrics-server-587f876775-9qrtc 1/1 Running 4 5d1h
A second way is to check whether Minikube has metrics-server enabled:
$ minikube addons list
...
- metrics-server: enabled
If it is disabled, just execute:
$ sudo minikube addons enable metrics-server
✅ metrics-server was successfully enabled
GKE
Currently on GKE, Heapster and metrics-server are turned on by default, but custom.metrics are not supported by default.
You have to install the Prometheus adapter or Stackdriver.
Kubeadm
Kubeadm does not include Heapster or metrics-server out of the box. For easy installation, you can use this YAML.
Later you have to install the Prometheus adapter.
Apply custom.metrics
It's the same for Minikube, Kubeadm, GKE.
The easiest way to enable custom.metrics is to install the Prometheus adapter via Helm; a sketch of the install is shown below.
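A sketch of that install (the chart repo and chart name are the current community ones; the release name and the Prometheus address are assumptions you will need to adjust):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install my-release prometheus-community/prometheus-adapter \
  --set prometheus.url=http://prometheus.monitoring.svc \
  --set prometheus.port=9090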
After helm installation you will be able to see note:
NOTES:
my-release-prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
As additional information, you can use jq to get more user-friendly output:
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .
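Once the metric shows up there, also note the word "utilization" in your error: it suggests the HPA was defined the way a CPU resource metric would be, so the controller queried the resource metrics API instead of custom.metrics. A custom metric has to be referenced as a Pods (or Object) metric; a minimal sketch of an autoscaling/v2beta2 HPA (the HPA name, target deployment and threshold are made-up examples):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: mongo-exporter-hpa          # hypothetical name
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mongo-exporter            # hypothetical target deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Pods
    pods:
      metric:
        name: mongodb_mongod_wiredtiger_cache_bytes
      target:
        type: AverageValue
        averageValue: "100Mi"       # example threshold, adjust to your workload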