Is there a way to combine kubectl top pod and kubectl top nodes?
Basically I want to know pods sorted by cpu/memory usage BY node.
I can only get pods sorted by memory/cpu for whole cluster with kubectl top pod or directly memory/cpu usage per whole node with kubectl top nodes.
I have been checking the documentation but couldn't find the exact command.
There is no built-in solution to achieve this. kubectl top pod and kubectl top node are different commands and cannot be combined with each other. It is possible to sort the results of kubectl top pod only by cpu or memory:
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
If you want to "combine" kubectl top pod and kubectl top node you need to write a custom solution, for example a Bash script based on these commands; a sketch follows below.
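As a rough example, here is a minimal Bash sketch (assuming the metrics-server is installed and that NAMESPACE, here set to default, is the namespace you care about) that lists the pods on each node, sorted by memory usage:

#!/usr/bin/env bash
# List pods sorted by memory usage, grouped by the node they run on.
NAMESPACE=default

for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  echo "=== Node: $node ==="
  # Names of the pods scheduled on this node
  pods=$(kubectl get pods -n "$NAMESPACE" \
    --field-selector spec.nodeName="$node" -o name | cut -d/ -f2)
  if [ -z "$pods" ]; then
    echo "(no pods)"
    continue
  fi
  # Restrict the sorted 'kubectl top pod' output to those pods
  kubectl top pod -n "$NAMESPACE" --sort-by=memory --no-headers \
    | grep -F -w -f <(printf '%s\n' $pods)
done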
Related
Kubectl describe nodes ?
Likewise, do we have any command like the one below to describe cluster information?
kubectl describe cluster
"Kubectl describe <api-resource_type> <api_resource_name> "command is used to describe a specific resources running in your kubernetes cluster, Actually you need to verify different components separately as a developer to check your pods, nodes services and other tools that you have applied/created.
If you are the cluster administrator and you are asking about useful command to check the actual kube-system configuration it depends on your k8s cluster type for example if you are using "kubeadm" package to initialize k8s cluster on premises you can check and change the default cluster configuration using this command :
kubeadm config print init-defaults
After initializing your cluster, all the main control-plane configuration files, a.k.a. static pod manifests, are located in /etc/kubernetes/manifests (and they are watched in real time: change any of them and the cluster will redeploy the corresponding component automatically).
Useful kubectl commands :
For cluster info (API server address and DNS), run:
kubectl cluster-info
Either way, you can list all api-resources and check them one by one using these commands:
kubectl api-resources (list all api-resource names and types)
kubectl get <api_resource_name> (list the resources of that type in your cluster)
kubectl explain <api_resource_name> (explain the resource object, with a link to the docs)
For extra info you can add specific flags, for example:
kubectl get nodes -o wide
kubectl get pods -n <specific-name-space> -o wide
kubectl describe pods <pod_name>
...
For more information about the kubectl command line, check the kubectl cheat sheet.
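As a small example of kubectl explain, you can also drill into nested fields of a resource with a dotted path:

kubectl explain pods.spec.containers.resources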
I've got some deployments on a basic k8s cluster without defining requests and limits.
Is there any way to check how much the pod is asking for memory and cpu?
If the metrics-server is installed in your cluster, you can use:
kubectl top pod
kubectl top node
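If you are not sure whether the metrics-server is present, a quick check (assuming the default install into the kube-system namespace) is:

kubectl get deployment metrics-server -n kube-system
kubectl get apiservices v1beta1.metrics.k8s.io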
After installing the Metrics Server, you can query the Resource Metrics API directly for the resource usages of pods and nodes:
All nodes in the cluster:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/nodes
A specific node:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/nodes/{node}
All pods in the cluster:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/pods
All pods in a specific namespace:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods
A specific pod:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{pod}
The API returns the absolute CPU and memory usage of the pods and nodes.
From this, you should be able to figure out how much of these resources each pod consumes and how much free capacity is left on each node.
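As a rough sketch, assuming jq is installed and NAMESPACE is replaced with a real namespace, you can turn the raw per-pod metrics into a simple table (one row per container):

kubectl get --raw=/apis/metrics.k8s.io/v1beta1/namespaces/NAMESPACE/pods \
  | jq -r '.items[] | .metadata.name as $pod | .containers[] | [$pod, .name, .usage.cpu, .usage.memory] | @tsv'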
I have noticed that when using kubectl you can pretty much use pod and pods interchangeably. Is there any instance where using one instead of the other could give you different results, or can you just use either without worrying about it?
For example:
kubectl get pods
kubectl get pod
kubectl describe pod/app
kubectl describe pods/app
and so on...
From the kubectl documentation:
Resource types are case-insensitive and you can specify the singular,
plural, or abbreviated forms. For example, the following commands
produce the same output:
kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1
It doesn't matter; both spellings always produce the same result.
I'm looking to manually update the maximum number of replicas for autoscaling with the kubectl autoscale command.
However, each time I run the command it creates a new HPA that fails to launch the pod, and I don't know why at all :(
Do you have an idea how I can manually update my HPA with kubectl?
https://gist.github.com/zyriuse75/e75a75dc447eeef9e8530f974b19c28a
I think you are mixing two topics here. One is manually scaling a pod, which you can do through a deployment by running kubectl scale deploy {mydeploy} --replicas={#repl}. On the other hand you have the HPA (Horizontal Pod Autoscaler); in order to use it you need an app metrics provider configured in your cluster,
e.g:
metrics server
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
heapster (deprecated) https://github.com/kubernetes-retired/heapster
then you can create an HPA to handle your autoscaling; you can get more info at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Once created, you can patch your HPA or (the easiest way) delete it and create it again:
kubectl delete hpa hpa-pod -n ns-svc-cas
kubectl autoscale deployment hpa-pod --min={#number} --max={#number} -n ns-svc-cas
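Alternatively, instead of deleting and recreating it, you can patch the existing HPA in place (reusing the names from the example above; 10 is just an illustrative value):

kubectl patch hpa hpa-pod -n ns-svc-cas -p '{"spec":{"maxReplicas":10}}'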
I'd like to know the current limit on the RAM. (No limit/request was explicitly configured.)
How do I see the current configuration of an existing pod?
[Edit] That configuration would include not only how much memory is now in use, but also the max-limit, the point at which it would be shut down.
(If I blow up the heap with huge strings, I see a limit of approx 4 GB, and the Google Cloud Console shows a crash at 5.4 GB (which of course includes more than the Python interpreter), but I don't know where this comes from. The Nodes have up to 10 GB.)
I tried kubectl get pod id-for-the-pod -o yaml, but it shows nothing about memory.
I am using Google Container Engine.
Use the kubectl top command:
kubectl top pod id-for-the-pod
kubectl top --help
Display Resource (CPU/Memory/Storage) usage.

The top command allows you to see the resource consumption for nodes or pods.

This command requires Heapster to be correctly configured and working on the server.

Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

Usage:
  kubectl top [flags] [options]
The edit in the question asks how to see the max memory limit for an existing pod. This should do it:
kubectl -n <namespace> exec <pod-name> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
Reference: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
With a QoS class of BestEffort (seen in the output of kubectl -n <namespace> get pod <pod-name> -o yaml or kubectl -n <namespace> describe pod <pod-name>), there may be no limit other than the available memory on the node where the pod is running, so the value returned can be a very large number (e.g. 9223372036854771712; see here for an explanation).
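Note that on nodes running cgroup v2 the file layout inside the container is different; assuming the image includes cat, the equivalent check would be the following, which prints "max" when no limit is set:

kubectl -n <namespace> exec <pod-name> -- cat /sys/fs/cgroup/memory.max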
You can use
kubectl top pod POD_NAME
It will show you memory and CPU usage.
[Edit: See comment for more]
As already answered by the community, you can run "kubectl top pod POD_NAME" to see how much memory your pod is using. The actual maximum depends on the available memory of the nodes (you can get an idea of the CPU and memory requests and limits on each node by running "kubectl describe nodes"). Furthermore, the limit of the pod also depends on its own memory requests and limits, as defined in the pod's configuration ("requests" and "limits" specs under "resources"). You can also read this relevant link.
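If you then want to set explicit values, one option (a sketch with made-up numbers and a hypothetical deployment name) is kubectl set resources:

kubectl set resources deployment my-app --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi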
Deploy the Metrics Server in your Kubernetes cluster (Heapster is deprecated) and then use
kubectl top pod POD_NAME
to get pod CPU and memory usages.
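If the Metrics Server is not yet installed, it can typically be deployed from its release manifest (check the metrics-server README for the current instructions for your cluster version):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml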
Answer from a comment by @Artem Timchenko: kubectl -n NAMESPACE describe pod POD_NAME | grep -A 2 "Limits"