How much RAM can my Kubernetes pod grow to?

I'd like to know the current limit on the RAM. (No limit/request was explicitly configured.)
How do I see the current configuration of an existing pod?
[Edit] That configuration would include not only how much memory is now in use, but also the max-limit, the point at which it would be shut down.
(If I blow up the heap with huge strings, I see a limit of approx 4 GB, and the Google Cloud Console shows a crash at 5.4 GB (which of course includes more than the Python interpreter), but I don't know where this comes from. The Nodes have up to 10 GB.)
I tried kubectl get pod id-for-the-pod -o yaml, but it shows nothing about memory.
I am using Google Container Engine.

Use the kubectl top command:
kubectl top pod id-for-the-pod
From kubectl top --help:
Display Resource (CPU/Memory/Storage) usage.

The top command allows you to see the resource consumption for nodes or pods.

This command requires Heapster to be correctly configured and working on the server.

Available Commands:
  node    Display Resource (CPU/Memory/Storage) usage of nodes
  pod     Display Resource (CPU/Memory/Storage) usage of pods

Usage:
  kubectl top [flags] [options]

The edit in the question asks how to see the max memory limit for an existing pod. This should do:
kubectl -n <namespace> exec <pod-name> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
Reference: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
With a QoS class of BestEffort (visible in the output of kubectl -n <namespace> get pod <pod-name> -o yaml or kubectl -n <namespace> describe pod <pod-name>), there may be no limit other than the memory available on the node where the pod is running, so the value returned can be a very large number (e.g. 9223372036854771712 - see here for an explanation).
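That very large number is not arbitrary: in cgroup v1 the "no limit" value is the largest positive 64-bit signed integer rounded down to the 4 KiB page size (the kernel tracks the limit in pages). A quick sketch of the arithmetic:

```shell
# "No limit" in cgroup v1 memory.limit_in_bytes is the maximum 64-bit signed
# integer (2^63 - 1) rounded down to a 4096-byte page boundary.
NO_LIMIT=$(( 9223372036854775807 / 4096 * 4096 ))
echo "$NO_LIMIT"   # prints 9223372036854771712
```

So seeing 9223372036854771712 in memory.limit_in_bytes means "unlimited", i.e. the pod is bounded only by the node's memory.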

You can use
kubectl top pod POD_NAME
It will show you memory and CPU usage.

As already answered by the community, you can run "kubectl top pod POD_NAME" to see how much memory your pod is currently using. The effective maximum depends on the available memory of the nodes (running "kubectl describe nodes" gives you an idea of each node's resource requests and limits), and also on the pod's own memory requests and limits, as defined under the "requests" and "limits" specs of "resources" in the pod's configuration.
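To see what (if anything) is configured on an existing pod, you can print each container's resources stanza. A sketch using jsonpath (POD_NAME is a placeholder):

```shell
# Print each container's configured requests/limits; the output is an empty
# object {} for containers where no resources were set.
kubectl get pod POD_NAME \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'
```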

Deploy the Metrics Server in the Kubernetes cluster (Heapster is deprecated) and then use
kubectl top pod POD_NAME
to get pod CPU and memory usage.

Answer from a comment by @Artem Timchenko: kubectl -n NAMESPACE describe pod POD_NAME | grep -A 2 "Limits"

Related

Sort kube pods by memory/cpu per node

Is there a way to combine kubectl top pod and kubectl top nodes?
Basically I want to know pods sorted by cpu/memory usage BY node.
I can only get pods sorted by memory/cpu for whole cluster with kubectl top pod or directly memory/cpu usage per whole node with kubectl top nodes.
I have been checking the documentation but couldn't find the exact command.
There is no built-in solution to achieve this. kubectl top pod and kubectl top node are different commands and cannot be combined with each other. It is possible to sort the results of kubectl top pod only by cpu or by memory:
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort by 'cpu' or 'memory'
If you want to "combine" kubectl top pod and kubectl top node you need to write a custom solution, for example a Bash script based on these commands.
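A small Bash sketch along those lines (assumes metrics-server is installed; note that filtering the top output with grep on pod names is approximate, since one pod name can be a prefix of another):

```shell
#!/usr/bin/env bash
# For every node, list the pods scheduled on it, sorted by memory usage.
for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  echo "=== $node ==="
  # Names of pods scheduled on this node, one per line.
  kubectl get pods --all-namespaces \
    --field-selector spec.nodeName="$node" -o name | cut -d/ -f2 \
    > /tmp/pods-on-node.txt
  # Filter the cluster-wide 'kubectl top pod' output down to those pods.
  kubectl top pod --all-namespaces --sort-by=memory --no-headers \
    | grep -F -f /tmp/pods-on-node.txt
done
```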

Is there a known method to decide auto scaling threshold value?

Is there a known method / keyword / topic for deciding an auto-scaling threshold value?
Take the K8s HPA below for example: I only know that I can install some monitoring tool and then eyeball the memory usage shown on a graph to decide a proper threshold value like 100Mi. But why not set it to 99Mi, or 101Mi? This method seems too manual.
- type: Resource
  resource:
    name: memory
    target:
      type: AverageValue
      averageValue: 100Mi
As my background is not in computer science, I want to ask:
Is there a known method for solving this kind of problem?
Or what kind of course would cover this problem?
Or what is the keyword to search for in academic articles?
In order to display this information without any graph you can use metrics server. Running it in your cluster makes it possible to get usage for nodes and individual pods through the kubectl top command.
Here's an example where I'm checking the node resources:
➜ ~ kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   580m         28%    1391Mi          75%
And for a pod:
➜ ~ kubectl top pod
NAME        CPU(cores)   MEMORY(bytes)
front-end   0m           28Mi
You can also see resource usages across individual containers instead of pods using the --containers option.
I assume that if you use HPA you already have this installed, but it's worth knowing that if you use minikube you can easily enable the metrics server with minikube addons enable metrics-server. If you bootstrap your cluster using kubeadm then you have to install it and configure it with all of its requirements in order for it to run correctly.
Lastly, you can always check a pod's usage manually by exec-ing into it:
kubectl exec -it <name_of_the_pod> -- top
You can check here for more information about autoscalers.

Profiling Kubernetes Deployment Process

I'm new to Kubernetes and currently I'm researching profiling in Kubernetes. I want to log the deployment process in Kubernetes (creating a pod, restarting a pod, etc.) and to know the time and resources (RAM, CPU) needed in each step (for example when downloading the image, creating the deployment, pod, etc.).
Is there a way or tool for me to log this process? Thank you!
I am not really sure you can achieve the outcome you want without extensive knowledge about certain components and some deep dive coding.
What can be retrieved from Kubernetes:
Information about events
Like pod creation, termination, allocation with timestamps:
$ kubectl get events --all-namespaces
Even in the json format there is nothing about CPU/RAM usage in these events.
Information about pods
$ kubectl get pods POD_NAME -o json
No information about CPU/RAM usage.
$ kubectl describe pods POD_NAME
No information about CPU/RAM usage either.
Information about resource usage
There are some tools to monitor and report basic resource usage:
$ kubectl top node
With output:
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
MASTER    90m          9%     882Mi           33%
WORKER1   47m          5%     841Mi           31%
WORKER2   37m          3%     656Mi           24%
$ kubectl top pods --all-namespaces
With output:
NAMESPACE   NAME                           CPU(cores)   MEMORY(bytes)
default     nginx-local-84ddb99b55-2nzdb   0m           1Mi
default     nginx-local-84ddb99b55-nxfh5   0m           1Mi
default     nginx-local-84ddb99b55-xllw2   0m           1Mi
There is CPU/RAM usage but in basic form.
Information about deployments
$ kubectl describe deployment deployment_name
Provided output gives no information about CPU/RAM usage.
Getting information about resources
Getting resources like CPU/RAM usage specific to some actions like pulling the image or scaling the deployment could be problematic. Not all processes are managed by Kubernetes and additional tools at OS level might be needed to fetch that information.
For example, pulling an image for a deployment engages the kubelet agent as well as the CRI to talk to Docker or whatever Container Runtime your cluster is using. On top of that, the Container Runtime not only downloads the image, it performs other actions that are not directly monitored by Kubernetes.
As another example, the HPA (Horizontal Pod Autoscaler) is a Kubernetes abstraction, and getting its metrics would depend heavily on how metrics are collected in the cluster in order to determine the best way to fetch them.
I would highly encourage you to share what exactly (case by case) you want to monitor.
You can find these in the events feed for the pod, check kubectl describe pod.

Check pod resources consumption

I've got a deployment on a basic k8s cluster without defining requests and limits.
Is there any way to check how much the pod is asking for memory and cpu?
Depending on whether the metrics-server is installed in your cluster, you can use:
kubectl top pod
kubectl top node
After installing the Metrics Server, you can query the Resource Metrics API directly for the resource usages of pods and nodes:
All nodes in the cluster:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/nodes
A specific node:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/nodes/{node}
All pods in the cluster:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/pods
All pods in a specific namespace:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods
A specific pod:
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{pod}
The API returns you the absolute CPU and memory usages of the pods and nodes.
From this, you should be able to figure out how much resources each pod consumes and how much free resources are left on each node.
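For instance, the raw pod metrics can be post-processed with jq (a sketch; assumes metrics-server and jq are available):

```shell
# Print "namespace/pod: container=memory ..." for every pod,
# straight from the raw Resource Metrics API.
kubectl get --raw=/apis/metrics.k8s.io/v1beta1/pods \
  | jq -r '.items[]
      | "\(.metadata.namespace)/\(.metadata.name): "
        + (.containers | map("\(.name)=\(.usage.memory)") | join(" "))'
```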

How do I resolve /kubepods/besteffort/pod<uuid> to a pod name?

I'm looking at Prometheus metrics in a Grafana dashboard, and I'm confused by a few panels that display metrics based on an ID that is unfamiliar to me. I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c points to a single pod, and I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c/<another-long-string> resolves to a container in the pod, but how do I resolve this ID to a pod name and container, i.e. how do I map this ID to the pod name I see when I run kubectl get pods?
I already tried running kubectl describe pods --all-namespaces | grep "99b2fe2a-104d-11e8-baa7-06145aa73a4c" but that didn't turn up anything.
Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths?
Lastly, where can I learn more about what manages /kubepods?
Prometheus Query:
sum (container_memory_working_set_bytes{id!="/",kubernetes_io_hostname=~"^$Node$"}) by (id)
/
Thanks for reading.
Eric
OK, now that I've done some digging around, I'll attempt to answer all 3 of my own questions. I hope this helps someone else.
How do I map this ID to the pod name I see when I run kubectl get pods?
Given the following, /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c, the last bit is the pod UID, and can be resolved to a pod by looking at the metadata.uid property on the pod manifest:
kubectl get pod --all-namespaces -o json | jq '.items[] | select(.metadata.uid == "99b2fe2a-104d-11e8-baa7-06145aa73a4c")'
Once you've resolved the UID to a pod, you can resolve the second UID (the container ID) to a container by matching it against .status.containerStatuses[].containerID in the pod manifest:
~$ kubectl get pod my-pod-6f47444666-4nmbr -o json | jq '.status.containerStatuses[] | select(.containerID == "docker://5339636e84de619d65e1f1bd278c5007904e4993bc3972df8628668be6a1f2d6")'
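Putting the first lookup into a small helper (a sketch; resolve_cgroup_pod is a hypothetical name) that turns a cgroup path into namespace/pod:

```shell
# Resolve a cgroup path like /kubepods/burstable/pod<uid> to "namespace/pod".
resolve_cgroup_pod() {
  local uid="${1##*/pod}"   # keep only what follows the trailing ".../pod"
  kubectl get pods --all-namespaces -o json \
    | jq -r --arg uid "$uid" \
        '.items[]
         | select(.metadata.uid == $uid)
         | "\(.metadata.namespace)/\(.metadata.name)"'
}
# Usage: resolve_cgroup_pod /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c
```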
Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths?
Burstable, BestEffort, and Guaranteed are Quality of Service (QoS) classes that Kubernetes assigns to pods based on the memory and cpu allocations in the pod spec. More information on QoS classes can be found here https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/.
To quote:
For a Pod to be given a QoS class of Guaranteed:
Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same.
A Pod is given a QoS class of Burstable if:
The Pod does not meet the criteria for QoS class Guaranteed.
At least one Container in the Pod has a memory or cpu request.
For a Pod to be given a QoS class of BestEffort, the Containers in the
Pod must not have any memory or cpu limits or requests.
Lastly, where can I learn more about what manages /kubepods?
/kubepods/burstable, /kubepods/besteffort, and /kubepods/guaranteed are all part of the cgroup hierarchy, which lives in the /sys/fs/cgroup directory. Cgroups are what manage resource usage - CPU, memory, disk I/O, network - for container processes. Each resource has its own place in the cgroup hierarchy filesystem, and each resource sub-directory contains /kubepods subdirectories. More info on cgroups and Docker containers here: https://docs.docker.com/config/containers/runmetrics/#control-groups
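On a node running cgroup v1 with the cgroupfs driver, this hierarchy can be inspected directly (a sketch; exact paths vary with the cgroup driver and Kubernetes version - the systemd driver uses .slice names - and pod<uid> is a placeholder):

```shell
# List the QoS-class subtrees of the memory controller (cgroup v1 layout).
ls /sys/fs/cgroup/memory/kubepods/
# Read the memory limit of one pod's cgroup.
cat /sys/fs/cgroup/memory/kubepods/burstable/pod<uid>/memory.limit_in_bytes
```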