Is the sum of containers' usage equal to the pod's memory usage in Kubernetes?

I have two commands:
1. kubectl top pod $podName --no-headers
2. kubectl top pod $podName --containers --no-headers
For a pod consisting of only one container, the pod's and the container's memory and CPU usage are the same.
However, for a pod with multiple containers, the sum of the containers' resource usage sometimes does not equal the pod's resource usage, e.g.
CPU
pod: 2m
container1: 1m
container2: 2m
According to the official Kubernetes documentation, 1m is the finest precision that can be represented for fractional CPU.
For the above case, my guess is:
it may be due to quantization of the containers' individual resource values, e.g. 0.0005 CPU being rounded up to 1m CPU
the pod total is calculated from the non-quantized resource values
For example, if container1 actually uses 0.3m and container2 uses 1.5m, the individually rounded-up values are 1m and 2m (summing to 3m), while the pod total of 1.8m rounds up to just 2m.
But even so, this is just my guess, and I could not find anything in the official documentation explaining whether the pod total is computed from non-quantized resource values or not.
I would appreciate it if anyone could explain, with documentation links, the difference between pod resource usage and the sum of the containers' resource usage, and which one is the more accurate to use.
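For reference, this is how I compare the two views side by side (a rough check; it assumes CPU is reported in millicores and the usual column layout of kubectl top output):

# pod-level total
$ kubectl top pod $podName --no-headers

# sum of the per-container CPU values (column 3 of the --containers output)
$ kubectl top pod $podName --containers --no-headers \
  | awk '{sub(/m$/, "", $3); sum += $3} END {print sum "m"}'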

Related

How to get CPU Utilization, Memory Utilization of namespaces, pods, services in kubernetes?

For services, pods, and namespaces we don't get CPU utilization or memory utilization metrics; using CoreV1Api we get memory request, CPU request, memory allocated, and CPU allocated.
How do we get utilization for the above?
For the cluster, we averaged the CPU utilization of the nodes (EC2 instance IDs), so we have CPU/memory utilization for the cluster. So can we assume the CPU utilization for a pod will be the utilization of the node on which it is running? And for a namespace, of the node on which it exists?
You can get resource requests, limits, and utilization in a couple of ways:
$ kubectl top pods or $ kubectl top nodes. You can also check quotas, or use kubectl describe node/pod to see the information inside.
You can also restrict the output to pods from only one namespace, like kubectl top pod --namespace=kube-system.
To do that you will need the metrics server, which is usually installed from the start. To check whether it is installed, list the pods in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
...
metrics-server-v0.3.1-57c75779f-wsmfk 2/2 Running 0 6h24m
...
Then you can check current metrics in a few ways. Check this thread. When you list raw metrics, it is good to pipe them through jq:
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq .
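The same API also exposes per-pod metrics; for example, reduced with jq to just names and per-container usage (the namespace here is only an example):

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods \
  | jq '.items[] | {pod: .metadata.name, usage: [.containers[].usage]}'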
Another thing is that you could use Prometheus for metrics and alerting (depending on your needs). If you want only CPU and memory utilization, the metrics server is enough; however, Prometheus also installs custom.metrics, which will allow you to get metrics from all Kubernetes objects.
Later you can also install a UI like Grafana.
So can we assume CPU utilization for a pod will be the utilization of the node on which it is running?
CPU utilization for a node shows the utilization of all resources assigned to that node, even if the pods are in different namespaces, so it cannot be read as the utilization of any single pod or namespace.
I would encourage you to check this article.

Profiling Kubernetes Deployment Process

I'm new to Kubernetes and currently I'm researching profiling in Kubernetes. I want to log the deployment process in Kubernetes (creating a pod, restarting a pod, etc.) and to know the time and resources (RAM, CPU) needed in each step (for example when downloading the image, building the deployment, pod, etc.).
Is there a way or tool for me to log this process? Thank you!
I am not really sure you can achieve the outcome you want without extensive knowledge of certain components and some deep-dive coding.
What can be retrieved from Kubernetes:
Information about events
Like pod creation, termination, and allocation, with timestamps:
$ kubectl get events --all-namespaces
Even in JSON format there is nothing about CPU/RAM usage in these events.
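If timestamps are what you are after, you can at least order the events by creation time:

$ kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp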
Information about pods
$ kubectl get pods POD_NAME -o json
No information about CPU/RAM usage.
$ kubectl describe pods POD_NAME
No information about CPU/RAM usage either.
Information about resource usage
There are some tools to monitor and report basic resource usage:
$ kubectl top node
With output:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
MASTER 90m 9% 882Mi 33%
WORKER1 47m 5% 841Mi 31%
WORKER2 37m 3% 656Mi 24%
$ kubectl top pods --all-namespaces
With output:
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default nginx-local-84ddb99b55-2nzdb 0m 1Mi
default nginx-local-84ddb99b55-nxfh5 0m 1Mi
default nginx-local-84ddb99b55-xllw2 0m 1Mi
This shows CPU/RAM usage, but only in a basic form.
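If you need a per-container breakdown rather than per-pod totals, kubectl top also accepts a --containers flag:

$ kubectl top pods --all-namespaces --containers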
Information about deployments
$ kubectl describe deployment deployment_name
The provided output gives no information about CPU/RAM usage.
Getting information about resources
Getting resources like CPU/RAM usage specific to some actions, like pulling the image or scaling the deployment, could be problematic. Not all processes are managed by Kubernetes, and additional tools at the OS level might be needed to fetch that information.
For example, pulling an image for a deployment engages the kubelet agent as well as the CRI to talk to Docker or whichever other container runtime your cluster is using. On top of that, the container runtime not only downloads the image, it also performs other actions that are not directly monitored by Kubernetes.
As another example, the HPA (Horizontal Pod Autoscaler) is a Kubernetes abstraction, and getting its metrics would depend heavily on how metrics are collected in the cluster, in order to determine the best way to fetch them.
I would highly encourage you to share what exactly (case by case) you want to monitor.
You can find these in the events feed for the pod, check kubectl describe pod.
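For example (POD_NAME is a placeholder):

# events for a single pod, via describe...
$ kubectl describe pod POD_NAME

# ...or via the events API directly
$ kubectl get events --field-selector involvedObject.name=POD_NAME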

What is the default memory allocated for a pod

I am setting up a pod, say test-pod, on Google Kubernetes Engine. When I deploy the pod and look at the workload in the Google console, I can see 100m CPU allocated to my pod by default, but I am not able to see how much memory my pod has consumed: the memory requested section always shows 0 there. I know we can restrict memory limits and initial allocation in the deployment YAML. But I want to know how much memory a pod gets allocated by default when no values are specified through YAML, and what is the maximum limit it can use?
If you have no resource requests on your pod, it can be scheduled anywhere at all, even on the busiest node in your cluster, as though you had requested 0 memory and 0 CPU. If you have no resource limits, it can consume all available memory and CPU on its node.
(If it’s not obvious, realistic resource requests and limits are a best practice!)
You can set limits on individual pods.
If not, you can set limits on the overall namespace.
The default is no limits.
But there are some tricks. Here is a very nice view of this:
https://blog.balthazar-rouberol.com/allocating-unbounded-resources-to-a-kubernetes-pod
When deploying a pod in a Kubernetes cluster, you normally have two choices when it comes to resource allotment:
defining CPU/memory resource requests and limits at the pod level
defining default CPU/memory requests and limits at the namespace level using a LimitRange
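A minimal sketch of the first option, with requests and limits set directly on the container (names and values are arbitrary examples):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
EOF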
From the Docker documentation (assuming you are using the Docker runtime):
By default, a container has no resource constraints and can use as
much of a given resource as the host’s kernel scheduler will allow
https://docs.docker.com/v17.09/engine/admin/resource_constraints/
Kubernetes pods' CPU and memory usage can be seen using the metrics-server service and the kubectl top pod command:
$ kubectl top --help
...
Available Commands:
...
pod Display Resource (CPU/Memory/Storage) usage of pods
...
Example in Minikube below:
$ minikube addons enable metrics-server
# wait 5 minutes for metrics-server to be up and running
$ kubectl top pod -n=kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-fb8b8dccf-6t5k8 6m 10Mi
coredns-fb8b8dccf-sjkvc 5m 10Mi
etcd-minikube 37m 60Mi
kube-addon-manager-minikube 17m 20Mi
kube-apiserver-minikube 55m 201Mi
kube-controller-manager-minikube 30m 46Mi
kube-proxy-bsddk 1m 11Mi
kube-scheduler-minikube 2m 12Mi
metrics-server-77fddcc57b-x2jx6 1m 12Mi
storage-provisioner 0m 15Mi
tiller-deploy-66b7dd976-d8hbk 0m 13Mi
This link has more information.
Kubernetes doesn’t provide default resource limits out-of-the-box. This means that unless you explicitly define limits, your containers can consume unlimited CPU and memory.
More details here: https://medium.com/@reuvenharrison/kubernetes-resource-limits-defaults-and-limitranges-f1eed8655474
The real problem in many of these cases is not that the nodes are too small, but that we have not accurately specified resource limits for the pods.
Resource limits are set on a per-container basis using the resources property of a containerSpec, which is a v1 api object of type ResourceRequirements. Each object specifies both “limits” and “requests” for the types of resources.
If you do not specify a memory limit for a container, one of the following situations applies:
The container has no upper bound on the amount of memory it uses. The container could use all of the memory available on the Node where it is running which in turn could invoke the OOM Killer. Further, in case of an OOM Kill, a container with no resource limits will have a greater chance of being killed.
The container is running in a namespace that has a default memory limit, and the container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the memory limit.
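A minimal sketch of such a LimitRange (namespace and values are illustrative):

$ kubectl apply -n my-namespace -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults
spec:
  limits:
  - type: Container
    default:            # default limit applied when none is set
      memory: 512Mi
    defaultRequest:     # default request applied when none is set
      memory: 256Mi
EOF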
When you set a limit, but not a request, kubernetes defaults the request to the limit. If you think about it from the scheduler’s perspective it makes sense.
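You can see this defaulting on any pod that declares only limits (POD_NAME is a placeholder):

# requests will show up equal to the limits even though only limits were set
$ kubectl get pod POD_NAME -o jsonpath='{.spec.containers[0].resources}'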
It is important to set correct resource requests: setting them too low means nodes can get overloaded; setting them too high means nodes will sit idle.
Useful article: memory-limits.

How to dump the resource (CPU, memory) usage per namespace in k8s?

I have a list of namespaces created under the same k8s cluster and I'd like to find out the resource (CPU, memory) usage per namespace. Is there any command I can use?
Yes. You can use
$ kubectl -n <namespace> top pod
For example:
$ kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
calico-node-xxxxx 17m 166Mi
coredns-xxxxxxxxxx-xxxxx 2m 11Mi
coredns-xxxxxxxxxx-xxxxx 3m 11Mi
etcd-ip-x-x-x-x.us-west-2.compute.internal 19m 149Mi
kube-apiserver-ip-x-x-x-x.us-west-2.compute.internal 39m 754Mi
kube-controller-manager-ip-x-x-x-x.us-west-2.compute.internal 20m 138Mi
kube-proxy-xxxxx 5m 12Mi
kube-scheduler-ip-x-x-x-x.us-west-2.compute.internal 6m 17Mi
metrics-server-xxxxxxxxxx-xxxxx 0m 15Mi
You need to add up all the entries on the CPU and MEMORY columns if you want the total.
Note that for kubectl top to work you need to have the metrics-server set up and configured appropriately. (Older clusters use heapster.)
Write a shell script that gets all namespaces in the cluster, iterates through them, runs kubectl top pod in each one, and adds up the CPU and memory of all pods in the namespace, as in the sketch below.
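A rough sketch of such a script (it assumes metrics-server is running and that CPU is reported in millicores and memory in Mi, as in the output above):

#!/bin/bash
# Sum kubectl top output per namespace.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl top pod -n "$ns" --no-headers 2>/dev/null \
    | awk -v ns="$ns" '
        { sub(/m$/,  "", $2); cpu += $2 }    # CPU(cores) column, e.g. "17m"
        { sub(/Mi$/, "", $3); mem += $3 }    # MEMORY(bytes) column, e.g. "166Mi"
        END { printf "%-20s %6dm %8dMi\n", ns, cpu, mem }'
done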
Thanks Rico, the answer is good but just as an addition:
You can specify resource quotas and then view them as specified here.
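For example, to create and then inspect a quota (names and values are only illustrative):

# create a quota in a namespace...
$ kubectl create quota my-quota -n my-namespace \
    --hard=requests.cpu=2,requests.memory=2Gi,limits.cpu=4,limits.memory=4Gi

# ...and view current usage against it
$ kubectl describe quota my-quota -n my-namespace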
Other than that, there are external monitoring tools like Prometheus. Also, there is a Resource Explorer which can:
Display historical statistical resource usage from StackDriver.
https://github.com/kubernetes/kubernetes/issues/55046
List resource QoS allocation to pods in a cluster. Inspired by:
https://github.com/kubernetes/kubernetes/issues/1751
The case is still open on GitHub, but it seems there should be some changes eventually, as one of the contributors states there is a plan to remove kubectl top in favor of native solutions, so I advise following this thread.

How do I resolve /kubepods/besteffort/pod<uuid> to a pod name?

I'm looking at Prometheus metrics in a Grafana dashboard, and I'm confused by a few panels that display metrics based on an ID that is unfamiliar to me. I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c points to a single pod, and I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c/<another-long-string> resolves to a container in the pod, but how do I resolve these IDs to a pod name and a container, i.e. how do I map this ID to the pod name I see when I run kubectl get pods?
I already tried running kubectl describe pods --all-namespaces | grep "99b2fe2a-104d-11e8-baa7-06145aa73a4c" but that didn't turn up anything.
Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths?
Lastly, where can I learn more about what manages /kubepods?
Prometheus Query:
sum (container_memory_working_set_bytes{id!="/",kubernetes_io_hostname=~"^$Node$"}) by (id)
Thanks for reading.
Eric
OK, now that I've done some digging around, I'll attempt to answer all 3 of my own questions. I hope this helps someone else.
How do I map this ID to the pod name I see when I run kubectl get pods?
Given the following, /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c, the last bit is the pod UID, and can be resolved to a pod by looking at the metadata.uid property on the pod manifest:
kubectl get pod --all-namespaces -o json | jq '.items[] | select(.metadata.uid == "99b2fe2a-104d-11e8-baa7-06145aa73a4c")'
Once you've resolved the UID to a pod, you can resolve the second ID (the container ID) to a container by matching it against .status.containerStatuses[].containerID in the pod manifest:
~$ kubectl get pod my-pod-6f47444666-4nmbr -o json | jq '.status.containerStatuses[] | select(.containerID == "docker://5339636e84de619d65e1f1bd278c5007904e4993bc3972df8628668be6a1f2d6")'
Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths?
Burstable, BestEffort, and Guaranteed are Quality of Service (QoS) classes that Kubernetes assigns to pods based on the memory and cpu allocations in the pod spec. More information on QoS classes can be found here https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/.
To quote:
For a Pod to be given a QoS class of Guaranteed:
Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same.
A Pod is given a QoS class of Burstable if:
The Pod does not meet the criteria for QoS class Guaranteed.
At least one Container in the Pod has a memory or cpu request.
For a Pod to be given a QoS class of BestEffort, the Containers in the
Pod must not have any memory or cpu limits or requests.
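You can also read the class Kubernetes assigned directly from the pod status (POD_NAME is a placeholder):

# prints Guaranteed, Burstable or BestEffort
$ kubectl get pod POD_NAME -o jsonpath='{.status.qosClass}'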
Lastly, where can I learn more about what manages /kubepods?
/kubepods/burstable, /kubepods/besteffort, and /kubepods/guaranteed are all part of the cgroup hierarchy, which is located under the /sys/fs/cgroup directory. cgroups are what manage resource usage for container processes: CPU, memory, disk I/O, and network. Each resource has its own place in the cgroup hierarchy filesystem, and in each resource sub-directory there are /kubepods subdirectories. More info on cgroups and Docker containers here: https://docs.docker.com/config/containers/runmetrics/#control-groups
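As an illustration, on a node using cgroup v1 you can read a pod's current memory usage straight out of that hierarchy (the exact layout varies with cgroup version and container runtime, so treat this as a sketch):

$ cat /sys/fs/cgroup/memory/kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c/memory.usage_in_bytes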