Multiple kubectl scale command high CPU usage - kubernetes

I'm trying to run a 'kubectl scale' command on all the deployments in a namespace (around 30).
I'm creating a list of all deployments and then running 'kubectl scale' on them one by one (with xargs / foreach).
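The loop itself is roughly the following (the namespace name is a placeholder):
kubectl get deployments -n <namespace> -o name \
  | xargs -I{} kubectl scale {} --replicas=1 -n <namespace>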
A few technical details:
AWS EKS cluster, version 1.20.
kubectl client version 1.23.5
kubectl server version 1.20.11-eks-f17b81
When I do this, I can see massive CPU usage (in many cases it also causes the app to crash).
I'm currently running it in a kubernetes pod with limits of CPU: 1200m, MEM:2Gi. (however, it's also happening when I run it locally on my computer).
To show the problem, I run the scale commands while also running both "kubectl get deployments" and "kubectl top pod", to show the scaling status of the namespace and the resource usage of the pod running the scale commands.
In this example I'm scaling all deployments replicas from 0 to 1.
As you can see here, there is a massive increase in CPU usage after a few scale commands (in this case around 15, sometimes even fewer).
After all the scale commands had run, and some of the deployments had finished scaling (their pods were created successfully), the CPU usage started to decrease.
And even before all pods were created, the CPU usage returned to its normal level.
So the question is: why does the scale command cause such massive CPU usage?
This command uses (as far as I understand) the kubernetes API, so the actual work doesn't run on the client, so why does it consume so much CPU?
I originally thought that the CPU usage was related to the scale command waiting for the scaling to actually finish (for the pods to be recreated), and that it keeps doing some work in the background until that happens.
However, the drop in CPU usage before all pods were created disproved that.
Thanks,
Afik

Related

How to get iostats of a container running in a pod on Kubernetes?

Memory and CPU resources of a container can be tracked using Prometheus. But can we track the I/O of a container? Are there any metrics available?
If you are using Docker containers you can check the data with the docker stats command (as P... mentioned in the comment). Here you can find more information about this command.
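For example, the following (container name is a placeholder) prints a one-shot snapshot that includes BLOCK I/O and NET I/O columns:
docker stats <container-name> --no-stream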
If you want to check a pod's CPU/memory usage without installing any third-party tool, you can get the memory and CPU usage of the pod from its cgroup.
Open a shell in the pod: kubectl exec -it pod_name -- /bin/bash
cd to /sys/fs/cgroup/cpu; for CPU usage run cat cpuacct.usage
cd to /sys/fs/cgroup/memory; for memory usage run cat memory.usage_in_bytes
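Put together, the steps look roughly like this (pod name is a placeholder; the paths assume cgroup v1):
kubectl exec -it <pod-name> -- /bin/bash
# then, inside the container:
cat /sys/fs/cgroup/cpu/cpuacct.usage            # cumulative CPU time, in nanoseconds
cat /sys/fs/cgroup/memory/memory.usage_in_bytes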
For more, look at this similar question.
Here you can find another interesting question. You should know that
Containers inside pods partially share /proc with the host system, including paths with memory and CPU information.
See also this article about Memory inside Linux containers.
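You can see this directly: reading /proc/meminfo from inside a container typically reports the host's totals rather than the container's memory limit (pod name is a placeholder):
kubectl exec <pod-name> -- cat /proc/meminfo | head -3
# MemTotal/MemFree here reflect the node, not the container's limit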

jvm heap usage history in a killed Kubernetes pod

I have a Play Framework based Java application deployed in kubernetes. One of the pods died due to an out-of-memory error / memory leak. Locally, I can use some utilities to monitor JVM heap usage. I am new to kubernetes.
I'd appreciate it if you could tell me how to check the heap usage history of my application in a Kubernetes pod which got killed. kubectl get events on this killed pod will give the event history, but I want to check object-wise heap usage history on that dead pod. Thanks much
You can install addons or external tools like Prometheus or metrics-server.
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
You can define queries:
For CPU percentage
avg((sum (rate (container_cpu_usage_seconds_total {container_name!="" ,pod="<Pod name>" } [5m])) by (namespace , pod, container ) / on (container , pod , namespace) ((kube_pod_container_resource_limits_cpu_cores >0)*300))*100)
For Memory percentage
avg((avg (container_memory_working_set_bytes{pod="<pod name>"}) by (container_name , pod ))/ on (container_name , pod)(avg (container_spec_memory_limit_bytes>0 ) by (container_name, pod))*100)
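If you prefer to pull the raw series yourself, you can also query the Prometheus HTTP API directly (the service hostname and port below are assumptions, adjust to your setup):
curl -s 'http://prometheus-server:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_working_set_bytes{pod="<pod name>"}'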
Take a look: prometheus-pod-memory-usage.
You can visualize such metrics using Grafana - take a look how to set it up with Prometheus - grafana-prometheus-setup.
Metrics-server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.
You can execute:
$ kubectl top pod <your-pod-name> --namespace=your-namespace --containers
This command will give you both the CPU usage and the memory usage for a given pod and its containers.
See first how to install metrics-server: metrics-server-installation.
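For reference, the usual installation is a single manifest apply (verify the exact version against the linked instructions for your cluster):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml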
Otherwise, if you want to check CPU/memory usage without installing any third-party tool, you can get the memory and CPU usage of the pod from its cgroup.
Open a shell in the running container: kubectl exec -it pod_name -- /bin/bash
cd to /sys/fs/cgroup/cpu; for CPU usage run cat cpuacct.usage
cd to /sys/fs/cgroup/memory; for memory usage run cat memory.usage_in_bytes
Remember that memory usage is in bytes.
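The same values can also be read without an interactive shell, for example (pod name is a placeholder; the paths assume cgroup v1):
kubectl exec <pod-name> -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu/cpuacct.usage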
Take a look: memory-usage-kubernetes.

The metrics of kubectl top nodes is not correct?

I'm trying to get the CPU/memory usage of the k8s cluster nodes via the metrics-server API, but I found that the values returned by metrics-server are lower than the actual CPU/memory usage.
The output of kubectl top command : kubectl top nodes
The following is the output of the free command, from which you can see that the memory usage is greater than 90%.
Why is the difference so high?
kubectl top nodes is reflecting the actual usage of your Kubernetes Nodes.
For example:
Your node has 60GB memory and you actually use 30GB so it will be 50% of usage.
But you can request, for example:
100 MB, and have a memory limit of 200 MB.
This doesn't mean you only consume 0.16% (100 / 60000) of the memory; that is just the amount in your configuration.
I know this is an old topic, but I think the problem still remains.
To answer simply: the kubectl top command shows ONLY the actual resource usage, and it is not related to the request/limit configurations in your manifests.
For example:
you could observe 400m / 1Gi (CPU/memory) of usage on a specific node while the total requests/limits are 1.5 CPU / 4Gi.
Based on kubectl top you would see enough available resources to schedule, but in practice scheduling will not work.
Requests/limits directly affect node resources (they are reservations), but that does not mean they are completely used (which is what kubectl top nodes shows).
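To compare the two views side by side (node name is a placeholder):
kubectl top nodes    # actual usage from metrics-server
kubectl describe node <node-name> | grep -A 8 'Allocated resources'    # requests/limits the scheduler accounts for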

Kubernetes log - from the beginning

I have been running a pod for more than a week and there have been no restarts since it started. But I am still unable to view the logs from when it started; I only get logs for the last two days. Is there any log rotation policy for the container, and how can I control the rotation, e.g. based on size or date?
I tried the command below, but it shows only the last two days of logs.
kubectl logs POD_NAME --since=0
Is there any other way?
Is there any log rotation policy for the container and how to control the rotation like based on size or date
The log rotation is controlled by the docker --log-driver and --log-opts flags (or their daemon.json equivalents), which on any sane system have file size and file count limits to prevent a runaway service from blowing out the disk on the docker host. That answer also assumes you are using docker, but that's a fairly safe assumption.
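As an illustration, a daemon.json along these lines caps each container at five 100 MB log files (the values are illustrative, and dockerd must be restarted afterwards):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "5" }
}
EOF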
Is there any other way?
I strongly advise something like fluentd-elasticsearch, or graylog2, or Sumologic, or Splunk, or whatever in order to egress those logs from the hosts. No serious cluster would rely on infinite log disks nor on using kubectl logs in a for loop to search the output of Pods. To say nothing of egressing the logs from the kubernetes containers themselves, which is almost essential for keeping tabs on the health of the cluster.

Disk Utilization stats per POD in K8

I was looking for ways to get details on disk utilization (mainly writes and deletes) at a per-pod level. Googling turned up suggestions such as cAdvisor/heapster etc., but none of them cover disk usage profiling from a pod perspective.
Any help on this is greatly appreciated.
TIA!
Assuming the pods are running a Linux variant, you can do:
kubectl exec -it <pod> -- cat /proc/1/io
This returns info on the main process's I/O, as defined here.
You could then write a script to run the above command (or use the kubernetes API) for each pod of interest.
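A minimal sketch of such a script (namespace is a placeholder; it assumes each pod's PID 1 is the process of interest):
for pod in $(kubectl get pods -n <namespace> -o name); do
  echo "== ${pod} =="
  kubectl exec -n <namespace> "${pod#pod/}" -- cat /proc/1/io
done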