Do Kubernetes pods give memory back after acquiring more than the requested amount?

I am trying to understand the behavior of K8S pod memory allocation, and so far I have had no luck with the materials I have read on the internet.
My question is: if I have a pod template defined with the below values for memory
Limits:
  cpu: 2
  memory: 8Gi
Requests:
  cpu: 500m
  memory: 2Gi
And say my application suddenly requires more memory, so the pod grows from its initial 2Gi to 4Gi to get the task done. Would the pod give the extra 2Gi back to the underlying OS and become a 2Gi pod again after the task is complete, or would it keep functioning as a 4Gi pod afterward?
My application is a Java application running on Apache Tomcat, with the max heap set to 6Gi.
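For reference, a heap cap like the one described is typically passed to Tomcat through an environment variable such as CATALINA_OPTS; a minimal sketch of the relevant container fields (the name and image are illustrative, not taken from the question):
containers:
  - name: tomcat              # illustrative name
    image: tomcat:9           # placeholder image
    env:
      - name: CATALINA_OPTS   # Tomcat appends this to its JVM options at startup
        value: "-Xmx6g"       # max heap of 6Gi, kept below the 8Gi pod memory limit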

The Kubernetes resource requests come into effect at essentially three points:
When new pods are initially scheduled, the resource requests (only) are used to find a node with enough space. The sum of requests must fit within the node's allocatable memory; limits and actual utilization aren't considered.
If the process tries to allocate memory and this would bring the container's total usage above the pod's limit, the allocation won't succeed: the limit is enforced by the kernel, and the container is typically OOM-killed and restarted.
If the node runs out of memory, Kubernetes will look through the pods on that node and evict the pods whose actual usage most exceeds their requests.
Say you have a node with 16 GiB of memory. You run this specific pod in a Deployment with replicas: 8; they would all fit on the node, and for the sake of argument let's say Kubernetes puts them all there. Regardless of what the pods are doing, a 9th pod wouldn't fit on the node because the memory requests would exceed the physical memory.
If your pod goes ahead and allocates a total of 4 GiB of memory, that's fine so long as the physical system has the memory for it. If the node runs out of memory, though, Kubernetes will see this pod has used 2 GiB more than its request; that could result in the pod getting evicted (destroyed and recreated, probably on a different node).
If the process did return the memory back to the OS, that would show up in the "actual utilization" part of the metric; since its usage would now be less than its requests, it would be in less danger of getting evicted if the node did run out of memory. (Many garbage-collected systems will hold on to OS memory as long as they can and reuse it, though; see e.g. Does GC release back memory to OS?.)
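As a concrete sketch of the scheduling math above, the question's pod running as a Deployment with replicas: 8 might be declared roughly like this (the name, labels, and image are illustrative placeholders, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app            # placeholder name
spec:
  replicas: 8                 # 8 x 2Gi of requests fills the example node's 16 GiB
  selector:
    matchLabels:
      app: tomcat-app
  template:
    metadata:
      labels:
        app: tomcat-app
    spec:
      containers:
        - name: tomcat        # placeholder name
          image: tomcat:9     # placeholder image
          resources:
            requests:
              cpu: 500m
              memory: 2Gi
            limits:
              cpu: 2
              memory: 8Gi
A ninth replica would push the total requests to 18 GiB and stay Pending, regardless of how little memory the existing pods actually use.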

Related

kubernetes pod resource cpu on nodes with different cpu cores count

This is a bit crazy, but we run a kubernetes cluster with 4 nodes (w/ Docker as container engine):
node01/node02: 8 cores
node03/node04: 4 cores
I am confused about what CPU a pod's cpu resource request actually gives to a containerized application.
In my understanding, pods from a deployment that request 1 CPU will all have the same cpu shares, so does this mean a container will run faster on node01/node02 than on node03/node04?
Not necessarily:
If the application is single-threaded, it will run at the same speed no matter how many cores the system it's on has.
If the application is disk- or database-bound, adding more cores won't make it go faster.
If other pods (or non-Kubernetes processes) are running on either of the nodes, those share the CPU resource, and a busy 8-core system could in practice be slower than an idle 4-core system.
If the pod spec has resource requests, it could be prevented from running on the smaller system:
resources:
  requests:
    cpu: 6 # can't run on the 4-core system
If the pod spec has resource limits, that can prevent it from using all of the cores, even if it's scheduled on the larger system:
resources:
  limits:
    cpu: 3 # even if it's scheduled on the 8-core system
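Putting the pieces together, a complete (simplified) pod spec with both a request and a limit might look like this; the name and image are placeholders, not from the original question:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo              # placeholder name
spec:
  containers:
    - name: app               # placeholder name
      image: nginx            # placeholder image
      resources:
        requests:
          cpu: "1"            # the "1 CPU" request from the question
        limits:
          cpu: "3"            # caps usage even on the 8-core nodes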

Kubernetes: cpu request and total resources doubts

To better explain my doubts, I will give an example.
Example:
We have one worker node with 3 allocatable cpus and kubernetes has scheduled three pods on it:
pod_1 with 500m cpu request
pod_2 with 700m cpu request
pod_3 with 300m cpu request
On this worker node I can't schedule any other pods.
But if I check the real usage:
pod_1 cpu usage is 300m
pod_2: cpu usage is 600m
My question is:
Can pod_3 have a real usage of 500m or the request of other pods will limit the cpu usage?
Thanks
Pietro
It doesn't matter what the real usage is - the "request" is the amount of resources guaranteed to be available for the pod. Your workload might be using only a fraction of the requested resources, but what really counts is the "request" itself.
Example - Let's say you have a node with 1CPU core.
Pod A - 100m Request
Pod B - 200m Request
Pod C - 700m Request
Now, no further pod can be scheduled on the node, because the whole 1 CPU is already requested by the 3 pods. It doesn't matter what fraction of its allocated resources each pod is actually using at any given time.
Another point worth noting is the "Limit". A workload may use more than it requested, but it cannot surpass the "Limit". This is a very important mechanism to understand.
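Expressed as a (simplified) pod spec, Pod A from the example above might look like this; the name and image are placeholders, and Pods B and C would differ only in the cpu value:
apiVersion: v1
kind: Pod
metadata:
  name: pod-a                 # placeholder name
spec:
  containers:
    - name: app               # placeholder name
      image: nginx            # placeholder image
      resources:
        requests:
          cpu: 100m           # Pod B would request 200m, Pod C 700m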
Kubernetes will schedule the pods based on the request that you configure for the container(s) of the pod (via the spec of the respective Deployment or other workload kinds).
Here's an example:
For simplicity, let's assume only one container for the pod.
containers:
  - name: "foobar"
    resources:
      requests:
        cpu: "300m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
If you ask for 300 millicpus as your request, Kubernetes will place the pod on a node that has at least 300 millicpus allocatable to that pod. If a node has less allocatable CPU available, the pod will not be placed on that node. Similarly, you can also set the value for memory request as well.
The limit works to cap the resource use by the container. In the example above, if the container ends up using more than 512MiB of memory, it is OOM-killed and restarted; exceeding a memory limit does not move the pod to another node. Rescheduling only comes into play at scheduling time: if no node has at least 300 millicpus allocatable, the pod remains in Pending state with FailedScheduling as the reason until a node with sufficient capacity is available.
Do note that the resource request works only at the time of pod scheduling, not at runtime (meaning the actual consumption of resources will not trigger a rescheduling of the pod, even if the container uses more than it requested, as long as it stays below the limit, if one is specified).
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-requests-are-scheduled
So, in summary:
The total of all your requests is what counts against a node's allocatable capacity, regardless of the actual runtime utilization of your pods (as long as the limit is not crossed).
You can request 300 millicpus but only use 100 millicpus, or use 400 millicpus; Kubernetes will still show the "allocated" value as 300.
If your container crosses its memory limit, it gets OOM-killed and restarted (CPU usage above the limit is simply throttled).

Kubernetes Pod OOMKilled Issue

The scenario is we run some web sites based on an nginx image.
Initially our cluster was set up with nodes of 2 cores and 4GB RAM each.
The pods had the following configuration: cpu: 40m and memory: 100MiB.
Later, we upgraded our cluster to nodes with 4 cores and 8GB RAM each.
But we kept getting OOMKilled in every pod.
So we increased the memory on every pod to around 300MiB, and then everything seemed to work fine.
My question is why this happens and how to solve it.
P.S. If we revert back to each node having 2 cores and 4GB RAM, the pods work just fine with the decreased memory of 100MiB.
Any help would be highly appreciated.
Regards.
For each container in kubernetes you can configure resources for both cpu and memory, like the following:
resources:
  limits:
    cpu: 100m
    memory: "200Mi"
  requests:
    cpu: 50m
    memory: "100Mi"
According to the documentation:
When you specify the resource request for Containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a Container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set.
So if you set memory: "100Mi" under resources: limits and your container consumes more than 100MiB of memory, it will be killed with an out-of-memory (OOM) error.
For more details about requests and limits on resources, see the Kubernetes documentation on managing resources for containers.
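As a rough sketch of the change described in the question (the question does not say whether 100MiB was the request or the limit; this assumes it was the limit, since hitting the limit is what produces OOMKilled):
resources:
  requests:
    cpu: 40m
    memory: "100Mi"
  limits:
    memory: "300Mi"   # raised from 100Mi, which stopped the OOMKills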

K8s memory request handling for 2 or more pods

I am trying to understand memory requests in k8s. I have observed that when I set a memory request for a pod, e.g. nginx, equal to 1Gi, it actually consumes only 1Mi (I checked with kubectl top pods). My question: I have 2Gi RAM on a node and set the memory request for pod1 and pod2 to 1.5Gi each, but they actually consume only 1Mi of memory. I start pod1 and it should start, because the node has 2Gi of memory and pod1 requests only 1.5Gi. But what happens if I try to start pod2 after that? Would it start? I am not sure, because pod1 consumes only 1Mi of memory but has a request for 1.5Gi. Does the memory request of pod1 influence the scheduling of pod2? How will k8s handle this situation?
Memory request is the amount of memory that kubernetes sets aside for a pod. If a pod requests some amount of memory, there is a strong guarantee that it will get it. This is why you can't create pod1 with a 1.5Gi request and pod2 with a 1.5Gi request on a 2Gi node: if kubernetes allowed it and both pods started using their requested memory, kubernetes would not be able to satisfy the guarantee, and that is unacceptable.
This is why the sum of the requests of all pods running on a specific node cannot exceed that node's memory.
"But what happens if I try to start pod2 after that? [...] How will k8s handle this situation?"
If you have only one node with 2Gi of memory then pod2 won't start. You would see that this pod is in Pending state, waiting for resources. If you have spare resources on different node then kubernetes would schedule pod2 to this node.
Let me know if something is not clear and needs more explanation.
Request is the amount of a resource reserved for a container; Limit is the maximum the container is allowed to use. If you try to start two pods with 1.5Gi requests on a machine with 2Gi, the second one will not start due to the lack of resources it needs to reserve. You need to set requests lower - to the average expected consumption of the pod - and some reasonable Limit (maximum allowed memory). It's worth getting familiar with these concepts.
In Kubernetes you decide on Pod/Container memory using two parameters:
spec.containers[].resources.requests.memory: the Kubernetes scheduler will not schedule your Pod onto a node if there is not enough memory; this memory is also reserved for your container
spec.containers[].resources.limits.memory: the container cannot exceed this memory
If you want to be precise about the memory for your container, then you'd better set the same value for both parameters.
There are good articles explaining this by example, and the official Kubernetes documentation covers it as well.
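For example, following the advice above, a container pinned to exactly 1.5Gi on both sides might look like this (a sketch; the name and image are placeholders):
containers:
  - name: app                 # placeholder name
    image: nginx              # placeholder image
    resources:
      requests:
        memory: "1.5Gi"
      limits:
        memory: "1.5Gi"       # request == limit, so exactly this much is both reserved and enforced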

Ensuring availability in Kubernetes with high-variance memory / CPU load?

Problem: the code we're running on Kubernetes pods has a very high variance across its runtime; specifically, it has occasional CPU and memory spikes when certain conditions are triggered. These triggers involve user queries with hard realtime requirements (the system has to respond in under 5 seconds).
Under conditions where the node serving the spiking pod doesn't have enough CPU/RAM, Kubernetes responds to these excessive demands by killing the pod altogether, which results in no output at all.
How can we ensure that these spikes are taken into account when pods are scheduled, and more critically, that no pod shutdown happens for these reasons?
Thanks!
High availability of pods with load can be achieved in two ways:
Configuring More CPU/Memory
As the application requires more CPU/memory during peak times, configure the pod in such a way that its allocated resources take care of the extra load. Configure the pod something like this:
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
You can increase the limits based on the usage. But doing it this way can cause two issues:
1) Underutilized resources
As the resources are allocated generously, they may go to waste unless there is a spike in traffic.
2) Deployment failure
Pod deployment may fail because the kubernetes node does not have enough resources to satisfy the request.
For more info : https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
Autoscaling
Ideal way of doing it is to autoscale the POD based on the traffic.
kubectl autoscale deployment <DEPLOY-APP-NAME> --cpu-percent=50 --min=1 --max=10
Configure the cpu-percent based on your requirement (it defaults to 80%). Min and max are the minimum and maximum number of pods, which can be configured accordingly.
So each time the pods hit 50% CPU utilization, a new pod is launched, continuing until a maximum of 10 pods is reached; the same applies in reverse when the load drops.
For more info: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
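The same autoscaler can also be written declaratively, roughly like this (a sketch using the autoscaling/v1 API; the names are placeholders for the Deployment being scaled):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: deploy-app-hpa              # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deploy-app                # placeholder: the Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50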
A limit is a limit; it's expected to do that, period.
What you can do is either run without a limit - the pod will then behave like any other process on the node, and OOM happens when the Node, not the Pod, reaches its memory limit. But this sounds like asking for trouble. And mind that even if you set a high limit, it's the request that actually guarantees resources to the pod, so even with a limit of 2Gi a Pod can be OOM-killed at 512Mi of usage if its request was 128Mi and the node runs out of memory.
You should design your app in a way that does not generate such spikes, or that tolerates OOMs on pods. It's hard to tell what your software does exactly, but some things that come to mind that could help crack this are request throttling, the horizontal pod autoscaler, or running work asynchronously with some kind of message queue.