What are Kubernetes memory commitment limits on worker nodes?

I am running a single-node K8s cluster and my machine has 250GB of memory. I am trying to launch multiple pods, each needing at least 50GB of memory. I am setting my pod specification as follows.
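(The original spec isn't reproduced here; a minimal sketch of a pod requesting 50Gi of memory, with placeholder names, would look like this:)
apiVersion: v1
kind: Pod
metadata:
  name: big-memory-pod        # placeholder name
spec:
  containers:
  - name: app                 # placeholder container name and image
    image: my-app:latest
    resources:
      requests:
        memory: "50Gi"        # the scheduler sets aside this much memory on a node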
Now I can launch the first four pods and they start "Running" immediately; however, the fifth pod is "Pending". kubectl describe pods shows 0/1 nodes available; 1 Insufficient memory.
At first this does not make sense: I have 250GB of memory, so I should be able to launch five pods, each requesting 50GB. Shouldn't I?
On second thought, this might be too tight for K8s to fulfill: if it schedules the fifth pod, then the worker node has no memory left for anything else (kubelet, kube-proxy, the OS, etc.). If this is the reason for the denial, then K8s must be keeping some amount of node resources (CPU, memory, storage, network) always free or reserved. What is this amount? Is it a static value or N% of total node resource capacity? Can we change these values when we launch the cluster?
Where can I find more details on this topic? I tried googling, but nothing came up that specifically addresses these questions. Can you help?

At first this does not make sense: I have 250GB of memory, so I should be able to launch five pods, each requesting 50GB. Shouldn't I?
This depends on how much memory is "allocatable" on your node. Some memory may be reserved for the OS or other system tasks.
First list your nodes:
kubectl get nodes
This gives you a list of your nodes. Now describe one of them:
kubectl describe node <node-name>
And now, you should see how much "allocatable" memory the node has.
Example output:
...
Allocatable:
cpu: 4
memory: 2036732Ki
pods: 110
...
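If you prefer a one-liner, something like this should also work to compare total capacity with allocatable memory (the node name is a placeholder):
kubectl get node <node-name> -o jsonpath='{.status.capacity.memory}{" capacity, "}{.status.allocatable.memory}{" allocatable"}{"\n"}'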
Set custom reserved resources
K8s must be keeping some amount of node resources (CPU, memory, storage, network) always free or reserved. What is this amount? Is it a static value or N% of total node resource capacity? Can we change these values when we launch the cluster?
Yes, this can be configured. See Reserve Compute Resources for System Daemons.
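As a rough sketch (the numbers below are only examples, not recommendations), these reservations can be set in the kubelet configuration, for example via a KubeletConfiguration file supplied when the cluster is launched; the same settings also exist as kubelet flags (--system-reserved, --kube-reserved, --eviction-hard):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:              # set aside for the OS and system daemons
  cpu: 500m
  memory: 1Gi
kubeReserved:                # set aside for Kubernetes daemons (kubelet, container runtime)
  cpu: 500m
  memory: 1Gi
evictionHard:                # the kubelet starts evicting pods below this free-memory threshold
  memory.available: 500Mi
Allocatable is then roughly the node capacity minus systemReserved, kubeReserved and the hard eviction threshold.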

Related

If requested memory is "the minimum", why is kubernetes killing my pod when it exceeds 10x the requested?

I am debugging a problem with pod eviction in Kubernetes.
It looks like it is related to the configured number of PHP-FPM child processes.
I assigned a minimum memory of 128 MB and Kubernetes is evicting my pod, apparently when it exceeds 10x that amount (The node was low on resource: memory. Container phpfpm was using 1607600Ki, which exceeds its request of 128Mi.).
How can I prevent this? I thought that requested resources is the minimum and that the pod can use whatever is available if there's no upper limit.
Requested memory is not "the minimum"; it is exactly what it is called: the amount of memory requested by the pod. When Kubernetes schedules a pod, it uses the request as guidance to choose a node that can accommodate the workload, but it doesn't guarantee that the pod won't be killed if the node is short on memory.
As per docs https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run
if a container exceeds its memory request and the node that it runs on becomes short of memory overall, it is likely that the Pod the container belongs to will be evicted.
If you want to guarantee a certain memory window for your pods, you should set the request to that amount (and a matching limit, which gives the pod the Guaranteed QoS class), but in that case, if your pod doesn't use most of that memory, it will be "wasted".
So to answer your question "How can I prevent this?", you can:
reconfigure your php-fpm so that it cannot use 10x the requested memory (i.e. reduce the worker count), and configure autoscaling. That way your overloaded pods won't be evicted, and Kubernetes will schedule new pods in the event of higher load
set the memory request and a matching limit to guarantee a certain amount of memory to your pods (see the sketch after this list)
Increase memory on your nodes
Use affinity to schedule your demanding pods on some dedicated nodes and other workloads on separate nodes
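For the second option, a minimal sketch of the container resources, with the request raised to roughly what the container was actually observed to use and a matching limit (the name, image and numbers are illustrative):
containers:
- name: phpfpm              # illustrative name
  image: php:fpm            # illustrative image
  resources:
    requests:
      memory: "1600Mi"      # about what the container actually used
    limits:
      memory: "1600Mi"      # request == limit puts the pod in the Guaranteed QoS class
With the request equal to the limit, the pod is among the last candidates for eviction under node memory pressure, at the cost of reserving that memory up front.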

k8s - how scheduler assigns the nodes

I am just curious to know how k8s master/scheduler will handle this.
Let's say I have a k8s master with 2 nodes. Assume that each node has 8GB of RAM and that each node is running a pod which consumes 3GB of RAM.
node A - 8GB
- pod A - 3GB
node B - 8GB
- pod B - 3GB
Now I would like to schedule another pod, say pod C, which requires 6GB RAM.
Question:
Will the k8s master shift pod A or B to other node to accommodate the pod C in the cluster or will the pod C be in the pending status?
If the pod C is going to be in pending status, how to use the resources efficiently with k8s?
Unfortunately I could not try this with my minikube. If you know how the k8s scheduler assigns nodes, please clarify.
Most of the Kubernetes components are split by responsibility and workload assignment is no different. We could define the workload assignment process as Scheduling and Execution.
The Scheduler, as the name suggests, is responsible for the Scheduling step. The process can be briefly described as: "get the list of pods; if a pod is not scheduled to a node, assign it to a node with capacity to run it". There is a nice blog post by Julia Evans here explaining schedulers.
The Kubelet is responsible for the Execution of the pods scheduled to its node. It gets the list of pod definitions allocated to its node, makes sure they are running with the right configuration, and starts them if they are not running.
With that in mind, the scenario you described will behave as expected: the pod will not be scheduled, because you don't have a node with enough capacity available for it.
Resource balancing is mainly decided at scheduling time. A nice way to see this: when you add a new node to the cluster, if there are no pods pending allocation, the node will not receive any pods. A summary of the logic used for resource balancing can be seen in this PR.
The solutions:
Kubernetes ships with a default scheduler. If the default scheduler does not suit your needs, you can implement your own scheduler, as described here. The idea would be to implement an extension for the scheduler that reschedules pods which are already running when the cluster has capacity for the new load but it is not well distributed.
Another option is to use tools created for scenarios like this. The Descheduler is one: it monitors the cluster and evicts pods from nodes so the scheduler can re-allocate them with a better balance. There is a nice blog post here describing these scenarios.
PS:
Keep in mind that the total memory of a node is not allocatable; depending on which provider you use, the allocatable capacity can be much lower than the total. Take a look at this SO question: Cannot create a deployment that requests more than 2Gi memory
Find the answers below.
Will the k8s master shift pod A or B to other node to accommodate the pod C in the cluster or will the pod C be in the pending status?
No. pod A and pod B would still be running, pod C will not be scheduled.
If the pod C is going to be in pending status, how to use the resources efficiently with k8s?
Neither node can meet the resource requirements needed to run pod C, so it can't be scheduled.
You mentioned that the node capacity is 8 GB of RAM. Note that the whole 8 GB is not available to run workloads; a certain amount of RAM is reserved for kube-proxy, the kubelet, and other node management activities.

Kubernetes cluster seems to be unstable

Recently we've experienced issues with both our Non-Production and Production clusters where the nodes encountered a 'System OOM encountered' issue.
The nodes within the Non-Production cluster don't seem to be sharing the pods. It seems like a given node is running all the pods and putting a load on the system.
Also, the Pods are stuck in this status: 'Waiting: ContainerCreating'.
Any help/guidance with the above issues would be greatly appreciated. We are building more and more services in this cluster and want to make sure there's no instability and/or environment issues and place proper checks/configuration in place before we go live.
"I would recommend you manage container compute resources properly within your Kubernetes cluster. When creating a Pod, you can optionally specify how much CPU and memory (RAM) each Container needs to avoid OOM situations.
When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when Containers have their limits specified, contention for resources on a node can be handled in a specified manner. CPU specifications are in units of cores, and memory is specified in units of bytes.
An event is produced each time the scheduler fails; use the command below to see the status of events:
$ kubectl describe pod <pod-name> | grep Events
Also, read the official Kubernetes guide on “Configure Out Of Resource Handling”. Always make sure to:
reserve 10-20% of memory capacity for system daemons like kubelet and OS kernel
identify pods which can be evicted at 90-95% memory utilization to reduce thrashing and incidence of system OOM.
To facilitate this kind of scenario, the kubelet would be launched with options like below:
--eviction-hard=memory.available<xMi
--system-reserved=memory=yGi
Replacing x and y with actual memory values.
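For example (placeholder values, to be adapted to your node size):
--eviction-hard=memory.available<500Mi
--system-reserved=memory=2Gi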
Having Heapster container monitoring in place should be helpful for visualization.
Read more on Kubernetes and Docker Administration.
Unable to mount volumes for pod
"xxx-3615518044-6l1cf_xxx-qa(8a5d9893-230b-11e8-a943-000d3a35d8f4)":
timeout expired waiting for volumes to attach/mount for pod
"xxx-service-3615518044-6l1cf"/"xxx-qa"
That indicates your pod is having trouble mounting the volume specified in your configuration. This can often be a permissions issue. If you post your config files (like to a gist) with private info removed, we could probably be more helpful.

Kubernetes: do evicted pods with no resource requests get rescheduled successfully?

I've read as much Kubernetes documentation as I can find, but I'm still having trouble understanding a specific scenario I have in mind.
For the sake of example, let's say I have a single node with 1GB of memory. I also have a deployment that wants 100 pods with memory limits set to 100MB and memory requests unset. The pods only use 1MB most of the time, but can sometimes jump up to 99MB.
Question 1: Will all 100 pods be scheduled onto the node?
Now, let's say all the pods simultaneously start using 99MB of memory each and stay there. There isn't enough memory on the machine to handle that, but none of the pods have exceeded their memory limit. I'm assuming Kubernetes evicts some pods at this point.
Question 2: When Kubernetes tries to reschedule the evicted pods, does it succeed since there is no memory request set? What happens when the node immediately runs out of memory again? Does this eviction, rescheduling keep happening over and over? If so, is there some metric that I can use to detect that this is happening?
A pod will be scheduled as long as there's an eligible node that can satisfy the requested resources, so if you do not specify a request the pod will pretty much always get scheduled. Requests and limits are totally different things: a request is a condition for a pod to be scheduled, and a limit is a constraint on a pod that is already scheduled and running.
If you overcommit the actual resources on a node you will run into the typical issues: if you overcommit on memory the node will start to swap, and on CPU there will just be a general slowdown. Either way the node and the pods on it will become unresponsive. This is difficult to deal with, and tools like requests and limits set up sane boundaries that help you not take things quite that far, where you would simply see the pod fail to schedule instead.
When the Kubernetes scheduler schedules a pod running on a node, it will always ensure that the total limits of the containers are less than the node capacity. If a node runs out of resources, Kubernetes will not schedule any new containers running on it. If no node is available when you launch a pod, the pod will remain pending, since the Kubernetes scheduler will be unable to find any node that could run your desired pod.
Kubernetes Cookbook
I think this excerpt gives you some understanding of how it works internally. So, the answers to your questions:
At most 10 pods will be scheduled onto your node.
If there is no free memory on the node, the evicted pods will stay pending. Also, k8s can simply evict a pod that is using more than it requested when resources are needed for other pods and services.
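A sketch of the scenario described in the question (the names are placeholders): when only a limit is set and no request, Kubernetes defaults the request to the same value as the limit, which is why only about ten 100Mi pods fit on a node with roughly 1GB allocatable; the remaining replicas stay Pending and are retried whenever capacity frees up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bursty-app            # placeholder name
spec:
  replicas: 100
  selector:
    matchLabels:
      app: bursty-app
  template:
    metadata:
      labels:
        app: bursty-app
    spec:
      containers:
      - name: app
        image: my-app:latest  # placeholder image
        resources:
          limits:
            memory: "100Mi"   # no request set, so the request defaults to 100Mi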

Why does a single node cluster only have a small percentage of the cpu quota available?

pod will not start due to "No nodes are available that match all of the following predicates:: Insufficient cpu"
In the above question, I had an issue starting a deployment with 3 containers.
Upon further investigation, it appears there is only 27% of the CPU quota available - which seems very low. The rest of the CPU seems to be assigned to some default bundled containers.
How is this normally mitigated? Is a larger node required? Do limits need to be set manually? Are all those additional containers necessary?
1 CPU for a single-node cluster is probably too small.
From the containers in the original answer, both the dashboard and fluentd can be removed:
the dashboard is just a web UI, which can go away if you use kubectl (which you should, IMO);
fluentd should be reading the log files on disk to ship them somewhere (GCP's log aggregation, I think).
The unnecessary containers should be tied to a Deployment or ReplicaSet, which can be listed with kubectl get deployment and kubectl get rs, respectively. You can then kubectl delete them.
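For example, something along these lines (the namespace and names are illustrative; check what actually exists in your cluster first, since managed add-ons may be recreated automatically):
kubectl -n kube-system get deployments
kubectl -n kube-system get ds
kubectl -n kube-system delete deployment kubernetes-dashboard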
Increasing the resources on the node should not change the requirements of those basic pods, meaning the additional capacity should be free for scheduling your own workloads.