Kubernetes - NodeUnderMemoryPressure Issue

I'm very new to Kubernetes. We are using a Kubernetes cluster on Google Cloud Platform.
I have created a cluster, services, pods, and replication controllers.
I have created a Horizontal Pod Autoscaler based on CPU parameters.
Cluster details:
Default running node count is set to 3
3 GB allocatable memory per node
After running for about an hour, the services and nodes start showing NodeUnderMemoryPressure issues.
How can I resolve this?
If you need any more details, please ask.
Thanks

I don't know how much traffic is hitting your cluster, but I would highly recommend running Prometheus in your cluster.
Prometheus is an open-source monitoring and alerting tool, and integrates very well with Kubernetes.
This tool will give you a much better view of memory consumption and CPU usage, among many other monitoring capabilities, and will allow you to effectively troubleshoot these kinds of issues.
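For reference, this is a minimal sketch of what a Prometheus scrape job using Kubernetes service discovery can look like. In practice most people install Prometheus through the community Helm chart or the Prometheus Operator rather than writing this by hand, and the prometheus.io/scrape annotation used here is only a widespread convention, not something built into Kubernetes.

```yaml
# prometheus.yml (fragment) - a sketch, not a complete configuration.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod          # discover every pod in the cluster via the API server
    relabel_configs:
      # Keep only pods that opt in through the (conventional) annotation
      # prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```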

There are several ways to address this issue, depending on the type of your workloads.
The easiest is simply to scale your nodes, but that is useless if there is a memory leak. Even if you are not affected by one now, you should always consider the possibility of a memory leak, so the best practice is to always set memory limits for Pods and Namespaces.
Scale the cluster
If you have many Pods running and none of them is much bigger than the others, it is useful to scale the cluster horizontally; that way the number of Pods running per node goes down, and the NodeUnderMemoryPressure warning should disappear.
If you are running only a few Pods, or some of them are heavy enough to strain a node on their own, then the only option is to scale the nodes vertically: add a new node pool with Compute Engine instances that have more memory and, if possible, delete the old one.
If your workload is otherwise healthy and memory only suffers because, at certain moments of the day, you receive 100 times the usual traffic and create more Pods to handle it, you should consider making use of the autoscaler; a sketch follows after this list.
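As a rough illustration of that last point, a Horizontal Pod Autoscaler can scale on memory as well as CPU. This is a minimal sketch using the autoscaling/v2 API; the Deployment name web-frontend and the thresholds are placeholders, not values from your cluster, and on GKE you would normally pair this with node autoscaling so that new Pods have somewhere to land.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend          # hypothetical Deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # example threshold
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75   # example threshold
```

Note that utilization targets are computed against the Pods' resource requests, so this only works once requests are set (see the section on limiting RAM below).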
Check for memory leaks
On the other hand, if the situation is not "healthy" and you have Pods consuming far more RAM than expected, you should follow grizzthedj's advice and work out why they are consuming so much, and verify whether one of your containers has a memory leak; in that case, adding RAM is useless, since at some point you will run out of it anyway.
So start by identifying which Pods are consuming too much and then troubleshoot why they behave that way; if you do not want to use Prometheus, simply open a shell in the container (for example with kubectl exec) and check with the classic Linux commands.
Limit the RAM consumed by Pods
To prevent this from happening in the future, I advise you, when writing your YAML files, to always limit the amount of RAM your Pods can use; this way you keep them under control and there is no risk of them causing the kubelet (the Kubernetes node agent) to fail because it runs out of memory.
Also consider limiting CPU and setting minimum requests for both RAM and CPU; this helps the scheduler place Pods properly and avoids hitting NodeUnderMemoryPressure under high load. A hedged example of both follows below.
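As an illustration of the two previous paragraphs, here is a sketch of a container spec with requests and limits, plus a namespace LimitRange that applies defaults to containers that do not declare their own. The names, image and numbers are placeholders you would tune to your workload.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend              # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/web-frontend:1.0   # placeholder image
          resources:
            requests:             # what the scheduler reserves on a node
              cpu: 250m
              memory: 512Mi
            limits:               # hard ceiling; exceeding memory gets the container OOM-killed
              cpu: 500m
              memory: 1Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:             # applied when a container declares no requests
        cpu: 100m
        memory: 256Mi
      default:                    # applied when a container declares no limits
        cpu: 250m
        memory: 512Mi
```

With requests in place the scheduler can pack nodes sensibly, and with limits in place a leaking container gets OOM-killed on its own instead of pushing the whole node into memory pressure.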

Related

Kubernetes - Multiple pods per node vs one pod per node

What is usually preferred in Kubernetes - having a one pod per node configuration, or multiple pods per node?
From a performance standpoint, what are the benefits of having multiple pods per node, if there is an overhead in having multiple pods living on the same node?
From a performance standpoint, wouldn't it be better to have a single pod per node?
The answer to your question is heavily dependent on your workload.
There are very specific scenarios (machine learning, big data, GPU-intensive tasks) where you might run a one-pod-per-node configuration due to an IO or hardware requirement of a single pod. However, this is normally not an efficient use of resources and sort of eliminates a lot of the benefits of containerization.
The benefit of multiple pods per node is a more efficient use of all available resources. Generally speaking, managed Kubernetes clusters will schedule and manage the number of pods that run on a node for you automatically, and many providers offer simple autoscaling solutions to ensure that you are always able to run all your workloads.
Running only a single pod per node has its cons as well. For example, each node needs its own "support" pods, such as metrics, logging and networking agents and other system pods, which will most likely not have their resources fully utilized. In terms of performance, this means that choosing the right node-size-to-pod-count ratio can give you the same performance as a single pod per node at a lower cost.
On the contrary, running too many pods on a massive node can starve those components, causing gaps in metrics or logs, lost packets, OOM errors, and so on.
Finally, when we also consider autoscaling, scaling up a couple more pods on existing nodes will be a lot more responsive than spinning up a new node for each pod.

GKE Limit RAM & CPU

I am using GKE (Google-managed Kubernetes) and I have a requirement to leave around 10% of the memory on each node idle, so that during burst workload scenarios the pods already deployed on that node can make use of those idle resources (within their limit range).
Basically, what I want to achieve is to avoid a scenario where pods get scheduled onto a node until 100% of its resources are consumed. Assuming all the pods/services are utilizing their allocated resources (set via requests), if one of the pods hits a burst workload, or gets restarted and needs more memory during boot-up, it should be able to make use of those idle resources.
After going through the documentation I came across this, but since GKE is a managed service these properties aren't exposed anywhere. Are there any other ways to achieve the same?
GKE is a managed service, and therefore you will not be able to customize the worker node kubelet parameters such as --eviction-hard or --system-reserved.
As a workaround, you need to calculate your pods' memory requests and limits so that only a certain number of pods fits on each node; this way you control how many pods run on a node and how much spare CPU and memory is left for your pods to use in case of a burst. A worked sketch follows below.
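A hedged, worked example of that calculation, assuming a node with roughly 2.7 GiB allocatable memory (the real value shows up under `kubectl describe node`): with a 600 Mi request per pod, at most four such pods fit on a node (4 × 600 Mi ≈ 2.4 GiB, about 87% of allocatable), and the 800 Mi limit lets any single pod burst roughly 200 Mi into the remaining headroom.

```yaml
# Fragment of a container spec - the numbers assume ~2.7 GiB allocatable per node.
resources:
  requests:
    memory: 600Mi    # 4 pods x 600Mi ~= 2.4Gi reserved; ~10-13% of the node stays idle
    cpu: 200m
  limits:
    memory: 800Mi    # a pod may burst up to ~200Mi above its request into that headroom
    cpu: 500m
```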

What are the recommendations for pod size (CPU, memory) in Kubernetes?

I want to know the recommended pod size, i.e. when to put an application within a pod, and at what size it is better to use the machine itself instead of a pod.
For example, when should one think of moving out of k8s and running an application as an external service: when the pod requires 8 GB, 16 GB or 32 GB? The same question applies to CPU-intensive workloads.
Because if a pod requires 16 GB or 16 CPUs and we have a machine/node of the same size, then I think there is no sense in running the pod on that machine. In that scenario we could end up with something like 10 pods requiring 8 nodes.
I hope you understand my concern.
So if someone has recommendations for this, please share your thoughts. Some references would be even better.
Recommendations for the ideal range:
Size of pods in terms of RAM and CPU
Pod-to-node ratio, i.e. the number of pods per node
Whether this applies to stateless applications, stateful applications, or both
etc.
Running a 16-CPU/16-GB pod on a 16-CPU/16-GB machine is normal. Why not? You think of pods as tiny, but there is no such requirement. Pods can be gigantic; there is no issue with that. Remember that a container is just a process on a node, so why refuse to run a fat process on a fat node? Kubernetes adds a very nice orchestration layer on top of containers, why not make use of it?
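For instance, a pod that effectively owns a 16-CPU/16-GB node could request nearly all of it. This is only a sketch with placeholder names; note that a node's allocatable resources are always somewhat below its capacity, because the kubelet, OS and system daemons reserve a slice, so you cannot request the full 16 CPUs / 16 GB.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fat-worker                          # hypothetical
spec:
  containers:
    - name: worker
      image: example.com/fat-worker:latest  # placeholder image
      resources:
        requests:
          cpu: "15"       # leave a little for the kubelet and system daemons
          memory: 14Gi
        limits:
          cpu: "15"
          memory: 14Gi
```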
There is no such thing as a universal or recommended pod size. Asking for a recommended pod size is the same as asking for a recommended size for a VM or bare-metal server. It is totally up to your application: if your application requires 16 or 64 GB of RAM, that is the recommended size for you, you see?
Regarding the pod-to-node ratio: the current default upper limit in Kubernetes is 110 pods per node. Everything below that watermark is fine. The only thing is that the recommended master node size increases with the total number of pods. If you have around 1,000 pods, you can go with small to medium master nodes; if you have over 10,000 pods, you should increase your master node size.
Regarding statefulness: stateless applications generally survive better. But state often has to be stored somewhere, and stored reliably. So if you plan your application as a set of microservices, create as many stateless apps as you can and as few stateful ones as you can. Ideally, only the relational databases should be truly stateful.

Question about 100 pods per node limitation

I'm trying to build a web app where each user gets their own instance of the app, running in its own container. I'm new to kubernetes so I'm probably not understanding something correctly.
I will have a few physical servers to use, which in Kubernetes, as I understand it, are called nodes. For each node there is a limitation of 100 pods. So if I am building the app so that each user gets their own pod, will I be limited to 100 users per physical server? (If I have 10 servers, I can only have 1,000 users?) I suppose I could run multiple VMs that act as nodes on each physical server, but doesn't that defeat the purpose of containerization?
The main issue with having too many pods on a node is that it degrades node performance and makes it slower (and sometimes unreliable) to manage the containers; each pod is managed individually, so increasing the count takes more time and more resources.
When you create a pod, the runtime needs to keep constant track of it: running probes (readiness and liveness), monitoring, routing rules, and many other small bits that add up to the load on the node.
Containers also require processor time to run properly; even though you can allocate fractions of a CPU, adding too many containers/pods increases context switching and degrades performance when the pods are consuming their quota.
Each platform provider also sets their own limits to provide a good quality of service and SLAs. Overloading nodes is also a risk, because a node is a single point of failure, and any fault on a high-density node can have a huge impact on the cluster and its applications.
You should either consider:
Using smaller nodes and adding more nodes to the cluster, or
Using actors instead, where each client is one actor and many actors run in a single container. To keep the load balanced around the cluster, you partition the actors into multiple container instances.
Regarding the limits, this thread has a good discussion about the concerns
Because of that hard limit, if you have 10 servers you're limited to 1,000 pods.
You might also want to count the cluster's system and control-plane pods against those 1,000 available pods. Usually located in the kube-system namespace, they can include (but are not limited to):
node log exporters (1 per node)
metrics exporters
kube proxy (usually 1 per node)
kubernetes dashboard
DNS (scaling according to the number of nodes)
controllers like cert-manager
A pretty good rule of thumb could be 80-90 application pods per node, so 10 nodes would be able to handle 800-900 clients, assuming you don't have any other big deployments on those nodes.
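For completeness, the ~100-110 figure is the kubelet's default and tested setting rather than an absolute constant; on nodes you manage yourself it can be raised via the kubelet configuration (managed providers usually fix it per node pool), though the scalability concerns described above still apply. A minimal sketch:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 150    # default is 110; raising it trades per-node density for kubelet overhead
```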
If you're using containers in order to gain performance, creating VM nodes will work against your goal. But if you're using containers as a way to deploy coherent environments and scale stateless applications, then using VMs as nodes can make sense.
There are no magic rules and your context will dictate what to do.
Since managing both a virtualization cluster and a Kubernetes cluster may skyrocket your infrastructure complexity, maybe Kubernetes is not the most efficient tool to manage your workload.
You may also want to take a look at Nomad, which does not seem to have those kinds of limitations and may provide features that are closer to your needs.

High loads causing node to become NotReady?

I'm running several experiments in GCE with a Kubernetes cluster built with kops. I can start my experiments and verify that they're running, but close to the end of the run, the node responsible for generating the load for my cluster gets a state of "Unknown" for the "MemoryPressure", "DiskPressure" and "Ready" conditions.
Coincidentally the pods that run on the node require the most resources towards the end of the run as well.
So my question is: is it possible that the node is unable to respond to requests from the controller manager or API server because of the load it is generating?
If so, how do I resolve this? My experiments can potentially render the node unresponsive for half an hour or more.
Thanks for any responses in advance.
It turns out one of my pods was consuming all the CPU on the node, causing the kubelet to become unresponsive. I set a CPU limit on the pod and that fixed the issue. I also added a kube-reserved setting to ensure the kubelet gets the CPU time it needs; a hedged sketch of that setting follows below.
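The CPU limit itself is just a normal resources.limits.cpu on the offending pod's container, as illustrated in the earlier answers. The kube-reserved part, in kubelet terms, looks roughly like the sketch below; with kops this is normally expressed through the cluster spec's kubelet settings rather than a standalone file, so treat this as the shape of the setting rather than an exact kops manifest, and the numbers are placeholders.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:          # resources set aside for the kubelet and container runtime
  cpu: 200m
  memory: 512Mi
systemReserved:        # optional: resources set aside for OS system daemons
  cpu: 100m
  memory: 256Mi
```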
If the load is growing because of a growing number of Pods, you can try node autoscaling. Here you can find the instructions for it.
If only a few Pods consume all of a node's resources, then the only way is to use nodes with a larger amount of CPU and memory.