Node not ready, pods pending - kubernetes

I am running a cluster on GKE and sometimes it gets into a hanging state. Right now I was working with just two nodes and allowed the cluster to autoscale. One of the nodes has a NotReady status and simply stays that way. Because of that, half of my pods are Pending due to insufficient CPU.
How I got there
I deployed a pod that has quite high CPU usage from the moment it starts. When I scaled it to 2 replicas, I noticed CPU usage was at 1.0. The moment I scaled the Deployment to 3 replicas, I expected the third one to stay in Pending state until the cluster added another node and then be scheduled there.
What happened instead is that the node switched to a NotReady status and all pods that were on it are now Pending.
However, the node does not restart or anything; it is just not used by Kubernetes. GKE then thinks there are enough resources, since the VM has 0 CPU usage, and won't scale up to 3 nodes.
I cannot manually SSH into the instance from the console; it is stuck in a loading loop.
I can manually delete the instance and then things start working again, but I don't think that's the idea of a fully managed cluster.
One thing I noticed - not sure if related: in GCE console, when I look at VM instances, the Ready node is being used by the instance group and the load balancer (which is the service around an nginx entry point), but the NotReady node is only in use by the instance group - not the load balancer.
Furthermore, in kubectl get events, there was a line:
Warning CreatingLoadBalancerFailed {service-controller } Error creating load balancer (will retry): Failed to create load balancer for service default/proxy-service: failed to ensure static IP 104.199.xx.xx: error creating gce static IP address: googleapi: Error 400: Invalid value for field 'resource.address': '104.199.xx.xx'. Specified IP address is already reserved., invalid
I specified loadBalancerIP: 104.199.xx.xx in the definition of the proxy-service to make sure that on each restart the service gets the same (reserved) static IP.
Any ideas on how to prevent this from happening? So that if a node gets stuck in a NotReady state it at least restarts, but ideally doesn't get into such a state to begin with?
Thanks.

The first thing I would do is define resource requests and limits for those pods.
Requests tell the cluster how much memory and CPU you think the pod is going to use. You do this to help the scheduler find the best node to run those pods on.
Limits are crucial here: they are set to prevent your pods from damaging the stability of the nodes. It's better to have a pod killed by the OOM killer than a pod bringing a node down through resource starvation.
For example, in this case you're saying that you want 200m of CPU (20% of a core) for your pod, but if it tries to use more than 300m (30%) it gets throttled, and if it exceeds the 200Mi memory limit it gets OOM-killed and restarted.
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources:
      limits:
        cpu: 300m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 100Mi
You can read more here: http://kubernetes.io/docs/admin/limitrange/

I can speak for AWS: you can create dynamic scaling policies based on CPU and memory utilization.
A node usually goes into NotReady state because it is out of memory or short on CPU. You can create a custom memory metric that collects the memory usage of all the worker nodes in the cluster and pushes it to CloudWatch.
You can follow this documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
A CPU metric already exists, so there is no need to create one; following the guide above, a memory metric will be created for your cluster.
You can then create an alarm that fires when the metric goes above a certain threshold. Next, go to the Auto Scaling Group in the AWS console and add a scaling policy for your Auto Scaling Group that uses the alarm you created and adds instances accordingly.
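For illustration, a minimal sketch of the CLI side, assuming the monitoring scripts from the guide publish MemoryUtilization under the System/Linux namespace and that the Auto Scaling Group is called my-worker-asg (a placeholder name):
# Scaling policy that adds one instance to the (placeholder) my-worker-asg group
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-worker-asg \
  --policy-name scale-out-on-memory \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# Alarm that triggers the policy when average memory utilization stays above 80%
aws cloudwatch put-metric-alarm \
  --alarm-name worker-memory-high \
  --namespace System/Linux \
  --metric-name MemoryUtilization \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <policy-ARN-returned-by-the-previous-command>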

Related

My kubernetes pods are Evicting with ephemeral-storage issue

I am running a k8s cluster with 8 worker and 3 master nodes, and my pods keep getting evicted repeatedly with ephemeral-storage issues.
Below is the error I am getting on Evicted pods:
Message: The node was low on resource: ephemeral-storage. Container xpaas-logger was using 30108Ki, which exceeds its request of 0. Container wso2am-gateway-am was using 406468Ki, which exceeds its request of 0.
To overcome the above error, I have added an ephemeral-storage limit and request to my namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-limit-range
spec:
  limits:
  - default:
      ephemeral-storage: 2Gi
    defaultRequest:
      ephemeral-storage: 130Mi
    type: Container
Even after adding the above limits and requests to my namespace, my pod reaches its limit and is then evicted.
Message: Pod ephemeral local storage usage exceeds the total limit of containers 2Gi.
How can I monitor my ephemeral storage, and where is it stored on my instance?
How can I set up Docker logrotate for my ephemeral storage based on size? Any suggestions?
"Ephemeral storage" here refers to space being used in the container filesystem that's not in a volume. Something inside your process is using a lot of local disk space. In the abstract this is relatively easy to debug: use kubectl exec to get a shell in the pod, and then use normal Unix commands like du to find where the space is going. Since it's space inside the pod, it's not directly accessible from the nodes, and you probably can't use tools like logrotate to try to manage it.
One specific cause of this I've run into in the past is processes configured to log to a file. In Kubernetes you should generally set your logging setup to log to stdout instead. This avoids this specific ephemeral-storage problem, but also avoids a number of practical issues around actually getting the log file out of the pod. kubectl logs will show you these logs and you can set up cluster-level tooling to export them to another system.
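Once logs go to stdout/stderr, reading them is straightforward, for example (pod name again a placeholder, container name taken from the question):
kubectl logs <pod-name> -c xpaas-logger        # logs of a single container
kubectl logs -f <pod-name> --all-containers    # follow logs of all containers in the pod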

Autoscaling workloads without running out of memory

I have a number of pods running with a Horizontal Pod Autoscaler assigned to target them; the cluster I am using can also add and remove nodes automatically based on current load.
But we recently had the cluster go offline with OOM errors, and this caused a disruption in service.
Is there a way to monitor the load on each node so that if usage reaches, say, 80% of the memory on a node, Kubernetes does not schedule more pods on that node but waits for another node to come online?
The pending pods are what you should monitor, and defining resource requests is what affects scheduling.
The scheduler uses resource request information when scheduling a pod to a node. Each node has a certain amount of CPU and memory it can allocate to pods. When scheduling a pod, the scheduler will only consider nodes with enough unallocated resources to meet the pod's resource requirements. If the amount of unallocated CPU or memory is less than what the pod requests, Kubernetes will not schedule the pod to that node, because the node can't provide the minimum amount required by the pod. The new pods will remain in Pending state until new nodes come into the cluster.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: requests-pod
spec:
  containers:
  - image: busybox
    command: ["dd", "if=/dev/zero", "of=/dev/null"]
    name: main
    resources:
      requests:
        cpu: 200m
        memory: 10Mi
When you don't specify a request for CPU, you're saying you don't care how much CPU time the process running in your container is allotted. In the worst case, it may not get any CPU time at all (this happens when there is heavy demand from other processes on the CPU). Although this may be fine for low-priority batch jobs, which aren't time-critical, it obviously isn't appropriate for containers handling user requests.
Short answer: add resource requests but don't add CPU limits; otherwise you will run into CPU throttling.
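To keep an eye on this in practice, you can list the pods stuck in Pending and ask the scheduler why it cannot place them, for example:
kubectl get pods --field-selector=status.phase=Pending
kubectl describe pod <pending-pod-name>
# the Events section shows FailedScheduling messages such as "Insufficient memory"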

K8s memory request handling for 2 and more pods

I am trying to understand memory requests in k8s. I have observed that when I set the memory request for a pod, e.g. nginx, to 1Gi, it actually consumes only 1Mi (I have checked it with kubectl top pods). My question: I have 2Gi of RAM on a node and set the memory requests for pod1 and pod2 to 1.5Gi each, but they actually consume only 1Mi of memory. I start pod1 and it should start, because the node has 2Gi of memory and pod1 requests only 1.5Gi. But what happens if I try to start pod2 after that? Would it start? I am not sure, because pod1 consumes only 1Mi of memory but has a request for 1.5Gi. Does the memory request of pod1 influence the scheduling of pod2? How will k8s handle this situation?
The memory request is the amount of memory that Kubernetes reserves for a pod. If a pod requests some amount of memory, there is a strong guarantee that it will get it. This is why you can't run pod1 with a 1.5Gi request and pod2 with a 1.5Gi request on a 2Gi node: if Kubernetes allowed it and both pods started using that memory, Kubernetes would not be able to satisfy the requirements, which is unacceptable.
This is why the sum of the requests of all pods running on a specific node cannot exceed that node's allocatable memory.
"But what happens If I try to start pod2 after that? [...] How k8s
will rule this situation?"
If you have only one node with 2Gi of memory, then pod2 won't start; you would see the pod in Pending state, waiting for resources. If you have spare resources on a different node, then Kubernetes would schedule pod2 to that node.
Let me know if something is not clear and needs more explanation.
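To make the scenario concrete, here is a minimal sketch of the two pods from the question (1536Mi = 1.5Gi; nginx is used purely as an example image):
apiVersion: v1
kind: Pod
metadata:
  name: pod1              # pod2 would be identical apart from the name
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: 1536Mi
On a single node with 2Gi available, pod1 is scheduled and pod2 stays Pending, even though pod1 actually uses only about 1Mi, because scheduling is based on requests, not on observed usage.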
A request is the resource reserved for a container; a limit is the maximum the container is allowed to use. If you try to start two pods with 1.5Gi requests on a machine with 2Gi, the second one will not start because the resources it needs to reserve are not available. You need to set requests lower, to the average expected consumption of the pod, along with some reasonable limit (the maximum allowed memory). It's worth getting familiar with these concepts.
In Kubernetes you decide on Pod/Container memory using two parameters:
spec.containers[].resources.requests.memory: the Kubernetes scheduler will not schedule your Pod if there is not enough memory; this memory is also reserved for your container
spec.containers[].resources.limits.memory: Container cannot exceed this memory
If you want to be precise about the memory for your container, then you'd better set the same value for both parameters.
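A minimal sketch of that (the values are only examples):
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi
When requests equal limits for both CPU and memory, the pod also gets the Guaranteed QoS class, which makes it the last candidate for eviction when a node comes under memory pressure.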
This is a very good article explaining by example. And here's the official doc.

"Limits" property ignored when deploying a container in a Kubernetes cluster

I am deploying a container in Google Kubernetes Engine with this YAML fragment:
spec:
  containers:
  - name: service
    image: registry/service-go:latest
    resources:
      requests:
        memory: "20Mi"
        cpu: "20m"
      limits:
        memory: "100Mi"
        cpu: "50m"
But it keeps using 120m. Why is the "limits" property being ignored? Everything else is working correctly. If I request 200m, 200m are reserved, but the limit keeps being ignored.
My Kubernetes version is 1.10.7-gke.1
I only have the default namespace and when executing
kubectl describe namespace default
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

Resource Limits
 Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---  ---  ---------------  -------------  -----------------------
 Container  cpu       -    -    100m             -              -
Considering resource requests only
The Google Cloud console is working correctly; I think you simply have multiple containers in your pod, and that is why. The value shown in the console is the sum of the resource requests declared in your (truncated) YAML file. You can verify this easily with kubectl.
First, verify the number of containers in your pod.
kubectl describe pod service-85cc4df46d-t6wc9
Then look at the description of the node via kubectl; you should see the same information as the console shows.
kubectl describe node gke-default-pool-abcdefgh...
What is the difference between a resource request and a limit?
You can imagine your cluster as a big square box. This is the total of your allocatable resources. When you drop a pod into the big box, Kubernetes checks whether there is empty space for the requested resources of the pod (does the small box fit in the big box?). If there is enough space available, it will schedule your workload on the selected node.
Resource limits are not taken into account by the scheduler. They are enforced at the kernel level with cgroups. The goal is to prevent a workload from taking all the CPU or memory of the node it is scheduled on.
If your resource requests == resource limits, then workloads cannot escape their "box" and cannot use available CPU/memory next to them. In other terms, the resources are guaranteed for the pod.
But if the limits are greater than the requests, this is called overcommitting resources. You are betting that all the workloads on the same node will not be fully loaded at the same time (which is generally the case).
I recommend not overcommitting the memory resource: do not let a pod escape its "box" in terms of memory, as that can lead to OOM kills.
You can try logging into the node running your pod and run:
ps -Af | grep docker
You'll see the full command line that the kubelet sends to Docker. The memory limit shows up as something like --memory. Note that the memory request value is only used by the Kubernetes scheduler, which sums the requests of all pods/containers already running on a node to decide whether a new one still fits.
The CPU request shows up as the --cpu-shares flag. This is not a hard limit; again, it's a way for the Kubernetes scheduler to avoid allocating containers/pods past that amount when running multiple containers/pods on a specific node. You can learn more about cpu-shares here and from the Kubernetes side here. So, in essence, if the node doesn't have enough other workloads, a container will always go over its CPU share if it needs to, and that's probably what you are seeing.
Docker has other ways of restricting CPU usage, such as cpu-period/cpu-quota and cpuset-cpus, but these were not used by Kubernetes as of this writing. In this regard, I believe Mesos handles CPU/memory reservations and quotas somewhat better.
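If you'd rather not grep through the process list, a similar check (assuming a Docker-based node; the container ID is a placeholder) is:
# print the memory limit in bytes and the CPU shares that were passed to Docker
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.CpuShares}}' <container-id>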
Hope it helps.

How to find out the minimum and maximum usable CPU and memory space left on a kubernetes node

I'm trying to deploy Magento on a GCE n1-standard-1 machine, but I keep getting the following error message.
pod (magento-magento-1486272877-zd34d) failed to fit in any node fit failure summary on nodes : Insufficient cpu (1)
I'm using the official Magento helm chart, and I've configured the values.yml file to contain very low CPU requests: cpu: 25m
When I look at the node details on the kubernetes dashboard, I see that my CPU is already spinning at 0.728 (72.80%) while it's not even doing anything besides the system containers. Also see image below:
Does this mean I have 1 - 0.728 = 0.272 left for container requests? Then why is Kubernetes still telling me that it has insufficient CPU when specifying 0.25?
Thanks for your help.
I hadn't noticed that, according to the picture in my post, the CPU limit was 0.248, so I put cpu: 20m and it worked.
There is a nifty kubectl command to get information about your nodes' resources...
kubectl top nodes
And pods...
kubectl top pods
Pods with containers
kubectl top pods --containers=true
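And for a per-node view of what is allocatable and what has already been requested (the node name is a placeholder):
kubectl describe node <node-name>
# check the "Allocatable" and "Allocated resources" sections to see how much
# CPU and memory is still schedulable on that node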