Jenkins X builds fail with "The node was low on resource: [DiskPressure]." - kubernetes

My Jenkins X installation has become very unstable mid-project. Jenkins pods (mainly) are failing to start due to disk pressure.
Commonly, many pods are failing with
The node was low on resource: [DiskPressure].
or
0/4 nodes are available: 1 Insufficient cpu, 1 node(s) had disk pressure, 2 node(s) had no available volume zone.
Unable to mount volumes for pod "jenkins-x-chartmuseum-blah": timeout expired waiting for volumes to attach or mount for pod "jx"/"jenkins-x-chartmuseum-blah". list of unmounted volumes=[storage-volume]. list of unattached volumes=[storage-volume default-token-blah]
Multi-Attach error for volume "pvc-blah" Volume is already exclusively attached to one node and can't be attached to another
This may have become more pronounced with more preview builds for projects with npm and the massive node_modules directories it generates. I'm also not sure if Jenkins is cleaning up after itself.
Rebooting the nodes helps, but not for very long.

Let's approach this from the Kubernetes side.
There are a few things you could do to fix this:
As mentioned by @Vasily, check what is causing disk pressure on the nodes. You may also need to check logs from:
kubectl logs: kube-scheduler event logs
journalctl -u kubelet: kubelet logs
/var/log/kube-scheduler.log
More on why those logs matter below.
Check your Eviction Thresholds. Adjust Kubelet and Kube-Scheduler configuration if needed. See what is happening with both of them (logs mentioned earlier might be useful now). More info can be found here
Check if you have a correctly running Horizontal Pod Autoscaler: kubectl get hpa
You can use standard kubectl commands to set up and manage your HPA.
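For example, assuming a Deployment named jx-preview (a hypothetical name used only for illustration), a minimal HPA could be created and inspected like this:
kubectl autoscale deployment jx-preview --cpu-percent=80 --min=1 --max=5   # create the HPA
kubectl get hpa                                                            # list all HPAs and their current/target utilization
kubectl describe hpa jx-preview                                            # show scaling events and conditions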
Finally, the volume-related errors you receive indicate that there might be a problem with the PVC and/or PV. Make sure your volume is in the same zone as the node. If you want to mount the volume to a specific container, make sure it is not exclusively attached to another one. More info can be found here and here
I did not test this myself because more info is needed to reproduce the whole scenario, but I hope the above suggestions will be useful.
Please let me know if that helped.

Related

Kubernetes: avoid pod being evicted by DiskPressure

I am looking for best practices to avoid the Pod error The node had condition: [DiskPressure].
What I'm doing is a full database export of all our views, which is massive. At some point the pod runs into the DiskPressure error and Kubernetes decides to evict and kill it.
What would be the best practices to handle this? There is 7GB of free space, which maybe is not enough. Is just raising that the best way to go about it, or are there other mechanisms to handle this type of work?
Hope my question makes sense
The error message Pod The node had a condition: [DiskPressure]
appears when the kubelet will not admit new pods on the node, which means they won't start. Node disk pressure means that the disks attached to the node are under pressure.
One reason you might run into node disk pressure is that Kubernetes has not cleaned up unused images; another is logs building up. If you have a long-running container with a lot of logs, they may build up enough to overload the capacity of the node disk.
Troubleshooting Node Disk Pressure:
To troubleshoot node disk pressure, you need to figure out which files are taking up the most space. You can either manually SSH into each Kubernetes node or use a DaemonSet; you can do that from this link.
After installing it, you can look at the logs of the running pods by executing kubectl logs -l app=disk-checker. You will see a list of files and their sizes, which will give you greater insight into what is taking up space on your nodes.
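If you go the manual route, here is a rough sketch of what to run after SSHing into a node (the paths assume a Docker-based node; adjust for your container runtime):
df -h                                            # overall disk usage per filesystem
du -sh /var/lib/docker/* 2>/dev/null | sort -h   # largest Docker directories (images, containers, overlay)
du -sh /var/log/* 2>/dev/null | sort -h          # largest log directories
docker system df                                 # per-image/container/volume usage (Docker runtime only)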
Possible solutions:
If the issue is caused by necessary application data and the files cannot be deleted, you will have to increase the size of the node disks to ensure there is sufficient room for the application files.
Another solution is to find applications that have produced a lot of files that are no longer needed and simply delete the unnecessary files.
Adding more for your information:
1) To avoid DiskPressure crashing the node:
DiskPressure is triggered when either the node's root filesystem or the image filesystem satisfies an eviction threshold for available disk space or inodes, which in turn causes pod eviction; refer to these Node conditions.
Based on the Node conditions, you should consider adjusting your kubelet's --image-gc-high-threshold and --image-gc-low-threshold parameters so that there is always enough space for normal operations, consider --low-diskspace-threshold-mb, and provision more space for your nodes, depending on your requirements.
2) To reduce the DiskPressure condition:
Use the kubelet command-line arg:
--eviction-hard mapStringString: A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction.
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See Set Kubelet parameters via a config file for more information.
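As a rough sketch of the config-file equivalent, covering both the eviction thresholds and the image GC flags mentioned above (the values below are placeholders to adjust for your nodes):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
imageGCHighThresholdPercent: 85   # image garbage collection starts above this disk usage
imageGCLowThresholdPercent: 80    # image garbage collection frees space down to this level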

Kubernetes Multi-Attach error for volume "pvc": Volume is already exclusively attached to one node and can't be attached to another

Kubernetes version:
V1.22.2
Cloud Provider Vsphere version 6.7
Architecture:
3 Masters
15 Workers
What happened:
One of the pods went down for some "unknown" reason, and when we try to bring it back up, it cannot attach the existing PVC.
This only happened to a specific pod, all the others didn't have any kind of problem.
What did you expect to happen:
Pods should dynamically assume PVCs
Validation:
First step: The connection to Vsphere has been validated, and we have confirmed that the PVC exists.
Second step: The Pod was restarted (StatefulSet 1/1 replicas) to see if it would come back up and attach the PVC, but without success.
Third step: Restarted the services (kube-controller, kube-apiserver, etc.)
Last step: All workers and masters were rebooted, but without success; each time the pod was launched it had the same error: "Multi-Attach error for volume "pvc......" Volume is already exclusively attached to one node and can't be attached to another"
When I delete a pod and try to recreate it, I get this warning:
Multi-Attach error for volume "pvc-xxxxx" The volume is already exclusively attached to a node
and cannot be attached to another
Anything else we need to know:
I have a cluster (3 master and 15 nodes)
Temporary resolution:
Erase the existing PVC and launch the pod again to recreate the PVC.
Since this is data, it is not the best solution to delete the existing PVC.
Multi-Attach error for volume "pvc-xxx" Volume is already
exclusively attached to one node and can't be attached to another
A longer-term solution relates to two facts:
You're using the ReadWriteOnce access mode, where the volume can be mounted as read-write by a single node only.
Pods might be scheduled by the Kubernetes scheduler onto a different node for multiple reasons.
Consider switching to ReadWriteMany, where the volume can be mounted as read-write by many nodes.
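As a sketch, and assuming your storage backend actually supports ReadWriteMany (for example an NFS-backed StorageClass; the names below are placeholders), the claim would look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data              # placeholder name
spec:
  accessModes:
    - ReadWriteMany              # mountable read-write by many nodes
  storageClassName: nfs-client   # placeholder; must be a class that supports RWX
  resources:
    requests:
      storage: 10Gi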

Debugging nfs volume "Unable to attach or mount volumes for pod"

I've set up an NFS server that serves a ReadWriteMany PV according to the example at https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
This setup works fine for me in lots of production environments, but in some specific GKE cluster instance, mount stopped working after pods restarted.
From kubelet logs I see the following repeating many times
Unable to attach or mount volumes for pod "api-bf5869665-zpj4c_default(521b43c8-319f-425f-aaa7-e05c08282e8e)": unmounted volumes=[shared-mount], unattached volumes=[geekadm-net deployment-role-token-6tg9p shared-mount]: timed out waiting for the condition; skipping pod
Error syncing pod 521b43c8-319f-425f-aaa7-e05c08282e8e ("api-bf5869665-zpj4c_default(521b43c8-319f-425f-aaa7-e05c08282e8e)"), skipping: unmounted volumes=[shared-mount], unattached volumes=[geekadm-net deployment-role-token-6tg9p shared-mount]: timed out waiting for the condition
Manually mounting the nfs on any of the nodes work just fine: mount -t nfs <service ip>:/ /tmp/mnt
How can I further debug the issue? Are there any other logs I could look at besides kubelet?
If the pod gets kicked off the node because the mount is too slow, you may see messages like this in the logs. The kubelet even reports this issue in its logs.
Sample log from the kubelet:
Setting volume ownership for /var/lib/kubelet/pods/c9987636-acbe-4653-8b8d-aa80fe423597/volumes/kubernetes.io~gce-pd/pvc-fbae0402-b8c7-4bc8-b375-1060487d730d and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
Cause:
The pod.spec.securityContext.fsGroup setting causes the kubelet to run chown and chmod on all the files in the volumes mounted for the given pod. This can be very time-consuming for big volumes with many files.
By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted (from the documentation).
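For illustration, this is the kind of setting that triggers the recursive chown/chmod described above (the pod, image, and claim names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: api                        # placeholder
spec:
  securityContext:
    fsGroup: 2000                  # kubelet chowns/chmods every file in the mounted volumes to this group
  containers:
    - name: app
      image: example/app:latest    # placeholder
      volumeMounts:
        - name: shared-mount
          mountPath: /data
  volumes:
    - name: shared-mount
      persistentVolumeClaim:
        claimName: shared-mount    # placeholder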
Solution:
You can deal with it in the following ways.
Reduce the number of files in the volume.
Stop using the fsGroup setting.
Did you specify an NFS version when mounting from the command line? I had the same issue on AKS, but inspired by https://stackoverflow.com/a/71789693/1382108 I checked the NFS versions. I noticed my PV had vers=3. When I tried mounting from the command line with mount -t nfs -o vers=3 the command just hung; with vers=4.1 it worked immediately. I changed the version in my PV and the next Pod worked just fine.
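For reference, a minimal sketch of where that version setting lives in the PV (server, path, and size are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                 # placeholder
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - vers=4.1                 # was vers=3; 4.1 mounted immediately in my case
  nfs:
    server: 10.0.0.10          # placeholder NFS service IP
    path: /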

Kubernetes Node NotReady: ContainerGCFailed / ImageGCFailed context deadline exceeded

Worker node is getting into "NotReady" state with an error in the output of kubectl describe node:
ContainerGCFailed rpc error: code = DeadlineExceeded desc = context deadline exceeded
Environment:
Ubuntu, 16.04 LTS
Kubernetes version: v1.13.3
Docker version: 18.06.1-ce
There is a closed issue about this on the Kubernetes GitHub (k8 git), which was closed on the grounds that it is related to a Docker issue.
Steps done to troubleshoot the issue:
kubectl describe node - the error in question was found (root cause isn't clear).
journalctl -u kubelet - shows this related message:
skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
it is related to this open k8 issue Ready/NotReady with PLEG issues
Check node health on AWS with cloudwatch - everything seems to be fine.
journalctl -fu docker.service: checked Docker for errors/issues -
the output doesn't show any errors related to that.
systemctl restart docker - after restarting docker, the node gets into "Ready" state but in 3-5 minutes becomes "NotReady" again.
It all seems to start when I deploy more pods to the node (close to its resource capacity, but I don't think that is a direct dependency) or stop/start instances (after a restart it is OK, but after some time the node becomes NotReady again).
Questions:
What is the root cause of the error?
How to monitor that kind of issue and make sure it doesn't happen?
Are there any workarounds to this problem?
What is the root cause of the error?
From what I was able to find it seems like the error happens when there is an issue contacting Docker, either because it is overloaded or because it is unresponsive. This is based on my experience and what has been mentioned in the GitHub issue you provided.
How to monitor that kind of issue and make sure it doesn't happen?
There seems to be no established mitigation or monitoring for this, but the best approach would be to make sure your node is not overloaded with pods. I have seen that it is not always reflected in the disk or memory pressure of the Node - it is probably a matter of not enough resources being left for Docker, so it fails to respond in time. The proposed solution is to set limits for your pods to prevent overloading the Node.
In the case of managed Kubernetes in GKE (other vendors probably have a similar feature) there is a feature called node auto-repair, which will not prevent node pressure or Docker-related issues, but when it detects an unhealthy node it can drain and redeploy the node(s).
If you already have requests and limits set, it seems the best way to make sure this does not happen is to increase the memory resource requests for pods. This will mean fewer pods per node, and the actual used memory on each node should be lower.
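A minimal sketch of such requests and limits on a container (the numbers are placeholders to size for your workload):
apiVersion: v1
kind: Pod
metadata:
  name: app                       # placeholder
spec:
  containers:
    - name: app
      image: example/app:latest   # placeholder
      resources:
        requests:
          memory: "512Mi"         # raising this packs fewer pods per node
          cpu: "250m"
        limits:
          memory: "1Gi"
          cpu: "500m"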
Another way of monitoring/recognizing this is to SSH into the node and check the memory, the processes with ps, the syslog, and the output of docker stats --all.
I had the same issue. I cordoned the node and evicted the pods, then rebooted the server; the node automatically came back into the Ready state.
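For reference, the standard commands for that sequence (the node name is a placeholder):
kubectl cordon <node-name>                      # stop scheduling new pods onto the node
kubectl drain <node-name> --ignore-daemonsets   # evict the running pods
# reboot the node, then:
kubectl uncordon <node-name>                    # allow scheduling again once it is Ready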

Is it possible to add swap space on kubernetes nodes?

I am trying to add swap space on a Kubernetes node to prevent out-of-memory issues. Is it possible to add swap space on a node (previously known as a minion)? If so, what procedure should I follow, and how does it affect pod acceptance?
Kubernetes doesn't support container memory swap. Even if you add swap space, kubelet will create the container with --memory-swappiness=0 (when using Docker). There have been discussions about adding support, but the proposal was not approved. https://github.com/kubernetes/kubernetes/issues/7294
Technically you can do it.
There is a broad discussion about whether to give K8s users the privilege of deciding to enable swap or not.
I'll first refer directly to your question and then continue with the discussion.
If you run K8S on Kubeadm and you've added swap to your nodes - follow the steps below:
1 ) Reset the current cluster setup and then add the fail-swap-on=false flag to the kubelet configuration:
kubeadm reset
echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
(*) If you're running on Ubuntu, replace the path for the kubelet config from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to /etc/default/kubelet.
2 ) Reload the service:
systemctl daemon-reload
systemctl restart kubelet
3 ) Initialize the cluster settings again and ignore the swap error:
kubeadm init --ignore-preflight-errors Swap
OR:
If you prefer working with kubeadm-config.yaml:
1 ) Add the failSwapOn flag:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false # <---- Here
2 ) And run:
kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=Swap
Returning to the discussion of whether to allow swapping or not.
On the one hand, K8S is very clear about this - the kubelet is not designed to support swap - you can see it mentioned in the kubeadm link I shared above:
Swap disabled. You MUST disable swap in order for the kubelet to work
properly
On the other hand, you can see users reporting that there are cases where their deployments require swap to be enabled.
I would suggest that you first try without enabling swap.
(Not because swap is a function that the kernel can't manage, but merely because it is not recommended by Kube - probably related to the design of Kubelet).
Make sure that you are familiar with the features that K8S provides to prioritize memory of pods:
1 ) The 3 QoS classes - make sure that your high-priority workloads are running with the Guaranteed (or at least Burstable) class; see the sketch after this list.
2 ) Pod Priority and Preemption.
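A sketch combining both: a PriorityClass plus a pod that lands in the Guaranteed class because requests equal limits for every container (names and values are placeholders):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: important                 # placeholder
value: 100000                     # higher value = higher priority
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app              # placeholder
spec:
  priorityClassName: important
  containers:
    - name: app
      image: example/app:latest   # placeholder
      resources:
        requests:                 # requests == limits for every container => Guaranteed QoS
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "1Gi"
          cpu: "500m"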
I would recommend also reading Evicting end-user Pods:
If the kubelet is unable to reclaim sufficient resource on the node,
kubelet begins evicting Pods.
The kubelet ranks Pods for eviction first by whether or not their
usage of the starved resource exceeds requests, then by Priority, and
then by the consumption of the starved compute resource relative to
the Pods' scheduling requests.
As a result, kubelet ranks and evicts Pods in the following order:
BestEffort or Burstable Pods whose usage of a starved resource exceeds its request. Such pods are ranked by Priority, and then usage
above request.
Guaranteed pods and Burstable pods whose usage is beneath requests are evicted last. Guaranteed Pods are guaranteed only when requests
and limits are specified for all the containers and they are equal.
Such pods are guaranteed to never be evicted because of another Pod's
resource consumption. If a system daemon (such as kubelet, docker, and
journald) is consuming more resources than were reserved via
system-reserved or kube-reserved allocations, and the node only has
Guaranteed or Burstable Pods using less than requests remaining, then
the node must choose to evict such a Pod in order to preserve node
stability and to limit the impact of the unexpected consumption to
other Pods. In this case, it will choose to evict pods of Lowest
Priority first.
Good luck (:
A few relevant discussions:
Kubelet/Kubernetes should work with Swap Enabled
[ERROR Swap]: running with swap on is not supported. Please disable swap
Kubelet needs to allow configuration of container memory-swap
Kubernetes 1.22 introduced swap as an alpha feature.
More at:
https://kubernetes.io/blog/2021/08/09/run-nodes-with-swap-alpha/
https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory
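Based on that blog post, enabling it is an alpha-level kubelet config change roughly like the following (behaviour may change in later releases):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap   # or UnlimitedSwap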