I recently had to disable the fluentd-elasticsearch Kubernetes addon because it ended up eating all the disk space on one of my minions which in turn prevented an important pod from starting.
I am now worried that the monitoring addon might end up eating disk space as well. Is the monitoring data (stored in influxdb) ever garbage collected or does it keep eating away at disk space? Are there other Kubernetes components that eat up disk space indefinitely?
I set up my cluster using ./cluster/kube-up.sh on AWS.
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
To answer your specific question: you should look for pods using emptyDir (kubectl get po --all-namespaces -o yaml | grep emptyDir) or hostPath.
As a general policy:
If you use a PV, usage is limited by the space available on that PV. Such a PV is usually backed by a cloud provider or NFS and is mounted over the network.
If you use "emptyDir", your storage is taken out of the kubelet's --root-dir. Depending on the distribution/setup, this might be an isolated partition, in which case a rogue app filling it cannot take down the whole node (see the sizeLimit sketch after this list).
If you use hostPath, you are explicitly choosing a path on the node. If you're running with enough privileges to claim sensitive portions of the filesystem and fill them with data, you can take the node down.
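One way to contain the emptyDir case is to set a sizeLimit on the volume; this is a minimal sketch (the pod name, image and limit are placeholders, not values from the question), where the kubelet evicts the pod once the volume grows past the limit:
---
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo            # placeholder name
spec:
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi            # pod is evicted if the volume exceeds this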
There's work in the logging front to make this better: https://github.com/kubernetes/kubernetes/issues/17183
There is also image/container GC, which kicks in if your disk usage is above a threshold. You should check whether the version of Kubernetes you're using has known GC issues (these will be mentioned in the release notes).
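To see whether GC or eviction is already acting on a node, a couple of standard commands help (the node name is a placeholder):
# Check the node's DiskPressure condition and allocated resources
kubectl describe node <node-name>
# Recent node-level events (image GC failures, evictions, etc.)
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node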
Related
I am looking for best practices to avoid the Pod The node had condition: [DiskPressure] error.
What I'm doing is a full database export of all our views, which is massive. At some point the pod runs into the DiskPressure error and Kubernetes decides to evict and kill it.
What would be the best practices to handle this? There is 7GB of free space, which maybe is not enough. Is just raising that the best way to go about it, or are there other mechanisms to handle this type of work?
Hope my question makes sense
The error message Pod The node had a condition: [DiskPressure]
happens when the kubelet won't admit new pods on the node, which means they won't start. Node disk pressure means that the disks attached to the node are under pressure (running low on space or inodes).
You might run into node disk pressure because Kubernetes has not cleaned up unused images, or because logs are building up: if you have a long-running container with a lot of logs, they may build up enough to exhaust the capacity of the node disk.
Troubleshooting Node Disk Pressure:
To troubleshoot node disk pressure, you need to figure out which files are taking up the most space. You can either manually SSH into each Kubernetes node, or deploy a disk-checking DaemonSet on every node; you can get one from this link (a minimal sketch also follows below).
After installing it, you can look at the logs of the running pods by executing kubectl logs -l app=disk-checker. You will see a list of files and their sizes, which will give you greater insight into what is taking up space on your nodes.
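If you don't have the linked DaemonSet at hand, a minimal sketch along the same lines looks like this (the disk-checker name, the busybox image and the du invocation are assumptions, not the exact manifest behind the link); it mounts the node's root filesystem read-only and periodically prints the biggest directories:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
  labels:
    app: disk-checker
spec:
  selector:
    matchLabels:
      app: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      containers:
      - name: disk-checker
        image: busybox            # placeholder image
        # Every 5 minutes, list the 20 biggest directories on the node's root filesystem
        command: ["sh", "-c", "while true; do du -x -d 2 /host 2>/dev/null | sort -n | tail -n 20; sleep 300; done"]
        volumeMounts:
        - name: host-root
          mountPath: /host
          readOnly: true
      volumes:
      - name: host-root
        hostPath:
          path: /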
Possible solutions:
If the issue is caused by necessary application data, deleting files is not an option; in that case, you will have to increase the size of the node disks to ensure that there's sufficient room for the application files.
Alternatively, if you find applications that have produced a lot of files that are no longer needed, simply delete the unnecessary files.
Adding more for your information:
1) To avoid DiskPressure crashing the node:
DiskPressure triggers when either the node's root filesystem or its image filesystem satisfies an eviction threshold for available disk space or inodes, which in turn causes pod eviction; refer to the Node conditions documentation.
Based on the Node conditions, you should consider adjusting your kubelet's parameters, --image-gc-high-threshold and --image-gc-low-threshold, so that there is always enough space for normal operations (the older --low-diskspace-threshold-mb flag has been superseded by eviction thresholds), and consider provisioning more space for your nodes, depending on your requirements.
2) To reduce the DiskPressure condition
Use the kubelet command line argument (or its config-file equivalent, sketched below):
--eviction-hard mapStringString: A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction.
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See Set Kubelet parameters via a config file for more information.
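As an illustration of the config-file route the deprecation notice points to, a kubelet configuration fragment might look like the following (the threshold values are examples, not recommendations):
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hard eviction thresholds: pods get evicted once these are crossed
evictionHard:
  nodefs.available: "10%"
  imagefs.available: "15%"
  nodefs.inodesFree: "5%"
# Image garbage collection: start deleting unused images at 85% disk
# usage and stop once usage drops back below 80%
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80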
One of our containers is using ephemeral storage but we don't know why. The app running in the container shouldn't be writing anything to the disk.
We set the storage limit to 20MB, but the pod is still being evicted. We could increase the limit, but this seems like a band-aid fix.
We're not sure what or where this container is writing to, and I'm not sure how to check that. When a container is evicted, the only information I can see is that the container exceeded its storage limit.
Is there an efficient way to know what's being written, or is our only option to comb through the code?
Adding details to the topic.
Pods use ephemeral local storage for scratch space, caching, and logs.
Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage has been reclaimed.
The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers.
For container-level isolation, if a container's writable layer and log usage exceeds its storage limit, the kubelet marks the Pod for eviction.
For pod-level isolation the kubelet works out an overall Pod storage limit by summing the limits for the containers in that Pod. In this case, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod for eviction.
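For the container- and pod-level isolation described above, ephemeral storage is requested and limited like any other resource; a minimal sketch (the names and sizes are placeholders):
---
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo            # placeholder name
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    resources:
      requests:
        ephemeral-storage: "500Mi"
      limits:
        # The kubelet marks the Pod for eviction if the container's writable
        # layer, logs and emptyDir share exceed this limit
        ephemeral-storage: "1Gi"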
To see what files have been written since the pod started, you can run:
find / -mount -newer /proc -print
This will output a list of files modified more recently than '/proc'.
/etc/nginx/conf.d
/etc/nginx/conf.d/default.conf
/run/secrets
/run/secrets/kubernetes.io
/run/secrets/kubernetes.io/serviceaccount
/run/nginx.pid
/var/cache/nginx
/var/cache/nginx/fastcgi_temp
/var/cache/nginx/client_temp
/var/cache/nginx/uwsgi_temp
/var/cache/nginx/proxy_temp
/var/cache/nginx/scgi_temp
/dev
Also, try without the '-mount' option.
To see if any new files are being modified, you can run some variations of the following command in a Pod:
while true; do rm -f a; touch a; sleep 30; echo "monitoring..."; find / -mount -newer a -print; done
and check the file size using the du -h someDir command.
Also, as @gohm'c pointed out in his answer, you can use sidecar/ephemeral debug containers.
Read more about Local ephemeral storage here.
We're not sure what or where this container is writing to, and I'm not sure how to check that.
Try looking into the container's volumeMounts section for mounts backed by emptyDir, then add a sidecar container (e.g. busybox) to start a shell session where you can check the path. If your cluster supports ephemeral debug containers, you don't need the sidecar container.
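If ephemeral debug containers are available, a typical session looks roughly like this (the pod and container names are placeholders; the /proc/1/root trick assumes the debug container shares the target container's process namespace):
# Attach an ephemeral busybox container to the running pod, sharing the
# target container's process namespace
kubectl debug -it <pod-name> --image=busybox --target=<container-name>
# Inside the debug shell, list the biggest directories of the target
# container's root filesystem
du -x -d 2 /proc/1/root 2>/dev/null | sort -n | tail -n 20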
I am not really sure if this is a Prometheus issue, a Longhorn issue, or maybe a combination of the two.
Setup:
Kubernetes K3s v1.21.9+k3s1
Rancher Longhorn Storage Provider 1.2.2
Prometheus Helm Chart 32.2.1 and image: quay.io/prometheus/prometheus:v2.33.1
Problem:
Infinitely growing PV in Longhorn, even over the defined max size. Currently using 75G on a 50G volume.
Description:
I have a really small 3 node cluster with not too many deployments running. Currently only one "real" application and the rest is just kubernetes system stuff so far.
Apart from etcd, I am using all the default scraping rules.
The PV is filling up a bit more than 1 GB per day, which seems fine to me.
The problem is that, for whatever reason, the data used inside Longhorn is growing without bound. I have configured retention rules for the Helm chart with retention: 7d and retentionSize: 25GB, so the retentionSize should never be reached anyway.
When I log into the containers shell and do a du -sh in /prometheus, it shows ~8.7GB being used, which looks good to me as well.
The problem is that when I look at the Longhorn UI, the used space is growing all the time. The PV has existed for ~20 days now and is currently using almost 75GB of a defined maximum of 50GB. When I take a look at the Kubernetes node itself and inspect the folder which Longhorn uses to store its PV data, I see the same usage values as in the Longhorn UI, while inside the Prometheus container everything looks fine to me.
I hope someone has an idea what the problem could be. I have not experienced this issue with any other deployment so far; all the others behave well and actually decrease in used size when something inside the container gets deleted.
I have a local OpenNESS network edge cluster using Kubernetes as its infrastructure management.
I'm facing a disk pressure issue due to which pods are getting evicted or stuck in a CrashLoopBackOff state.
Also, the images on the worker node went missing (they got deleted automatically).
If I check the disk usage, I see 83% being used by /dev/sda4 or the overlay filesystem.
How can I solve this issue?
The attached image shows the disk usage.
Your disk usage chart reveals a lot of disk usage on the overlay filesystem, i.e. by the Docker containers' union filesystem. This suggests that you have some large containers running. Those might have been large to start with, or they might be writing data to the container filesystem while running.
To get to the bottom of this, you can either have a look at your monitoring (if present), or you can SSH into the affected node and try to identify the "guilty" pod with:
du --max-depth=1 /var/lib/docker/overlay2/ | sort -n
and a subsequent du | sort -n inside the biggest folder.
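If the node runs Docker, the largest overlay2 directories can then be mapped back to containers; a sketch, assuming the Docker CLI is available on the node:
# Print each running container's name together with its overlay2 upper
# directory, then match those paths against the large folders found above
docker ps -q | xargs docker inspect --format '{{.Name}} {{.GraphDriver.Data.UpperDir}}'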
I am trying to add swap space on a Kubernetes node to prevent it from running into out-of-memory issues. Is it possible to add swap space on a node (previously known as a minion)? If so, what procedure should I follow, and how does it affect the pod acceptance tests?
Kubernetes doesn't support container memory swap. Even if you add swap space, kubelet will create the container with --memory-swappiness=0 (when using Docker). There have been discussions about adding support, but the proposal was not approved. https://github.com/kubernetes/kubernetes/issues/7294
Technically you can do it.
There is a broad discussion about whether to give K8S users the privilege to decide on enabling swap or not.
I'll first refer directly to your question and then continue with the discussion.
If you run K8S with kubeadm and you've added swap to your nodes, follow the steps below:
1 ) Reset the current cluster setup and then add the fail-swap-on=false flag to the kubelet configuration:
kubeadm reset
echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
(*) If you're running on Ubuntu, replace the kubelet config path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with /etc/default/kubelet.
2 ) Reload the service:
systemctl daemon-reload
systemctl restart kubelet
3 ) Initialize the cluster settings again and ignore the swap error:
kubeadm init --ignore-preflight-errors Swap
OR:
If you prefer working with kubeadm-config.yaml:
1 ) Add the failSwapOn flag:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false # <---- Here
2 ) And run:
kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=Swap
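For completeness, if swap isn't actually enabled on the node yet, a common way to add it is via a swap file (the size and path are examples; this is plain Linux administration, not Kubernetes-specific):
# Create and enable a 2 GiB swap file
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab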
Returning to the discussion of whether to allow swapping or not.
On the one hand, K8S is very clear about this - the kubelet is not designed to support swap - you can see it mentioned in the kubeadm link I shared above:
Swap disabled. You MUST disable swap in order for the kubelet to work
properly
On the other hand, you can see users reporting cases where their deployments require swap to be enabled.
I would suggest that you first try without enabling swap.
(Not because swap is something the kernel can't manage, but merely because it is not recommended by Kubernetes - probably related to the design of the kubelet.)
Make sure that you are familiar with the features that K8S provides to prioritize memory of pods:
1 ) The 3 QoS classes - make sure that your high-priority workloads are running with the Guaranteed (or at least Burstable) class (a minimal Guaranteed pod sketch follows this list).
2 ) Pod Priority and Preemption.
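A pod only gets the Guaranteed QoS class when every container sets requests equal to limits for both CPU and memory; a minimal sketch (names and sizes are placeholders):
---
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo           # placeholder name
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "500m"               # equal to requests => Guaranteed QoS
        memory: "512Mi"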
I would recommend also reading Evicting end-user Pods:
If the kubelet is unable to reclaim sufficient resource on the node,
kubelet begins evicting Pods.
The kubelet ranks Pods for eviction first by whether or not their
usage of the starved resource exceeds requests, then by Priority, and
then by the consumption of the starved compute resource relative to
the Pods' scheduling requests.
As a result, kubelet ranks and evicts Pods in the following order:
BestEffort or Burstable Pods whose usage of a starved resource exceeds its request. Such pods are ranked by Priority, and then usage
above request.
Guaranteed pods and Burstable pods whose usage is beneath requests are evicted last. Guaranteed Pods are guaranteed only when requests
and limits are specified for all the containers and they are equal.
Such pods are guaranteed to never be evicted because of another Pod's
resource consumption. If a system daemon (such as kubelet, docker, and
journald) is consuming more resources than were reserved via
system-reserved or kube-reserved allocations, and the node only has
Guaranteed or Burstable Pods using less than requests remaining, then
the node must choose to evict such a Pod in order to preserve node
stability and to limit the impact of the unexpected consumption to
other Pods. In this case, it will choose to evict pods of Lowest
Priority first.
Good luck (:
A few relevant discussions:
Kubelet/Kubernetes should work with Swap Enabled
[ERROR Swap]: running with swap on is not supported. Please disable swap
Kubelet needs to allow configuration of container memory-swap
Kubernetes 1.22 introduced swap as an alpha feature.
More at:
https://kubernetes.io/blog/2021/08/09/run-nodes-with-swap-alpha/
https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory
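With the 1.22 alpha, the rough shape of the kubelet configuration is the following (field names are from the alpha docs linked above; values and defaults may change while the feature is alpha):
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false                 # let the kubelet start on a node with swap enabled
featureGates:
  NodeSwap: true                  # alpha feature gate in Kubernetes 1.22
memorySwap:
  swapBehavior: UnlimitedSwap     # or LimitedSwap (the alpha default)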