Bitnami Redis Cluster Helm Chart unable to bind to persistent volumes - kubernetes

I am doing some testing that includes the Redis Cluster Bitnami Helm Chart. However, recent changes to the chart mean that I can no longer set the persistence option to false. This is highly irritating, as the cluster is now stuck in Pending status with the failure message "0/5 nodes are available: 5 node(s) didn't find available persistent volumes to bind". I assume this is because it is attempting to fulfill outstanding PVCs but cannot find a volume. Since this is just for testing and I do not need to persist the data to disk, is there a way of disabling this or making a dummy volume? If not, what is the easiest way around this?

As Franxi mentioned in the comments above (and backed up with the linked PR), there is no way to do this with a dummy volume. The closest solution for you is to use an emptyDir volume.
Note this:
Depending on your environment, emptyDir volumes are stored on whatever medium backs the node, such as disk, SSD, or network storage. However, if you set the emptyDir.medium field to "Memory", Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write count against your container's memory limit.
Examples:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
Example with emptyDir.medium field:
...
volumes:
- name: ram-disk
  emptyDir:
    medium: "Memory"
You can also set a size limit:
Enable kubelets to determine the size limit for memory-backed volumes (mainly emptyDir volumes).
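A limit can also be set directly on the volume with the sizeLimit field; a minimal sketch (the volume name and value here are only illustrative):
...
volumes:
- name: ram-disk
  emptyDir:
    medium: "Memory"
    # anything written beyond this limit gets the pod evicted
    sizeLimit: 128Mi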

Related

When does the kubelet report that the node is in DiskPressure?

I know that the kubelet reports that the node is in DiskPressure if there is not enough space on the node, but I want to know the exact threshold that triggers DiskPressure.
Please point me to the kubelet source code related to this if you can, or to the official k8s documentation (or anything else that covers it).
Thanks again!!
The kubelet running on the node reports disk pressure based on the hard or soft eviction threshold values that have been set; if nothing is set, the default values apply. Please refer to the Kubernetes documentation on node-pressure eviction.
Below are the eviction signals used for the DiskPressure condition:
DiskPressure - nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree
Disk pressure is a condition indicating that a node is using too much disk space or is using disk space too fast, according to the thresholds you have set in your Kubernetes configuration.
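To see whether a node currently reports the condition, you can check it directly with kubectl (the node name below is a placeholder):
# conditions table for one node
kubectl describe node <node-name> | grep DiskPressure
# or the DiskPressure condition across all nodes
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'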
A DaemonSet can deploy a pod to every node in a single step. Like Deployments, DaemonSets must be applied using kubectl before they can take effect.
Since Kubernetes nodes run Linux, finding out what is using the disk is easily done with the du command: you can either manually SSH into each Kubernetes node, or use a DaemonSet such as the following:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
  labels:
    app: disk-checker
spec:
  selector:
    matchLabels:
      app: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        image: busybox
        imagePullPolicy: IfNotPresent
        name: disk-checker
        command: ["/bin/sh"]
        args: ["-c", "du -a /host | sort -n -r | head -n 20"]
        volumeMounts:
        - name: host
          mountPath: "/host"
      volumes:
      - name: host
        hostPath:
          path: "/"
Available disk space and inodes on either the node's root filesystem or image filesystem have satisfied an eviction threshold; check the complete list of Node Conditions for more details.
Ways to set kubelet options:
1) Command-line options like --eviction-hard.
2) A kubelet config file (see the sketch below).
3) More recently, dynamic kubelet configuration.
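As a sketch of option 2, the thresholds can be declared in a kubelet config file. The values below are purely illustrative (the documented defaults are roughly nodefs.available<10%, imagefs.available<15%, nodefs.inodesFree<5%), so adjust them for your own nodes:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft:
  nodefs.available: "15%"
evictionSoftGracePeriod:
  # every soft threshold needs a matching grace period
  nodefs.available: "2m"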
When you run into node disk pressure, your first suspects should be an issue with garbage collection or runaway log files. Of course, the better answer here is to clean up unused files and free up some disk space.
So monitor your clusters, get notified of any node disks approaching pressure, and get the issue resolved before the kubelet starts evicting pods on that node.
Edit:
AFAIK there is no magic trick to knowing the exact threshold of DiskPressure. You need to start with reasonable values (limits & requests) and refine them by trial and error.
Refer to this SO answer for more information on how to set the DiskPressure thresholds.

How to monitor disk usage of persistent volumes of all storageclasses using Prometheus?

I am using the Kubernetes Persistent Volumes dashboard in Grafana to view disk usage for my PVCs. This dashboard uses the kubelet_volume_stats_used_bytes metric to fetch the data for the PVCs, and I am able to visualize it as well.
But it does not display the PVCs which use EFS as the storage class.
I found some stackoverflow answers and comments regarding the same, but none of them included a solution.
How to monitor kubernetes persistence volume claim i.e disk usage
How to monitor disk usage of kubernetes persistent volumes?
So my question is how do get usage metrics for EFS PVCs?
Or to make it more generic - how to scrape PVC metrics for all storageclasses?
EFS PVCs don't have any size limits. That's why the traditional kubelet_volume_stats metrics don't work for checking their disk usage.
Instead, I was able to use a logging sidecar to get this data.
Attach the following sidecar to the pod which is using the PVC myPvcName:
containers:
- name: pvc-logging
  image: busybox:latest
  command: [ "/bin/sh", "-c", "--" ]
  args: [ "while true; do du -sh /myPvc ; sleep 5; done;" ]
  volumeMounts:
  - mountPath: /myPvc
    name: myPvc
    readOnly: true
volumes:
- name: myPvc
  persistentVolumeClaim:
    claimName: myPvcName
This sidecar:
- has the myPvcName PVC mounted at the path /myPvc
- checks the space used by this PVC every 5 seconds (you can change this interval)
Its logs look like:
10.3G    /myPvc
10.3G    /myPvc
10.4G    /myPvc
These can be parsed using whatever logging stack you use - in my case, it was the elastic stack.
But I haven't yet looked into how I can send these logs into prometheus so I can visualise them on Grafana.
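If you do want these numbers in Prometheus rather than only in logs, one possible direction (only a sketch; the metric name, file path, and second volume are made up for illustration) is to have the sidecar write the value in Prometheus exposition format to a file picked up by node-exporter's textfile collector, i.e. a directory shared with wherever --collector.textfile.directory points:
containers:
- name: pvc-metrics
  image: busybox:latest
  command: [ "/bin/sh", "-c", "--" ]
  # du -sk reports usage in kilobytes; rewrite it as a Prometheus gauge every 60s
  args: [ "while true; do echo pvc_used_kilobytes $(du -sk /myPvc | cut -f1) > /textfile/pvc_usage.prom; sleep 60; done;" ]
  volumeMounts:
  - mountPath: /myPvc
    name: myPvc
    readOnly: true
  - mountPath: /textfile   # hypothetical: shared with node-exporter's textfile collector directory
    name: textfile-dir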

How do I format and mount a new Google Compute Engine disk in a GKE pod?

I have created a new disk in Google Compute Engine.
gcloud compute disks create --size=10GB --zone=us-central1-a dane-disk
It says I need to format it, but I have no idea how I could mount/access the disk.
gcloud compute disks list
NAME            LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
notowania-disk  us-central1-a  zone            10       pd-standard  READY
New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
I tried the instructions above, but lsblk is not showing the disk at all.
Do I need to create a VM and somehow attach the disk to it in order to use it? My goal was to mount the disk as a persistent GKE volume independent of the VM (the last GKE upgrade caused the VM to be recreated, with data loss).
Thanks for the clarification of what you are trying to do in the comments.
I have 2 different answers here.
The first is that my testing shows that the Kubernetes GCE PD documentation is exactly right, and the warning about formatting seems like it can be safely ignored.
If you just issue:
gcloud compute disks create --size=10GB --zone=us-central1-a my-test-data-disk
And then use it in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-test-data-disk
      fsType: ext4
It will be formatted when it is mounted. This is likely because the fsType parameter tells the system how to format the disk. You don't need to do anything with a separate GCE instance. The disk is retained even if you delete the pod or even the entire cluster; it is not reformatted on subsequent uses and the data is kept around.
So, the warning message from gcloud is confusing, but can be safely ignored in this case.
Now, in order to dynamically create a persistent volume based on GCE PD that isn't automatically deleted, you will need to create a new StorageClass that sets the Reclaim Policy to Retain, and then create a PersistentVolumeClaim based on that StorageClass. This also keeps basically the entire operation inside of Kubernetes, without needing to do anything with gcloud. Likewise, a similar approach is what you would want to use with a StatefulSet as opposed to a single pod, as described here.
Most of what you are looking to do is described in this GKE documentation about dynamically allocating PVCs as well as the Kubernetes StorageClass documentation. Here's an example:
gce-pd-retain-storageclass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-retained
reclaimPolicy: Retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
The above storage class is basically the same as the 'standard' GKE storage class, except with the reclaimPolicy set to Retain.
pvc-demo.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gce-pd-retained
  resources:
    requests:
      storage: 10Gi
Applying the above will dynamically create a disk that will be retained when you delete the claim.
And finally a demo-pod.yaml that mounts the PVC as a volume (this is really a nonsense example using nginx, but it demonstrates the syntax):
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc-demo-disk
Now, if you apply these three in order, you'll get a container running using the PersistentVolumeClaim which has automatically created (and formatted) a disk for you. When you delete the pod, the claim keeps the disk around. If you delete the claim the StorageClass still keeps the disk from being deleted.
Note that the PV that is left around after this won't be automatically reused, as the data is still on the disk. See the Kubernetes documentation about what you can do to reclaim it in this case. Really, this mostly says that you shouldn't delete the PVC unless you're ready to do work to move the data off the old volume.
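For example, once you have dealt with the data (or want to deliberately reuse it), one common manual step, shown here with a placeholder PV name, is to clear the stale claimRef so a Released volume becomes Available again:
# list PVs and find the one stuck in the Released phase
kubectl get pv
# remove the old claim reference so a new PVC can bind the volume
kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'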
Note that these disks will continue to exist even when the entire GKE cluster is deleted (and you will continue to be billed for them until you delete them).
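If you ever need to clean them up by hand, something along these lines works (the disk name and zone are placeholders; dynamically provisioned disks get generated names, so list them first):
# find leftover disks
gcloud compute disks list
# delete one you no longer need (this permanently destroys the data on it)
gcloud compute disks delete <disk-name> --zone=us-central1-a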

How to write to gcePersistentDisk while mounted to multiple Kubernetes pod

I'm currently mounting a gcePersistentDisk to each pod in my kubernetes deployment. Since I want multiple pods to read from the disk, I have to mount it as read only. My deployment yaml file looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
  ...
  ...
  template:
    ...
    ...
    spec:
      containers:
      - image: ...
        ...
        ...
        volumeMounts:
        - mountPath: /my-volume
          name: my-volume
          readOnly: true
        ...
        ...
      volumes:
      - name: my-volume
        gcePersistentDisk:
          pdName: my-disk
          fsType: ext4
          readOnly: true
Right now, in order to write new stuff to the disk, I need to scale the deployment to 0, then start a kubernetes job that mounts the disk to a single pod that has read / write access, write to the disk and then scale the deployment up again.
Is there a way I can do this without taking down all my pods?
Is it possible/recommended to do something like "hot-swapping" persistent disks in kubernetes deployments?
Looking at the requirements:
1) No other choice with the current use case; the pods need to be scaled down every time.
2) You can use a different type of PV and the ReadWriteMany access mode [1] & [2] (see the sketch after the references below).
3) Hot-swap: do you mean changing the deployment (kubectl apply)? Not sure, this needs clarification.
4) Another option is to use NFS [2], but that is obviously a whole different approach.
[1] https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes
[2] Access Modes https://kubernetes.io/docs/concepts/storage/persistent-volumes/
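As a sketch of option 2, a claim requesting ReadWriteMany might look like the following. The storage class name here is hypothetical and must map to a provisioner that actually supports RWX (e.g. Filestore/NFS on GKE); GCE PDs themselves only support many readers or a single writer:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: my-rwx-storageclass   # hypothetical: must be backed by an RWX-capable provisioner
  resources:
    requests:
      storage: 100Gi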

Kubernetes cannot attach AWS EBS as volume. Probably due to cloud provider issue

I have a Kubernetes cluster up and running on AWS. Now, when I try to attach an AWS EBS volume to a pod, I get a "special device does not exist" error.
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxxxxx does not exist
I did some research and found that the correct AWS EBS device path should be like this format:
/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx
My suspicion is that this might be because I set up the Kubernetes cluster according to this tutorial and did not set the cloud provider, and therefore the AWS device "does not exist". I wonder whether my suspicion is correct, and if so, how I can set the cloud provider after the cluster is already up and running.
You need to set the cloud provider to properly mount an EBS volume. To do that after the fact, set --cloud-provider=aws in the following services:
controller-manager
apiserver
kubelet
Restart everything and try mounting again.
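How you pass the flag depends on how the cluster was installed. As a sketch, on a kubeadm-style setup the control-plane components run as static pods under /etc/kubernetes/manifests/, so the flag is added to the container command in kube-apiserver.yaml and kube-controller-manager.yaml (and passed to the kubelet through its own unit or config file); the fragment below is illustrative, not a complete manifest:
# fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm layout assumed)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
    # ...keep the existing flags...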
An example pod which mounts an EBS volume explicitly may look like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
The Kubernetes version is an important factor here. EBS mounts were experimental in 1.2.x; I tried it then but without success. I haven't tried it again in the latest releases, but be sure to check the IAM roles on the k8s VMs to make sure they have the rights to provision EBS disks.