I need to run pods on multiple nodes with a very large (700GB) read-only dataset in Kubernetes. I tried using a ReadOnlyMany volume, but it fails in a multi-node setup and was generally very unstable.
Is there a way for pods to create a new persistent disk from a snapshot, attach it to the pod, and destroy it when the pod is destroyed? This would allow me to update the snapshot once in a while with new data.
You can manually provision a persistent disk from an existing snapshot on GCP:
gcloud compute disks create my-data-disk --size=700GB --source-snapshot=<snapshot-name>
Then use it in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
The GCE storage class doesn't support snapshots, so unfortunately you can't do it with PVCs.
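As for destroying the disk when the pod is destroyed: with manually provisioned disks that cleanup is not automatic, so you'd remove the disk yourself once nothing mounts it. A minimal sketch (the zone placeholder is yours to fill in):
gcloud compute disks delete my-data-disk --zone=<zone>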
Hope it helps.
I have a pod that needs to create a lot of jobs.
I'd like to share a read-only folder.
How can I do it?
Several ideas I can imagine (I'm a newbie to Kubernetes):
Ephemeral volumes seem like a good choice, but I've read they cannot be shared with another pod.
I think NFS is overkill for my needs.
Maybe I could build a data-only Docker image, but that's a deprecated Docker feature.
kubectl cp to copy the data from the base pod to the pod in the job.
What would be the best solution for this?
You can use a PersistentVolume and mount it as a read-only volume inside the pod via a PersistentVolumeClaim. To mount a read-only volume, set .spec.containers[*].volumeMounts[*].readOnly to true.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
      readOnly: true
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
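The claim referenced above must exist before the Pod starts. A minimal sketch of what myclaim could look like (the size and access mode below are assumptions; with no storageClassName set, the cluster's default storage class is used):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi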
Check out these links:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
After a Kubernetes upgrade from 1.18.13 to 1.19.5 I randomly get the error below for some pods. After some time the pod fails to start (it's a simple pod that doesn't belong to a deployment):
Warning FailedMount 99s kubelet Unable to attach or mount volumes: unmounted volumes=[red-tmp data logs docker src red-conf], unattached volumes=[red-tmp data logs docker src red-conf]: timed out waiting for the condition
On 1.18 we didn't have such an issue, and during the upgrade K8s didn't show any errors or incompatibility messages.
There are no additional logs from any other K8s components (I tried increasing the verbosity level for kubelet).
Everything is fine with disk space and other host metrics like load average and RAM.
There are no network storages, only local data.
PVs and PVCs are created before the pods and we don't change them.
I tried higher K8s versions, but with no luck.
We have a pretty standard setup without any special customizations:
CNI: Flannel
CRI: Docker
Only one node as master and worker
16 cores and 32G RAM
Example of pod definition:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: provision
    ver: latest
  name: provision
  namespace: red
spec:
  containers:
  - args:
    - wait
    command:
    - provision.sh
    image: app-tests
    imagePullPolicy: IfNotPresent
    name: provision
    volumeMounts:
    - mountPath: /opt/app/be
      name: src
    - mountPath: /opt/app/be/conf
      name: red-conf
    - mountPath: /opt/app/be/tmp
      name: red-tmp
    - mountPath: /var/lib/app
      name: data
    - mountPath: /var/log/app
      name: logs
    - mountPath: /var/run/docker.sock
      name: docker
  dnsConfig:
    options:
    - name: ndots
      value: "2"
  dnsPolicy: ClusterFirst
  enableServiceLinks: false
  restartPolicy: Never
  volumes:
  - hostPath:
      path: /opt/agent/projects/app-backend
      type: Directory
    name: src
  - name: red-conf
    persistentVolumeClaim:
      claimName: conf
  - name: red-tmp
    persistentVolumeClaim:
      claimName: tmp
  - name: data
    persistentVolumeClaim:
      claimName: data
  - name: logs
    persistentVolumeClaim:
      claimName: logs
  - hostPath:
      path: /var/run/docker.sock
      type: Socket
    name: docker
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: red-conf
  labels:
    namespace: red
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /var/lib/docker/k8s/red-conf
  persistentVolumeReclaimPolicy: Retain
  storageClassName: red-conf
  volumeMode: Filesystem
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
  namespace: red
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: red-conf
  volumeMode: Filesystem
  volumeName: red-conf
The tmp, data, and logs PVs have the same setup as conf, apart from the path. They use separate folders:
/var/lib/docker/k8s/red-tmp
/var/lib/docker/k8s/red-data
/var/lib/docker/k8s/red-logs
Currently I don't have any clue how to diagnose the issue :(
I'd be glad to get advice. Thanks in advance.
You must be using local volumes. Follow the link below to understand how to create the storage class, PVs, and PVCs when you use local volumes:
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
I recommend starting the troubleshooting by reviewing the VolumeAttachment events against the node that has the PV tied to it; perhaps your volume is still linked to a node that was evicted and replaced by a new one.
You can use this command to check your PV name and status:
kubectl get pv
And then, to review which node holds the relevant VolumeAttachment, you can use the following command:
kubectl get volumeattachment
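To see the PV-to-node mapping at a glance, the same resource can also be printed with custom columns (the column names here are just labels chosen for this sketch):
kubectl get volumeattachment -o custom-columns=NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached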
Once you have the name of your PV and the node it is attached to, you can check whether the PV is tied to the correct node, or whether it is tied to a previous node that is not working or was removed (a node that gets evicted is replaced by a new available node from the pool). To see which nodes are ready and running, use this command:
kubectl get nodes
If you detect that your PV is tied to the node that no longer exists, you will need to delete the VolumeAttachment with the following command:
kubectl delete volumeattachment [csi-volumeattachment_name]
I have a three-node GCE cluster and a GKE deployment with three replicas of a single pod. I created the PV and PVC like so:
# Create a persistent volume for web content
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-content
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/usr/share/nginx/html"
---
# Request a persistent volume for web content
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-content-claim
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadOnlyMany]
  resources:
    requests:
      storage: 5Gi
They are referenced in the container spec like so:
spec:
  containers:
  - image: launcher.gcr.io/google/nginx1
    name: nginx-container
    volumeMounts:
    - name: nginx-content
      mountPath: /usr/share/nginx/html
    ports:
    - containerPort: 80
  volumes:
  - name: nginx-content
    persistentVolumeClaim:
      claimName: nginx-content-claim
Even though I created the volumes as ReadOnlyMany, only one pod can mount the volume at any given time. The rest give "Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE". How can I make it so all three replicas read the same web content from the same volume?
First I'd like to point out one fundamental discrepancy in your configuration. Note that when you use your PersistentVolumeClaim defined as in your example, you don't use your nginx-content PersistentVolume at all. You can easily verify it by running:
kubectl get pv
on your GKE cluster. You'll notice that apart from your manually created nginx-content PV, there is another one, which was automatically provisioned based on the PVC you applied.
Note that in your PersistentVolumeClaim definition you're explicitly referring to the default storage class, which has nothing to do with your manually created PV. Actually, even if you completely omit the annotation:
annotations:
  volume.alpha.kubernetes.io/storage-class: default
it will work exactly the same way, namely the default storage class will be used anyway. Using the default storage class on GKE means that GCE Persistent Disk will be used as your volume provisioner. You can read more about it here:
Volume implementations such as gcePersistentDisk are configured through StorageClass resources. GKE creates a default StorageClass for you which uses the standard persistent disk type (ext4). The default StorageClass is used when a PersistentVolumeClaim doesn't specify a StorageClassName. You can replace the provided default StorageClass with your own.
But let's move on to the solution of the problem you're facing.
Solution:
First, I'd like to emphasize that you don't have to use any NFS-like filesystems to achieve your goal.
If you need your PersistentVolume to be available in ReadOnlyMany mode, GCE Persistent Disk is a perfect solution that entirely meets your requirements.
It can be mounted in ro mode by many Pods at the same time and, what is even more important, by Pods scheduled on different GKE nodes. Furthermore, it's really simple to configure and it works on GKE out of the box.
In case you want to use your storage in ReadWriteMany mode, I agree that something like NFS may be the only solution as GCE Persistent Disk doesn't provide such capability.
Let's take a closer look at how we can configure it.
We need to start by defining our PVC. This step was actually already done by you, but you got lost a bit in the further steps. Let me explain how it works.
The following configuration is correct (as I mentioned, the annotations section can be omitted):
# Request a persistent volume for web content
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-content-claim
spec:
  accessModes: [ReadOnlyMany]
  resources:
    requests:
      storage: 5Gi
However I'd like to add one important comment to this. You said:
Even though I created the volumes as ReadOnlyMany, only one pod can mount the volume at any given time.
Well, actually you didn't. I know it may seem a bit tricky and somewhat surprising, but this is not how defining accessModes really works. In fact, it's a widely misunderstood concept. First of all, you cannot define access modes in a PVC in the sense of putting the constraints you want there. Supported access modes are an inherent feature of a particular storage type; they are already defined by the storage provider.
What you actually do in a PVC definition is request a PV that supports the particular access mode or access modes. Note that it's in the form of a list, which means you may provide many different access modes that you want your PV to support.
Basically it's like saying: "Hey! Storage provider! Give me a volume that supports ReadOnlyMany mode." You're asking this way for storage that will satisfy your requirements. Keep in mind, however, that you can be given more than you ask for. And this is also our scenario when asking for a PV that supports ReadOnlyMany mode in GCP: it creates for us a PersistentVolume which meets the requirements we listed in the accessModes section, but which also supports ReadWriteOnce mode. Although we didn't ask for something that also supports ReadWriteOnce, you will probably agree with me that storage which has built-in support for those two modes fully satisfies our request for something that supports ReadOnlyMany. So basically this is the way it works.
Your PV that was automatically provisioned by GCP in response to your PVC supports those two accessModes, and if you don't specify explicitly in the Pod or Deployment definition that you want to mount it in read-only mode, by default it is mounted in read-write mode.
You can easily verify it by attaching to the Pod that was able to successfully mount the PersistentVolume:
kubectl exec -ti pod-name -- /bin/bash
and trying to write something on the mounted filesystem.
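For instance, a quick check inside the Pod (the path matches the mount point used in this example):
touch /usr/share/nginx/html/write-test && echo "read-write" || echo "read-only"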
The error message you get:
"Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE"
concerns specifically a GCE Persistent Disk that is already mounted by one GKE node in ReadWriteOnce mode, so it cannot be mounted by another node on which the rest of your Pods were scheduled.
If you want it to be mounted in ReadOnlyMany mode, you need to specify this explicitly in your Deployment definition by adding a readOnly: true statement in the volumes section under the Pod's template specification, like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-content
      volumes:
      - name: nginx-content
        persistentVolumeClaim:
          claimName: nginx-content-claim
          readOnly: true
Keep in mind, however, that to be able to mount it in readOnly mode, we first need to pre-populate such a volume with data. Otherwise you'll see another error message, saying that an unformatted volume cannot be mounted in read-only mode.
The easiest way to do it is by creating a single Pod which serves only to copy data that was already uploaded to one of our GKE nodes to our destination PV.
Note that pre-populating a PersistentVolume with data can be done in many different ways. You can mount in such a Pod only the PersistentVolume that you will be using in your Deployment, and get your data using curl or wget from some external location, saving it directly on your destination PV. It's up to you.
In my example I'm showing how to do it using an additional local volume that allows us to mount into our Pod a directory, partition, or disk (in my example I use a directory /var/tmp/test located on one of my GKE nodes) available on one of our Kubernetes nodes. It's a much more flexible solution than hostPath, as we don't have to care about scheduling such a Pod to the particular node that contains the data: the specific node affinity rule is already defined in the PersistentVolume, and the Pod is automatically scheduled on that node.
To create it we need 3 things:
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /var/tmp/test
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <gke-node-name>
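The <gke-node-name> value must match the node's kubernetes.io/hostname label. One way to list the candidate values (the column names here are arbitrary):
kubectl get nodes -o custom-columns='NAME:.metadata.name,HOSTNAME:.metadata.labels.kubernetes\.io/hostname'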
and finally PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
Then we can create our temporary Pod which will serve only for copying data from our GKE node to our GCE Persistent Disk.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/mnt/source"
      name: mypd
    - mountPath: "/mnt/destination"
      name: nginx-content
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
  - name: nginx-content
    persistentVolumeClaim:
      claimName: nginx-content-claim
The paths you see above are not really important. The task of this Pod is only to allow us to copy our data to the destination PV. Eventually our PV will be mounted at a completely different path.
Once the Pod is created and both volumes are successfully mounted, we can attach to it by running:
kubectl exec -ti mypod -- /bin/bash
Within the Pod simply run:
cp /mnt/source/* /mnt/destination/
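Before tearing the Pod down, it may be worth sanity-checking that the data landed on the destination volume:
ls -l /mnt/destination/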
That's all. Now we can exit and delete our temporary Pod:
kubectl delete pod mypod
Once it is gone, we can apply our Deployment, and our PersistentVolume can finally be mounted in readOnly mode by all the Pods located on various GKE nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-content
      volumes:
      - name: nginx-content
        persistentVolumeClaim:
          claimName: nginx-content-claim
          readOnly: true
By the way, if you are OK with the fact that your Pods will be scheduled only on one particular node, you can give up on using GCE Persistent Disk at all and switch to the above-mentioned local volume. This way all your Pods will be able not only to read from it but also to write to it at the same time. The only caveat is that all those Pods will be running on a single node.
You can achieve this with an NFS-like file system. On Google Cloud, Filestore (managed NFS) is the right product for this, and Google provides a tutorial for achieving this configuration.
You will need to use a shared volume claim with the ReadWriteMany (RWX) access mode if you want to share the volume across different nodes and provide a highly scalable solution, e.g. by using an NFS server.
You can find out how to deploy an NFS server here:
https://www.shebanglabs.io/run-nfs-server-on-ubuntu-20-04/
And then you can mount volumes (directories from NFS server) as follows:
https://www.shebanglabs.io/how-to-set-up-read-write-many-rwx-persistent-volumes-with-nfs-on-kubernetes/
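For reference, once the NFS server is up, the RWX PV/PVC pair typically looks something like this sketch (the server address, export path, and size below are assumptions, not values from those posts):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10      # assumed NFS server address
    path: /exports/shared  # assumed export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""     # bind to the pre-created PV instead of a dynamic provisioner
  resources:
    requests:
      storage: 10Gi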
I've used this approach to deliver shared static content across 8+ k8s deployments (200+ pods) serving a billion requests a month over Nginx, and it worked perfectly with that NFS setup :)
Google provides an NFS-like filesystem called Google Cloud Filestore. You can mount it on multiple pods.
So far I was convinced that one needs a PVC to access a PV, as in this example from the k8s docs:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
But then I saw in the Docker docs that one can use the following syntax (example using NFS):
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs # Please change the destination to where you'd like the share to be mounted
    command: ["/bin/sh"]
    args: ["-c", "sleep 500000"]
  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com # Please change this to your NFS server
      path: /share1 # Please change this to the relevant share
I am confused:
Is this syntax creating a PVC under the hood?
Or is any PV matching the spec mounted without a PVC?
Or perhaps the spec selects an existing PVC?
The various kinds of things you can mount are part of the Volume object in the Kubernetes API (which is part of a PodSpec, which is part of a Pod). None of these are an option to mount a specific PersistentVolume by name.
(There are some special cases you can see there for things like NFS and various clustered storage systems. Those mostly predate persistent volumes.)
The best you can do here is to create a PVC that's very tightly bound to a single persistent volume, and then reference that in the pod spec.
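A sketch of such a tightly bound claim, assuming a pre-existing PV named my-existing-pv (the name is hypothetical): setting volumeName pins the claim to that one PV, and an empty storageClassName prevents dynamic provisioning from satisfying it instead.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pinned-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""         # disable dynamic provisioning so only the named PV can bind
  volumeName: my-existing-pv   # hypothetical PV name
  resources:
    requests:
      storage: 1Gi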
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
You don't need a PV and PVC for an emptyDir volume.
Note that when a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
If you want to retain the data even if the pod crashes, restarts, is deleted, or is undeployed, then you need to use a PV and PVC.
Look at another example below, where you don't need a PV and PVC when using hostPath:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
If you need to store the data on external storage solutions like NFS, Azure File Storage, AWS EBS, Google Persistent Disk, etc., then you need to create a PV and PVC.
Mounting a PV directly to a pod is not allowed, as it is against Kubernetes design principles: it would cause tight coupling between the pod volume and the underlying storage.
A PVC enables loose coupling between the pod and the persistent volume. The pod doesn't know what underlying storage is used to store the container data, and it doesn't need to know.
PVs and PVCs are required for static and dynamic provisioning of storage volumes for workloads in a Kubernetes cluster.
We have had success creating the pods, services, and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created the YAML file to create an EBS volume in AWS; it's working fine as expected. I am able to claim the volume and successfully mount it to my pod (this is for a single replica only).
But when I try to create more than one replica, my pods are not created successfully. When I create volumes, they are created in only one availability zone. If my pod is scheduled on a node in a different zone, the pod fails to start because its volume lives in another zone. How can I create volumes in different zones for the same application? How can I make this work with replicas? How should I create my persistent volume claims?
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo-pp
  name: mongo-controller-pp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: mongo-pp
    spec:
      containers:
      - image: mongo
        name: mongo-pp
        ports:
        - name: mongo-pp
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/opt/couchbase/var"
          name: mypd1
      volumes:
      - name: mypd1
        persistentVolumeClaim:
          claimName: mongo-pvc
When you are using ReadWriteOnce volumes (ones that cannot be mounted to multiple pods at the same time), simple PV/PVC creation will not cut it.
Both PV and PVC are pretty "singular" in the sense that if you refer to a particular claim name in a Deployment, all your pods will try to get the same claim and the same PV bound to that claim, resulting in a race condition where only the first pod will be allowed to mount that RWO storage.
To mitigate this, you should not use the PVC directly but go through volumeClaimTemplates, which will create a PVC dynamically for every new pod scaled, like below:
volumeClaimTemplates:
- metadata:
    name: claimname
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
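Note that volumeClaimTemplates is a StatefulSet field, so the snippet above belongs inside a StatefulSet spec rather than a ReplicationController or Deployment. A trimmed sketch of where it sits (the names, service, and mount path are placeholders, not taken from your manifest):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo          # StatefulSets require a governing headless Service of this name
  replicas: 2
  selector:
    matchLabels:
      name: mongo-pp
  template:
    metadata:
      labels:
        name: mongo-pp
    spec:
      containers:
      - name: mongo-pp
        image: mongo
        volumeMounts:
        - mountPath: /data/db   # placeholder mount path
          name: claimname       # matches the volumeClaimTemplates entry below
  volumeClaimTemplates:
  - metadata:
      name: claimname
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi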
I think the problem you're facing is caused by the underlying storage mechanism, in this case EBS.
When scaling pods behind a replication controller, each replica will attempt to mount the same persistent volume. If you look at the K8s docs regarding EBS, you will see the following:
There are some restrictions when using an awsElasticBlockStore volume:
- the nodes on which pods are running must be AWS EC2 instances
- those instances need to be in the same region and availability zone as the EBS volume
- EBS only supports a single EC2 instance mounting a volume
So by default, when you scale up behind a replication controller, Kubernetes will try to spread the pods across different nodes; this means a second node tries to mount the volume, which is not allowed for EBS.
Basically, I see that you have two options:
Use a different volume type (NFS, GlusterFS, etc.).
Use a StatefulSet instead of a replication controller and have each replica mount an independent volume. This would require database replication but provides high availability.
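If you stay on EBS with dynamic provisioning, it may also be worth mentioning that a StorageClass with volumeBindingMode: WaitForFirstConsumer (the same mechanism used in the local-volume example earlier on this page) delays volume creation until a pod is scheduled, so each dynamically provisioned volume is created in its pod's availability zone. A hedged sketch, assuming the in-tree EBS provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-topology-aware
provisioner: kubernetes.io/aws-ebs   # or ebs.csi.aws.com if the EBS CSI driver is installed
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer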