It seems that Kubernetes supports three kinds of access modes for persistent volumes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany.
I'm really curious about the scheduling strategy for a pod that uses a ReadWriteOnce volume. For example, if I create an RC with replicas=2, will the two pods be scheduled onto the same host because they use a volume with ReadWriteOnce mode?
I'd also really like to see the source code for this part.
I think the upvoted answer is wrong. As per the Kubernetes docs on Access Modes:
The access modes are:
ReadWriteOnce -- the volume can be mounted as read-write by a single node
ReadOnlyMany -- the volume can be mounted read-only by many nodes
ReadWriteMany -- the volume can be mounted as read-write by many nodes
So AccessModes, as defined today, only describe node-attach (not pod-mount) semantics, and don't enforce anything.
So to prevent two pods from mounting the same PVC, you can use pod anti-affinity. Strictly speaking, that is not the same as forbidding two pods on one node from mounting the same volume; anti-affinity simply asks the scheduler never to place the two pods on the same node, which in turn prevents them from sharing the volume.
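For illustration, here is a minimal sketch of required pod anti-affinity; the app: web label, names, and image are placeholders, not anything from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname  # never two of these pods on one node
      containers:
        - name: web
          image: nginx        # placeholder image
```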
If a pod mounts a volume with ReadWriteOnce access mode, no other pod can mount it. In GCE (Google Compute Engine) the only allowed modes are ReadWriteOnce and ReadOnlyMany. So either one pod mounts the volume ReadWrite, or one or more pods mount the volume ReadOnlyMany.
The scheduler (code here) will not allow a pod to be scheduled if it uses a GCE volume that has already been mounted read-write.
(Documentation reference for those who didn't understand the question: persistent volume access modes)
In Kubernetes you provision storage either statically (by manually creating a PersistentVolume) or dynamically (using a StorageClass). Once the storage is available to be bound and claimed, you need to configure the way your Pods or Nodes connect to the storage (a persistent volume). That can be configured in the four modes below.
ReadOnlyMany (ROX)
In this mode, multiple pods running on different Nodes can connect to the storage and carry out read operations.
ReadWriteMany (RWX)
In this mode, multiple pods running on different Nodes can connect to the storage and carry out read and write operations.
ReadWriteOnce (RWO)
In this mode, multiple pods can connect to the storage and carry out read and write operations, but only if they are all running on the same single Node.
ReadWriteOncePod (RWOP)
In this mode, the volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read from or write to the PVC. This is only supported for CSI volumes and Kubernetes version 1.22+.
Follow the documentation to get more insight.
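As a rough illustration, a claim requesting one of these modes looks like the sketch below (the claim name and storage size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim           # placeholder name
spec:
  accessModes:
    - ReadWriteOnce          # or ReadOnlyMany / ReadWriteMany / ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi           # placeholder size
```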
Related
In a Kubernetes cluster on Oracle cloud, I have a pod with an Apache server.
This pod needs a persistent volume so I used a persistentVolumeClaim and the cloud provider is able to automatically create an associated volume (Oracle Block Volume).
The access mode used by the PVC is ReadWriteOnce, and therefore the volume created has the same access mode.
Everything works great.
Now I want to backup this volume using borg backup and borgmatic by starting a new pod regularly with a cronJob.
This backup pod needs to mount the volume in read only.
Question:
Can I use the previously defined PVC?
Or do I need to create a new PVC with a read-only access mode?
As per documentation:
ReadWriteOnce:
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.
That means if you enforce a strict rule that deploys your pods to the same node, you can use the same PVC; here's the instruction, and a sketch follows below.
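As a hedged sketch (the app: apache label, image, and claim name are assumptions for illustration), a backup pod pinned to the same node as the Apache pod and mounting the existing claim read-only could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backup
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: apache          # assumed label on the pod that already uses the PVC
          topologyKey: kubernetes.io/hostname
  containers:
    - name: borg
      image: alpine              # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
          readOnly: true         # mount the RWO volume read-only in this pod
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: apache-pvc    # placeholder: the existing ReadWriteOnce claim
```

In a CronJob, this pod spec would go under spec.jobTemplate.spec.template.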
I'm learning Kubernetes and there is something I don't get well.
There are 3 ways of setting up static storage:
Pods with volumes where you attach the storage directly
Pods with a PVC attached to their volume
StatefulSets, which also include PVCs
I can understand the power of a PVC when working together with a StorageClass, but not when working with static storage and local storage like hostPath.
To me, it sounds very similar:
In the first case I have a volume directly attached to a pod.
In the second case I have a volume statically attached to a PVC, which is also manually attached to a Pod. In the end, the volume will be statically attached to the Pod.
In both cases, the data will remain when the Pod terminates and will be adopted by the next Pod with the corresponding definition, right?
The only benefit I see of using PVCs over plain Pods is that you can define the access mode. Apart from that, is there a difference when working with hostPath?
On the other hand, the advantage of using a StatefulSet instead of a PVC is (if I understood properly) that it gets a headless service, and that the rollout and rollback mechanisms work differently. Is that the point?
Thank you in advance!
Extracted from this blog:
The biggest difference is that the Kubernetes scheduler understands
which node a Local Persistent Volume belongs to. With HostPath
volumes, a pod referencing a HostPath volume may be moved by the
scheduler to a different node resulting in data loss. But with Local
Persistent Volumes, the Kubernetes scheduler ensures that a pod using
a Local Persistent Volume is always scheduled to the same node.
Using hostPath does not guarantee that a pod will restart on the same node. So your pod may attach /tmp/storage on k8s-node-1; then, if you delete and re-create the pod, it may attach /tmp/storage on k8s-node-[2-n].
On the contrary, if you use a PVC/PV with a local persistent storage class, then if you delete and re-create the pod, it will stick to the node that holds the local persistent storage.
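As a sketch of what that looks like (node name, path, capacity, and StorageClass name are placeholders), a local PersistentVolume carries node affinity that the scheduler honors:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # assumed StorageClass name
  local:
    path: /mnt/disks/ssd1           # placeholder path on the node
  nodeAffinity:                     # pins pods using this PV to k8s-node-1
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-node-1
```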
StatefulSet creates pods and has a volumeClaimTemplates field, which creates a dedicated PVC for each pod. So each pod created by the StatefulSet will have its own dedicated storage, linked as Pod -> PVC -> PV -> Storage. So StatefulSets also use the PVC/PV mechanism.
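A minimal sketch of that mechanism, with names, image, and size as placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                # assumed headless service name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres        # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one dedicated PVC per pod: data-db-0, data-db-1
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```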
More details are available here.
As per this official document, Kubernetes Persistent Volumes support three types of access modes.
ReadOnlyMany
ReadWriteOnce
ReadWriteMany
The definitions given in the document are very high-level. It would be great if someone could explain them in a little more detail, along with some examples of different use cases where we should use one vs. the other.
You should use ReadWriteX when you plan to have Pods that will need to write to the volume, and not only read data from the volume.
You should use XMany when you want the ability for Pods to access the given volume while those workloads are running on different nodes in the Kubernetes cluster. These Pods may be multiple replicas belonging to a Deployment, or may be completely different Pods. There are many cases where it's desirable to have Pods running on different nodes, for instance if you have multiple Pod replicas for a single Deployment, then having them run on different nodes can help ensure some amount of continued availability even if one of the nodes fails or is being updated.
If you don't use XMany, but you do have multiple Pods that need access to the given volume, that will force Kubernetes to schedule all those Pods to run on whatever node the volume gets mounted to first, which could overload that node if there are too many such pods, and can impact the availability of Deployments whose Pods need access to that volume as explained in the previous paragraph.
So putting all that together:
If you need to write to the volume, and you may have multiple Pods needing to write to the volume where you'd prefer the flexibility of those Pods being scheduled to different nodes, and ReadWriteMany is an option given the volume plugin for your K8s cluster, use ReadWriteMany.
If you need to write to the volume but either you don't have the requirement that multiple pods should be able to write to it, or ReadWriteMany simply isn't an available option for you, use ReadWriteOnce.
If you only need to read from the volume, and you may have multiple Pods needing to read from the volume where you'd prefer the flexibility of those Pods being scheduled to different nodes, and ReadOnlyMany is an option given the volume plugin for your K8s cluster, use ReadOnlyMany.
If you only need to read from the volume but either you don't have the requirement that multiple pods should be able to read from it, or ReadOnlyMany simply isn't an available option for you, use ReadWriteOnce. In this case, you want the volume to be read-only but the limitations of your volume plugin have forced you to choose ReadWriteOnce (there's no ReadOnlyOnce option). As a good practice, consider setting containers.volumeMounts.readOnly to true in your Pod specs for volume mounts corresponding to volumes that are intended to be read-only.
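For example, a minimal sketch of that good practice (pod, image, and claim names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
    - name: app
      image: busybox             # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true         # enforce read-only at the mount, since there is no ReadOnlyOnce
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data   # placeholder: a ReadWriteOnce claim used read-only
```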
ReadOnlyMany – the volume can be mounted read-only by many nodes
By this method, multiple pods running on multiple nodes can use a single volume and read data.
If a pod mounts a volume with ReadOnlyMany access mode, other pods can mount it as well, but can only perform read operations. GCE persistent disks, for example, support this mode.
This means a volume can be mounted on one or many nodes of your Kubernetes cluster, and you can only perform read operations.
You have one pod running on a node, reading a stored file from the volume. On that same volume you cannot perform writes.
As it's ReadOnlyMany, if your pod is scheduled to another node, the volume and data will still be available there for read operations.
ReadWriteMany – the volume can be mounted as read-write by many nodes
By this method, multiple pods running on multiple nodes can use a single volume and read/write data.
If a pod mounts a volume with ReadWriteMany access mode, other pods can also mount it.
This means the volume can be mounted on one or many nodes of your Kubernetes cluster, and you can perform both read and write operations.
You have one pod running on a node, reading and writing the stored file on the volume.
As it's ReadWriteMany, if your pod is scheduled to another node, the volume and data will also be available there for read/write operations.
For this you can use NFS (MinIO, GlusterFS) or an EFS filesystem.
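For instance, a PersistentVolume backed by an NFS export supports ReadWriteMany; this is only a sketch, and the server address and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10            # placeholder NFS server address
    path: /exports/shared        # placeholder export path
```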
ReadWriteOnce – the volume can be mounted as read-write by a single node
If a pod mounts a volume with ReadWriteOnce access mode, no other pod can mount it. In GCE (Google Compute Engine) the only allowed modes are ReadWriteOnce and ReadOnlyMany. So either one pod mounts the volume ReadWrite, or one or more pods mount the volume ReadOnlyMany.
This means the volume can be mounted on only one node of your Kubernetes cluster, and the pods on that node can perform both read and write operations.
You have one pod running on a node, reading and writing a stored file on the volume.
As it's ReadWriteOnce, if your pod is scheduled to another node, the volume may still be attached to the original node, and you will not be able to access the data there.
I'm currently trying to configure a Logstash cluster with Kubernetes, and I would like each of the Logstash nodes to mount a volume as read-only with the pipelines. The same volume would then be mounted as read/write on a single management instance where I could edit the configs.
Is this possible with K8s and GCEPersistentDisk?
By Logstash I believe you mean an ELK cluster. Logstash is just a log forwarder and not an endpoint for storage.
Not really. It's not possible with a GCEPersistentDisk. This is more of a GCE limitation, where a volume can be attached read-write to only one instance at a time.
Also, as you can see in the docs, it supports ReadWriteOnce and ReadOnlyMany, but not both at the same time:
Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
You could achieve this by using a single volume on a single K8s node and then partitioning the volume to be used by different Elasticsearch pods on the same node, but this would defeat the purpose of having a distributed cluster.
Elasticsearch works fine if you have your nodes in different Kubernetes nodes and each of them has a separate volume.
I am trying to understand the different access modes for Persistent Volume Claims in OpenShift. I found the following information here:
Access Mode CLI Abbreviation Description
ReadWriteOnce RWO The volume can be mounted as read-write by a single node.
ReadOnlyMany ROX The volume can be mounted read-only by many nodes.
ReadWriteMany RWX The volume can be mounted as read-write by many nodes.
I know that PVC are bound to single project/namespace and can be extended to different projects as well.
But the confusion here is: what does "single node" or "many nodes" mean here? For example, in RWO mode, "the volume can be mounted as read-write by a single node". What node is it referring to?
Can someone explain the significance of these modes with respect to a single project/namespace? Can storage with RWO have write permission for only one application, or for all the applications within the project?
The whole RWO vs RWX concept relates to the problem of mounting the same filesystem on multiple hosts, which requires support for things like distributed locking. There are specific implementations that can handle this, such as NFS, Ceph, or GlusterFS: generally network/cluster-oriented filesystems. Other filesystems are unable to operate correctly if you try to mount them on different servers at the same time (usually they will simply not allow it).
So "node", in this case, means a particular Kubernetes cluster node (be it a bare-metal server or a VM). But, by extension, you should think about it in the scope of a Pod as well, because in most cases pods can spin up on different nodes, meaning they could not use the same volume, or you cannot assume that the volume would have a coherent shared state, as would happen when using HostPath volumes, which are unique per node in the cluster.
To clarify for the question below :
RWO volumes have a 1:1 relation to a pod in general. While in some cases you can define RWO volumes that point to the same physical resource, like hostPath, technically they will always be tightly coupled to exactly one pod. This is especially visible if you use PersistentVolume / PersistentVolumeClaim objects, which take these restrictions into account when binding a PV to a PVC. Only RWX volumes give you storage shared by multiple pods, with all pods able to write to it.
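To make the contrast concrete, here is a sketch of two pods sharing one RWX claim; names and images are placeholders, and the claim is assumed to be backed by a plugin that supports ReadWriteMany (e.g. NFS):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: writer-a
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo a >> /shared/log && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: rwx-claim     # placeholder RWX claim
---
apiVersion: v1
kind: Pod
metadata:
  name: writer-b                 # a second pod, possibly on another node
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo b >> /shared/log && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: rwx-claim     # same claim mounted read-write by both pods
```

With an RWO claim, the second pod would only be schedulable if it landed on the same node (or, with ReadWriteOncePod, not at all).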