Since I can't use RWX (ReadWriteMany) access mode directly, I use 3 PVCs with the default storage class, one for each of 3 pods.
But they are not all working in the pods; at most one PVC works as expected.
The PVC YAML is below, nothing special. kubectl get pvc shows all of them with status Bound, but in the pods the error is "pod has unbound immediate PersistentVolumeClaims".
My understanding is that the PVCs should be separate and independent even if they use the same StorageClass.
What actually happens is that the PVCs cannot be mounted in the pods, or only one pod can access its PVC and the others still show errors.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
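For illustration, here is a minimal sketch of what "separate and independent" looks like in practice (the claim and pod names below are hypothetical): each pod references its own ReadWriteOnce claim, so no two pods ever share a volume.
# Hypothetical example: one dedicated PVC per pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-0   # repeat as demo-pvc-1, demo-pvc-2 for the other pods
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-0   # each pod mounts only its own claim
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc-0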
I have deployed a GlusterFS cluster on GCE: three nodes, one volume, and one brick. I need to mount it on a pod deployed on GKE. I have successfully created the endpoint and the PV, but I cannot create the PVC. If I set volumeName to refer to my PV, I get the following error:
2s Warning VolumeMismatch persistentvolumeclaim/my-glusterfs-pvc Cannot bind to requested volume "my-glusterfs-pv": storageClassName does not match
If I don't introduce a volumeName selector I get the following error:
3s Warning ProvisioningFailed persistentvolumeclaim/my-glusterfs-pvc Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
I thought that GlusterFS had native support on GKE; however, I cannot find it on the GCP site:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#storage_driver_support
Is it related to the errors I'm getting while deploying the PVC? How could I solve it?
Thanks!
I have not defined any StorageClass. I can only find a Heketi-provisioned GlusterFS StorageClass in the Kubernetes documentation. My YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-glusterfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    path: mypath
    endpoints: my-glusterfs-cluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: my-glusterfs-pv
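The VolumeMismatch error above happens because the PV carries no storageClassName while the PVC gets assigned the cluster's default class ("standard"); for static binding the two must match. A minimal sketch of one way to line them up, assuming static (non-dynamic) provisioning is the intent, is to give both an explicit empty storageClassName:
# Sketch: matching (empty) storageClassName on PV and PVC so the claim binds
# to this pre-created PV instead of asking the default class to provision one.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-glusterfs-pv
spec:
  storageClassName: ""          # must match the claim below
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    path: mypath
    endpoints: my-glusterfs-cluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  storageClassName: ""          # empty string disables dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: my-glusterfs-pv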
I have deployed Heketi on my GlusterFS cluster and it works correctly. However, I cannot deploy the PVC on GKE. The glusterfs-client is running OK, the Heketi secret is available, and the gluster StorageClass has been created without errors, but the PVC remains in Pending and I have found this event:
17m Warning ProvisioningFailed persistentvolumeclaim/liferay-glusterfs-pvc Failed to provision volume with StorageClass "gluster-heketi-external": failed to create volume: failed to create volume: see kube-controller-manager.log for details
StorageClass:
kind: StorageClass
#apiVersion: storage.k8s.io/v1beta1
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi-external
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "true"
  resturl: "http://my-heketi-node-ip:8080"
  clusterid: "idididididididididid"
  restuser: "myadminuser"
  secretName: "heketi-secret"
  secretNamespace: "my-app"
  volumetype: "replicate:3"
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi-external
  # finalizers:
  #   - kubernetes.io/pvc-protection
  name: my-app-pvc
  namespace: my-app
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
I have also created the firewall rule needed for Heketi, so that the GKE nodes can reach the Heketi resturl and port. I have checked that it's working by launching a telnet to port 8080 while sniffing the traffic on the Heketi node. Everything appears to be OK: I can see the reply and the packets. However, I cannot see any packets when I deploy the PVC, and there is nothing in syslog. I can create volumes when the heketi command is run locally.
I have tried to curl the Heketi node from a gluster-client pod and it works correctly:
kubectl exec -it gluster-client-6fqmb -- curl http://[myheketinodeip]:8080/hello
Hello from Heketi
I am deploying stolon via a StatefulSet (the default from the stolon repo).
I have defined this in the StatefulSet config:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: stolon-local-storage
      resources:
        requests:
          storage: 1Gi
and here is my storageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stolon-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet was created fine, but the pod has this error:
pod has unbound immediate PersistentVolumeClaims
How can I resolve it?
pod has unbound immediate PersistentVolumeClaims
In this case the PVC could not use the StorageClass because it wasn't marked as the default.
Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See PersistentVolumeClaim documentation for details.
The following command can be used to make your newly created StorageClass the default one:
kubectl patch storageclass <name_of_storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Then you can run kubectl get storageclass and it should look like this:
NAME                             PROVISIONER                    AGE
stolon-local-storage (default)   kubernetes.io/no-provisioner   1d
I followed the instructions in this post:
how to bound a Persistent volume claim with a gcePersistentDisk?
When I applied that, my PVC did not bind to the PV; instead I got this error in the event list:
14s 17s 2 test-pvc.155b8df6bac15b5b PersistentVolumeClaim Warning ProvisioningFailed persistentvolume-controller Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
I found a github posting that suggested something that would fix this:
https://github.com/coreos/prometheus-operator/issues/323#issuecomment-299016953
But unfortunately that made no difference.
Is there a soup-to-nuts doc somewhere telling us exactly how to use PVs and PVCs to create truly persistent volumes? Specifically, one where you can shut down the PV and PVC and restore them later, and get all your content back? Because as it stands right now, if you lose your PVC for whatever reason, you lose the connection to your volume and there is no way to get it back.
The default StorageClass is not compatible with a gcePersistentDisk. Something like this would work:
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
EOF
then on your PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  labels:
    app: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "slow"   # <== specify the storageClass
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You can also set "slow" as the default storageClass in which case you wouldn't have to specify it on your PVC:
$ kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
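On the data-persistence part of the question: a hedged sketch, assuming the goal is to keep the provisioned disk even if the PVC is deleted, is to set reclaimPolicy: Retain on the StorageClass, so the PV and the underlying GCE persistent disk survive deletion of the claim and can be rebound manually later.
# Sketch: same class as above but with Retain (the name "slow-retain" is made up).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow-retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
reclaimPolicy: Retain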
I'm creating a Kubernetes PVC and a Deployment that uses it.
In the YAML it is specified that the uid and gid must be 1000.
But when it is deployed, the volume is mounted with different IDs, so I have no write access to it.
How can I effectively specify the uid and gid for a PVC?
PVC yaml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jmdlcbdata
  annotations:
    pv.beta.kubernetes.io/gid: "1000"
    volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000"
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "2Gi"
  storageClassName: "default"
Deploy yaml:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: jmdlcbempty
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: jmdlcbempty
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      volumes:
        - name: jmdlcbdata
          persistentVolumeClaim:
            claimName: jmdlcbdata
      containers:
        - name: myalpine
          image: "alpine"
          command:
            - /bin/sh
            - "-c"
            - "sleep 60m"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /usr/share/logstash/data
              name: jmdlcbdata
And here is the dir list:
$ kubectl get pvc; kubectl get pods;
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jmdlcbdata Bound pvc-6dfcdb29-8a0a-11e8-938b-1a5d4ff12be9 20Gi RWO default 2m
NAME READY STATUS RESTARTS AGE
jmdlcbempty-68cd675757-q4mll 1/1 Running 0 6s
$ kubectl exec -it jmdlcbempty-68cd675757-q4mll -- ls -ltr /usr/share/logstash/
total 4
drwxr-xr-x 2 nobody 42949672 4096 Jul 17 21:44 data
I'm working on an IBM Bluemix cluster.
Thanks.
After some experiments, I can finally provide an answer.
There are several ways to run processes in a Container with a specific UID and GID:
The runAsUser field in the securityContext of a Pod definition specifies the user ID for the first process run in the Containers in the Pod.
The fsGroup field in the securityContext of a Pod specifies which group ID is associated with all Containers in the Pod. This group ID is also associated with volumes mounted into the Pod and with any files created in these volumes.
When a Pod consumes a PersistentVolume that has a pv.beta.kubernetes.io/gid annotation, the annotated GID is applied to all Containers in the Pod in the same way that GIDs specified in the Pod’s security context are.
Note that every GID, whether it originates from a PersistentVolume annotation or the Pod's specification, is applied to the first process run in each Container.
Also, there are several ways to set up mount options for PersistentVolumes. A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator. Also, it can be provisioned dynamically using a StorageClass. Therefore, you can specify mount options in a PersistentVolume when you create it manually. Or you can specify them in StorageClass, and every PersistentVolume requested from that class by a PersistentVolumeClaim will have these options.
It is better to use the mountOptions attribute rather than the volume.beta.kubernetes.io/mount-options annotation, and the storageClassName attribute instead of the volume.beta.kubernetes.io/storage-class annotation. The annotations were used before the attributes existed and still work for now; however, they will become fully deprecated in a future Kubernetes release. Here is an example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: with-permissions
provisioner: <your-provider>
parameters:
  <option-for your-provider>
reclaimPolicy: Retain
mountOptions: # these options
  - uid=1000
  - gid=1000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "2Gi"
  storageClassName: "with-permissions" # these options
Note that mount options are not validated, so mount will simply fail if one is invalid. And you can use uid=1000, gid=1000 mount options for file systems like FAT or NTFS, but not for EXT4, for example.
Referring to your configuration:
In your PVC YAML, volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000" is not working because it is an option for a StorageClass or PV, not for a PVC.
You specified both storageClassName: "default" and volume.beta.kubernetes.io/storage-class: default in your PVC YAML, but they do the same thing. Also, the default StorageClass does not have mount options set by default.
In your PVC YAML, the pv.beta.kubernetes.io/gid: "1000" annotation does the same as the securityContext.fsGroup: 1000 option in the Deployment definition, so the former is unnecessary.
Try to create a StorageClass with the required mount options (uid=1000, gid=1000), and use a PVC to request a PV from it, as in the example above. After that, use a Deployment definition with a securityContext to set up access to the mounted PVC. But make sure that you are using mount options available for your file system.
You can use an initContainer to set the UID/GID permissions for the volume mount path.
The UID/GID that you see by default is due to root squash being enabled on NFS.
Steps: https://console.bluemix.net/docs/containers/cs_troubleshoot_storage.html#nonroot
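For reference, a minimal sketch of that initContainer approach (the pod name is made up and the image, paths, and claim name are taken from the question; root squash may still need to be handled per the linked steps): the init container runs as root and chowns the mount path before the main container starts.
# Hypothetical sketch: fix volume ownership in an initContainer.
apiVersion: v1
kind: Pod
metadata:
  name: fix-perms-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: jmdlcbdata
  initContainers:
    - name: fix-permissions
      image: alpine
      command: ["sh", "-c", "chown -R 1000:1000 /usr/share/logstash/data"]
      securityContext:
        runAsUser: 0              # run the init step as root so chown is allowed
      volumeMounts:
        - mountPath: /usr/share/logstash/data
          name: data
  containers:
    - name: myalpine
      image: alpine
      command: ["sh", "-c", "sleep 60m"]
      volumeMounts:
        - mountPath: /usr/share/logstash/data
          name: data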
My question is about PersistentVolumeClaims.
I have a one-node cluster set up on AWS EC2.
I am trying to create a StorageClass using kubernetes.io/host-path as the provisioner.
The YAML for the StorageClass is as follows:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path
The YAML for the PersistentVolumeClaim is as follows:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
When I try to create the StorageClass and PVC on minikube, it works: the volume is created on minikube in /tmp/hostpath_volume/.
But when I try the same thing on the one-node cluster on AWS EC2, I get the following error:
Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled
I can see this error when I run kubectl describe pvc task-pv-claim. Also, since no PV is created, the claim stays in the Pending state.
I found that kube-controller-manager has --enable-dynamic-provisioning and --enable-hostpath-provisioner among its options, but I don't know how to use them.
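For what it's worth, here is a hedged sketch of how that flag is usually set on a self-managed control plane, assuming a kubeadm-style static pod manifest for kube-controller-manager (paths and layout may differ on your installation); the kubelet restarts the component automatically when the manifest changes.
# Assumption: kube-controller-manager runs as a static pod defined in
# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm layout).
# Add the flag to the container command; intended for single-node testing only.
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        # ... existing flags ...
        - --enable-hostpath-provisioner=true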
It seems you might not be running the provisioner itself, so there's nothing to actually do the work of creating the hostpath directory.
Take a look here
The way this works is that the hostpath provisioner reads from the kubernetes API, and watches for you to create a storage class (which you've done) and a persistentvolumeclaim (also done).
When those exist, the provisioner (which is running as a pod) will go and execute a mkdir to create the hostpath directory.
Run the following:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/docs/demo/hostpath-provisioner/pod.yaml
Then recreate your StorageClass and PVC.
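A minimal sketch of what the recreated objects could look like; note that the provisioner field must match the name the deployed hostpath provisioner registers itself under (example.com/hostpath is assumed here from the external-storage demo, so check the provisioner pod's configuration if it differs):
# Assumption: the demo provisioner registers itself as "example.com/hostpath".
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-storage
provisioner: example.com/hostpath
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: my-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi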