I am trying out the CrunchyData postgres-operator (Helm) with the NFS Helm chart, and I am unable to create the cluster with NFS. The following configuration was performed:
Installed NFS helm chart Repo
helm install nfs-abc stable/nfs-server-provisioner
Set the postgres storage values (Doc):
backrest_storage: 'nfsstorage'
backup_storage: 'nfsstorage'
primary_storage: 'nfsstorage'
replica_storage: 'nfsstorage'
Set the storage configuration (Doc):
export CCP_SECURITY_CONTEXT='"supplementalGroups": [65534]'
export CCP_STORAGE_PATH=/nfsfileshare
export CCP_NFS_IP=data-nfs-dravoka-nfs-server-provisioner-0.default.svc.cluster.local
export CCP_STORAGE_MODE=ReadWriteMany
export CCP_STORAGE_CAPACITY=400M
Created PGO cluster
pgo create cluster -n pgo dravoka --storage-config='nfsstorage' --pgbackrest-storage-config='nfsstorage' --pvc-size='2Gi'
PVC describe:
kubectl describe -n pgo pvc dravoka
Name: dravoka
Namespace: pgo
StorageClass: standard
Status: Pending
Volume:
Labels: pg-cluster=dravoka
pgremove=true
vendor=crunchydata
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 112s (x10 over 7m45s) persistentvolume-controller Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
Pod Describe:
kubectl describe -n pgo pod dravoka-backrest-shared-repo-9fdd77886-j2mjv
Name: dravoka-backrest-shared-repo-9fdd77886-j2mjv
Namespace: pgo
Priority: 0
Node: <none>
Labels: name=dravoka-backrest-shared-repo
pg-cluster=dravoka
pg-pod-anti-affinity=preferred
pgo-backrest-repo=true
pod-template-hash=9fdd77886
service-name=dravoka-backrest-shared-repo
vendor=crunchydata
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/dravoka-backrest-shared-repo-9fdd77886
Containers:
database:
Image: registry.developers.crunchydata.com/crunchydata/pgo-backrest-repo:centos7-4.4.1
Port: 2022/TCP
Host Port: 0/TCP
Requests:
memory: 48Mi
Environment:
PGBACKREST_STANZA: db
SSHD_PORT: 2022
PGBACKREST_DB_PATH: /pgdata/dravoka
PGBACKREST_REPO_PATH: /backrestrepo/dravoka-backrest-shared-repo
PGBACKREST_PG1_PORT: 5432
PGBACKREST_LOG_PATH: /tmp
PGBACKREST_PG1_SOCKET_PATH: /tmp
PGBACKREST_DB_HOST: dravoka
Mounts:
/backrestrepo from backrestrepo (rw)
/sshd from sshd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
sshd:
Type: Secret (a volume populated by a Secret)
SecretName: dravoka-backrest-repo-config
Optional: false
backrestrepo:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dravoka-pgbr-repo
ReadOnly: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 76s (x7 over 9m58s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Am I missing some configuration, or doing something wrong? My goal is to use NFS as the Postgres storage. Any help will be appreciated.
Failed to provision volume with StorageClass "standard": invalid
AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce
ReadOnlyMany] are supported
So here is the root cause of the problem: you are provisioning the PVC with a StorageClass that does not support the access mode you want, i.e. ReadWriteMany.
Looking at the doc, it seems you have this configuration:
storage3_name: 'nfsstorage'
storage3_access_mode: 'ReadWriteMany'
storage3_size: '1G'
storage3_type: 'create'
storage3_supplemental_groups: 65534
Here storage3_access_mode requests ReadWriteMany, but that access mode is not supported by the StorageClass being used.
Please try changing it to ReadWriteOnce.
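For example, the same storage definition with a supported access mode would look like this (a minimal sketch based on the values above; adjust the name and size to your setup):
storage3_name: 'nfsstorage'
storage3_access_mode: 'ReadWriteOnce'
storage3_size: '1G'
storage3_type: 'create'
storage3_supplemental_groups: 65534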
Anyway, Postgres requires block storage to work, so even once you get the NFS setup right, your Postgres cluster will not run properly. More explanation here.
Related
I have a 3-node microk8s cluster running on VirtualBox Ubuntu VMs, and I am trying to get Mayastor for OpenEBS working to use with PVCs. I have followed the steps in this guide:
https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster
https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor
https://mayastor.gitbook.io/introduction/quickstart/configure-mayastor
An example of my MayastorPool from step 3 looks like this:
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
  name: pool-on-node1-n2
  namespace: mayastor
spec:
  node: node1
  disks: [ "/dev/nvme0n2" ]
And my StorageClass looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
provisioner: io.openebs.csi-mayastor
parameters:
  repl: '3'
  protocol: 'nvmf'
  ioTimeout: '60'
  local: 'true'
volumeBindingMode: WaitForFirstConsumer
All the checks seem fine according to the guide, but when I try creating a PVC and using it according to https://mayastor.gitbook.io/introduction/quickstart/deploy-a-test-application, the test application fio pod doesn't come up. When I look at it with describe I see the following:
$ kubectl describe pods fio -n mayastor
Name: fio
Namespace: mayastor
Priority: 0
Node: node2/192.168.40.12
Start Time: Wed, 02 Jun 2021 22:56:03 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
fio:
Container ID:
Image: nixery.dev/shell/fio
Image ID:
Port: <none>
Host Port: <none>
Args:
sleep
1000000
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6cdf (ro)
/volume from ms-volume (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ms-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ms-volume-claim
ReadOnly: false
kube-api-access-l6cdf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: openebs.io/engine=mayastor
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44m default-scheduler Successfully assigned mayastor/fio to node2
Normal SuccessfulAttachVolume 44m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199"
Warning FailedMount 24m (x4 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[ms-volume], unattached volumes=[kube-api-access-l6cdf ms-volume]: timed out waiting for the condition
Warning FailedMount 13m (x23 over 44m) kubelet MountVolume.SetUp failed for volume "pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199" : rpc error: code = Internal desc = Failed to find parent dir for mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/b1166af6-1ade-4a3a-9b1d-653151418695/volumes/kubernetes.io~csi/pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199/mount, volume ec6ce101-fb3e-4a5a-8d61-1d228f8f8199
Warning FailedMount 4m3s (x13 over 42m) kubelet Unable to attach or mount volumes: unmounted volumes=[ms-volume], unattached volumes=[ms-volume kube-api-access-l6cdf]: timed out waiting for the condition
Any ideas where to look or what to do to get mayastor working with microk8s? Happy to post more information.
Thanks to Kiran Mova's comments and Niladri from the OpenEBS Slack channel:
Replace the step:
https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor#csi-node-plugin
kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml
with
curl -fSs https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml | sed "s|/var/lib/kubelet|/var/snap/microk8s/common/var/lib/kubelet|g" - | kubectl apply -f -
So, replace the path with the microk8s-specific kubelet path. Even though there is a symlink, things don't seem to work out right without this change.
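If you want to sanity-check the rewritten manifest before applying it, the same pipeline with grep instead of kubectl apply shows which paths were substituted (just a quick check, not part of the official guide):
curl -fSs https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml | sed "s|/var/lib/kubelet|/var/snap/microk8s/common/var/lib/kubelet|g" - | grep kubelet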
This is a continuation of the question I asked at Hyperledger Fabric - migration from Docker swarm to Kubernetes possible?
After I have run kompose convert on my docker-compose files, I obtain the files exactly as listed in the answer I accepted. Then I run the following commands in order:
$ kubectl apply -f dev-orderer1-pod.yaml
$ kubectl apply -f dev-orderer1-service.yaml
$ kubectl apply -f dev-peer1-pod.yaml
$ kubectl apply -f dev-peer1-service.yaml
$ kubectl apply -f dev-couchdb1-pod.yaml
$ kubectl apply -f dev-couchdb1-service.yaml
$ kubectl apply -f ar2bc-networkpolicy.yaml
When I try to view my pods I see this:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
dev-couchdb1 0/1 Pending 0 7m20s
dev-orderer1 0/1 Pending 0 8m25s
dev-peer1 0/1 Pending 0 7m39s
When I try to describe any of the three pods I see this:
$ kubectl describe pod dev-orderer1
Name: dev-orderer1
Namespace: default
Priority: 0
Node: <none>
Labels: io.kompose.network/ar2bc=true
io.kompose.service=dev-orderer1
Annotations: kompose.cmd: kompose convert -f docker-compose-orderer1.yaml -f docker-compose-peer1.yaml --volumes hostPath
kompose.version: 1.22.0 (955b78124)
Status: Pending
IP:
IPs: <none>
Containers:
dev-orderer1:
Image: hyperledger/fabric-orderer:latest
Port: 7050/TCP
Host Port: 0/TCP
Args:
orderer
Environment:
ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE: /var/hyperledger/orderer/tls/server.crt
ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY: /var/hyperledger/orderer/tls/server.key
ORDERER_GENERAL_CLUSTER_ROOTCAS: [/var/hyperledger/orderer/tls/ca.crt]
ORDERER_GENERAL_GENESISFILE: /var/hyperledger/orderer/orderer.genesis.block
ORDERER_GENERAL_GENESISMETHOD: file
ORDERER_GENERAL_LISTENADDRESS: 0.0.0.0
ORDERER_GENERAL_LOCALMSPDIR: /var/hyperledger/orderer/msp
ORDERER_GENERAL_LOCALMSPID: OrdererMSP
ORDERER_GENERAL_LOGLEVEL: INFO
ORDERER_GENERAL_TLS_CERTIFICATE: /var/hyperledger/orderer/tls/server.crt
ORDERER_GENERAL_TLS_ENABLED: true
ORDERER_GENERAL_TLS_PRIVATEKEY: /var/hyperledger/orderer/tls/server.key
ORDERER_GENERAL_TLS_ROOTCAS: [/var/hyperledger/orderer/tls/ca.crt]
Mounts:
/var/hyperledger/orderer/msp from dev-orderer1-hostpath1 (rw)
/var/hyperledger/orderer/orderer.genesis.block from dev-orderer1-hostpath0 (rw)
/var/hyperledger/orderer/tls from dev-orderer1-hostpath2 (rw)
/var/hyperledger/production/orderer from orderer1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-44lfq (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
dev-orderer1-hostpath0:
Type: HostPath (bare host directory volume)
Path: /home/isprintsg/hlf/channel-artifacts/genesis.block
HostPathType:
dev-orderer1-hostpath1:
Type: HostPath (bare host directory volume)
Path: /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/msp
HostPathType:
dev-orderer1-hostpath2:
Type: HostPath (bare host directory volume)
Path: /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/tls
HostPathType:
orderer1:
Type: HostPath (bare host directory volume)
Path: /home/isprintsg/hlf
HostPathType:
default-token-44lfq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-44lfq
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/hostname=isprintdev
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 51s (x27 over 27m) default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.
The error message right at the end is common to all three pods. I tried to Google that message, but surprisingly I didn't get any straightforward results. What does that message mean, and how should I go about resolving it? I'm quite new to Kubernetes, in case you're wondering.
EDIT
dev-orderer1-pod.yaml - https://pastebin.com/PQUnz3Q2
dev-orderer1-service.yaml - https://pastebin.com/gxuHNvAX
dev-peer1-pod.yaml - https://pastebin.com/hwUQdq5L
dev-peer1-service.yaml - https://pastebin.com/n2Q8uMFB
dev-couchdb1-pod.yaml - https://pastebin.com/HTC3TQPz
dev-couchdb1-service.yaml - https://pastebin.com/Sg6ZkrHz
ar2bc-networkpolicy.yaml - https://pastebin.com/fjEdAGJe
I stumbled across this while researching a parallel problem. In case it helps, I'd imagine that this is your problem:
Node-Selectors: kubernetes.io/hostname=isprintdev
The node selector tells Kubernetes to only schedule these pods on a node whose hostname is isprintdev :)
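To see which hostname label your nodes actually carry (and therefore whether that selector can ever match), something like this helps, assuming you have kubectl access to the cluster:
kubectl get nodes -L kubernetes.io/hostname
If no node carries the hostname isprintdev, either edit the kompose-generated pod YAML so the nodeSelector matches a real hostname, or remove the nodeSelector entirely and let the scheduler place the pods freely.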
I've looked through several solutions but couldn't find an answer. I'm trying to run a StatefulSet on the cluster, but the pod fails to run because of an unbound claim. I'm running t2.large machines with Bottlerocket hosts.
kubectl get events
28m Warning FailedScheduling pod/carabbitmq-0 pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
28m Normal Scheduled pod/carabbitmq-0 Successfully assigned default/carabbitmq-0 to ip-x.compute.internal
28m Normal SuccessfulAttachVolume pod/carabbitmq-0 AttachVolume.Attach succeeded for volume "pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7"
28m Normal Pulled pod/carabbitmq-0 Container image "busybox:1.30.1" already present on machine
kubectl get pv,pvc + describe
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/data-carabbitmq-0 Bound pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7 30Gi RWO gp2 12m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7 30Gi RWO Retain Bound rabbitmq/data-carabbitmq-0 gp2 12m
describe pv:
Name: pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7
Labels: failure-domain.beta.kubernetes.io/region=eu-west-1
failure-domain.beta.kubernetes.io/zone=eu-west-1b
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: rabbitmq/data-carabbitmq-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 30Gi
Node Affinity:
Required Terms:
Term 0: failure-domain.beta.kubernetes.io/zone in [eu-west-1b]
failure-domain.beta.kubernetes.io/region in [eu-west-1]
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://eu-west-1b/vol-xx
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
describe pvc:
Name: data-carabbitmq-0
Namespace: rabbitmq
StorageClass: gp2
Status: Bound
Volume: pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7
Labels: app=rabbitmq-ha
release=rabbit-mq
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 30Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: carabbitmq-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ProvisioningSucceeded 36m persistentvolume-controller Successfully provisioned volume pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7 using kubernetes.io/aws-ebs
The storage type is gp2.
Name: gp2
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: encrypted=true,type=gp2
AllowVolumeExpansion: <unset>
MountOptions:
debug
ReclaimPolicy: Retain
VolumeBindingMode: Immediate
Events: <none>
I'm not sure what I'm missing; the same configuration used to work until I switched to "t"-type EC2 instances.
So, it was weird, but I had a readiness probe that was failing its health checks; I thought it was because the volume was not mounted properly.
The health check basically made a request to localhost, which it had issues with (not sure why). Changing it to 127.0.0.1 made the check pass, and then the volume error disappeared.
So, if you have this weird issue (volumes were mounted, but you still get that error), check the pod's probes.
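As a rough illustration of the kind of change that fixed it (a hypothetical exec probe with a made-up endpoint and port, not the exact chart values), the only difference is the host the check talks to:
readinessProbe:
  exec:
    command:
    - sh
    - -c
    # hypothetical health endpoint; the point is 127.0.0.1 instead of localhost
    - wget -qO- http://127.0.0.1:15672/api/healthchecks/node >/dev/null
  initialDelaySeconds: 10
  periodSeconds: 30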
I'm trying to attach the dummy-attachable FlexVolume sample for Kubernetes which seems to initialize normally according to my logs on both the nodes and master:
Loaded volume plugin "flexvolume-k8s/dummy-attachable
But when I try to attach the volume to a pod, the attach method never gets called from the master. The logs from the node read:
flexVolume driver k8s/dummy-attachable: using default GetVolumeName for volume dummy-attachable
operationExecutor.VerifyControllerAttachedVolume started for volume "dummy-attachable"
Operation for "\"flexvolume-k8s/dummy-attachable/dummy-attachable\"" failed. No retries permitted until 2019-04-22 13:42:51.21390334 +0000 UTC m=+4814.674525788 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"dummy-attachable\" (UniqueName: \"flexvolume-k8s/dummy-attachable/dummy-attachable\") pod \"nginx-dummy-attachable\"
Here's how I'm attempting to mount the volume:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dummy-attachable
  namespace: default
spec:
  containers:
  - name: nginx-dummy-attachable
    image: nginx
    volumeMounts:
    - name: dummy-attachable
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: dummy-attachable
    flexVolume:
      driver: "k8s/dummy-attachable"
Here is the output of kubectl describe pod nginx-dummy-attachable:
Name: nginx-dummy-attachable
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: [node id]
Start Time: Wed, 24 Apr 2019 08:03:21 -0400
Labels: <none>
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container nginx-dummy-attachable
Status: Pending
IP:
Containers:
nginx-dummy-attachable:
Container ID:
Image: nginx
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/data from dummy-attachable (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hcnhj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
dummy-attachable:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: k8s/dummy-attachable
FSType:
SecretRef: nil
ReadOnly: false
Options: map[]
default-token-hcnhj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hcnhj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 41s (x6 over 11m) kubelet, [node id] Unable to mount volumes for pod "nginx-dummy-attachable_default([id])": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-dummy-attachable". list of unmounted volumes=[dummy-attachable]. list of unattached volumes=[dummy-attachable default-token-hcnhj]
I added debug logging to the FlexVolume, so I was able to verify that the attach method was never called on the master node. I'm not sure what I'm missing here.
I don't know if this matters, but the cluster is being launched with KOPS. I've tried with both k8s 1.11 and 1.14 with no success.
So this is a fun one.
Even though kubelet initializes the FlexVolume plugin on master, kube-controller-manager, which is containerized in KOPs, is the application that's actually responsible for attaching the volume to the pod. KOPs doesn't mount the default plugin directory /usr/libexec/kubernetes/kubelet-plugins/volume/exec into the kube-controller-manager pod, so it doesn't know anything about your FlexVolume plugins on master.
There doesn't appear to be a non-hacky way to do this other than to use a different Kubernetes deployment tool until KOPs addresses this problem.
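For what it's worth, on clusters where you do control the kube-controller-manager static pod manifest (e.g. kubeadm, unlike KOPs), the "hacky" fix this implies would amount to mounting the plugin directory into the controller-manager and pointing --flex-volume-plugin-dir at it. A rough sketch only, assuming the default plugin path:
spec:
  containers:
  - command:
    - kube-controller-manager
    - --flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec
    # ...existing flags...
    volumeMounts:
    - name: flexvolume-dir
      mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      readOnly: true
  volumes:
  - name: flexvolume-dir
    hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate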
This is the 2nd question, following my 1st question at
PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
I am setting up a Kubernetes lab using one node only and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
Based on feedback provided by 'helmbert', I modified the content of
https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
It works and I don't see the event "PersistentVolumeClaim is not bound: “nfs-pv-provisioning-demo”" anymore.
$ cat nfs-server-local-pv01.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data01"
$ cat nfs-server-local-pvc01.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01 10Gi RWO Retain Available 4s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv-provisioning-demo Bound pv01 10Gi RWO 2m
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-server-nlzlv 1/1 Running 0 1h
$ kubectl describe pods nfs-server-nlzlv
Name: nfs-server-nlzlv
Namespace: default
Node: lab-kube-06/10.0.0.6
Start Time: Tue, 21 Nov 2017 19:32:21 +0000
Labels: role=nfs-server
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"b1b00292-cef2-11e7-8ed3-000d3a04eb...
Status: Running
IP: 10.32.0.3
Created By: ReplicationController/nfs-server
Controlled By: ReplicationController/nfs-server
Containers:
nfs-server:
Container ID: docker://1ea76052920d4560557cfb5e5bfc9f8efc3af5f13c086530bd4e0aded201955a
Image: gcr.io/google_containers/volume-nfs:0.8
Image ID: docker-pullable://gcr.io/google_containers/volume-nfs#sha256:83ba87be13a6f74361601c8614527e186ca67f49091e2d0d4ae8a8da67c403ee
Ports: 2049/TCP, 20048/TCP, 111/TCP
State: Running
Started: Tue, 21 Nov 2017 19:32:43 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/exports from mypvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-grgdz (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
mypvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pv-provisioning-demo
ReadOnly: false
default-token-grgdz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-grgdz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
I continued with the rest of the steps, reached the "Setup the fake backend" section, and ran the following command:
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
I see the status 'ContainerCreating' and it never changes to 'Running' for both nfs-busybox pods. Is this because the container image is for Google Cloud, as shown in the YAML?
https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-server-rc.yaml
containers:
- name: nfs-server
  image: gcr.io/google_containers/volume-nfs:0.8
  ports:
  - name: nfs
    containerPort: 2049
  - name: mountd
    containerPort: 20048
  - name: rpcbind
    containerPort: 111
  securityContext:
    privileged: true
  volumeMounts:
  - mountPath: /exports
    name: mypvc
Do I have to replace that 'image' line with something else because I don't use Google Cloud for this lab? I only have a single node in my lab. Do I have to rewrite the definition of 'containers' above? What should I replace the 'image' line with? Do I need to download a dockerized 'nfs image' from somewhere?
$ kubectl describe pvc nfs
Name: nfs
Namespace: default
StorageClass:
Status: Bound
Volume: nfs
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Mi
Access Modes: RWX
Events: <none>
$ kubectl describe pv nfs
Name: nfs
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Mi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.111.29.157
Path: /
ReadOnly: false
Events: <none>
$ kubectl get rc
NAME DESIRED CURRENT READY AGE
nfs-busybox 2 2 0 25s
nfs-server 1 1 1 1h
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-busybox-lmgtx 0/1 ContainerCreating 0 3m
nfs-busybox-xn9vz 0/1 ContainerCreating 0 3m
nfs-server-nlzlv 1/1 Running 0 1h
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
nfs-server ClusterIP 10.111.29.157 <none> 2049/TCP,20048/TCP,111/TCP 9s
$ kubectl describe services nfs-server
Name: nfs-server
Namespace: default
Labels: <none>
Annotations: <none>
Selector: role=nfs-server
Type: ClusterIP
IP: 10.111.29.157
Port: nfs 2049/TCP
TargetPort: 2049/TCP
Endpoints: 10.32.0.3:2049
Port: mountd 20048/TCP
TargetPort: 20048/TCP
Endpoints: 10.32.0.3:20048
Port: rpcbind 111/TCP
TargetPort: 111/TCP
Endpoints: 10.32.0.3:111
Session Affinity: None
Events: <none>
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs 1Mi RWX Retain Bound default/nfs 38m
pv01 10Gi RWO Retain Bound default/nfs-pv-provisioning-demo 1h
I see repeating events - MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
$ kubectl describe pod nfs-busybox-lmgtx
Name: nfs-busybox-lmgtx
Namespace: default
Node: lab-kube-06/10.0.0.6
Start Time: Tue, 21 Nov 2017 20:39:35 +0000
Labels: name=nfs-busybox
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-busybox","uid":"15d683c2-cefc-11e7-8ed3-000d3a04e...
Status: Pending
IP:
Created By: ReplicationController/nfs-busybox
Controlled By: ReplicationController/nfs-busybox
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port: <none>
Command:
sh
-c
while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt from nfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-grgdz (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
default-token-grgdz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-grgdz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned nfs-busybox-lmgtx to lab-kube-06
Normal SuccessfulMountVolume 17m kubelet, lab-kube-06 MountVolume.SetUp succeeded for volume "default-token-grgdz"
Warning FailedMount 17m kubelet, lab-kube-06 MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs 10.111.29.157:/ /var/lib/kubelet/pods/15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43641.scope.
mount: wrong fs type, bad option, bad superblock on 10.111.29.157:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
Warning FailedMount 9m (x4 over 15m) kubelet, lab-kube-06 Unable to mount volumes for pod "nfs-busybox-lmgtx_default(15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-lmgtx". list of unattached/unmounted volumes=[nfs]
Warning FailedMount 4m (x8 over 15m) kubelet, lab-kube-06 (combined from similar events): Unable to mount volumes for pod "nfs-busybox-lmgtx_default(15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-lmgtx". list of unattached/unmounted volumes=[nfs]
Warning FailedSync 2m (x7 over 15m) kubelet, lab-kube-06 Error syncing pod
$ kubectl describe pod nfs-busybox-xn9vz
Name: nfs-busybox-xn9vz
Namespace: default
Node: lab-kube-06/10.0.0.6
Start Time: Tue, 21 Nov 2017 20:39:35 +0000
Labels: name=nfs-busybox
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-busybox","uid":"15d683c2-cefc-11e7-8ed3-000d3a04e...
Status: Pending
IP:
Created By: ReplicationController/nfs-busybox
Controlled By: ReplicationController/nfs-busybox
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port: <none>
Command:
sh
-c
while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt from nfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-grgdz (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
default-token-grgdz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-grgdz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 59m (x6 over 1h) kubelet, lab-kube-06 Unable to mount volumes for pod "nfs-busybox-xn9vz_default(15d7fb5e-cefc-11e7-8ed3-000d3a04ebcd)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-xn9vz". list of unattached/unmounted volumes=[nfs]
Warning FailedMount 7m (x32 over 1h) kubelet, lab-kube-06 (combined from similar events): MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/15d7fb5e-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs 10.111.29.157:/ /var/lib/kubelet/pods/15d7fb5e-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-59365.scope.
mount: wrong fs type, bad option, bad superblock on 10.111.29.157:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
Warning FailedSync 2m (x31 over 1h) kubelet, lab-kube-06 Error syncing pod
I had the same problem;
sudo apt install nfs-kernel-server
directly on the nodes fixed it for Ubuntu 18.04 server.
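The kubelet shells out to mount -t nfs, which needs the /sbin/mount.<type> helper mentioned in the error output above; after installing the package you can confirm the helper exists on the node:
ls -l /sbin/mount.nfs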
My NFS server was running on AWS EC2.
My pod was stuck in the ContainerCreating state.
I was facing this issue because the Kubernetes cluster node CIDR range was not present in the inbound rules of the Security Group of my AWS EC2 instance (where my NFS server was running).
Solution:
Added my Kubernetes cluster node CIDR range to the inbound rules of the Security Group.
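As an illustration only (the security group ID and CIDR below are placeholders; the ports are the ones the nfs-server service exposes above), opening the NFS ports from the node range could look like:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2049 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 20048 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 111 --cidr 10.0.0.0/16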
Installing the following NFS libraries on the CentOS node machines worked for me:
yum install -y nfs-utils nfs-utils-lib
Installing the nfs-common library on Ubuntu worked for me.
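For reference, on Debian/Ubuntu nodes that is:
sudo apt-get install -y nfs-common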