I am trying to use the MicroK8s storage addon, but my PVC and pod are stuck in Pending and I don't know what is wrong. I am also using the "registry" addon, which uses the same storage and works without a problem.
FYI:
I have already restarted MicroK8s multiple times and even completely removed and reinstalled it, but the problem remains.
Yaml files:
# =================== pvc.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wws-registry-claim
spec:
volumeName: registry-pvc
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: microk8s-hostpath
# =================== deployment.yaml (just spec section)
spec:
serviceName: registry
replicas: 1
selector:
matchLabels:
io.kompose.service: registry
template:
metadata:
labels:
io.kompose.service: registry
spec:
containers:
- image: {{ .Values.image }}
name: registry-master
ports:
- containerPort: 28015
- containerPort: 29015
- containerPort: 8080
resources:
requests:
cpu: {{ .Values.request_cpu }}
memory: {{ .Values.request_memory }}
limits:
cpu: {{ .Values.limit_cpu }}
memory: {{ .Values.limit_memory }}
volumeMounts:
- mountPath: /data
name: rdb-local-data
env:
- name: RUN_ENV
value: 'kubernetes'
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumes:
- name: rdb-local-data
persistentVolumeClaim:
claimName: wws-registry-claim
Cluster info:
$ kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
container-registry registry-claim Bound pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca 20Gi RWX microk8s-hostpath 56m
default wws-registry-claim Pending registry-pvc 0 microk8s-hostpath 23m
$ kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca 20Gi RWX Delete Bound container-registry/registry-claim microk8s-hostpath 56m
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-9b8997588-vk5vt 1/1 Running 0 57m
hostpath-provisioner-7b9cb5cdb4-wxcp6 1/1 Running 0 57m
metrics-server-v0.2.1-598c8978c-74krr 2/2 Running 0 57m
tiller-deploy-77855d9dcf-4cvsv 1/1 Running 0 46m
$ kubectl -n kube-system logs hostpath-provisioner-7b9cb5cdb4-wxcp6
I0322 12:31:31.231110 1 controller.go:293] Starting provisioner controller 87fc12df-8b0a-11eb-b910-ee8a00c41384!
I0322 12:31:31.231963 1 controller.go:893] scheduleOperation[lock-provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.235618 1 leaderelection.go:154] attempting to acquire leader lease...
I0322 12:31:31.237785 1 leaderelection.go:176] successfully acquired lease to provision for pvc container-registry/registry-claim
I0322 12:31:31.237841 1 controller.go:893] scheduleOperation[provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.239011 1 hostpath-provisioner.go:86] creating backing directory: /var/snap/microk8s/common/default-storage/container-registry-registry-claim-pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca
I0322 12:31:31.239102 1 controller.go:627] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" created
I0322 12:31:31.244798 1 controller.go:644] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" saved
I0322 12:31:31.244813 1 controller.go:680] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" provisioned for claim "container-registry/registry-claim"
I0322 12:31:33.243345 1 leaderelection.go:196] stopped trying to renew lease to provision for pvc container-registry/registry-claim, task succeeded
$ kubectl get sc
NAME PROVISIONER AGE
microk8s-hostpath microk8s.io/hostpath 169m
$ kubectl get sc -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
creationTimestamp: "2021-03-22T12:31:25Z"
name: microk8s-hostpath
resourceVersion: "2845"
selfLink: /apis/storage.k8s.io/v1/storageclasses/microk8s-hostpath
uid: e94b5653-e261-4e1f-b646-e272e0c8c493
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Microk8s inspect:
$ microk8s.inspect
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
WARNING: Docker is installed.
Add the following lines to /etc/docker/daemon.json:
{
"insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
Report tarball is at /var/snap/microk8s/1671/inspection-report-20210322_143034.tar.gz
I found the problem. Since the hostpath-provisioner takes care of creating the PV, we should not set volumeName in the PVC YAML. Once I removed that field, the provisioner created a PV, bound my PVC to it, and my pod started.
Now my PVC is:
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wws-registry-claim
spec:
# volumeName: registry-pvc
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: microk8s-hostpath
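A quick way to confirm the fix after re-applying the claim (commands only; the claim name comes from the manifest above):
$ kubectl apply -f pvc.yaml
$ kubectl get pvc wws-registry-claim
The STATUS column should move from Pending to Bound once the hostpath-provisioner creates a backing PV.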
As the documentation states:
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of my-storage-class and 1 Gib of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolume Claims. Note that, the PersistentVolumes associated with the Pods' PersistentVolume Claims are not deleted when the Pods, or StatefulSet are deleted. This must be done manually.
The part I'm interested in is this: If no StorageClass is specified, then the default StorageClass will be used.
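(For reference, whether a default StorageClass exists at all can be checked with a single command; the default class is flagged with a "(default)" marker next to its name:)
$ kubectl get storageclass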
I create a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: ches
name: ches
spec:
serviceName: ches
replicas: 1
selector:
matchLabels:
app: ches
template:
metadata:
labels:
app: ches
spec:
serviceAccountName: ches-serviceaccount
nodeSelector:
ches-worker: "true"
volumes:
- name: data
hostPath:
path: /data/test
containers:
- name: ches
image: [here I have the repo]
imagePullPolicy: Always
securityContext:
privileged: true
args:
- server
- --console-address
- :9011
- /data
env:
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: ches-keys
key: access-key
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: ches-keys
key: secret-key
ports:
- containerPort: 9000
hostPort: 9011
resources:
limits:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: data
mountPath: /data
imagePullSecrets:
- name: edge-storage-token
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Of course, I have already created the secrets, imagePullSecrets, etc., and I have labeled the node with ches-worker: "true".
When I apply the yaml file, the pod is in Pending status and kubectl describe pod ches-0 -n ches gives the following error:
Warning  FailedScheduling  6s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling
Am I missing something here?
You need to create a PV in order to get a PVC bound. If you want PVs to be created automatically from PVCs, you need a provisioner installed in your cluster.
First create a PV with at least the amount of space needed by your PVC.
Then you can apply your deployment YAML, which contains the PVC.
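For example, a minimal static hostPath PV sized for the 1Gi claim in the volumeClaimTemplates above might look like this (a sketch; the PV name and the path on the node are assumptions, adjust them to your environment):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ches-data-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/ches            # assumed directory on the node labeled ches-worker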
When K3s is installed, it also ships a storage class and marks it as the default.
Check with kubectl get storageclass:
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  8s
A plain Kubernetes cluster, on the other hand, does not come with a default storage class.
In order to solve the problem:
Install the rancher.io/local-path provisioner and storage class:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
Check with kubectl get storageclass
Make this storage class (local-path) the default:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
I'm using Kubernetes v1.24.7 on Ubuntu 18.04.6 LTS and facing a problem with an NFS PersistentVolume mount. When I try to deploy my Jenkins deployment, it always fails with the errors below.
$ kubectl describe pod jenkins-6786789d5d-m26zw -n jenkins
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned jenkins/jenkins-6786789d5d-m26zw to worker-3
Warning FailedMount 5m31s (x2 over 14m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-65npd data]: timed out waiting for the condition
Warning FailedMount 3m17s (x8 over 23m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-65npd]: timed out waiting for the condition
Warning FailedMount 3m6s (x19 over 25m) kubelet MountVolume.SetUp failed for volume "pv-nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.1 192.168.72.136:/mnt/nfs/stg/jenkins /var/lib/kubelet/pods/853c44ed-bf2b-4e6a-b666-c1adab7f7f4b/volumes/kubernetes.io~nfs/pv-nfs
Output: mount.nfs: mounting 192.168.72.136:/mnt/nfs/stg/jenkins failed, reason given by server: No such file or directory
The external NFS export below was provided by our IT storage administrator:
192.168.72.136:/nfs-volume
The following packages have already been installed on the master and worker nodes:
apt install nfs-common
apt install cifs-utils
apt install nfs-kernel-server
On my master and worker (host) machines I added the line below to /etc/fstab, and I can mount the NFS volume:
192.168.72.136:/nfs-volume /mnt/nfs/stg/ nfs defaults 0 0
However, the same problem persists during the Kubernetes deployment. I also tried the option below in /etc/fstab, with the same result:
192.168.72.136:/nfs-volume /mnt/nfs/stg/ nfs rw,hard,intr 0 0
My PV and PVC status:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs 100Gi RWX Retain Bound jenkins/pvc-nfs nfs 11s
$ kubectl get pvc -n jenkins
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound pv-nfs 100Gi RWX nfs 17s
My PersistentVolume and Deployment YAML are as follows.
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs
labels:
type: pv-nfs
spec:
storageClassName: nfs
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
mountOptions:
- nfsvers=4.0
nfs:
server: 192.168.72.136
path: "/mnt/nfs/stg/jenkins"
readOnly: false
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: jenkins
labels:
app: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
securityContext:
fsGroup: 0
runAsUser: 0
serviceAccountName: admin
containers:
- name: jenkins
image: jenkins/jenkins:latest
securityContext:
privileged: true
runAsUser: 0
ports:
- containerPort: 8080
volumeMounts:
- name: data
mountPath: /var/jenkins_home
volumes:
- name: data
persistentVolumeClaim:
claimName: pvc-nfs
The directory /mnt/nfs/stg/jenkins exists on the NFS mount. Please let me know what I'm missing here.
Thanks for the help.
Since the storage administrator exported the NFS share /nfs-volume from 192.168.72.136, the path in the PersistentVolume spec should be /nfs-volume, i.e. the export path as seen on the NFS server, not /mnt/nfs/stg/jenkins, which is only the local mount point on the nodes.
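In other words, the kubelet mounts the export directly from the NFS server, so the PV has to reference the server-side export path rather than a mount point that only exists on the nodes. Keeping everything else from the original manifest, the corrected PV would be:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
  labels:
    type: pv-nfs
spec:
  storageClassName: nfs
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.0
  nfs:
    server: 192.168.72.136
    path: "/nfs-volume"        # export path on the NFS server, not the local /mnt/nfs/stg mount point
    readOnly: false
(If the data is meant to live in the existing jenkins subdirectory, /nfs-volume/jenkins may also work, depending on how the export is configured.)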
I tried to set up a RabbitMQ cluster in a Kubernetes environment that has NFS PVs, with the help of this tutorial. Unfortunately it seems that RabbitMQ wants to change the owner of /var/lib/rabbitmq, but when I have an NFS directory mounted there, I get an error:
$ kubectl logs rabbitmq-0 -f
chown: /var/lib/rabbitmq: Operation not permitted
chown: /var/lib/rabbitmq: Operation not permitted
I guess I have two options: fork RabbitMQ, remove the chown and build my own image, or make Kubernetes/NFS play nicely together. I would rather not maintain my own fork, and getting Kubernetes/NFS to work nicely does not sound like it should be my problem. Any other ideas?
This is what I tried in order to reproduce the issue.
I installed a Kubernetes cluster using kubeadm on Red Hat 7; below are the cluster and node details.
ENVIRONMENT DETAILS:
[root@master tmp]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.4:6443
KubeDNS is running at https://192.168.56.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master tmp]#
[root@master tmp]# kubectl get no
NAME STATUS ROLES AGE VERSION
master.k8s Ready master 8d v1.16.2
node1.k8s Ready <none> 7d22h v1.16.3
node2.k8s Ready <none> 7d21h v1.16.3
[root@master tmp]#
First, I set up NFS by running the steps below on both the master and worker nodes. Here the master node is the NFS server and both worker nodes are NFS clients.
NFS SETUP:
yum install nfs-utils nfs-utils-lib                          # on NFS server and clients
yum install portmap                                          # on NFS server and clients
mkdir /nfsroot                                               # on NFS server
[root@master ~]# cat /etc/exports                            # on NFS server
/nfsroot 192.168.56.5/255.255.255.0(rw,sync,no_root_squash)
/nfsroot 192.168.56.6/255.255.255.0(rw,sync,no_root_squash)
exportfs -r                                                  # on NFS server
service nfs start                                            # on NFS server and clients
showmount -e                                                 # on NFS server and clients
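Optionally, the export can be sanity-checked from one of the worker nodes before involving Kubernetes (a quick manual test using the server address and export from the steps above):
showmount -e 192.168.56.4
mount -t nfs 192.168.56.4:/nfsroot /mnt
touch /mnt/write-test && ls -l /mnt/write-test
umount /mnt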
Now the NFS setup is ready and we can apply the RabbitMQ Kubernetes setup.
RABBITMQ K8S SETUP:
The first step is to create PersistentVolumes backed by the NFS export we created in the step above.
[root@master tmp]# cat /root/rabbitmq-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-1
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-2
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-3
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-4
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
After applying the above manifest, the PVs were created as below:
[root@master ~]# kubectl apply -f rabbitmq-pv.yaml
persistentvolume/rabbitmq-pv-1 created
persistentvolume/rabbitmq-pv-2 created
persistentvolume/rabbitmq-pv-3 created
persistentvolume/rabbitmq-pv-4 created
[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
rabbitmq-pv-1 1Mi RWO,ROX Recycle Available 5s
rabbitmq-pv-2 1Mi RWO,ROX Recycle Available 5s
rabbitmq-pv-3 1Mi RWO,ROX Recycle Available 5s
rabbitmq-pv-4 1Mi RWO,ROX Recycle Available 5s
[root@master ~]#
There is no need to create PersistentVolumeClaims manually, since the StatefulSet's volumeClaimTemplates section takes care of that automatically.
Now let's create the secret you mentioned, as below:
[root@master tmp]# kubectl create secret generic rabbitmq-config --from-literal=erlang-cookie=c-is-for-cookie-thats-good-enough-for-me
secret/rabbitmq-config created
[root@master tmp]#
[root@master tmp]# kubectl get secrets
NAME TYPE DATA AGE
default-token-vjsmd kubernetes.io/service-account-token 3 8d
jp-token-cfdzx kubernetes.io/service-account-token 3 5d2h
rabbitmq-config Opaque 1 39m
[root@master tmp]#
Now let's submit your RabbitMQ manifest, with a few changes: all LoadBalancer service types are replaced with NodePort services, since we are not in a cloud-provider environment; the volume name is changed to rabbitmq-pv, matching the PVs created above; and the requested size is reduced from 1Gi to 1Mi, since this is just a testing demo.
apiVersion: v1
kind: Service
metadata:
# Expose the management HTTP port on each node
name: rabbitmq-management
labels:
app: rabbitmq
spec:
ports:
- port: 15672
name: http
selector:
app: rabbitmq
sessionAffinity: ClientIP
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
# The required headless service for StatefulSets
name: rabbitmq
labels:
app: rabbitmq
spec:
ports:
- port: 5672
name: amqp
- port: 4369
name: epmd
- port: 25672
name: rabbitmq-dist
clusterIP: None
selector:
app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
# The required headless service for StatefulSets
name: rabbitmq-cluster
labels:
app: rabbitmq
spec:
ports:
- port: 5672
name: amqp
- port: 4369
name: epmd
- port: 25672
name: rabbitmq-dist
type: NodePort
selector:
app: rabbitmq
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
spec:
serviceName: "rabbitmq"
selector:
matchLabels:
app: rabbitmq
replicas: 4
template:
metadata:
labels:
app: rabbitmq
spec:
terminationGracePeriodSeconds: 10
containers:
- name: rabbitmq
image: rabbitmq:3.6.6-management-alpine
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then
sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new;
cat /etc/resolv.conf.new > /etc/resolv.conf;
rm /etc/resolv.conf.new;
fi;
until rabbitmqctl node_health_check; do sleep 1; done;
if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then
rabbitmqctl stop_app;
rabbitmqctl join_cluster rabbit@rabbitmq-0;
rabbitmqctl start_app;
fi;
rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}'
env:
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbitmq-config
key: erlang-cookie
ports:
- containerPort: 5672
name: amqp
- containerPort: 25672
name: rabbitmq-dist
volumeMounts:
- name: rabbitmq-pv
mountPath: /var/lib/rabbitmq
volumeClaimTemplates:
- metadata:
name: rabbitmq-pv
annotations:
volume.alpha.kubernetes.io/storage-class: default
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Mi # make this bigger in production
After submitting the manifest, the StatefulSet and pods were created:
[root@master tmp]# kubectl apply -f rabbitmq.yaml
service/rabbitmq-management created
service/rabbitmq created
service/rabbitmq-cluster created
statefulset.apps/rabbitmq created
[root@master tmp]#
[root@master tmp]# kubectl get pods
NAME READY STATUS RESTARTS AGE
rabbitmq-0 1/1 Running 0 18m
rabbitmq-1 1/1 Running 0 17m
rabbitmq-2 1/1 Running 0 13m
rabbitmq-3 1/1 Running 0 13m
[root@master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rabbitmq-pv-rabbitmq-0 Bound rabbitmq-pv-1 1Mi RWO,ROX 49m
rabbitmq-pv-rabbitmq-1 Bound rabbitmq-pv-3 1Mi RWO,ROX 48m
rabbitmq-pv-rabbitmq-2 Bound rabbitmq-pv-2 1Mi RWO,ROX 44m
rabbitmq-pv-rabbitmq-3 Bound rabbitmq-pv-4 1Mi RWO,ROX 43m
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq ClusterIP None <none> 5672/TCP,4369/TCP,25672/TCP 49m
rabbitmq-cluster NodePort 10.102.250.172 <none> 5672:30574/TCP,4369:31757/TCP,25672:31854/TCP 49m
rabbitmq-management NodePort 10.108.131.46 <none> 15672:31716/TCP 49m
[root@master ~]#
Now I tried to hit the RabbitMQ management page through the NodePort service at http://192.168.56.6:31716 and was able to get the login page.
So please let me know if you still face the chown issue after trying the above, so that we can dig further, for example by checking whether any PodSecurityPolicies are applied.
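For reference, any PodSecurityPolicies on the cluster can be listed with the commands below (the policy name is whatever your cluster defines):
$ kubectl get podsecuritypolicies
$ kubectl describe psp <policy-name>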
I am currently trying to deploy the following on Minikube. I used the configuration files below to set up a hostPath as persistent storage on the Minikube node.
apiVersion: v1
kind: PersistentVolume
metadata:
name: "pv-volume"
spec:
capacity:
storage: "20Gi"
accessModes:
- "ReadWriteOnce"
hostPath:
path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "orientdb-pv-claim"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: orientdbservice
spec:
#replicas: 1
template:
metadata:
name: orientdbservice
labels:
run: orientdbservice
test: orientdbservice
spec:
containers:
- name: orientdbservice
image: orientdb:latest
env:
- name: ORIENTDB_ROOT_PASSWORD
value: "rootpwd"
ports:
- containerPort: 2480
name: orientdb
volumeMounts:
- name: orientdb-config
mountPath: /data/orientdb/config
- name: orientdb-databases
mountPath: /data/orientdb/databases
- name: orientdb-backup
mountPath: /data/orientdb/backup
volumes:
- name: orientdb-config
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-databases
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-backup
persistentVolumeClaim:
claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: orientdbservice
labels:
run: orientdbservice
spec:
type: NodePort
selector:
run: orientdbservice
ports:
- protocol: TCP
port: 2480
name: http
which results in the following:
#kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-volume 20Gi RWO Retain Available 4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO Delete Bound default/orientdb-pv-claim standard 4h
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-pv-claim Bound pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO
#kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
orientdbservice 10.0.0.16 <nodes> 2480:30552/TCP 4h
#kubectl get pods
NAME READY STATUS RESTARTS AGE
orientdbservice-458328598-zsmw5 0/1 ContainerCreating 0 4h
#kubectl describe pod orientdbservice-458328598-zsmw5
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4h 1m 37 kubelet, minikube Warning FailedMount Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
4h 1m 37 kubelet, minikube Warning FailedSync Error syncing pod
I see the following error:
Unable to mount volumes for pod, timeout expired waiting for volumes to attach/mount for pod
Is there something incorrect in the way I am creating the PersistentVolume and PersistentVolumeClaim on my node?
minikube version: v0.20.0
Appreciate all the help
Your configuration is fine.
Tested under minikube v0.24.0, minikube v0.25.0 and minikube v0.26.1 without any problem.
Bear in mind that Minikube is under active development and, especially if you're on Windows, is, as they say, experimental software.
Update to a newer version of minikube and redeploy it. This should solve the problem.
You can check for updates with the minikube update-check command which results in something like this:
$ minikube update-check
CurrentVersion: v0.25.0
LatestVersion: v0.26.1
To upgrade Minikube, simply run minikube delete, which deletes your current Minikube installation, and then download the new release as described.
$ minikube delete
There is a newer version of minikube available (v0.26.1). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.26.1
To disable this notification, run the following:
minikube config set WantUpdateNotification false
Deleting local Kubernetes cluster...
Machine deleted.
For some reason the provisioner k8s.io/minikube-hostpath in Minikube doesn't work.
So:
delete the default storage class: kubectl delete storageclass standard
create the following storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain
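If the recreated standard class should also act as the cluster default (so that claims without an explicit storageClassName, like orientdb-pv-claim above, still land on it), it may additionally need the usual default-class annotation, for example:
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'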
Also, in your volume mounts you have one PVC bound to one PV, so instead of multiple volumes just use a single volume and mount it with different subPaths; that will create three subdirectories (backup, config and databases) under your host's /data directory:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: orientdbservice
spec:
#replicas: 1
template:
metadata:
name: orientdbservice
labels:
run: orientdbservice
test: orientdbservice
spec:
containers:
- name: orientdbservice
image: orientdb:latest
env:
- name: ORIENTDB_ROOT_PASSWORD
value: "rootpwd"
ports:
- containerPort: 2480
name: orientdb
volumeMounts:
- name: orientdb
mountPath: /data/orientdb/config
subPath: config
- name: orientdb
mountPath: /data/orientdb/databases
subPath: databases
- name: orientdb
mountPath: /data/orientdb/backup
subPath: backup
volumes:
- name: orientdb
persistentVolumeClaim:
claimName: orientdb-pv-claim
- Now deploy your yaml: kubectl create -f yourorientdb.yaml
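Afterwards, the claim and the pod can be checked with the usual commands; the pod should leave ContainerCreating once the volume mounts:
kubectl get pvc orientdb-pv-claim
kubectl get pods -l run=orientdbservice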
I am currently trying to reproduce this tutorial on Minikube:
http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html
I updated the configuration files to use a hostPath as persistent storage on the Minikube node.
kind: PersistentVolume
apiVersion: v1
metadata:
name: pv0001
labels:
type: local
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: myclaim
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo,environment=test"
volumeClaimTemplates:
- metadata:
name: myclaim
Which results in the following:
kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv0001 1Gi RWO Retain Available 17s
pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3 1Gi RWO Delete Bound default/myclaim 11s
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
myclaim Bound pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3 1Gi RWO 14s
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 3d
mongo None <none> 27017/TCP 53s
kubectl get pod
No resources found.
kubectl describe service mongo
Name: mongo
Namespace: default
Labels: name=mongo
Selector: role=mongo
Type: ClusterIP
IP: None
Port: <unset> 27017/TCP
Endpoints: <none>
Session Affinity: None
No events.
kubectl get statefulsets
NAME DESIRED CURRENT AGE
mongo 3 0 4h
kubectl describe statefulsets mongo
Name: mongo
Namespace: default
Image(s): mongo,cvallance/mongo-k8s-sidecar
Selector: environment=test,role=mongo
Labels: environment=test,role=mongo
Replicas: 0 current / 3 desired
Annotations: <none>
CreationTimestamp: Thu, 30 Mar 2017 18:23:56 +0200
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
1s 1s 4 {statefulset } Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
1s 1s 4 {statefulset } Warning FailedCreate pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
1s 0s 4 {statefulset } Warning FailedCreate pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl get ev | grep mongo
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl describe pvc myclaim
Name: myclaim
Namespace: default
StorageClass: standard
Status: Bound
Volume: pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3
Labels: <none>
Capacity: 1Gi
Access Modes: RWO
No events.
minikube version: v0.17.1
It seems that the service cannot find any pods to back it, which makes this hard to debug with kubectl logs.
Is there something wrong with the way I am creating a persistent volume on my node?
Thanks a lot
TL;DR
In the situation described in the question, the problem was that the Pods for the StatefulSet did not start up at all, and therefore the Service had no targets. The reason for not starting up was:
Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
And since the volume is required by default, the Pod won't start without it. So edit the StatefulSet's volumeClaimTemplates to include:
volumeClaimTemplates:
- metadata:
name: myclaim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
(There is no need to create the PersistentVolumeClaim manually.)
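Once the template is valid, the StatefulSet creates one claim per replica on its own; with the names used here they should show up as myclaim-mongo-0, myclaim-mongo-1 and myclaim-mongo-2, which can be confirmed with:
kubectl get pvc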
More general solution
If you can't connect to a Service, try this command:
kubectl describe service myservicename
And if you see something like this in the output:
Endpoints: <none>
That means there are no targets (usually Pods) running, or the targets are not ready. To find out which one is the case, do:
kubectl describe endpoint myservicename
It will list all endpoints, ready or not. If they are not ready, investigate the readinessProbe in the Pod. If the endpoint doesn't exist at all, then try to find out why by looking at the StatefulSet (Deployment, ReplicaSet, ReplicationController, etc.) itself for messages (the Events section):
kubectl describe statefulset mystatefulsetname
This information is available if you do:
kubectl get ev | grep something
If you are sure they are running and ready, then the labels on the Pods and the selector on the Service do not match up.
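A quick way to compare the two sides, i.e. the Service selector against the labels actually present on the Pods:
kubectl describe service myservicename | grep Selector
kubectl get pods --show-labels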