Running MongoDB on Kubernetes Minikube with local persistent storage - mongodb

I am currently trying to reproduce this tutorial on Minikube:
http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html
I updated the configuration files to use a hostpath as a persistent storage on minikube node.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: myclaim
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: myclaim
Which results in the following:
kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM             REASON   AGE
pv0001                                     1Gi        RWO           Retain          Available                              17s
pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3   1Gi        RWO           Delete          Bound       default/myclaim            11s
kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
myclaim   Bound     pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3   1Gi        RWO           14s
kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   10.0.0.1     <none>        443/TCP     3d
mongo        None         <none>        27017/TCP   53s
kubectl get pod
No resources found.
kubectl describe service mongo
Name: mongo
Namespace: default
Labels: name=mongo
Selector: role=mongo
Type: ClusterIP
IP: None
Port: <unset> 27017/TCP
Endpoints: <none>
Session Affinity: None
No events.
kubectl get statefulsets
NAME      DESIRED   CURRENT   AGE
mongo     3         0         4h
kubectl describe statefulsets mongo
Name: mongo
Namespace: default
Image(s): mongo,cvallance/mongo-k8s-sidecar
Selector: environment=test,role=mongo
Labels: environment=test,role=mongo
Replicas: 0 current / 3 desired
Annotations: <none>
CreationTimestamp: Thu, 30 Mar 2017 18:23:56 +0200
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen   LastSeen   Count   From            SubObjectPath   Type      Reason         Message
  ---------   --------   -----   ----            -------------   ----      ------         -------
  1s          1s         4       {statefulset }                  Warning   FailedCreate   pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  1s          1s         4       {statefulset }                  Warning   FailedCreate   pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  1s          0s         4       {statefulset }                  Warning   FailedCreate   pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl get ev | grep mongo
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl describe pvc myclaim
Name: myclaim
Namespace: default
StorageClass: standard
Status: Bound
Volume: pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3
Labels: <none>
Capacity: 1Gi
Access Modes: RWO
No events.
minikube version: v0.17.1
It seems that the Service is not able to find any Pods, which makes it complicated to debug with kubectl logs.
Is there something wrong with the way I am creating the persistent volume on my node?
Thanks a lot

TL;DR
In the situation described in the question, the problem was that the Pods for the StatefulSet did not start up at all, therefore the Service had no targets. The reason for not starting up was:
Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
And since the volume is by default defined as required, the Pod won't start without it. So edit the StatefulSet's volumeClaimTemplates to have:
volumeClaimTemplates:
- metadata:
    name: myclaim
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
(There is no need to create the PersistentVolumeClaim manually.)
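Once the fix is applied, the StatefulSet controller should create one claim per replica on its own; a quick check (output illustrative):
kubectl get pvc
NAME              STATUS   VOLUME    CAPACITY   ACCESSMODES   AGE
myclaim-mongo-0   Bound    ...       1Gi        RWO           1m
myclaim-mongo-1   Bound    ...       1Gi        RWO           1m
myclaim-mongo-2   Bound    ...       1Gi        RWO           1m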
More general solution
If you can't connect to a Service, try this command:
kubectl describe service myservicename
And if you see something like this in the output:
Endpoints: <none>
That means there are no targets (usually Pods) running, or the targets are not ready. To find out which is the case, do:
kubectl describe endpoints myservicename
It will list all endpoints, ready or not. If an endpoint is not ready, investigate the readinessProbe in the Pod. If no endpoints exist at all, try to find out why by looking at the StatefulSet (Deployment, ReplicaSet, ReplicationController, etc.) itself for messages (the Events section):
kubectl describe statefulset mystatefulsetname
This information is also available if you do:
kubectl get ev | grep something
If you are sure they are running and ready, then the labels on the Pods and the Service do not match up.
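For example, with the mongo Service from this question you can compare the two directly (a sketch; role=mongo is the label used in the manifests above):
# the Service's selector
kubectl get service mongo -o jsonpath='{.spec.selector}'
# the Pods carrying that label; an empty list means the selector matches nothing
kubectl get pods -l role=mongo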

Related

0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims

As the documentation states:
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods, or StatefulSet, are deleted. This must be done manually.
The part I'm interested in is this: If no StorageClass is specified, then the default StorageClass will be used.
I create a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ches
  name: ches
spec:
  serviceName: ches
  replicas: 1
  selector:
    matchLabels:
      app: ches
  template:
    metadata:
      labels:
        app: ches
    spec:
      serviceAccountName: ches-serviceaccount
      nodeSelector:
        ches-worker: "true"
      volumes:
      - name: data
        hostPath:
          path: /data/test
      containers:
      - name: ches
        image: [here I have the repo]
        imagePullPolicy: Always
        securityContext:
          privileged: true
        args:
        - server
        - --console-address
        - :9011
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: access-key
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: secret-key
        ports:
        - containerPort: 9000
          hostPort: 9011
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: data
          mountPath: /data
      imagePullSecrets:
      - name: edge-storage-token
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
Of course I have already created the secrets, imagePullSecrets, etc., and I have labeled the node as ches-worker.
When I apply the yaml file, the pod is in Pending status and kubectl describe pod ches-0 -n ches gives the following error:
Warning  FailedScheduling  6s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Am I missing something here?
You need to create a PV in order to get a PVC bound. If you want PVs to be created automatically from PVCs, you need a provisioner installed in your cluster.
First create a PV with at least the amount of space needed by your PVC.
Then you can apply your deployment yaml, which contains the PVC.
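As a sketch, a minimal hostPath PV that would satisfy a 1Gi ReadWriteOnce claim (name and path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv        # illustrative name
spec:
  capacity:
    storage: 1Gi          # must be at least the PVC's request
  accessModes:
    - ReadWriteOnce       # must include the mode the PVC asks for
  hostPath:
    path: /mnt/data       # illustrative path on the node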
K3s, when installed, also deploys a storage class and makes it the default.
Check with kubectl get storageclass:
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  8s
A vanilla K8s cluster, on the other hand, does not ship with a default storage class.
In order to solve the problem:
Download the rancher.io/local-path storage class:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
Check with kubectl get storageclass
Make this storage class (local-path) the default:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

How to set pvc with statefulset in kubernetes?

On GKE, I set a statefulset resource as
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        resources:
          limits:
            memory: 2Gi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /usr/share/redis
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data-pvc
I want to use a PVC, so I created this one. (This step was done before the StatefulSet deployment.)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
When I check the resource in Kubernetes:
kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-pvc   Bound    pvc-6163d1f8-fb3d-44ac-a91f-edef1452b3b9   10Gi       RWO            standard       132m
The default Storage Class is standard.
kubectl get storageclass
NAME                 PROVISIONER
standard (default)   kubernetes.io/gce-pd
But when I check the StatefulSet's deployment status, it is always wrong.
# Describe its pod details
...
Events:
  Type     Reason            Age                From                Message
  ----     ------            ----               ----                -------
  Warning  FailedScheduling  22s                default-scheduler   persistentvolumeclaim "redis-data-pvc" not found
  Warning  FailedScheduling  17s (x2 over 20s)  default-scheduler   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   Created           2s (x2 over 3s)    kubelet             Created container redis
  Normal   Started           2s (x2 over 3s)    kubelet             Started container redis
  Warning  BackOff           0s (x2 over 1s)    kubelet             Back-off restarting failed container
Why can't it find the redis-data-pvc name?
What you have done should work. Make sure that the PersistentVolumeClaim and the StatefulSet are located in the same namespace.
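Both should show up under the same namespace (the placeholder is yours to fill in):
kubectl get pvc redis-data-pvc -n <namespace>
kubectl get statefulset redis -n <namespace>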
That said, here is an easier solution that also lets you scale up to more replicas more easily:
When using a StatefulSet with PersistentVolumeClaims, use the volumeClaimTemplates: field in the StatefulSet instead.
The volumeClaimTemplates: will be used to create unique PVCs for each replica; their names end with an ordinal, e.g. -0, matching the Pods of the StatefulSet.
So instead, use a StatefulSet manifest like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        resources:
          limits:
            memory: 2Gi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /usr/share/redis
  volumeClaimTemplates:   # this will be used to create the PVCs
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
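With replicas: 3, the controller creates one claim per Pod, named <template-name>-<pod-name>, so kubectl get pvc should then show something like this (illustrative):
NAME                 STATUS   CAPACITY   ACCESS MODES   AGE
redis-data-redis-0   Bound    10Gi       RWO            1m
redis-data-redis-1   Bound    10Gi       RWO            1m
redis-data-redis-2   Bound    10Gi       RWO            1m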

Cannot get my stateful service to run. Pods can't get scheduled onto nodes

I have been trying for a while to get a stateful service to start on my kubernetes cluster.
The cluster has one master and one worker. It's running on top of AWS EC2 instances running with Ubuntu 18.04.
I've tried everything that I can think of but when I create the stateful service, the pods won't get scheduled onto the nodes.
I believe that it has something to do with the PV's, but I can't figure out what.
Also, I'm having a hard time getting any diagnostics. Trying to run kubectl logs on the pod and container returns nothing.
I first tried using local hardware, i.e. a local mount, but that didn't fix the problem.
I've now created an AWS EBS volume and have created a PV that references this.
The PV binds to it correctly, but I still can't get kubernetes to schedule the pods on the worker node.
Here are the .yaml config files that I'm using.
The first one creates the storage class called 'fast':
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
Here is the yaml file that creates the PV.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
Finally, here's the statefulset yaml file
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: lab4a
  name: apache-http
spec:
  selector:
    matchLabels:
      app: httpd
  serviceName: "httpd-service"
  replicas: 3
  template:
    metadata:
      namespace: lab4a
      labels:
        app: httpd
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: httpd
        image: httpd:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: web-pvc
      namespace: lab4a
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "fast"
      resources:
        requests:
          storage: 10Gi
kubectl get pv gives me:
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ebs-pv   10Gi       RWX            Retain           Available                                   31m
So it stands to reason, at least as far as I can tell, that the PV is ready to go.
From my understanding, I don't need to supply a PV Claim manually as the volumeClaimTemplates section in the statefulset yaml file will do this dynamically.
kubectl get all -n lab4a gives me:
NAME        READY   STATUS    RESTARTS   AGE
pod/web-0   0/1     Pending   0          16m

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   None         <none>        80/TCP    16m

NAME                   READY   AGE
statefulset.apps/web   0/2     16m
when I run kubectl describe pod web-0 -n lab4a I get the following:
Name:               web-0
Namespace:          lab4a
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=nginx
                    controller-revision-hash=web-b46f789c4
                    statefulset.kubernetes.io/pod-name=web-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/web
Containers:
  nginx:
    Image:        k8s.gcr.io/nginx-slim:0.8
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mjclk (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  www:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  www-web-0
    ReadOnly:   false
  default-token-mjclk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mjclk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  35s (x14 over 16m)  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
I have no idea what's failing, and I don't know what else to try to debug this problem. Is kubernetes failing to bind the persistent volume to the node? Or is it some other issue?
Any help appreciated.
Thanks
(1) Your Storage
AWS EBS does not provide ReadWriteMany (see table in the docs https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
You can:
- Use ReadWriteOnce instead (proposed; see the sketch after this list).
- Set up an in-cluster NFS that hosts PVs that allow ReadWriteMany, if you have an actual need for this.
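For the first option, the only change is in the volumeClaimTemplates of your StatefulSet (same template as above, different access mode):
volumeClaimTemplates:
- metadata:
    name: web-pvc
    namespace: lab4a
  spec:
    accessModes: [ "ReadWriteOnce" ]   # EBS supports RWO, not RWX
    storageClassName: "fast"
    resources:
      requests:
        storage: 10Gi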
(2) Your Taints and Tolerations
Your Pod's tolerations look okay; can you provide insight on your nodes' taints? Did you fiddle around with kubectl taint ... before on this cluster? Is this a managed cluster or did you set it up on your own on AWS machines?
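To inspect the taints, both of these are plain kubectl and safe to run (a sketch):
# taints on every node
kubectl describe nodes | grep -A2 Taints
# or as a compact table
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
A control-plane taint such as node-role.kubernetes.io/master:NoSchedule on one node would be expected; a taint on your only worker would explain the 0/2 message.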

k8s - Cinder "0/x nodes are available: x node(s) had volume node affinity conflict"

I have my own k8s cluster. I'm trying to link the cluster to OpenStack / Cinder.
When I create a PVC, I can see a PV in k8s and the volume in OpenStack.
But when I link a pod with the PVC, I get the message: k8s - Cinder "0/x nodes are available: x node(s) had volume node affinity conflict".
My yml test:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: kubernetes.io/cinder
parameters:
  type: classic
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-infra-consuldata4
  namespace: infra
spec:
  storageClassName: classic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
  namespace: infra
  labels:
    app: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
      - name: consul
        image: consul:1.4.3
        volumeMounts:
        - name: data
          mountPath: /consul
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        command: ["consul", "agent", "-server", "-bootstrap", "-ui", "-bind", "0.0.0.0", "-client", "0.0.0.0", "-data-dir", "/consul"]
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-infra-consuldata4
The result:
kpro describe pvc -n infra
Name: pvc-infra-consuldata4
Namespace: infra
StorageClass: classic
Status: Bound
Volume: pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc-infra-consuldata4","namespace":"infra"},"spec":...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ProvisioningSucceeded 61s persistentvolume-controller Successfully provisioned volume pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c using kubernetes.io/cinder
Mounted By: consul-85684dd7fc-j84v7
kpro describe po -n infra consul-85684dd7fc-j84v7
Name:               consul-85684dd7fc-j84v7
Namespace:          infra
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=consul
                    pod-template-hash=85684dd7fc
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/consul-85684dd7fc
Containers:
  consul:
    Image:      consul:1.4.3
    Port:       <none>
    Host Port:  <none>
    Command:
      consul
      agent
      -server
      -bootstrap
      -ui
      -bind
      0.0.0.0
      -client
      0.0.0.0
      -data-dir
      /consul
    Limits:
      cpu:  2
    Requests:
      cpu:  500m
    Environment:  <none>
    Mounts:
      /consul from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nxchv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-infra-consuldata4
    ReadOnly:   false
  default-token-nxchv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nxchv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  36s (x6 over 2m40s)  default-scheduler  0/6 nodes are available: 6 node(s) had volume node affinity conflict.
Why does K8s successfully create the Cinder volume, but fail to schedule the pod?
Try finding out the nodeAffinity of your persistent volume:
$ kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Node Affinity:
  Required Terms:
    Term 0:  kubernetes.io/hostname in [xxx]
Then try to figure out if xxx matches the node label yyy that your pod is supposed to run on:
$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
yyy    Ready    worker   8d    v1.15.3
If they don't match, you will have the "x node(s) had volume node affinity conflict" error, and you need to re-create the persistent volume with the correct nodeAffinity configuration.
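Both sides of that comparison can be pulled with standard kubectl (PV name taken from this question):
# the PV's required node term
kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c | grep -A3 'Node Affinity'
# the hostname labels actually present on your nodes
kubectl get nodes -L kubernetes.io/hostname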
I also ran into this issue when I forgot to deploy the EBS CSI driver before I tried to get my pod to connect to it.
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
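After applying it, the driver's pods should show up in kube-system; a quick sanity check (pod names vary):
kubectl get pods -n kube-system | grep ebs-csi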

Kubernetes Minikube with local persistent storage

I am currently trying to deploy the following on Minikube. I used the configuration files below to use a hostPath as persistent storage on the minikube node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  capacity:
    storage: "20Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "orientdb-pv-claim"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice
spec:
  #replicas: 1
  template:
    metadata:
      name: orientdbservice
      labels:
        run: orientdbservice
        test: orientdbservice
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:latest
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          value: "rootpwd"
        ports:
        - containerPort: 2480
          name: orientdb
        volumeMounts:
        - name: orientdb-config
          mountPath: /data/orientdb/config
        - name: orientdb-databases
          mountPath: /data/orientdb/databases
        - name: orientdb-backup
          mountPath: /data/orientdb/backup
      volumes:
      - name: orientdb-config
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
      - name: orientdb-databases
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
      - name: orientdb-backup
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: orientdbservice
  labels:
    run: orientdbservice
spec:
  type: NodePort
  selector:
    run: orientdbservice
  ports:
  - protocol: TCP
    port: 2480
    name: http
which results in the following:
#kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv-volume                                  20Gi       RWO           Retain          Available                                                       4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5   20Gi       RWO           Delete          Bound       default/orientdb-pv-claim   standard                4h
#kubectl get pvc
NAME                STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
orientdb-pv-claim   Bound     pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5   20Gi       RWO
#kubectl get svc
NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
orientdbservice   10.0.0.16    <nodes>       2480:30552/TCP   4h
#kubectl get pods
NAME                              READY     STATUS              RESTARTS   AGE
orientdbservice-458328598-zsmw5   0/1       ContainerCreating   0          4h
#kubectl describe pod orientdbservice-458328598-zsmw5
Events:
  FirstSeen   LastSeen   Count   From                SubObjectPath   Type      Reason        Message
  ---------   --------   -----   ----                -------------   ----      ------        -------
  4h          1m         37      kubelet, minikube                   Warning   FailedMount   Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
  4h          1m         37      kubelet, minikube                   Warning   FailedSync    Error syncing pod
I see the following error:
Unable to mount volumes for pod, timeout expired waiting for volumes to attach/mount for pod
Is there something incorrect in the way I am creating the PersistentVolume and PersistentVolumeClaim on my node?
minikube version: v0.20.0
Appreciate all the help
Your configuration is fine.
Tested under minikube v0.24.0, minikube v0.25.0 and minikube v0.26.1 without any problem.
Keep in mind that minikube is under active development and, especially if you're on Windows, is, as they say, experimental software.
Update to a newer version of minikube and redeploy it. This should solve the problem.
You can check for updates with the minikube update-check command which results in something like this:
$ minikube update-check
CurrentVersion: v0.25.0
LatestVersion: v0.26.1
To upgrade minikube, simply run minikube delete, which deletes your current minikube installation, and then download the new release as described.
$ minikube delete
There is a newer version of minikube available (v0.26.1). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.26.1
To disable this notification, run the following:
minikube config set WantUpdateNotification false
Deleting local Kubernetes cluster...
Machine deleted.
For some reason, the provisioner k8s.io/minikube-hostpath in minikube doesn't work.
So:
Delete the default storage class: kubectl delete storageclass standard
Create the following storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain
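Note that a freshly created StorageClass is not automatically the default. If your PVCs rely on the default class, mark it as such with the same annotation used in the K3s answer above (a sketch):
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'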
Also, in your volume mounts, you have one PVC bound to one PV, so instead of multiple volumes just use one volume and mount it with different subPaths; that will create three subdirectories (backup, config, and databases) in your host's /data directory:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice
spec:
  #replicas: 1
  template:
    metadata:
      name: orientdbservice
      labels:
        run: orientdbservice
        test: orientdbservice
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:latest
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          value: "rootpwd"
        ports:
        - containerPort: 2480
          name: orientdb
        volumeMounts:
        - name: orientdb
          mountPath: /data/orientdb/config
          subPath: config
        - name: orientdb
          mountPath: /data/orientdb/databases
          subPath: databases
        - name: orientdb
          mountPath: /data/orientdb/backup
          subPath: backup
      volumes:
      - name: orientdb
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
Now deploy your yaml: kubectl create -f yourorientdb.yaml