I have a scenario where I need to mount two dynamically provisioned PVCs in Minikube. When I apply the manifest below, only one volume is mounted in the Pod, yet when I describe the pod I can see both volumes bound. I am not sure what the cause is.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: niranjan-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: praveen-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: praveen-pod
spec:
  volumes:
    - name: niranjan-pv
      persistentVolumeClaim:
        claimName: niranjan-pvc
    - name: praveen-pv
      persistentVolumeClaim:
        claimName: praveen-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp/niranjan-mount"
          name: niranjan-pv
      volumeMounts:
        - mountPath: "/tmp/praveen-mount"
          name: praveen-pv
(Screenshots: https://i.stack.imgur.com/uyNKg.png, https://i.stack.imgur.com/3nDPl.png, https://i.stack.imgur.com/oOrq2.png)
For volumeMounts, specify the key only once, with all mounts listed in a single array:
kind: Pod
apiVersion: v1
metadata:
  name: praveen-pod
spec:
  volumes:
    - name: niranjan-pv
      persistentVolumeClaim:
        claimName: niranjan-pvc
    - name: praveen-pv
      persistentVolumeClaim:
        claimName: praveen-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp/niranjan-mount"
          name: niranjan-pv
        - mountPath: "/tmp/praveen-mount"
          name: praveen-pv
Otherwise the second volumeMounts key overrides the first one, and you end up with only one mounted volume.
Related
Can we mount the main directories of containers directly to volumes as part of a Kubernetes pod spec?
For example:
/mnt
/dev
/var
All files and subdirectories of /mnt, /dev, and /var should be mounted to volumes as part of the pod spec.
How can we do this?
For development purposes you can create a hostPath PersistentVolume, but if you want to implement this for production, I strongly recommend using an NFS share instead.
Here is an example of how to use an NFS share in a Pod definition:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs       # change this to wherever you want the share mounted
  volumes:
    - name: nfs-volume
      nfs:
        server: nfs.example.com     # change this to your NFS server
        path: /share1               # change this to the relevant share
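If you would rather consume the share through the PersistentVolume/PersistentVolumeClaim machinery, so that pods only reference a claim, a minimal sketch could look like this. The PV/PVC names and size are placeholders, and the server and path are the same hypothetical values as above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # same placeholder server as above
    path: /share1             # same placeholder export as above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # empty class so it binds to the statically created PV above
  resources:
    requests:
      storage: 5Gi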
And here is an example of a hostPath PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
In the hostPath example, everything inside /mnt/data on the node is mounted at /usr/share/nginx/html in the Pod.
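If, as in the question, you really want the node's /mnt, /dev and /var visible inside a container for development, a rough sketch with one hostPath volume per directory could look like the following. The pod and volume names are just examples, and the directories are mounted under /host/* so they do not shadow the container's own /dev and /var:
apiVersion: v1
kind: Pod
metadata:
  name: hostdirs-pod
spec:
  containers:
    - name: inspector
      image: alpine
      command: ["sleep", "3600"]   # keep the container alive so you can exec into it
      volumeMounts:
        - name: host-mnt
          mountPath: /host/mnt
        - name: host-dev
          mountPath: /host/dev
        - name: host-var
          mountPath: /host/var
  volumes:
    - name: host-mnt
      hostPath:
        path: /mnt
    - name: host-dev
      hostPath:
        path: /dev
    - name: host-var
      hostPath:
        path: /var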
I want to share multiple volumes using the PersistentVolume resource of Kubernetes.
I want to share the "/opt/*" folders in a pod, but not "/opt" itself:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: demo
  namespace: demo-namespace
  labels:
    app: myApp
    chart: "my-app"
    name: myApp
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "myApp-data"
  hostPath:
    path: /opt/*
But in the pod I am not able to see the shared volume. If I share only the "/opt" folder, it shows up in the pod.
Is there anything I am missing?
If you want to share a folder among several pods, deployments, or statefulsets, you should create a PersistentVolumeClaim with the access mode ReadWriteMany. Here is an example of such a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
Then use it in your pods as below:
apiVersion: v1
kind: Pod
metadata:
  name: mypod01
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: c01
      image: alpine
      volumeMounts:
        - mountPath: "/opt"
          name: task-pv-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod02
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: c02
      image: alpine
      volumeMounts:
        - mountPath: "/opt"
          name: task-pv-storage
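If the goal from the question is to expose individual subfolders of the share (the "/opt/*" part) rather than the whole volume, the subPath field of volumeMounts mounts a single subdirectory of the volume at a chosen path. A sketch, with purely illustrative pod, container and folder names:
apiVersion: v1
kind: Pod
metadata:
  name: mypod03
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: c03
      image: alpine
      volumeMounts:
        - mountPath: /opt/tools      # only the "tools" subdirectory of the volume
          name: task-pv-storage
          subPath: tools
        - mountPath: /opt/data       # only the "data" subdirectory of the volume
          name: task-pv-storage
          subPath: data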
I need to create a single pod with multiple containers for MySQL, MongoDB and MSSQL. My question is: do I need to create a PersistentVolume and PersistentVolumeClaim for each container and specify each volume in the pod configuration, or is a single PV and PVC enough for all the containers in the pod, as in the configs below?
Could you verify whether the configuration below is enough?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: mypod-pvc
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: task-pv-storage
        - name: mongodb
          image: openshift/mongodb-24-centos7
          ports:
            - containerPort: 27017
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mongodb"
              name: task-pv-storage
        - name: mssql
          image: mcr.microsoft.com/mssql/server
          ports:
            - containerPort: 1433
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/opt/mssql"
              name: task-pv-storage
      imagePullSecrets:
        - name: devplat
You should not be running multiple database containers inside a single pod.
Consider running each database in a separate statefulset.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
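As a rough illustration of that approach, a minimal StatefulSet for one of the databases (MySQL here; the names, storage size and the headless Service/credentials setup are assumptions, not something from the question) could look like this, with volumeClaimTemplates giving each replica its own PVC instead of a shared hostPath PV:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql               # assumes a headless Service named "mysql" exists
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:            # one PVC per replica, managed by the StatefulSet
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi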
I am trying to deploy a container, but unfortunately I get an error when I execute kubectl apply -f *.yaml.
The error is:
error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;
I don't understand why I get the error, because I wrote claimName: under persistentVolumeClaim: in my pod.yaml config :(
Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: karaf
  namespace: poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: karaf
    spec:
      containers:
        - name: karaf
          image: "xxx/karaf:ids-1.1.0"
          imagePullPolicy: Always
          ports:
            - containerPort: 6443
            - containerPort: 6100
            - containerPort: 6101
          resources:
          volumeMounts:
            - mountPath: /apps/karaf/etc
              name: karaf-conf
      volumes:
        - name: karaf-conf
          persistentVolumeClaim:
            claimName: karaf-conf
The reason you're seeing that error is that you specified a persistentVolumeClaim under your pod spec's container specification. As you can see from the auto-generated docs here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
persistentVolumeClaim isn't supported at this level / on this API object, which is what gives you the error you're seeing.
You should modify the pod.yaml to specify this as a volume instead,
e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
  volumes:
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
According to the Kubernetes documentation, persistentVolumeClaim belongs at the .spec.volumes level of a Pod object, not at the .spec.containers level.
The correct pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  volumes:
    - name: efgkaraf-conf-storage
      persistentVolumeClaim:
        claimName: efgkaraf-conf-claim
  containers:
    - name: karaf
      image: docker-all.attanea.net/library/efgkaraf:ids-1.1.0
      volumeMounts:
        - name: efgkaraf-conf-storage
          mountPath: /apps/karaf/etc
I'm trying to deploy persistent storage for CouchDB and it is failing with the error below.
kubectl create -f couch_persistant_deploy.yaml
error: error validating "couch_persistant_deploy.yaml": error validating data: couldn't find type: v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
Create volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/sda1/data/test
Claim volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  labels:
    app: couchdb
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Deploy the VM.yaml
apiVersion: extensions/v1beta1
#apiVersion: v1
kind: Deployment
#kind: ReplicationController
metadata:
  name: couchdb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: couchdb
    spec:
      containers:
        - name: couchdb
          image: "couchdb"
          imagePullPolicy: Always
          env:
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: password
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
              containerPort: 9100
          volumeMounts:
            - mountPath: "/opt/couchdb/data"
              name: task-pv-storage
      imagePullSecrets:
        - name: registrypullsecret2
      #volumes:
      #  - name: database-storage
      #    emptyDir: {}
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
          claimName: task-pv-claim
Any leads are really appreciated.
Your error message should be like this:
error: error validating "couch_persistant_deploy.yaml": error validating data: ValidationError(Deployment.spec.template.spec.volumes[0]): unknown field "claimName" in io.k8s.api.core.v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false
See, the error message is specific: unknown field "claimName" in io.k8s.api.core.v1.Volume
You need to put claimName under persistentVolumeClaim.
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim # fix is here
But you did
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
    claimName: task-pv-claim # invalid
That is what makes your Deployment object invalid.