I want to share multiple volumes using the PersistentVolume resource of Kubernetes.
I want to share the "/opt/*" folders in a pod, but not "/opt" itself:
kind: PersistentVolume
apiVersion: v1
metadata:
name: demo
namespace: demo-namespace
labels:
app: myApp
chart: "my-app"
name: myApp
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: "myApp-data"
hostPath:
path: /opt/*
But in the pod I am not able to see the shared volume. If I share only the "/opt" folder, then it is shown in the pod.
Is there anything I am missing?
If you want to share a folder among several pods, deployments, or statefulsets, you should create a PersistentVolumeClaim whose access mode is ReadWriteMany. Here is an example of a PersistentVolumeClaim with the ReadWriteMany mode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3Gi
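As a quick sanity check, you can apply the claim and watch it bind (the file name pvc.yaml is just an assumption for this example):
$ kubectl apply -f pvc.yaml
$ kubectl get pvc task-pv-claim
The STATUS column should eventually show Bound, once a PV that satisfies ReadWriteMany and at least 3Gi is available.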
Then, in your pods, you should use it as below:
apiVersion: v1
kind: Pod
metadata:
name: mypod01
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: c01
image: alpine
volumeMounts:
- mountPath: "/opt"
name: task-pv-storage
---
apiVersion: v1
kind: Pod
metadata:
name: mypod02
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: c02
image: alpine
volumeMounts:
- mountPath: "/opt"
name: task-pv-storage
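To confirm that both pods really share the data, you can write a file from one pod and read it from the other (the test file name below is just an example):
$ kubectl exec mypod01 -- sh -c 'echo hello > /opt/shared-test.txt'
$ kubectl exec mypod02 -- cat /opt/shared-test.txt   # should print: hello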
I would like to understand if there is a way to mount a single PV that I manually created with ReadWriteMany to all the members of a ReplicaSet, either using a PodSpec or a StatefulSet.
Yes, you can, if your PV supports ReadWriteMany.
Here is an example YAML:
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
spec:
replicas: 2
selector:
matchLabels:
run: busybox
template:
metadata:
labels:
run: busybox
spec:
containers:
- args:
- sh
image: busybox
name: busybox
stdin: true
tty: true
volumeMounts:
- name: pvc
mountPath: "/mnt"
restartPolicy: Always
volumes:
- name: pvc
persistentVolumeClaim:
claimName: test-claim
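Because both replicas mount the same ReadWriteMany claim, a file written under /mnt in one pod should show up in the other. A quick check could look like this (the pod names below are placeholders; list them first):
$ kubectl get pods -l run=busybox
$ kubectl exec <first-busybox-pod> -- sh -c 'echo shared > /mnt/test'
$ kubectl exec <second-busybox-pod> -- cat /mnt/test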
Can we directly mount the main directories of containers to volumes as part of the Kubernetes PodSpec?
For example:
/mnt
/dev
/var
All files and subdirectories of /mnt, /dev, and /var should be mounted to volumes as part of the PodSpec.
How can we do this?
For development purposes, you can create a hostPath Persistent Volume, but if you want to implement this for production, I strongly recommend using something like NFS.
Here is an example of how to use an NFS share in a Pod definition:
kind: Pod
apiVersion: v1
metadata:
name: nfs-in-a-pod
spec:
containers:
- name: app
image: alpine
volumeMounts:
- name: nfs-volume
mountPath: /var/nfs # change the destination you like the share to be mounted to
volumes:
- name: nfs-volume
nfs:
server: nfs.example.com # change this to your NFS server
path: /share1 # change this to the relevant share
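If you would rather consume the share through a claim instead of pointing every Pod at the server directly, the same NFS export can also be exposed as a PersistentVolume and bound by a PVC. A minimal sketch, reusing the placeholder server and path from the example above (the name and size are assumptions):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share1-pv          # example name
spec:
  capacity:
    storage: 5Gi               # adjust to the real share size
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com    # change this to your NFS server
    path: /share1              # change this to the relevant share
A PersistentVolumeClaim requesting ReadWriteMany and no more than 5Gi can then bind to it and be mounted just like in the hostPath example below.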
And here is an example of a hostPath Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
In the hostPath example, all files inside /mnt/data will be mounted as /usr/share/nginx/html in the Pod.
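Once the Pod is running, you can verify the mount from inside the container, for example:
$ kubectl exec task-pv-pod -- ls /usr/share/nginx/html
Any file created on the node under /mnt/data should appear in that listing.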
I get this error message:
Deployment.apps "nginxapp" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-claim"
Now, I thought the deployment made a claim to persistent storage, so these are the files I've applied, in order:
First, a persistent volume at /data, as that path is persistent on minikube (https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/):
apiVersion: v1
kind: PersistentVolume
metadata:
name: small-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
Then, for my nginx deployment I made a claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Before the service I run the deployment, which is the one giving me the error above; it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginxapp
name: nginxapp
spec:
replicas: 1
volumes:
- persistentVolumeClaim:
claimName: nginx-claim
selector:
matchLabels:
app: nginxapp
template:
metadata:
labels:
app: nginxapp
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/data/www"
name: nginx-claim
Where did I go wrong? Isn't it deployment -> volume claim -> volume?
Am I doing it right? The persistent volume is pod-wide (?) and therefore given a general name. But the claim is per deployment? That's why I named it nginx-claim. I might be mistaken here, but that shouldn't break this simple run.
In my deployment I set mountPath: "/data/www"; should this follow the directory already set in the persistent volume definition, or does it build on top of it? So in my case do I get /data/data/www?
Try changing it to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-claim
spec:
storageClassName: local-storage # <-- changed
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Then, in your deployment spec, add:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginxapp
name: nginxapp
spec:
replicas: 1
selector:
matchLabels:
app: nginxapp
template:
metadata:
labels:
app: nginxapp
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- name: nginx-claim
mountPath: "/data/www"
volumes:
- name: nginx-claim # <-- added
persistentVolumeClaim:
claimName: nginx-claim
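After applying the corrected PVC and Deployment you can check that everything binds and starts (the file names here are assumptions):
$ kubectl apply -f pvc.yaml -f deployment.yaml
$ kubectl get pvc nginx-claim        # should become Bound against small-pv, assuming the PV's node affinity matches a node in your cluster
$ kubectl get pods -l app=nginxapp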
It looks like name: is missing under volumes: in the deployment manifest. Can you try the following?
volumes:
- name: nginx-claim
persistentVolumeClaim:
claimName: nginx-claim
Here is the documentation.
I'm trying to replace the in-memory storage of the Grafana deployment with persistent storage using kustomize. What I'm doing is removing the in-memory storage and then mapping persistent storage, but when I deploy it, it gives me an error.
Error
The Deployment "grafana" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "grafana-storage"
Kustomize version
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/prometheus-operator/kube-prometheus
- grafana-pvc.yaml
patchesStrategicMerge:
- grafana-patch.yaml
grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana-storage
namespace: monitoring
labels:
billingType: "hourly"
region: sng01
zone: sng01
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: ibmc-file-bronze
grafana-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
namespace: monitoring
spec:
template:
spec:
volumes:
# use persistent storage for storing users instead of in-memory storage
- $patch: delete <---- trying to remove the previous volume
name: grafana-storage
- name: grafana-storage
persistentVolumeClaim:
claimName: grafana-storage
containers:
- name: grafana
volumeMounts:
- name: grafana-storage
mountPath: /var/lib/grafana
Please help.
The $patch: delete doesn't seem to work as I would expect.
It may be worth opening an issue on the kustomize GitHub (https://github.com/kubernetes-sigs/kustomize/issues) and asking the developers about it.
Here is the patch I tried instead, and it seems to work:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
namespace: monitoring
spec:
template:
spec:
volumes:
- name: grafana-storage
emptyDir: null
persistentVolumeClaim:
claimName: grafana-storage
containers:
- name: grafana
volumeMounts:
- name: grafana-storage
mountPath: /var/lib/grafana
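You can inspect the effect of the patch without deploying anything by rendering the kustomization and checking the grafana Deployment:
$ kubectl kustomize . > rendered.yaml    # or: kustomize build . > rendered.yaml
In rendered.yaml, the grafana Deployment's volumes: section should now contain only the persistentVolumeClaim-backed grafana-storage entry, with the emptyDir removed.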
Based on https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
The following should also work in theory:
spec:
volumes:
- $retainKeys:
- name
- persistentVolumeClaim
name: grafana-storage
persistentVolumeClaim:
claimName: grafana-storage
But in practice it doesn't, and I think that's because kustomize has its own implementation of strategic merge (different from that of k8s).
I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
name: my-project
labels:
name: my-project
spec:
containers:
- image: my-username/my-project
name: my-project
ports:
- containerPort: 80
name: nginx-http
- containerPort: 443
name: nginx-ssl-https
imagePullPolicy: Always
volumeMounts:
- mountPath: /home/projects/my-project/media/upload
name: pd-data
- mountPath: /home/projects/my-project/backups
name: pd2-data
imagePullSecrets:
- name: vpregistrykey
volumes:
- name: pd-data
persistentVolumeClaim:
claimName: pd-claim
- name: pd2-data
persistentVolumeClaim:
claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, as follows:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
name: pd-disk
labels:
name: pd-disk
spec:
capacity:
storage: 250Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: "pd-disk"
fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pd-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 250Gi
I initially created my disks using this command:
$ gcloud compute disks create --size 250GB pd-disk
The same goes for the second disk and the second PV and PVC. Everything seems to work OK when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is mounted correctly (and is therefore persistent), while the other one is erased every time I restart the pod...
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name: my-project
...
Volumes:
pd-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pd-claim
ReadOnly: false
pd2-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pd2-claim
ReadOnly: false
Any help is appreciated. Thanks.
The Kubernetes documentation states:
Volumes can not mount onto other volumes or have hard links to other
volumes
I had the same issue and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
They mounted without issues after fixing that.
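For illustration only (the paths below are made up, not the ones from the question), one way mounts can overlap is when one mountPath sits under another; moving the second mount to a separate path avoids the clash:
# problematic: the second mountPath is nested inside the first
volumeMounts:
  - name: pd-data
    mountPath: /var/app
  - name: pd2-data
    mountPath: /var/app/backups
# fixed: the two mountPaths no longer overlap
volumeMounts:
  - name: pd-data
    mountPath: /var/app
  - name: pd2-data
    mountPath: /srv/backups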
I do not see any direct problem that would cause the behavior explained above. But what I can suggest is to use a Deployment instead of a Pod, as recommended by many here, especially when using PVs and PVCs. A Deployment takes care of many things to maintain the desired state. I have attached my code below for your reference; it works, and both volumes stay persistent even after deleting/terminating/restarting, as this is managed by the Deployment's desired state.
Two differences you will find between my code and yours are:
I have a Deployment object instead of a Pod.
I am using GlusterFS for my volumes.
Deployment YAML
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
namespace: platform
labels:
component: nginx
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
component: nginx
spec:
nodeSelector:
role: app-1
containers:
- name: nginx
image: vip-intOAM:5001/nginx:1.15.3
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: "/etc/nginx/conf.d/"
name: nginx-confd
- mountPath: "/var/www/"
name: nginx-web-content
volumes:
- name: nginx-confd
persistentVolumeClaim:
claimName: glusterfsvol-nginx-confd-pvc
- name: nginx-web-content
persistentVolumeClaim:
claimName: glusterfsvol-nginx-web-content-pvc
One of my PVs
apiVersion: v1
kind: PersistentVolume
metadata:
name: glusterfsvol-nginx-confd-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
glusterfs:
endpoints: gluster-cluster
path: nginx-confd
readOnly: false
persistentVolumeReclaimPolicy: Retain
claimRef:
name: glusterfsvol-nginx-confd-pvc
namespace: platform
PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: glusterfsvol-nginx-confd-pvc
namespace: platform
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
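Because each PV carries a claimRef pointing at its PVC, the pair should bind to each other directly once both are created; you can verify with, for example:
$ kubectl get pv glusterfsvol-nginx-confd-pv
$ kubectl get pvc glusterfsvol-nginx-confd-pvc -n platform
Both should report a STATUS of Bound before the Deployment's pods can start.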