How do I define a persistent storage claim for my deployment? - kubernetes

I get this error message:
Deployment.apps "nginxapp" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-claim"
Now, I thought the deployment made a claim to persistent storage, so these are the files I've applied, in order:
First, the persistent volume pointing at /data, as that path is persistent on minikube (https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
Then, for my nginx deployment I made a claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Before the service I run the deployment, which is the one giving me the error above; it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  volumes:
    - persistentVolumeClaim:
        claimName: nginx-claim
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/data/www"
              name: nginx-claim
Where did I go wrong? Isn't it deployment -> volume claim -> volume?
Am I doing it right? The persistent volume is pod-wide (?) and therefore generically named, but the claim is per deployment? That's why I named it nginx-claim. I might be mistaken here, but that shouldn't break this simple run.
In my deployment I set mountPath: "/data/www". Should this match the directory already set in the persistent volume definition, or does it build on top of it? So in my case, do I end up with /data/data/www?
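(I suppose that once a pod actually starts I could check what ends up inside the container myself, e.g. with something like the following, where <pod-name> is just a placeholder for whatever pod the deployment creates:)
kubectl get pods -l app=nginxapp
kubectl exec -it <pod-name> -- ls -la /data/www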

Try changing the claim to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: local-storage # <-- changed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
In your deployment spec, add:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-claim
              mountPath: "/data/www"
      volumes:
        - name: nginx-claim # <-- added
          persistentVolumeClaim:
            claimName: nginx-claim
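Once both manifests are re-applied, the claim should go from Pending to Bound against small-pv (assuming the local-storage StorageClass exists). A quick way to check; pvc.yaml and deployment.yaml are just placeholder file names for the manifests above:
kubectl apply -f pvc.yaml -f deployment.yaml
kubectl get pv,pvc
kubectl describe pvc nginx-claim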

Looks like name: is missing under volumes: in the deployment manifest. Can you try the following:
volumes:
  - name: nginx-claim
    persistentVolumeClaim:
      claimName: nginx-claim
Here is the documentation.
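As a side note, the schema for these fields can be checked straight from the cluster with kubectl explain, which shows that name is required for both the volume and the volumeMount:
kubectl explain deployment.spec.template.spec.volumes
kubectl explain deployment.spec.template.spec.containers.volumeMounts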

Related

How can I mount folder correctly in kubernetes

I'm trying to run nodered in my minikube Kubernetes cluster ("cluster", it's one node :D).
The Docker command to do this would be, for example:
docker run -it -p 1880:1880 -v /home/user/node_red_data:/data --name mynodered nodered/node-red
But I'm not running it in Docker, I'm trying to run it in minikube. The minikube documentation states that /data on the host is persisted. So what I wanted was /data/nodered on the host to be mounted as /data in the nodered container.
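(For what it's worth, I can check directly on the minikube node that /data is there, e.g. with:)
minikube ssh -- ls -la /data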
I started with adding a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Then added persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
Then a persistent volume claim for the nodered:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And then the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  volumes:
    - name: nodered-claim
      persistentVolumeClaim:
        claimName: nodered-claim
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: "/data"
              subPath: "nodered"
I've checked the Kubernetes dashboard and everything is green and the volume is bound. I created a simple HTTP service in nodered and deployed it. It's running, but nothing is saved, so if the deployment goes down and gets redeployed it will be empty.
I've checked the /data and /data/nodered folders on the minikube instance running in Docker, but they are empty.
Your deployment spec should return an error, since volumes: belongs in the pod template spec rather than directly under the Deployment spec; try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
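To confirm the data really lands on the host and survives a restart, a rough check could look like this (the PV's local path is /data, so that is where the files should appear on the node):
kubectl exec deploy/nodered -- ls -la /data/nodered
minikube ssh -- ls -la /data
kubectl rollout restart deployment nodered
kubectl exec deploy/nodered -- ls -la /data/nodered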

PersistentVolume not utilized in Docker Desktop (MacOS) hosted Kubernetes cluster

I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (MacOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume, so that the actual data will be retained in a MacOS local directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db, as if nothing is really set up in terms of Persistent Volumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g a small Micronaut based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. Seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume:
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: Not setting the node affinity, and using hostPath instead of local.path, both in the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
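With both fixes applied, the claim should bind to this volume; note that because the local-storage class uses volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending until the MongoDB pod is actually scheduled. A quick check, using the names from the manifests above:
kubectl get pv fo-persons-mongodb-volume
kubectl get pvc persons-mongodb-pvc -n fo
kubectl describe pvc persons-mongodb-pvc -n fo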

Kubernetes/Minikube: After mounting volume, pod mounted directory is empty

I am new to k8s. I am following the official tutorial on setting up Nginx pods in k8s using minikube, mounting a volume and serving index.html.
When I mount the volume and go to the home page, I receive this error:
directory index of "/usr/share/nginx/html/" is forbidden.
If I don't mount anything, I receive the "Welcome to nginx" page.
This is the content of that folder before mounting; after mounting it is empty:
root@00c1:/usr/share/nginx/html# ls -l
total 8
-rw-r--r-- 1 root root 494 Jul 23 11:45 50x.html
-rw-r--r-- 1 root root 612 Jul 23 11:45 index.html
Why is the mounted folder inside the pod empty after mounting?
This is my setup
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-hello-rc
spec:
  replicas: 2
  selector:
    app: hello-nginx-tester
  template:
    metadata:
      labels:
        app: hello-nginx-tester
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tester
  labels:
    app: hello-nginx-tester
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30500
      protocol: TCP
  selector:
    app: hello-nginx-tester
Any info would be appreciated, thanks.
I've checked your configuration in my running k8s environment. After some adjustments, the following manifest works smoothly for me:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
  volumeName: task-pv-volume
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-hello-rc
spec:
  replicas: 2
  selector:
    app: hello-nginx-tester
  template:
    metadata:
      labels:
        app: hello-nginx-tester
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: task-pv-volume
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-volume
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tester
  labels:
    app: hello-nginx-tester
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30500
      protocol: TCP
  selector:
    app: hello-nginx-tester
The group owner of the directory "/usr/share/nginx/html" will be 1000 because you have set the securityContext fsGroup value; hence you are unable to access the directory. If you remove the securityContext section, the owner of the mounted volume will be root, and you won't get access issues.
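If you want to see the effect rather than take it on faith, listing the mount with numeric IDs from inside a pod (the pod name below is a placeholder) shows the ownership with and without the fsGroup setting:
kubectl get pods -l app=hello-nginx-tester
kubectl exec -it <pod-name> -- ls -ln /usr/share/nginx/html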

How to define PVC for specific path in a single node in kubernetes

I am running a local k8s cluster and defining a PV as hostPath for MySQL pods.
Sharing all the configuration details below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The problem I am getting is that, as the MySQL pod runs in the k8s cluster, when it is deleted and recreated it can be scheduled onto any node, while the MySQL hostPath is always mounted on a specific node. Is it a good idea to pin MySQL to that node, or are there other options? Please share any ideas.
You have the choices below:
Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount is created, or
Use local persistent volumes; they are supported on Kubernetes 1.14 and above.
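For the first option, one possible sketch (not the only way) is to patch the existing deployment with a nodeSelector; <node-name> is a placeholder, check the real value with kubectl get nodes first:
kubectl get nodes
kubectl patch deployment mysql -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<node-name>"}}}}}'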
Why are you using a PVC and a PV? Actually, for hostPath you don't even need to create the PV object; the claim just gets one.
You should use a StatefulSet if you want a re-created pod to get the storage the previous one was using (state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
This StatefulSet fails as written, but that is a MySQL thing; as a reference for the storage setup it should serve.

NFS PersistentVolume data lost when deleting K8s deployment

I create a PersistentVolume using NFS as below. When I delete the deployment I lose my data: if I exec into the postgres container, the DB that was created before is not there anymore.
Using AWS EKS, I managed to delete a deployment without losing any data.
Any help as to why this happens?
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/pv001
    server: 164.10.0.1
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metabase-postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Deployment
...
spec:
  volumes:
    - name: metabase-postgres-storage
      persistentVolumeClaim:
        claimName: metabase-postgres-persistent-volume-claim
...
I had the wrong mountPath; it must be /var/lib/postgresql/data:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: edw-pg
spec:
  serviceName: postgres-cluster-ip-service
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres:10.7
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: edw-persistent-storage-claim
              mountPath: /var/lib/postgresql/data
              readOnly: false
              subPath: postgres
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
  volumeClaimTemplates:
    - metadata:
        name: edw-persistent-storage-claim
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 50Gi
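To double-check that the data now lands on the volume and survives a pod replacement, a small sketch (the pod is edw-pg-0, since the StatefulSet is named edw-pg with one replica):
kubectl exec -it edw-pg-0 -- ls /var/lib/postgresql/data
kubectl delete pod edw-pg-0
kubectl exec -it edw-pg-0 -- ls /var/lib/postgresql/data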