I create a PersistentVolume using NFS as below, but when I delete the deployment I lose my data: if I exec into the postgres container, the DB that was created before is not there anymore.
Using AWS EKS, I managed to delete a deployment without losing any data.
Any help as to why this happens?
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/pv001
    server: 164.10.0.1
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metabase-postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Deployment
...
spec:
  volumes:
    - name: metabase-postgres-storage
      persistentVolumeClaim:
        claimName: metabase-postgres-persistent-volume-claim
...
I had the wrong mountPath; it must be /var/lib/postgresql/data:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: edw-pg
spec:
  serviceName: postgres-cluster-ip-service
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres:10.7
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: edw-persistent-storage-claim
              mountPath: /var/lib/postgresql/data
              readOnly: false
              subPath: postgres
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
  volumeClaimTemplates:
    - metadata:
        name: edw-persistent-storage-claim
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 50Gi
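As an aside, the official postgres image also lets you point the data directory at a subdirectory of the mount via the PGDATA environment variable, which achieves roughly the same thing as subPath: initialisation happens one level below the volume root, so the mount point itself does not have to be empty. A minimal sketch, assuming the default image entrypoint and a hypothetical pgdata subdirectory (this is not part of the manifest above):
# Sketch: alternative to subPath, relying on the postgres image's PGDATA convention.
# Goes under the postgres container spec; the subdirectory name is an assumption.
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata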
Related
Trying to mount a Kubernetes volume to a pod (running as non-root) with the fsGroup securityContext option, but the volume is still mounted as root and the pod gets permission denied when trying to write to the filesystem.
Created the Persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-demo
spec:
  storageClassName: nfs
  capacity:
    storage: 10Gi
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /nfs/data/demo
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-demo
StatefulSet for the application deployment. The container image starts as user 1001 (belonging to group 0):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  replicas: 1
  serviceName: demo-app
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      securityContext:
        fsGroup: 0
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: demo-app-container
          image: <theImage>
          volumeMounts:
            - mountPath: /store
              name: demo-vol
      volumes:
        - name: demo-vol
          persistentVolumeClaim:
            claimName: demo-vol
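One thing to keep in mind: fsGroup only changes ownership for volume types whose plugin supports ownership management, and NFS is generally not one of them, so the files keep whatever ownership the export has. A common workaround, sketched here as an assumption (it is not part of the manifests above, and it requires that the export does not squash root), is an initContainer that chowns the mount before the app starts:
# Sketch: fix ownership of the NFS mount for the non-root app user (1001, group 0).
# The busybox image and the chmod flags are assumptions.
initContainers:
  - name: fix-permissions
    image: busybox:1.36
    command: ["sh", "-c", "chown -R 1001:0 /store && chmod -R g+rwX /store"]
    volumeMounts:
      - name: demo-vol
        mountPath: /store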
I'm trying to run nodered in my minikube kubernetes cluster ("cluster", it's one node :D).
The docker command to do this would be, for example:
docker run -it -p 1880:1880 -v /home/user/node_red_data:/data --name mynodered nodered/node-red
But I'm not running it in docker, I'm trying to run it in minikube. The minikube documentation states that /data on the host is persisted. So what I wanted was /data/nodered on the host to be mounted as /data in the nodered container.
I started by adding a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
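As a side note, for local volumes the Kubernetes documentation recommends delaying volume binding until a pod is scheduled; on a single-node minikube Immediate works too, so treat this variant as optional rather than a required change:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # recommended for local volumes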
Then added persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
Then a persistent volume claim for the nodered:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And then the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  volumes:
    - name: nodered-claim
      persistentVolumeClaim:
        claimName: nodered-claim
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: "/data"
              subPath: "nodered"
I've checked the kubernetes dashboard and everything is green and the volume is bound. I created a simple http service in nodered and deployed it. It's running, but nothing is saved, so if the deployment goes down and gets redeployed it will be empty.
I've checked the /data and /data/nodered folders on the minikube instance running in docker, but they are empty.
Your deployment spec should return an error; try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
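If the goal is to mirror the docker run -v /home/user/node_red_data:/data mapping exactly (Node-RED keeps its flows in /data inside the container), another option, sketched here as an assumption rather than part of the fix above, is to root the PersistentVolume at /data/nodered on the node and keep the container mountPath at /data with no subPath. The PV name nodered-pv is hypothetical:
# Sketch: PV rooted at the persisted minikube path /data/nodered,
# mounted at /data inside the container (mirrors the docker -v flag).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nodered-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/nodered        # must already exist on the minikube node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube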
I get this error message:
Deployment.apps "nginxapp" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-claim"
Now, I thought a deployment made a claim to persistent storage, so these are the files I've applied, in order:
First, the persistent volume on /data, as that path is persistent on minikube (https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
Then, for my nginx deployment I made a claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Before the service I run the deployment, which is the one giving me the error above; it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  volumes:
    - persistentVolumeClaim:
        claimName: nginx-claim
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/data/www"
              name: nginx-claim
Where did I go wrong? Isn't it deployment -> volume claim -> volume?
Am I doing it right? The persistent volume is pod-wide (?) and therefore generically named, but the claim is per deployment? That's why I named it nginx-claim. I might be mistaken here, but that shouldn't break this simple setup.
In my deployment I set mountPath: "/data/www". Should this follow the directory already set in the persistent volume definition, or does it build on top of it? So in my case do I get /data/data/www?
Try changing the PVC to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: local-storage # <-- changed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
In your deployment spec, add the volumes section:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-claim
              mountPath: "/data/www"
      volumes:
        - name: nginx-claim # <-- added
          persistentVolumeClaim:
            claimName: nginx-claim
Looks like name: is missing under volumes: in the deployment manifest. Can you try the following:
volumes:
  - name: nginx-claim
    persistentVolumeClaim:
      claimName: nginx-claim
Here is the documentation.
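Regarding the mountPath question: mountPath is a path inside the container, while the path in the PersistentVolume (/data here) is where the data lives on the node; the two are not concatenated, so you do not end up with /data/data/www. A small illustration under that assumption (no subPath in play):
# Node side (PV spec):        local.path: /data       -> directory on the node
# Container side (mount):     mountPath: "/data/www"  -> where the volume appears in the pod
# A file the container writes to /data/www/index.html shows up on the node as /data/index.html.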
The following is my kubernetes/openshift deployment configuration template, along with its persistent volume and persistent volume claim:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pythonApp
  creationTimestamp: null
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pythonApp
      creationTimestamp: null
    spec:
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "backend"
      containers:
        - name: backend
          imagePullPolicy: IfNotPresent
          image: <img-name>
          command: ["sh", "-c"]
          args: ['python manage.py runserver']
          resources: {}
          volumeMounts:
            - mountPath: /pythonApp/configs
              name: configs
      restartPolicy: Always
      volumes:
        - name: configs
          persistentVolumeClaim:
            claimName: "configs-volume"
status: {}
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "configs-volume"
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/k8sMount/configs
    server: <server-ip>
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "configs-volume-claim"
  creationTimestamp: null
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: "configs-volume"
After the deployment, when I exec inside the container (using oc exec or kubectl exec) and check the /pythonApp/configs folder, it is empty, when it is really supposed to contain some configuration files from the image used.
Is this caused by the fact that /pythonApp/configs is mounted onto the persistent NFS volume path /mnt/k8sMount/configs, which is initially empty?
How could this be solved?
Environment
Kubernetes version: 1.11
Openshift version: 3.11
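Yes, mounting an initially empty NFS export over /pythonApp/configs hides whatever the image ships at that path. A common pattern, sketched here as an assumption rather than something taken from this thread, is an initContainer that runs the same image, mounts the volume somewhere else, and copies the baked-in defaults into it once:
# Sketch: seed the empty NFS volume with the config files baked into the image.
# Assumes the image's defaults live at /pythonApp/configs and that GNU cp (-n, no-clobber) is available.
initContainers:
  - name: seed-configs
    image: <img-name>
    command: ["sh", "-c", "cp -rn /pythonApp/configs/. /seed/ || true"]
    volumeMounts:
      - name: configs
        mountPath: /seed   # volume mounted away from the source path so the originals stay visible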
When I try to create a persistentVolume on Okteto Cloud with the following definition:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
I get the following error:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "postgres-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"postgres" "type":"local"] "name":"postgres-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"5Gi"] "hostPath":map["path":"/mnt/data"]]]}
from server for: "deploy/k8s.postgres.yml": persistentvolumes "postgres-pv" is forbidden: User "system:serviceaccount:okteto:07e6fdbf-55c2-4642-81e3-051e8309000f" cannot get resource "persistentvolumes" in API group "" at the cluster scope
However, according to the Okteto Cloud docs, persistentVolumes seem to be authorized.
How would I create one on there?
For context I'm trying to reproduce a simple postgres deployment (no replication, no backups).
Here's my complete deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
    - name: postgres
      port: 5432
  selector:
    app: postgres
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
PersistentVolumes are a cluster-wide resource, and creating them is not allowed on Okteto Cloud.
The docs are wrong, thanks for pointing it out.
Instead, you can create PersistentVolumeClaims using the default storage class (and remove the PersistentVolume manifest):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Hope it helps :-)
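One last note on the manifest itself: extensions/v1beta1 for Deployments has been removed in current Kubernetes versions, and apps/v1 additionally requires spec.selector. A minimal sketch of just the changed header, assuming the rest of the Deployment stays as posted:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    # container spec unchanged from the Deployment above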