Persistent volume Kubernetes on Google Cloud

I have a Redis pod on my Kubernetes cluster on Google Cloud. I have built the PV and the claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: my-size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: redis-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: my-size
I also mounted it in my deployment.yaml
volumeMounts:
  - mountPath: /data
    name: redis-pv-claim
volumes:
  - name: redis-pv-claim
    persistentVolumeClaim:
      claimName: redis-pv-claim
I can't see any errors when running describe pod:
Volumes:
  redis-pv-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-pv-claim
    ReadOnly:   false
But it just can't save any keys. After every deployment, the /data folder is just empty.
My NFS is active now, but I still can't keep the data.
Describe pvc:
Namespace:      my namespace
StorageClass:   nfs-client
Status:         Bound
Volume:         pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Labels:         <none>
Annotations:    pv.kubernetes.io/bind-completed: yes
                pv.kubernetes.io/bound-by-controller: yes
                volume.beta.kubernetes.io/storage-class: nfs-client
                volume.beta.kubernetes.io/storage-provisioner: cluster.local/ext1-nfs-client-provisioner
Finalizers:     [kubernetes.io/pvc-protection]
Capacity:       1Gi
Access Modes:   RWX
VolumeMode:     Filesystem
Mounted By:     my grafana pod
Events:         <none>
Describe pod gives me an error though.
Warning  FailedMount  18m  kubelet, gke-devcluster-pool-1-36e6a393-rg7d  MountVolume.SetUp failed for volume "pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Output: Running scope as unit: run-ra5925a8488ef436897bd44d526c57841.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot

Working redis with PV and PVC on GKE
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: LoadBalancer
  ports:
    - port: 6379
      name: redis
  selector:
    app: redis
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redislabs/rejson
          args: ["--requirepass", "password", "--appendonly", "no", "--loadmodule", "/usr/lib/redis/modules/rejson.so"]
          ports:
            - containerPort: 6379
              name: redis
          resources:
            limits:
              cpu: .50
              memory: 1500Mi
            requests:
              cpu: .25
              memory: 1000Mi
          volumeMounts:
            - name: redis-volume
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: redis-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
You can update the image in this StatefulSet as needed.
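To roll this out and check that the volume really persists, something like the following should work (the manifest filename is an assumption; the PVC name follows the usual volumeClaimTemplates naming convention):
# Apply the Service and StatefulSet above (assumed filename)
kubectl apply -f redis-statefulset.yaml

# volumeClaimTemplates creates one PVC per replica, named
# <template-name>-<statefulset-name>-<ordinal>, so for replica 0:
kubectl get pvc redis-volume-redis-0

# Minimal smoke test: write a key, force an RDB save to /data,
# delete the pod and confirm the key survives the restart
kubectl exec redis-0 -- redis-cli -a password set testkey hello
kubectl exec redis-0 -- redis-cli -a password save
kubectl delete pod redis-0
kubectl exec redis-0 -- redis-cli -a password get testkey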

Related

WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding. Auto provisioning does not work

Hi, I know this might be a duplicate, but I cannot get the answer from that question.
I have a Prometheus deployment and would like to give it a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.retention.time=60d"
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Now neither the PVC nor the deployment can be scheduled, because the PVC waits for the deployment and the other way around. As far as I know, we have a cluster with automatic provisioning, so I cannot just create a PV. How can I solve this problem? Other deployments use PVCs in the same style and it works.
It's because of the namespace. A PVC is a namespaced object (you can look here). Your PVC is in the default namespace; moving it to the monitoring namespace should work:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
  namespace: monitoring
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Is your PVC deployed to the same namespace as the Deployment?
Also make sure the StorageClass has volumeBindingMode: WaitForFirstConsumer:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
...
volumeBindingMode: WaitForFirstConsumer
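To confirm the claim now lives in the right namespace and binds once the pod is scheduled, the usual checks would be:
# PVCs are namespaced, so list them in the namespace the Deployment uses
kubectl get pvc -n monitoring

# With WaitForFirstConsumer the claim stays Pending until the Prometheus pod
# is scheduled; it should switch to Bound right after the pod starts
kubectl describe pvc prometheus-pv-claim -n monitoring
kubectl get pods -n monitoring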

POD in pending status because PV and PVC unable to delete

I am having a problem related to PV and PVC and I really do not know how to fix it. Could someone give me some insight into it?
Github -> https://github.com/Sarony11/wordpress-k8s/tree/main/k8s
Files:
wordpress-pv.yaml (added after comments)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp1-pv
  labels:
    app: wordpress
spec:
  capacity:
    storage: 5Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/data
wordpress-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp1-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
  volumeName: wp1-html
wordpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp1-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
      component: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
        component: app
    spec:
      containers:
        - image: wordpress:5.6.0-apache
          name: wp1
          env:
            - name: WORDPRESS_DB_HOST
              value: "35.204.214.81"
            - name: WORDPRESS_DB_USER
              value: wordpressdb
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wp1-port
          volumeMounts:
            - name: wp1-pv
              mountPath: /var/www/html
      volumes:
        - name: wp1-pv
          persistentVolumeClaim:
            claimName: wp1-pv-claim
COMMANDS
administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl apply -k k8s
secret/mysql-pass-7m4b7ft482 created
service/wp1-clusterip-service created
ingress.networking.k8s.io/ingress-service created
persistentvolume/wp1-pv created
persistentvolumeclaim/wp1-pv-claim created

administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl get pv,pvc -n default
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/wp1-pv   5Gi        RWO            Retain           Available           standard                74s

NAME                                 STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/wp1-pv-claim   Pending   wp1-html   0                         standard       74s

administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl describe pv wp1-pv
Name:            wp1-pv
Labels:          app=wordpress
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data
    HostPathType:
Events:          <none>

administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl describe pvc wp1-pv-claim
Name:          wp1-pv-claim
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:        wp1-html
Labels:        app=wordpress
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      0
Access Modes:
VolumeMode:    Filesystem
Mounted By:    wp1-app-54ddf5fb78-7f8j6
               wp1-app-54ddf5fb78-hgqmj
               wp1-app-54ddf5fb78-sxn9j
Events:        <none>
The result is still Pending for both the PV and PVC, and of course the pods are Pending too.
Please check/share the status of the PVC and PV.
Check the status:
kubectl get pvc,pv -n <namespace>
Describe the PVC and PV:
kubectl describe pvc <PVC-name> -n <namespace>
kubectl describe pv <PV-name>
So the problem here is that you're not creating any PersistentVolume.
That's why your PVC remains Pending, as does your pod.
To make it work, you need to provide a PV that matches the PVC spec:
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: "wp-html"
Below is an example of a PV that should match your PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp-html
  labels:
    app: wordpress
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv/wp-html
This PV will store the data under the /data/pv/wp-html folder on your local node.
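If that folder does not already exist on the node, a small addition (my suggestion, not part of the answer above) is to let the kubelet create it by giving the hostPath a type:
  hostPath:
    path: /data/pv/wp-html
    # DirectoryOrCreate makes the kubelet create the directory if it is
    # missing, instead of failing the mount
    type: DirectoryOrCreate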
I have tested it in my environment and can give you some advice.
Before starting, read how to properly configure dynamic provisioning in Kubernetes - dynamics-provisioning-kubernetes.
Your PV yaml is OK.
Delete the volumeName field from your PVC yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp1-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
Thanks to this, your PVC will bind successfully to the existing PV.
Change the volumeMounts name in your Deployment yaml file, for example to wp1-pv-storage:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp1-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
      component: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
        component: app
    spec:
      containers:
        - image: wordpress:5.6.0-apache
          name: wp1
          env:
            - name: WORDPRESS_DB_HOST
              value: "35.204.214.81"
            - name: WORDPRESS_DB_USER
              value: wordpressdb
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wp1-port
          volumeMounts:
            - name: wp1-pv-storage
              mountPath: /var/www/html
      volumes:
        - name: wp1-pv-storage
          persistentVolumeClaim:
            claimName: wp1-pv-claim
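After applying the updated PVC and Deployment, the claim should bind to the existing PV; a quick check with the same microk8s commands used above:
microk8s.kubectl get pv,pvc -n default
# wp1-pv-claim should now report STATUS Bound with VOLUME wp1-pv,
# and the wordpress pods should leave Pending
microk8s.kubectl get pods -l app=wordpress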

Pod/Container directory used to mount nfs path gets empty after deployment

The following is my kubernetes/openshift deployment configuration template, along with its persistent volumes and persistent volume claims:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pythonApp
  creationTimestamp: null
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pythonApp
      creationTimestamp: null
    spec:
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "backend"
      containers:
        - name: backend
          imagePullPolicy: IfNotPresent
          image: <img-name>
          command: ["sh", "-c"]
          args: ['python manage.py runserver']
          resources: {}
          volumeMounts:
            - mountPath: /pythonApp/configs
              name: configs
      restartPolicy: Always
      volumes:
        - name: configs
          persistentVolumeClaim:
            claimName: "configs-volume"
status: {}
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "configs-volume"
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/k8sMount/configs
    server: <server-ip>
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "configs-volume-claim"
  creationTimestamp: null
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: "configs-volume"
After the deployment, when I exec into the container (using oc exec or kubectl exec) and check the /pythonApp/configs folder, it is empty, even though it is supposed to contain some configuration files from the image used.
Is this issue caused by the fact that /pythonApp/configs is mounted onto the persistent NFS volume path /mnt/k8sMount/configs, which is initially empty?
How could this be solved?
Environment
Kubernetes version: 1.11
Openshift version: 3.11
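One common way to handle this (sketched here as a suggestion, not taken from the post) is to seed the initially empty NFS volume from the image with an init container that mounts the volume at a different path and copies the defaults over:
      initContainers:
        - name: seed-configs
          image: <img-name>
          command: ["sh", "-c"]
          # The volume is mounted at /seed here, so the image's original
          # /pythonApp/configs content is still visible and can be copied
          # onto the NFS share once (-n avoids overwriting existing files)
          args: ['cp -rn /pythonApp/configs/. /seed/']
          volumeMounts:
            - mountPath: /seed
              name: configs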

Kubernetes NFS storage using PV and PVC

I have a 3-node cluster running in VirtualBox and I'm trying to create NFS storage using a PV and a PVC, but it seems that I'm doing something wrong.
I have the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /redis/data
    server: 192.168.56.2 # ip of my master-node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Mi
  storageClassName: slow
  selector:
    matchLabels:
      type: nfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: data
              mountPath: "/redis/data"
          ports:
            - containerPort: 6379
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: redis-pvc
I've already installed nfs-common on all my nodes.
Whenever I create the PV, PVC and Pod, the pod does not start and I get the following:
Warning FailedMount 30s kubelet, kubenode02 MountVolume.SetUp failed for volume "redis-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv --scope -- mount -t nfs -o hard,nfsvers=4.1 192.168.56.2:/redis/data /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv
Output: Running scope as unit run-rc316990c37b14a3ba24d5aedf66a3f6a.scope.
mount.nfs: Connection timed out
Here is the output of kubectl get pv,pvc:
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/redis-pv   100Mi      RWO            Retain           Bound    default/redis-pvc   slow                    8s

NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/redis-pvc   Bound    redis-pv   100Mi      RWO            slow           8s
Any idea what I am missing?
1 - You need to install your NFS server. Follow the instructions in this link:
https://vitux.com/install-nfs-server-and-client-on-ubuntu/
2 - Create the shared folder where you want to persist your data and check that clients can mount it:
mount 192.168.56.2:/mnt/sharedfolder /mnt/shared/folder_client
3 - Change the following in PV.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/sharedfolder
    server: 192.168.56.2 # ip of my master-node
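For step 1, the export on the NFS server (192.168.56.2) also has to allow the worker nodes to mount it; a minimal /etc/exports sketch, assuming the nodes sit on the 192.168.56.0/24 network:
# /etc/exports on the NFS server
/mnt/sharedfolder 192.168.56.0/24(rw,sync,no_subtree_check)

# reload the export table and verify it is published
sudo exportfs -ra
sudo exportfs -v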

Share nfs volume between kubernetes clusters

We have a setup in GKE with two different clusters. One cluster runs an nfs-server, and on that cluster we have a PersistentVolume that points to the server. This PV is mounted in a pod running on that cluster. The second cluster also has a PV and a pod that should mount the same NFS volume. This is where the problem occurs: when we point to the server using the nfs-server ClusterIP address, it does not work. This is understandable, but I wonder how best to achieve this.
The setup is basically this:
Persistent Volume and Persistent Volume Claim used by NFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    pdName: files
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
NFS server deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pvc
NFS-server service
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
Persistent volume and Persistent Volume Claim used by the pods:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.4.0.20
    path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Part of deployment file for pod mounting the nfs
volumes:
  - name: files
    persistentVolumeClaim:
      claimName: nfs
Output of kubectl get pv and kubectl get pvc
user@HP-EliteBook:~/Downloads$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs      100Gi      RWX            Retain           Bound    default/nfs       manual                  286d
nfs-pv   100Gi      RWO            Retain           Bound    default/nfs-pvc   manual                  286d

user@HP-EliteBook:~/Downloads$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs       Bound    nfs      100Gi      RWX            manual         286d
nfs-pvc   Bound    nfs-pv   100Gi      RWO            manual         286d
The IP in the PV used by the pods is the problem. The pod in the same cluster can connect to it, but not the pod in the other cluster. I could use the actual pod IP from the other cluster, but the pod IP changes with every deploy, so that is not a working solution. What is the best way to get around this problem? I only want the second cluster to have access to the NFS server, without opening it to the world, for example.
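One possible direction, assuming both GKE clusters are in the same VPC (a sketch, not a tested setup): expose the nfs-server Deployment through an internal load balancer so it gets a stable, VPC-routable IP that the second cluster's PV can use as server: instead of the ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server-internal
  annotations:
    # GKE-specific annotation that provisions an internal (VPC-only)
    # load balancer instead of a public one
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
The PV in the second cluster would then point its nfs.server field at the load balancer's internal IP rather than at 10.4.0.20.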