Pod has unbound PersistentVolumeClaims but the volume claims are bound - Kubernetes

I want to create an Elasticsearch StatefulSet in Kubernetes on VirtualBox. Since I'm not using a cloud provider, I created two PersistentVolumes locally, one for each of the two replicas of my StatefulSet:
pv0:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-elk-0
  namespace: elk
  labels:
    type: local
spec:
  storageClassName: gp2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/pv0"
pv1:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-elk-1
  namespace: elk
  labels:
    type: local
spec:
  storageClassName: gp2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/pv1"
StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: elk
  labels:
    k8s-app: elasticsearch-logging
    version: v5.6.2
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v5.6.2
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v5.6.2
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: gcr.io/google-containers/elasticsearch:v5.6.2
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: 0.1
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gp2
      resources:
        requests:
          storage: 5Gi
It seems like the PersistentVolumes are correctly bound, but the pods are always in a crash loop and restart every time. Is it because of the initContainer, or is something wrong with my YAML?

Add more RAM, scale up the cluster.
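For what it's worth, before scaling anything it usually helps to check what the scheduler and the claims actually report. A minimal diagnostic sequence (a sketch, assuming the elk namespace and StatefulSet name from the manifests above, so the first pod is elasticsearch-logging-0):

# Confirm the PVCs generated by the volumeClaimTemplates are Bound to pv-elk-0 / pv-elk-1
kubectl get pvc -n elk

# The Events section at the bottom shows scheduling and volume attach/mount errors
kubectl describe pod elasticsearch-logging-0 -n elk

# Logs from the last crashed container, to separate volume problems from Elasticsearch startup problems
kubectl logs elasticsearch-logging-0 -n elk --previous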

Related

How do I define a persistent storage claim for my deployment?

I get this error message:
Deployment.apps "nginxapp" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-claim"
Now, I thought the deployment made a claim to persistent storage, so these are the files I've applied, in order:
First, the persistent volume at /data, as that path is persistent on minikube (https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
Then, for my nginx deployment I made a claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Before the service, I run the deployment, which is the one giving me the error above; it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  volumes:
    - persistentVolumeClaim:
        claimName: nginx-claim
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/data/www"
          name: nginx-claim
Where did I go wrong? Isn't it deployment -> volume claim -> volume?
Am I doing it right? The persistent volume is pod-wide (?) and therefore generically named, but the claim is per deployment? That's why I named it nginx-claim. I might be mistaken here, but that shouldn't break this simple run, though.
In my deployment I set mountPath: "/data/www"; should this follow the directory already set in the persistent volume definition, or does it build on it? So in my case, would I get /data/data/www?
Try changing to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: local-storage # <-- changed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
In your deployment spec, add:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-claim
          mountPath: "/data/www"
      volumes:
      - name: nginx-claim # <-- added
        persistentVolumeClaim:
          claimName: nginx-claim
Looks like name: is missing under volumes: in the deployment manifest. Can you try the following:
volumes:
- name: nginx-claim
  persistentVolumeClaim:
    claimName: nginx-claim
Here is the documentation.
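As a quick sanity check after applying the corrected claim and deployment (a sketch; resource names taken from the manifests above):

# The claim should report STATUS Bound against the local-storage PV
kubectl get pvc nginx-claim

# The pod should reach Running; Events will show any remaining mount problems
kubectl get pods -l app=nginxapp
kubectl describe pod -l app=nginxapp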

PersistentVolume not utilized in Docker Desktop (macOS) hosted Kubernetes cluster

I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume so that the actual data is retained in a local macOS directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db, as if nothing is really set up in terms of PersistentVolumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g. a small Micronaut-based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. Seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume:
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: In the PersistentVolume, not setting the node affinity and using hostPath instead of local.path:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
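If MongoDB still writes into a fresh /data/db after these changes, it can help to confirm the claim really bound to this volume and that the pod mounted it (a sketch, using the names from the descriptors above):

# The PV should show STATUS Bound and CLAIM fo/persons-mongodb-pvc
kubectl get pv fo-persons-mongodb-volume

# The PVC should show the same binding from the namespace side
kubectl get pvc persons-mongodb-pvc -n fo

# Files written by MongoDB should now also appear in the macOS directory backing the volume
kubectl exec -n fo deploy/fo-persons-mongodb -- ls /data/db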

Kubernetes: gcePersistentDisk is not working

I am using a gcePersistentDisk as a volume in Kubernetes. Currently, all my VMs and disks are in the same availability zone and Google Cloud account, but the gcePersistentDisk still does not mount. Below are my deployment files.
Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-sc
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain # Retain storage even if we delete PVC
parameters:
  type: pd-ssd
Volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: ssd-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: nfs-disk
    fsType: ext4
Service and Containers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-service
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 1
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
      - name: service-pv-storage
        gcePersistentDisk:
          pdName: nfs-disk
          fsType: ext4
      containers:
      - name: apache
        image: mobingi/ubuntu-apache2-php7:7.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/var/www/html"
          name: service-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30751
  type: NodePort
When I use this deployment, it gives me the following error in the pod description.
Note: I am not using Google Kubernetes Engine; I have set up my own custom cluster.

Kubernetes/Minikube: After mounting a volume, the pod's mounted directory is empty

I am new to k8s. I am following the official tutorial on setting up Nginx pods in k8s using minikube, mounting a volume, and serving index.html.
When I mount the volume and go to the home page, I receive this error:
directory index of "/usr/share/nginx/html/" is forbidden.
If I don't mount anything, I receive the "Welcome to nginx" page.
This is the content of that folder before the mount; after the mount it is empty:
root@00c1:/usr/share/nginx/html# ls -l
total 8
-rw-r--r-- 1 root root 494 Jul 23 11:45 50x.html
-rw-r--r-- 1 root root 612 Jul 23 11:45 index.html
Why is the mounted folder inside the pod empty after mounting?
This is my setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-hello-rc
spec:
  replicas: 2
  selector:
    app: hello-nginx-tester
  template:
    metadata:
      labels:
        app: hello-nginx-tester
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tester
  labels:
    app: hello-nginx-tester
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30500
      protocol: TCP
  selector:
    app: hello-nginx-tester
Any info would be appreciated. Thanks.
I've checked your configuration in my running k8s environment. After some adjustments, the following manifest works smoothly for me:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
  volumeName: task-pv-volume
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-hello-rc
spec:
  replicas: 2
  selector:
    app: hello-nginx-tester
  template:
    metadata:
      labels:
        app: hello-nginx-tester
    spec:
      securityContext:
        fsGroup: 1000
      volumes:
        - name: task-pv-volume
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-volume
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tester
  labels:
    app: hello-nginx-tester
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30500
      protocol: TCP
  selector:
    app: hello-nginx-tester
The group owner of the directory "/usr/share/nginx/html" will be 1000 because you have set the securityContext fsGroup value; hence you are unable to access the directory. If you remove the securityContext section, the owner of the mounted volume will be root, and you won't get access issues.
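One way to see the effect described above is to check the ownership of the mounted directory inside a running pod, with and without the fsGroup setting (a sketch; the pod name is whatever the ReplicationController generated):

# Find one of the pods created by the ReplicationController
kubectl get pods -l app=hello-nginx-tester

# Shows owner/group of the mounted directory (group 1000 when fsGroup is set)
kubectl exec <pod-name> -- ls -ld /usr/share/nginx/html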

NFS PersistentVolume data lost when deleting K8s deployment

I create a PersistentVolume using NFS as below. When I delete the deployment, I lose my data: if I exec into the postgres container, the DB that was created before is not there anymore.
Using AWS EKS, I managed to delete a deployment without losing any data.
Any help as to why this happens?
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/pv001
    server: 164.10.0.1
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metabase-postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Deployment
...
spec:
  volumes:
    - name: metabase-postgres-storage
      persistentVolumeClaim:
        claimName: metabase-postgres-persistent-volume-claim
...
I had the wrong mountPath; it must be /var/lib/postgresql/data:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: edw-pg
spec:
  serviceName: postgres-cluster-ip-service
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: postgres
        image: postgres:10.7
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: edw-persistent-storage-claim
          mountPath: /var/lib/postgresql/data
          readOnly: false
          subPath: postgres
        env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: pgpassword
              key: PGPASSWORD
  volumeClaimTemplates:
  - metadata:
      name: edw-persistent-storage-claim
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 50Gi
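A quick way to confirm the data now survives pod replacement with the StatefulSet above (a sketch; the generated PVC name follows the <claim>-<statefulset>-<ordinal> pattern):

# Delete the pod; the StatefulSet recreates it and re-attaches the same claim
kubectl delete pod edw-pg-0

# The claim created from the volumeClaimTemplates should still exist and be Bound
kubectl get pvc edw-persistent-storage-claim-edw-pg-0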