How to replace the in-memory storage with persistent storage using kustomize - Kubernetes

I'm trying to replace the in-memory storage of a Grafana deployment with persistent storage using kustomize. My approach is to remove the in-memory volume and then map in the persistent storage. But when I deploy it, I get an error.
Error
The Deployment "grafana" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "grafana-storage"
Kustomize version
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/prometheus-operator/kube-prometheus
- grafana-pvc.yaml
patchesStrategicMerge:
- grafana-patch.yaml
grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage
  namespace: monitoring
  labels:
    billingType: "hourly"
    region: sng01
    zone: sng01
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ibmc-file-bronze
grafana-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
      # use persistent storage for storing users instead of in-memory storage
      - $patch: delete        # <---- trying to remove the previous volume
        name: grafana-storage
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-storage
      containers:
      - name: grafana
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
Please help.

The $patch: delete directive doesn't seem to work here the way you'd expect.
It may be worth opening an issue on the kustomize GitHub (https://github.com/kubernetes-sigs/kustomize/issues) and asking the developers about it.
That said, here is a patch I tried that does work:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
      - name: grafana-storage
        emptyDir: null
        persistentVolumeClaim:
          claimName: grafana-storage
      containers:
      - name: grafana
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
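You can sanity-check the rendered Deployment before applying it, e.g. (run from the directory containing kustomization.yaml):
kustomize build . | grep -B 2 -A 3 grafana-storage
The emptyDir key should be gone from the rendered volumes list.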
Based on https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
The following should also work in theory:
spec:
  volumes:
  - $retainKeys:
    - name
    - persistentVolumeClaim
    name: grafana-storage
    persistentVolumeClaim:
      claimName: grafana-storage
But in practice it doesn't, and I think that's because kustomize has its own implementation of strategic merge (different from the one in k8s).
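As a workaround sketch (untested against your exact overlay): a JSON 6902 patch sidesteps strategic merge entirely by replacing the volume at a known index. This assumes the grafana-storage emptyDir volume is the first entry in the upstream volumes list; adjust the index if it isn't. In kustomization.yaml:
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: grafana
    namespace: monitoring
  path: grafana-volume-patch.json
And grafana-volume-patch.json (the filename is arbitrary):
[
  {
    "op": "replace",
    "path": "/spec/template/spec/volumes/0",
    "value": {
      "name": "grafana-storage",
      "persistentVolumeClaim": { "claimName": "grafana-storage" }
    }
  }
]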

Related

Share multiple folders in pod using persistent volumes

I want to share multiple folders using the PersistentVolume resource of Kubernetes.
I want to share the "/opt/*" folders in a pod, but not "/opt" itself:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: demo
  namespace: demo-namespace
  labels:
    app: myApp
    chart: "my-app"
    name: myApp
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "myApp-data"
  hostPath:
    path: /opt/*
But in the pod I am not able to see the shared volume. If I share only the "/opt" folder, it does show up in the pod.
Is there anything I am missing?
If you want to share a folder among several pods, deployments, or statefulsets, you should create a PersistentVolumeClaim with access mode ReadWriteMany (note that the underlying storage class must actually support ReadWriteMany). Here is an example of a PersistentVolumeClaim with ReadWriteMany mode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
Then in your pods you can use it as below:
apiVersion: v1
kind: Pod
metadata:
  name: mypod01
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: c01
    image: alpine
    volumeMounts:
    - mountPath: "/opt"
      name: task-pv-storage
apiVersion: v1
kind: Pod
metadata:
  name: mypod02
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: c02
    image: alpine
    volumeMounts:
    - mountPath: "/opt"
      name: task-pv-storage
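Note that hostPath takes a literal directory path, so /opt/* is not expanded as a glob. If the goal is to expose individual subfolders of a shared volume rather than all of it, volumeMounts support a subPath field; a sketch with hypothetical folder names (app1, app2):
apiVersion: v1
kind: Pod
metadata:
  name: mypod03
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: c03
    image: alpine
    volumeMounts:
    - mountPath: /opt/app1
      name: task-pv-storage
      subPath: app1    # mounts only the app1 folder of the volume
    - mountPath: /opt/app2
      name: task-pv-storage
      subPath: app2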

How to deploy MariaDB on kubernetes with some default schema and data?

For some context, I'm trying to build a staging/testing system on Kubernetes, which starts with deploying MariaDB on the cluster with some schema and data. I have a truncated/cleansed DB dump from prod to help me with that. Let's call that file dbdump.sql, which is present on my local box at /home/rjosh/database/script/. After much research, here is what my yaml file looks like:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: m3ma-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: m3ma-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: v1
kind: Service
metadata:
  name: m3ma
spec:
  ports:
  - port: 3306
  selector:
    app: m3ma
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: m3ma
spec:
  selector:
    matchLabels:
      app: m3ma
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: m3ma
    spec:
      containers:
      - image: mariadb:10.2
        name: m3ma
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: m3ma
        volumeMounts:
        - name: m3ma-persistent-storage
          mountPath: /var/lib/mysql/
        - name: m3ma-host-path
          mountPath: /docker-entrypoint-initdb.d/
      volumes:
      - name: m3ma-persistent-storage
        persistentVolumeClaim:
          claimName: m3ma-pv-claim
      - name: m3ma-host-path
        hostPath:
          path: /home/smaikap/database/script/
          type: Directory
The MariaDB instance comes up, but without the schema and data that are present in /home/rjosh/database/script/dbdump.sql.
Basically, the mount is not working. If I connect to the pod and check /docker-entrypoint-initdb.d/, there is nothing there. How do I go about this?
A bit more detail: currently I'm testing on minikube, but soon it will have to work on a GKE cluster. Looking at the documentation, hostPath is not the right choice for GKE. So what is the correct way of doing this?
Are you sure your home directory is visible to Kubernetes? Minikube generally creates a little VM to run things in, which wouldn't have your home dir in it. The more usual way to handle this would be to make a very small new Docker image yourself like:
FROM mariadb:10.2
COPY dbdump.sql /docker-entrypoint-initdb.d/
Then push it to a registry somewhere and use that image instead.
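A usage sketch (the registry host and tag are placeholders):
docker build -t my-registry.example.com/mariadb-seeded:10.2 .
docker push my-registry.example.com/mariadb-seeded:10.2
If rebuilding an image feels heavy and the dump is small (ConfigMaps are capped at roughly 1 MiB), an alternative is to load the dump into a ConfigMap and mount that in place of the hostPath volume:
kubectl create configmap m3ma-initdb --from-file=/home/rjosh/database/script/dbdump.sql
# then in the Deployment, replace the hostPath volume with:
volumes:
- name: m3ma-host-path
  configMap:
    name: m3ma-initdb
This works the same on minikube and GKE.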

PersistentVolumeClaim unknown in kubernetes

I'm trying to deploy a container, but unfortunately I get an error when I execute kubectl apply -f *.yaml.
The error is:
error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;
I don't understand why I get the error, because I wrote claimName: under persistentVolumeClaim: in my pod.yaml config :(
Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: /apps/karaf/etc
  - name: karaf-conf-storage
    persistentVolumeClaim:
      claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: karaf
  namespace: poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: karaf
    spec:
      containers:
      - name: karaf
        image: "xxx/karaf:ids-1.1.0"
        imagePullPolicy: Always
        ports:
        - containerPort: 6443
        - containerPort: 6100
        - containerPort: 6101
        resources:
        volumeMounts:
        - mountPath: /apps/karaf/etc
          name: karaf-conf
      volumes:
      - name: karaf-conf
        persistentVolumeClaim:
          claimName: karaf-conf
The reason you're seeing that error is that you specified persistentVolumeClaim under your pod spec's container specifications. As you can see from the auto-generated docs here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
persistentVolumeClaim isn't supported at this level/API object, which is what produces the error you're seeing.
You should modify the pod.yml to specify this as a volume instead.
e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: /apps/karaf/etc
  volumes:
  - name: karaf-conf-storage
    persistentVolumeClaim:
      claimName: karaf-conf-claim
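A quick way to confirm where a field belongs is kubectl explain, which prints the schema at a given path:
kubectl explain pod.spec.volumes.persistentVolumeClaim
kubectl explain pod.spec.containers.volumeMounts
The first one documents claimName; the second shows that containers only take volumeMounts, not volumes.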
According to the Kubernetes documentation, persistentVolumeClaim belongs at the .spec.volumes level of a pod object, not the .spec.containers level.
The correct pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  volumes:
  - name: efgkaraf-conf-storage
    persistentVolumeClaim:
      claimName: efgkaraf-conf-claim
  containers:
  - name: karaf
    image: docker-all.attanea.net/library/efgkaraf:ids-1.1.0
    volumeMounts:
    - name: efgkaraf-conf-storage
      mountPath: /apps/karaf/etc

Kubectl create for persistent storage erroring out

I'm trying to deploy persistent storage for CouchDB and it is failing with the error:
kubectl create -f couch_persistant_deploy.yaml
error: error validating "couch_persistant_deploy.yaml": error validating data: couldn't find type: v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
Create volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/sda1/data/test
Claim volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  labels:
    app: couchdb
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Deploy the VM.yaml
apiVersion: extensions/v1beta1
#apiVersion: v1
kind: Deployment
#kind: ReplicationController
metadata:
  name: couchdb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: couchdb
    spec:
      containers:
      - name: couchdb
        image: "couchdb"
        imagePullPolicy: Always
        env:
        - name: COUCHDB_USER
          value: admin
        - name: COUCHDB_PASSWORD
          value: password
        ports:
        - name: couchdb
          containerPort: 5984
        - name: epmd
          containerPort: 4369
        - containerPort: 9100
        volumeMounts:
        - mountPath: "/opt/couchdb/data"
          name: task-pv-storage
      imagePullSecrets:
      - name: registrypullsecret2
      #volumes:
      #- name: database-storage
      #  emptyDir: {}
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
        claimName: task-pv-claim
Any leads are really appreciated.
Your error message should be like this:
error: error validating "couch_persistant_deploy.yaml": error validating data: ValidationError(Deployment.spec.template.spec.volumes[0]): unknown field "claimName" in io.k8s.api.core.v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false
See, the error message is specific: unknown field "claimName" in io.k8s.api.core.v1.Volume
You need to put claimName under persistentVolumeClaim:
volumes:
- name: task-pv-storage
  persistentVolumeClaim:
    claimName: task-pv-claim # fix is here
But you did:
volumes:
- name: task-pv-storage
  persistentVolumeClaim:
  claimName: task-pv-claim # invalid: claimName is a sibling of persistentVolumeClaim instead of nested under it
which makes your Deployment object invalid.
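Indentation mistakes like this can be caught before touching the cluster with a client-side dry run (the --dry-run=client form assumes a reasonably recent kubectl; older versions used a bare --dry-run flag):
kubectl apply --dry-run=client -f couch_persistant_deploy.yaml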

Multiple Volume mounts with Kubernetes: one works, one doesn't

I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
  - image: my-username/my-project
    name: my-project
    ports:
    - containerPort: 80
      name: nginx-http
    - containerPort: 443
      name: nginx-ssl-https
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /home/projects/my-project/media/upload
      name: pd-data
    - mountPath: /home/projects/my-project/backups
      name: pd2-data
  imagePullSecrets:
  - name: vpregistrykey
  volumes:
  - name: pd-data
    persistentVolumeClaim:
      claimName: pd-claim
  - name: pd2-data
    persistentVolumeClaim:
      claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, as such:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I initially created my disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
The same goes for the second disk and the second PV and PVC. Everything seems to work when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is mounted correctly (and is therefore persistent) and the other one is erased every time I restart the pod...
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name:       my-project
...
Volumes:
  pd-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-claim
    ReadOnly:   false
  pd2-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd2-claim
    ReadOnly:   false
Any help is appreciated. Thanks.
The Kubernetes documentation states:
"Volumes can not mount onto other volumes or have hard links to other volumes"
I had the same issue and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
They mounted without issues after fixing that.
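For illustration (the paths here are hypothetical, not from the question), the configuration that quote warns about is one mountPath nested inside another, like:
volumeMounts:
- mountPath: /var/data            # parent mount
  name: pd-data
- mountPath: /var/data/backups    # nested inside the first volume
  name: pd2-data
You can check what is actually mounted inside the running container with something like:
kubectl exec my-project -- sh -c "mount | grep /home/projects"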
I don't see any direct problem that would cause the behavior explained above! But what I can suggest is to use a Deployment instead of a bare Pod, as many here recommend, especially when using PVs and PVCs. A Deployment takes care of many things to maintain the desired state. I have attached my code below for your reference; it works, and both volumes remain persistent even after deleting/terminating/restarting, as this is managed by the Deployment's desired state.
Two differences you will find between my code and yours:
I have a Deployment object instead of a Pod.
I am using GlusterFS for my volume.
Deployment yml.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
      - name: nginx
        image: vip-intOAM:5001/nginx:1.15.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/etc/nginx/conf.d/"
          name: nginx-confd
        - mountPath: "/var/www/"
          name: nginx-web-content
      volumes:
      - name: nginx-confd
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-confd-pvc
      - name: nginx-web-content
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-web-content-pvc
One of my PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi