PersistentVolumeClaim unknown in Kubernetes

I am trying to deploy a container, but I get an error when I run kubectl apply -f *.yaml.
The error is:
error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;
I don't understand why I get this error, because I did put claimName: under persistentVolumeClaim: in my pod.yaml config.
Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: karaf
  namespace: poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: karaf
    spec:
      containers:
        - name: karaf
          image: "xxx/karaf:ids-1.1.0"
          imagePullPolicy: Always
          ports:
            - containerPort: 6443
            - containerPort: 6100
            - containerPort: 6101
          resources:
          volumeMounts:
            - mountPath: /apps/karaf/etc
              name: karaf-conf
      volumes:
        - name: karaf-conf
          persistentVolumeClaim:
            claimName: karaf-conf

The reason you're seeing that error is that you specify persistentVolumeClaim under the container entries of your pod spec. As you can see from the auto-generated API docs here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
persistentVolumeClaim isn't a supported field on the Container object, which is what produces the error you're seeing.
You should modify the Pod.yaml to declare it as a volume instead, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
  volumes:
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim

According to the Kubernetes documentation, persistentVolumeClaim belongs at the .spec.volumes level of a Pod object, not at the .spec.containers level.
The correct Pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  volumes:
    - name: efgkaraf-conf-storage
      persistentVolumeClaim:
        claimName: efgkaraf-conf-claim
  containers:
    - name: karaf
      image: docker-all.attanea.net/library/efgkaraf:ids-1.1.0
      volumeMounts:
        - name: efgkaraf-conf-storage
          mountPath: /apps/karaf/etc

Related

How to mount two dynamic PVCs in a Kubernetes pod

I have a scenario where I need to mount two dynamically provisioned PVCs in Minikube. When I tried the manifest below, I see only one volume mounted in the pod, even though when I describe the pod both claims show as bound. I am not sure what the cause is.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: niranjan-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: praveen-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: praveen-pod
spec:
  volumes:
    - name: niranjan-pv
      persistentVolumeClaim:
        claimName: niranjan-pvc
    - name: praveen-pv
      persistentVolumeClaim:
        claimName: praveen-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp/niranjan-mount"
          name: niranjan-pv
      volumeMounts:
        - mountPath: "/tmp/praveen-mount"
          name: praveen-pv
(Screenshots: https://i.stack.imgur.com/uyNKg.png, https://i.stack.imgur.com/3nDPl.png, https://i.stack.imgur.com/oOrq2.png)
For volumeMounts, you should provide a single key whose value is an array of mounts:
kind: Pod
apiVersion: v1
metadata:
  name: praveen-pod
spec:
  volumes:
    - name: niranjan-pv
      persistentVolumeClaim:
        claimName: niranjan-pvc
    - name: praveen-pv
      persistentVolumeClaim:
        claimName: praveen-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp/niranjan-mount"
          name: niranjan-pv
        - mountPath: "/tmp/praveen-mount"
          name: praveen-pv
Otherwise, the second volumeMounts key overrides the first one, and you end up with only one mounted volume.

Cannot create a ReadWrite filesystem in Kubernetes (ReadOnly mount)

Summary
I am currently learning Kubernetes, so I have decided to start with a simple application (Mumble).
Setup
My setup is simple: I have one node (the master) from which I have removed the taint so Mumble can be scheduled on it. This single node is running CentOS Stream, but SELinux is disabled.
The issue
The /srv/mumble directory appears to be read-only. I have tried creating an init container to chown the directory, but that fails for the same reason. The issue appears in both containers, and I am unsure how to change this so that the Mumble application can create files in that directory. The Mumble application runs as user 1000. What am I missing here?
Configs
---
apiVersion: v1
kind: Namespace
metadata:
  name: mumble
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mumble-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    type: DirectoryOrCreate
    path: "/var/lib/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mumble-pv-claim
  namespace: mumble
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mumble-config
  namespace: mumble
data:
  murmur.ini: |
    **cut for brevity**
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mumble-deployment
  namespace: mumble
  labels:
    app: mumble
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mumble
  template:
    metadata:
      labels:
        app: mumble
    spec:
      initContainers:
        - name: storage-setup
          image: busybox:latest
          command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
          securityContext:
            privileged: true
            runAsUser: 0
          volumeMounts:
            - mountPath: "/srv/mumble"
              name: mumble-pv-storage
              readOnly: false
            - name: mumble-config
              subPath: murmur.ini
              mountPath: "/srv/mumble/config.ini"
              readOnly: false
      containers:
        - name: mumble
          image: phlak/mumble
          ports:
            - containerPort: 64738
          env:
            - name: TZ
              value: "America/Denver"
          volumeMounts:
            - mountPath: "/srv/mumble"
              name: mumble-pv-storage
              readOnly: false
            - name: mumble-config
              subPath: murmur.ini
              mountPath: "/srv/mumble/config.ini"
              readOnly: false
      volumes:
        - name: mumble-pv-storage
          persistentVolumeClaim:
            claimName: mumble-pv-claim
        - name: mumble-config
          configMap:
            name: mumble-config
            items:
              - key: murmur.ini
                path: murmur.ini
---
apiVersion: v1
kind: Service
metadata:
  name: mumble-service
spec:
  selector:
    app: mumble
  ports:
    - port: 64738
command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
Not the volume that is mounted as read-only, the ConfigMap is always mounted as read-only. Change the command to:
command: ["sh", "-c", "chown 1000:1000 /srv/mumble"] will work.

How to configure PV and PVC for a single pod with multiple containers in Kubernetes

I need to create a single pod with multiple containers for MySQL, MongoDB and MSSQL. My question is: do I need to create a persistent volume and persistent volume claim for each container and specify each volume in the pod configuration, or is a single PV and PVC enough for all the containers in a single pod, like in the configs below?
Could you verify whether the configuration below is enough or not?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: mypod-pvc
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: task-pv-storage
        - name: mongodb
          image: openshift/mongodb-24-centos7
          ports:
            - containerPort: 27017
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mongodb"
              name: task-pv-storage
        - name: mssql
          image: mcr.microsoft.com/mssql/server
          ports:
            - containerPort: 1433
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/opt/mssql"
              name: task-pv-storage
      imagePullSecrets:
        - name: devplat
You should not run multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet, as sketched below.
For MySQL, follow this reference: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
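As a minimal illustration only (the name, image, storage class and size are reused from the question, and the headless Service referenced by serviceName is assumed to exist but is not shown), a StatefulSet for the MySQL container could look like this, with volumeClaimTemplates giving each replica its own PVC:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # assumes a matching headless Service named "mysql"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:         # one PVC per replica, created automatically
    - metadata:
        name: mysql-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: manual
        resources:
          requests:
            storage: 3Gi

MongoDB and MSSQL would each get their own StatefulSet in the same way, with their own volumeClaimTemplates and mount paths.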

How to replace the in-memory storage with persistent storage using kustomize

I'm trying to replace the in-memory storage of the Grafana deployment with persistent storage using kustomize. What I'm trying to do is remove the in-memory storage and then map in persistent storage instead. But when I deploy it, it gives me an error.
Error
The Deployment "grafana" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "grafana-storage"
Kustomize version
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/prometheus-operator/kube-prometheus
  - grafana-pvc.yaml
patchesStrategicMerge:
  - grafana-patch.yaml
grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage
  namespace: monitoring
  labels:
    billingType: "hourly"
    region: sng01
    zone: sng01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ibmc-file-bronze
grafana-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
        # use persistent storage for storing users instead of in-memory storage
        - $patch: delete          # <---- trying to remove the previous volume
          name: grafana-storage
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: grafana-storage
      containers:
        - name: grafana
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana
Please help.
The $patch: delete directive doesn't seem to work here as I would expect.
It may be worth opening an issue on the kustomize GitHub (https://github.com/kubernetes-sigs/kustomize/issues) and asking the developers about it.
That said, here is the patch I tried, and it seems to work:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
        - name: grafana-storage
          emptyDir: null
          persistentVolumeClaim:
            claimName: grafana-storage
      containers:
        - name: grafana
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana
Based on https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
The following should also work in theory:
spec:
  volumes:
    - $retainKeys:
        - name
        - persistentVolumeClaim
      name: grafana-storage
      persistentVolumeClaim:
        claimName: grafana-storage
But in practice it doesn't, and I think that's because kustomize has its own implementation of strategic merge (different from the one in Kubernetes itself).

Kubectl create for persistent storage erroring out

I'm trying to deploy persistent storage for CouchDB and it is failing with the error
kubectl create -f couch_persistant_deploy.yaml
error: error validating "couch_persistant_deploy.yaml": error validating data: couldn't find type: v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
Create volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/sda1/data/test
Claim volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  labels:
    app: couchdb
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Deploy the VM.yaml
apiVersion: extensions/v1beta1
#apiVersion: v1
kind: Deployment
#kind: ReplicationController
metadata:
  name: couchdb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: couchdb
    spec:
      containers:
        - name: couchdb
          image: "couchdb"
          imagePullPolicy: Always
          env:
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: password
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
              containerPort: 9100
          volumeMounts:
            - mountPath: "/opt/couchdb/data"
              name: task-pv-storage
      imagePullSecrets:
        - name: registrypullsecret2
      #volumes:
      #- name: database-storage
      #  emptyDir: {}
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
          claimName: task-pv-claim
Any leads are really appreciated.
Your error message should be like this:
error: error validating "couch_persistant_deploy.yaml": error validating data: ValidationError(Deployment.spec.template.spec.volumes[0]): unknown field "claimName" in io.k8s.api.core.v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false
See, the error message is specific: unknown field "claimName" in io.k8s.api.core.v1.Volume
You need to put claimName under persistentVolumeClaim.
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim # fix is here
But you did:
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
    claimName: task-pv-claim # invalid: claimName is not nested under persistentVolumeClaim
which makes your Deployment object invalid.