Kubectl create for persistent storage erroring out - deployment

I'm trying to deploy persistent storage for CouchDB and it is failing with the error:
kubectl create -f couch_persistant_deploy.yaml
error: error validating "couch_persistant_deploy.yaml": error validating data: couldn't find type: v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
Create volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/sda1/data/test
Claim volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  labels:
    app: couchdb
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Deploy the VM.yaml
apiVersion: extensions/v1beta1
#apiVersion: v1
kind: Deployment
#kind: ReplicationController
metadata:
  name: couchdb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: couchdb
    spec:
      containers:
        - name: couchdb
          image: "couchdb"
          imagePullPolicy: Always
          env:
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: password
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
            - containerPort: 9100
          volumeMounts:
            - mountPath: "/opt/couchdb/data"
              name: task-pv-storage
      imagePullSecrets:
        - name: registrypullsecret2
      #volumes:
      #- name: database-storage
      #  emptyDir: {}
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
          claimName: task-pv-claim
Any leads are really appreciated.

Your error message should be like this:
error: error validating "couch_persistant_deploy.yaml": error validating data: ValidationError(Deployment.spec.template.spec.volumes[0]): unknown field "claimName" in io.k8s.api.core.v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false
See, the error message is specific: unknown field "claimName" in io.k8s.api.core.v1.Volume.
You need to put claimName under persistentVolumeClaim:
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim # fix is here
But you did:
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
    claimName: task-pv-claim # invalid: claimName sits at the Volume level, not under persistentVolumeClaim
That misplaced claimName makes your Deployment object invalid.
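If you are ever unsure where a field belongs, kubectl explain can print the schema for any field path (assuming a reasonably recent kubectl):
kubectl explain deployment.spec.template.spec.volumes.persistentVolumeClaim
The output lists claimName as a field of persistentVolumeClaim, one level below the volume entry.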

Related

I have the following YAML file whose errors I want to ignore

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: mysqldb-1
  labels:
    app.kubernetes.io: mysqldb-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: mysqldb-2
  labels:
    app.kubernetes.io: mysqldb-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: mysqldb-service
  labels:
    app.kubernetes.io: mysqldb
spec:
  ports:
    - name: "5306"
      port: 5306
      targetPort: 3306
  selector:
    app.kubernetes.io: mysqldb
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldb
  labels:
    app.kubernetes.io: mysqldb
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io: mysqldb
  template:
    metadata:
      labels:
        app.kubernetes.io: mysqldb
    spec:
      containers:
        -name: mysqldb
        image: mysql:8.0
        ports:
          - containerPort: 3306
        volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysqldb-1
          - mountPath: /docker-entrypoint-initdb.d/init.sql
            name: mysqldb-2
      restartPolicy: Always
      volumes:
        - name: mysqldb-1
          persistentVolumeClaim:
            claimName: mysqldb-1
        - name: mysqldb-2
          persistentVolumeClaim:
            claimName: mysqldb-2
status: {}
I got this error:
Error from server (BadRequest): error when creating "mysqldb.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.spec"
How do I go about ignoring the errors for an included yaml file?
There are indentation issues in your Deployment resource. Also, add resource limits for the containers.
Try this one:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldb
  labels:
    app.kubernetes.io: mysqldb
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io: mysqldb
  template:
    metadata:
      labels:
        app.kubernetes.io: mysqldb
    spec:
      containers:
        - name: mysqldb
          image: mysql:8.0
          ports:
            - containerPort: 3306
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysqldb-1
            - mountPath: /docker-entrypoint-initdb.d/init.sql
              name: mysqldb-2
      restartPolicy: Always
      volumes:
        - name: mysqldb-1
          persistentVolumeClaim:
            claimName: mysqldb-1
        - name: mysqldb-2
          persistentVolumeClaim:
            claimName: mysqldb-2
status: {}
NOTE: You can use a Kubernetes IDE extension to surface errors and warnings for YAML resource issues easily.
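If you prefer the command line, a dry-run apply surfaces the same validation errors without creating anything (assuming kubectl 1.18+, where --dry-run takes a value):
kubectl apply -f mysqldb.yaml --dry-run=client
kubectl apply -f mysqldb.yaml --dry-run=server   # validated by the API server; catches strict decoding errors like the one above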

How do I define a persistent storage claim for my deployment?

I get this error message:
Deployment.apps "nginxapp" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-claim"
Now, I thought the deployment makes a claim on persistent storage, so these are the files I've run, in order:
First, the persistent volume at /data, since that path is persistent on minikube (https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
Then, for my nginx deployment I made a claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Before the service, I run the deployment, which is the one giving the error above; it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  volumes:
    - persistentVolumeClaim:
        claimName: nginx-claim
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/data/www"
              name: nginx-claim
Where did I go wrong? Isn't it deployment -> volume claim -> volume?
Am I doing it right? The persistent volume is cluster-wide (?) and therefore generically named, but the claim is per deployment? That's why I named it nginx-claim. I might be mistaken here, but that shouldn't break this simple setup.
In my deployment I set mountPath: "/data/www". Should this match the directory already set in the persistent volume definition, or does it build on top of it? In my case, would I end up with /data/data/www?
Try changing to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  storageClassName: local-storage # <-- changed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And in your Deployment spec, add:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginxapp
  name: nginxapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-claim
              mountPath: "/data/www"
      volumes:
        - name: nginx-claim # <-- added
          persistentVolumeClaim:
            claimName: nginx-claim
Looks like name: is missing under volumes: in the deployment manifest. Can you try the following:
volumes:
  - name: nginx-claim
    persistentVolumeClaim:
      claimName: nginx-claim
Here is the documentation.
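Once the PV and PVC are applied, you can confirm the claim actually bound before running the deployment:
kubectl get pv small-pv
kubectl get pvc nginx-claim
Both should report STATUS Bound; a claim stuck in Pending usually means no available volume matches its storageClassName, access mode, or requested size.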

Can not create ReadWrite filesystem in kubernetes (ReadOnly mount)

Summary
I am currently in the process of learning Kubernetes, so I have decided to start with a simple application (Mumble).
Setup
My setup is simple: I have one node (the master) from which I have removed the taint so Mumble can be deployed on it. This single node is running CentOS Stream, but SELinux is disabled.
The issue
The /srv/mumble directory appears to be read-only, and at this point I have tried creating an init container to chown the directory, but that fails with the same error. The issue appears in both containers, and I am unsure how to change this to allow the mumble application to create files in said directory. The mumble application runs as user 1000. What am I missing here?
Configs
---
apiVersion: v1
kind: Namespace
metadata:
  name: mumble
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mumble-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    type: DirectoryOrCreate
    path: "/var/lib/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mumble-pv-claim
  namespace: mumble
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mumble-config
  namespace: mumble
data:
  murmur.ini: |
    **cut for brevity**
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mumble-deployment
  namespace: mumble
  labels:
    app: mumble
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mumble
  template:
    metadata:
      labels:
        app: mumble
    spec:
      initContainers:
        - name: storage-setup
          image: busybox:latest
          command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
          securityContext:
            privileged: true
            runAsUser: 0
          volumeMounts:
            - mountPath: "/srv/mumble"
              name: mumble-pv-storage
              readOnly: false
            - name: mumble-config
              subPath: murmur.ini
              mountPath: "/srv/mumble/config.ini"
              readOnly: false
      containers:
        - name: mumble
          image: phlak/mumble
          ports:
            - containerPort: 64738
          env:
            - name: TZ
              value: "America/Denver"
          volumeMounts:
            - mountPath: "/srv/mumble"
              name: mumble-pv-storage
              readOnly: false
            - name: mumble-config
              subPath: murmur.ini
              mountPath: "/srv/mumble/config.ini"
              readOnly: false
      volumes:
        - name: mumble-pv-storage
          persistentVolumeClaim:
            claimName: mumble-pv-claim
        - name: mumble-config
          configMap:
            name: mumble-config
            items:
              - key: murmur.ini
                path: murmur.ini
---
apiVersion: v1
kind: Service
metadata:
  name: mumble-service
spec:
  selector:
    app: mumble
  ports:
    - port: 64738
command: ["sh", "-c", "chown -R 1000:1000 /srv/mumble"]
It is not the volume that is mounted read-only: a ConfigMap is always mounted read-only, so the recursive chown fails when it reaches the config.ini file projected into /srv/mumble. Change the command to:
command: ["sh", "-c", "chown 1000:1000 /srv/mumble"]
and it will work.
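In context, the init container would then look like this (trimmed to the relevant fields; only the command changes):
initContainers:
  - name: storage-setup
    image: busybox:latest
    # no -R: chown only the directory itself, skipping the read-only ConfigMap mount
    command: ["sh", "-c", "chown 1000:1000 /srv/mumble"]
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: "/srv/mumble"
        name: mumble-pv-storage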

How to configure pv and pvc for single pod with multiple containers in kubernetes

I need to create a single pod with multiple containers for MySQL, MongoDB, and MSSQL. My question is: should I create a persistent volume and persistent volume claim for each container and specify the volume in the pod configuration, or is a single PV & PVC enough for all the containers in a single pod, as in the configs below?
Could you verify whether the configuration below is enough?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: mypod-pvc
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: task-pv-storage
        - name: mongodb
          image: openshift/mongodb-24-centos7
          ports:
            - containerPort: 27017
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mongodb"
              name: task-pv-storage
        - name: mssql
          image: mcr.microsoft.com/mssql/server
          ports:
            - containerPort: 1433
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/opt/mssql"
              name: task-pv-storage
      imagePullSecrets:
        - name: devplat
You should not run multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
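As a minimal sketch of that approach for MySQL (names and sizes here are placeholder assumptions, not taken from your cluster), a StatefulSet generates one PVC per replica via volumeClaimTemplates, so each database manages its own storage:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # requires a matching headless Service
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:         # one PVC is created per replica automatically
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
MongoDB and MSSQL would each get their own StatefulSet in the same way.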

PersistentVolumeClaim unknown in kubernetes

I'm trying to deploy a container, but unfortunately I get an error when I execute kubectl apply -f *.yaml.
The error is:
error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;
I don't understand why I get the error, because I wrote claimName: under persistentVolumeClaim: in my Pod.yaml config :(
Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: karaf
  namespace: poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: karaf
    spec:
      containers:
        - name: karaf
          image: "xxx/karaf:ids-1.1.0"
          imagePullPolicy: Always
          ports:
            - containerPort: 6443
            - containerPort: 6100
            - containerPort: 6101
          resources:
          volumeMounts:
            - mountPath: /apps/karaf/etc
              name: karaf-conf
      volumes:
        - name: karaf-conf
          persistentVolumeClaim:
            claimName: karaf-conf
The reason you're seeing that error is that you specified a persistentVolumeClaim inside your pod spec's container specification. As you can see from the auto-generated docs here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
persistentVolumeClaim isn't supported at that level of the API object, which is what gives the error you're seeing.
You should modify the Pod.yaml to specify this as a volume instead.
e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
    - name: karaf
      image: xxx/karaf:ids-1.1.0
      volumeMounts:
        - name: karaf-conf-storage
          mountPath: /apps/karaf/etc
  volumes:
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim
According to the Kubernetes documentation, persistentVolumeClaim belongs at the .spec.volumes level of a Pod object, not at the .spec.containers level.
The correct pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  volumes:
    - name: efgkaraf-conf-storage
      persistentVolumeClaim:
        claimName: efgkaraf-conf-claim
  containers:
    - name: karaf
      image: docker-all.attanea.net/library/efgkaraf:ids-1.1.0
      volumeMounts:
        - name: efgkaraf-conf-storage
          mountPath: /apps/karaf/etc
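Assuming the file names from the question, you can then apply the claim first, then the pod, and verify the mount:
kubectl apply -f PersistentVolumeClaimKaraf.yml
kubectl apply -f Pod.yaml
kubectl describe pod karafpod   # the Volumes section should show the persistentVolumeClaim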