When I try to create a PersistentVolume on Okteto Cloud with the following definition:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
I get the following error:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "postgres-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"postgres" "type":"local"] "name":"postgres-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"5Gi"] "hostPath":map["path":"/mnt/data"]]]}
from server for: "deploy/k8s.postgres.yml": persistentvolumes "postgres-pv" is forbidden: User "system:serviceaccount:okteto:07e6fdbf-55c2-4642-81e3-051e8309000f" cannot get resource "persistentvolumes" in API group "" at the cluster scope
However, according to the Okteto Cloud docs, persistent volumes appear to be allowed. How would I create one there?
For context, I'm trying to reproduce a simple Postgres deployment (no replication, no backups).
Here's my complete deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
    - name: postgres
      port: 5432
  selector:
    app: postgres
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
A PersistentVolume is a cluster-wide resource, and creating one is not allowed on Okteto Cloud.
The docs are wrong; thanks for pointing that out.
Instead, create a PersistentVolumeClaim using the default storage class (and remove the PersistentVolume manifest):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
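The Deployment from your file then needs no change; its volume section already references the claim by name, and kubectl get pvc should show the claim as Bound once it is in use:
volumes:
  - name: postgredb
    persistentVolumeClaim:
      claimName: postgres-pv-claim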
Hope it helps :-)
Related
When I apply the manifests below, this error appears:
Warning FailedScheduling 40s default-scheduler 0/3 nodes are available: 3 node(s) didn't find available persistent volumes to bind. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Content:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres
data:
  POSTGRES_DB: db1
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
stringData: # plain-text values; under data they would have to be base64-encoded
  POSTGRES_USER: user
  POSTGRES_PASSWORD: password
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /var/lib/postgres
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: "local-storage" # For PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14.7
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secrets
            - configMapRef:
                name: postgres # must match the ConfigMap name above
          volumeMounts:
            - name: postgres-database-storage
              mountPath: /var/lib/pgsql/data # note: the official postgres image keeps its data in /var/lib/postgresql/data
      volumes:
        - name: postgres-database-storage
          persistentVolumeClaim:
            claimName: postgres-pv-claim
I tried making the PVC with a smaller volume, but that did not help.
I don't even know what the problem was; after restarting the test cluster, everything worked.
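For readers hitting the same message: "didn't find available persistent volumes to bind" means no existing PV satisfies the claim. Two things are worth checking in the manifests above. First, the PV's nodeAffinity only admits a node whose kubernetes.io/hostname is node1, so binding is impossible on a cluster without such a node. Second, the storage class name is only string-matched for static binding, but if delayed binding (WaitForFirstConsumer) is intended, the local-storage StorageClass must actually exist as an object; a minimal sketch:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer # PVC stays Pending until a pod using it is scheduled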
I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume, so that the actual data is retained in a local macOS directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db as if no persistent volume were configured at all. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g. a small Micronaut-based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. It turns out I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume.
Error #1: not setting storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: not setting the nodeAffinity, and using hostPath instead of local.path, in the PersistentVolume (a local volume must declare nodeAffinity so Kubernetes knows which node hosts the path):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
I am unable to create a mongo container using a Deployment. The PersistentVolume and PersistentVolumeClaim are working fine.
> kubectl logs -f mongo-depl-dc764fb6d-qqdxh
Error from server (BadRequest): container "mongo" in pod "mongo-depl-dc764fb6d-qqdxh" is waiting to start: CreateContainerError
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "E:\\Linux\\mongo"
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0.5Gi
MongoDB Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-pvc
      containers:
        - name: mongo
          image: mongo:3.6.5-jessie
          ports:
            - name: mongo
              containerPort: 27017
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-srv
spec:
  type: NodePort
  selector:
    app: mongo
  ports:
    - name: mongo
      protocol: TCP
      port: 27017
      targetPort: 27017
I would really appreciate the help. I am running Kubernetes in a dev environment on a Windows machine.
I am running a local K8s cluster and defining a hostPath PV for MySQL pods.
All the configuration details are shared below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The problem: since the MySQL pod runs in a K8s cluster, when it is deleted and recreated it can be scheduled onto any node, while the hostPath data stays mounted on one specific node. Is it a good idea to pin MySQL to a fixed node, or are there other options? Please share any ideas.
You have the following choices:
1. Use a node selector or node affinity to ensure the pod gets scheduled on the node where the mount exists (see the sketch after this list), OR
2. Use local persistent volumes; they are supported on Kubernetes 1.14 and above.
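A minimal sketch of the first option, assuming the host directory lives on a node named node01 (a hypothetical name; check kubectl get nodes for yours): add a nodeSelector to the Deployment's pod template:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node01 # hypothetical; substitute your node's name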
Why are you using a PVC and a PV? For hostPath, you don't even need to create the PV object; the pod can mount the host directory directly.
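For illustration, a minimal sketch of that, reusing the /mnt/data path from your PV; no PV or PVC objects are involved:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-hostpath-demo # hypothetical name
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      hostPath:
        path: /mnt/data # mounted straight from the node's filesystem
        type: DirectoryOrCreate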
You should use a StatefulSet if you want a re-created pod to get the storage the previous one was using (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
This StatefulSet fails as posted, but that is a MySQL-specific issue; as a reference for the pattern, it should serve.
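If the MySQL-specific failure is the common "--initialize specified but the data directory has files in it" error, the cause is usually a lost+found directory at the volume root; a frequently used workaround (an assumption about the failure, not confirmed by the poster) is to mount the claim with a subPath so mysqld sees an empty directory:
volumeMounts:
  - name: mysql-persistent-storage
    mountPath: /var/lib/mysql
    subPath: mysql # data lives in a volume subdirectory, so lost+found at the root is hidden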
I create a PersistentVolume using NFS as below; when I delete the Deployment, I lose my data. If I exec into the postgres container, the DB that was created before is not there anymore.
Using AWS EKS, I managed to delete a deployment without losing any data.
Any idea why this happens?
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/pv001
    server: 164.10.0.1
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metabase-postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Deployment
...
spec:
  volumes:
    - name: metabase-postgres-storage
      persistentVolumeClaim:
        claimName: metabase-postgres-persistent-volume-claim
...
It turned out I had the wrong mountPath; it must be /var/lib/postgresql/data:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: edw-pg
spec:
  serviceName: postgres-cluster-ip-service
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres:10.7
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: edw-persistent-storage-claim
              mountPath: /var/lib/postgresql/data
              readOnly: false
              subPath: postgres # mount a subdirectory of the volume so the data directory starts empty
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
  volumeClaimTemplates:
    - metadata:
        name: edw-persistent-storage-claim
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 50Gi