I am using gcePersistentDisk as a volume in Kubernetes. Currently all of my VMs and disks are in the same availability zone and under the same Google Cloud account, but the gcePersistentDisk still does not mount. Below are my deployment files.
Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-sc
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain # Retain storage even if we delete PVC
parameters:
  type: pd-ssd
Volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: ssd-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: nfs-disk
    fsType: ext4
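Note that a gcePersistentDisk volume source does not create the disk; the pdName above must refer to an existing GCE disk in the same zone as the nodes. As a rough sketch (the zone is a placeholder), such a disk could be created with:

# Create the pd-ssd disk that pdName refers to (zone is a placeholder)
$ gcloud compute disks create nfs-disk --size=10GB --type=pd-ssd --zone=us-central1-a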
Service and Containers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-service
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 1
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
        - name: service-pv-storage
          gcePersistentDisk:
            pdName: nfs-disk
            fsType: ext4
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/www/html"
              name: service-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30751
  type: NodePort
When I apply this deployment, it gives me the following error in the pod description.
Note: I am not using Google Kubernetes Engine; I have set up my own custom cluster.
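For reference, the mount failure shows up under Events when describing the pod; a sketch of pulling that up (the pod name is a placeholder):

# List the pods created by the deployment, then inspect the failing one's events
$ kubectl get pods -l app=apache
$ kubectl describe pod apache-service-<pod-id>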
Related
I want to mount a folder that is located on my local hard disk (e.g. /home/logon/volumes/algovens/ids/app_data/) on a pod:
/home/logon/volumes/algovens/ids/app_data/ should be mounted at /app/app_data/ on the pod.
I create a PersistentVolume and a PersistentVolumeClaim, then reference the PVC in my pod .yaml file.
I apply all three YAML files (PV, PVC, pod) to the Kubernetes cluster.
When I get access to the container using an interactive bash session, I can see the folder that I've configured to mount from my local hard disk (/app/app_data/ on the pod), but it's still empty.
However, there are files and folders in the corresponding folder on my local hard disk (/home/logon/volumes/algovens/ids/app_data/)
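For reference, this is roughly how the (empty) mount point can be inspected; the pod name is a placeholder:

# List the mount path from outside the pod (or pass `-- bash` for an interactive shell)
$ kubectl exec identityserver-deployment-<pod-id> -- ls -la /app/app_data/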
My configuration files:
identityserver-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: identityserver-pv
  labels:
    name: identityserver-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /home/logon/volumes/algovens/ids/app_data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
identityserver-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: identityserver-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
identityserver-deployment.yaml
# Deployment for identityserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityserver-deployment
  labels:
    app: identityserver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identityserver
  template:
    metadata:
      labels:
        app: identityserver # pod label
    spec:
      volumes:
        - name: identityserver-pvc
          persistentVolumeClaim:
            claimName: identityserver-pvc
      containers:
        - name: identityserver-container
          image: localhost:5000/algovens-identityserver:v1.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: identityserver-pvc
              mountPath: "/app/app_data/"
          env:
            - name: ALGOVENS_CORS_ORIGINS
              valueFrom:
                configMapKeyRef:
                  name: identityserver-config
                  key: ALGOVENS_CORS_ORIGINS
---
# Service for identityserver
apiVersion: v1
kind: Service
metadata:
  name: identityserver-service
spec:
  type: NodePort # External service. default value is ClusterIP
  selector:
    app: identityserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30100
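For anyone checking the same setup, whether the claim actually bound to this PV can be seen with the following (a sketch; names are taken from the manifests above):

$ kubectl get pv identityserver-pv          # STATUS should be Bound
$ kubectl get pvc identityserver-pvc        # VOLUME should be identityserver-pv
$ kubectl describe pvc identityserver-pvc   # Events explain a Pending claim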
Hi, I'm trying to create a persistent volume for my MongoDB on Kubernetes on Google Cloud Platform, and I'm stuck in Pending.
Here is my manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: account-mongo-depl
  labels:
    app: account-mongo
spec:
  replicas: 1
  serviceName: 'account-mongo'
  selector:
    matchLabels:
      app: account-mongo
  template:
    metadata:
      labels:
        app: account-mongo
    spec:
      volumes:
        - name: account-mongo-storage
          persistentVolumeClaim:
            claimName: account-db-bs-claim
      containers:
        - name: account-mongo
          image: mongo
          volumeMounts:
            - mountPath: '/data/db'
              name: account-mongo-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: account-mongo-srv
spec:
  type: ClusterIP
  selector:
    app: account-mongo
  ports:
    - name: account-db
      protocol: TCP
      port: 27017
      targetPort: 27017
Here is my list of pods.
My pods are stuck in Pending until they fail.
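For reference, the reason a claim stays Pending is usually visible in its events; a sketch of checking that (the claim name comes from the manifest above):

$ kubectl describe pvc account-db-bs-claim   # see the Events section
$ kubectl get storageclass                   # does do-block-storage exist in this cluster?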
If you're on GKE, you should already have dynamic provisioning set up.
storageClassName: do-block-storage
accessModes:
  - ReadWriteOnce
"do-block-storage"? Were you running on Digital Ocean previously? You should be able remove the storageClassName line and use the default provisioner Google provides.
For example, here's a snippet from one of my own statefulsets
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      ## Specify storage class to use nfs (requires nfs provisioner. Leave unset for dynamic provisioning)
      #storageClassName: "nfs"
      resources:
        requests:
          storage: 10G
I do not specify a storage class here, so it uses the default one, which provisions a persistent disk on GCP:
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  2y245d
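Applied to the claim from the question, that advice would look roughly like this (a sketch; the name and size are taken from the original manifest):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  # No storageClassName: fall back to the cluster's default storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi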
I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume so that the actual data is retained in a local macOS directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db as if nothing were really set up in terms of persistent volumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g. a small Micronaut-based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
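For anyone debugging the same symptom, a sketch of checking what actually backs /data/db inside the running container (the pod name is a placeholder):

# Show which filesystem is mounted at /data/db in the MongoDB container
$ kubectl -n fo exec fo-persons-mongodb-<pod-id> -- df -h /data/db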
Alright! I got it working. It seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume:
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: Not setting the node affinity, and using hostPath instead of local.path, both in the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
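With those two changes in place, a quick way to confirm that the claim now binds to the intended local volume (a sketch):

$ kubectl get pv fo-persons-mongodb-volume    # CLAIM should be fo/persons-mongodb-pvc
$ kubectl -n fo get pvc persons-mongodb-pvc   # STORAGECLASS should be local-storage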
When I try to create a PersistentVolume on Okteto Cloud with the following definition:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
I get the following error:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "postgres-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"postgres" "type":"local"] "name":"postgres-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"5Gi"] "hostPath":map["path":"/mnt/data"]]]}
from server for: "deploy/k8s.postgres.yml": persistentvolumes "postgres-pv" is forbidden: User "system:serviceaccount:okteto:07e6fdbf-55c2-4642-81e3-051e8309000f" cannot get resource "persistentvolumes" in API group "" at the cluster scope
However, according to the Okteto Cloud docs, PersistentVolumes seem to be authorized.
How would I create one there?
For context, I'm trying to reproduce a simple Postgres deployment (no replication, no backups).
Here's my complete deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
    - name: postgres
      port: 5432
  selector:
    app: postgres
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
PersistentVolumes are a cluster-wide resource, and they are not allowed.
The docs are wrong, thanks for pointing it out.
You can instead create PersistentVolumeClaims using the default storage class (and remove the PersistentVolume manifest):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
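The Deployment above can keep referencing claimName: postgres-pv-claim unchanged, since the claim keeps its name; once applied, Okteto's default storage class provisions the volume. A sketch of verifying that:

$ kubectl apply -f deploy/k8s.postgres.yml
$ kubectl get pvc postgres-pv-claim   # STATUS should go from Pending to Bound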
Hope it helps :-)
I want to create an Elasticsearch StatefulSet in Kubernetes on VirtualBox. I'm not using a cloud provider, so I create two persistent volumes locally for the two replicas of my StatefulSet:
pv0:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-elk-0
  namespace: elk
  labels:
    type: local
spec:
  storageClassName: gp2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/pv0"
pv1:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-elk-1
  namespace: elk
  labels:
    type: local
spec:
  storageClassName: gp2
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/pv1"
StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: elk
  labels:
    k8s-app: elasticsearch-logging
    version: v5.6.2
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v5.6.2
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v5.6.2
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
        - image: gcr.io/google-containers/elasticsearch:v5.6.2
          name: elasticsearch-logging
          ports:
            - containerPort: 9200
              name: db
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          resources:
            limits:
              cpu: 0.1
          volumeMounts:
            - name: elasticsearch-logging
              mountPath: /data
          env:
            - name: "NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      initContainers:
        - image: alpine:3.6
          command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
          name: elasticsearch-logging-init
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-logging
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 5Gi
It seems like the persistent volumes are correctly bound, but the pods are always in a crash loop and restart every time. Is it because of the use of the initContainer, or is something wrong with my YAML?
Add more RAM and scale up the cluster.
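A sketch of what that might look like in the StatefulSet above: give the Elasticsearch container explicit memory requests/limits (and a little more CPU) instead of only the 0.1 CPU limit. The values below are placeholders and depend on what the VirtualBox nodes can actually spare:

resources:
  requests:
    cpu: "500m"
    memory: 2Gi
  limits:
    cpu: "1"
    memory: 2Gi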