GKE Kubernetes Persistent Volume

I'm trying to use a persistent volume for my RethinkDB server, but I get this error:
Unable to mount volumes for pod "rethinkdb-server-deployment-6866f5b459-25fjb_default(efd90244-7d02-11e8-bffa-42010a8400b9)": timeout expired waiting for volumes to attach/mount for pod "default"/"rethinkdb-server-deployment-
Multi-Attach error for volume "pvc-f115c85e-7c42-11e8-bffa-42010a8400b9" Volume is already used by pod(s) rethinkdb-server-deployment-58f68c8464-4hn9x
I think Kubernetes deploys a new pod without removing the old one, so the volume can't be shared between the two because my PVC is ReadWriteOnce. The persistent volume must be created automatically, so I can't manually create a persistent disk, format it, and so on.
My configuration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: rethinkdb-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  namespace: default
  labels:
    db: rethinkdb
    role: admin
  name: rethinkdb-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      name: rethinkdb-server-pod
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          name: admin-port
        - containerPort: 28015
          name: driver-port
        - containerPort: 29015
          name: cluster-port
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
      volumes:
      - name: rethinkdb-storage
        persistentVolumeClaim:
          claimName: rethinkdb-pvc
How do you manage this?

I see that you've attached the PersistentVolumeClaim to a Deployment, and that you are also scaling the node pool.
A PersistentVolumeClaim will work with a Deployment, but only as long as you are not scaling the Deployment. That is why this error message shows up: the volume is still in use by the existing pod when a new pod is created to replace it.
Because the Deployment is being scaled, the other replicas try to mount and use the same volume.
Solution: deploy the PersistentVolumeClaim in a StatefulSet, not a Deployment. Instructions on how to deploy a StatefulSet can be found in the Kubernetes documentation. With a StatefulSet you can attach a PersistentVolumeClaim to each pod and still scale the node pool.
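A minimal sketch of what that could look like here, assuming a headless Service named rethinkdb-server exists and reusing the image reference from the question; volumeClaimTemplates gives each replica its own PVC, so the Multi-Attach error cannot occur:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rethinkdb-server
  namespace: default
spec:
  serviceName: rethinkdb-server   # headless Service assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        ports:
        - containerPort: 28015
          name: driver-port
        volumeMounts:
        - name: rethinkdb-storage
          mountPath: /data/rethinkdb_data
  volumeClaimTemplates:           # one PVC per replica, e.g. rethinkdb-storage-rethinkdb-server-0
  - metadata:
      name: rethinkdb-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 30Gi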

Related

Having trouble deploying database to kubernetes cluster

I am able to deploy the database service itself, but when I try to deploy it with a persistent volume claim as well, the deployment silently fails. Below is the deployment.yaml file I am using. The service deploys fine if I remove the first 14 lines that define the persistent volume claim.
apiVersion: apps/v1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: timescale-pvc-1
  namespace: my-namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timescale
spec:
  selector:
    matchLabels:
      app: timescale
  replicas: 1
  template:
    metadata:
      labels:
        app: timescale
    spec:
      containers:
      - name: timescale
        image: timescale/timescaledb:2.3.0-pg11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          value: "password"
        - name: POSTGRES_DB
          value: "metrics"
      volumes:
      - name: timescaledb-pv
        persistentVolumeClaim:
          claimName: timescale-pvc-1
Consider a StatefulSet for running stateful apps like databases; a Deployment is preferred for stateless services.
You are using the following storage class in the PVC:
storageClassName: standard
Ensure the storage class supports dynamic storage provisioning.
Are you creating a PV along with the PVC and Deployment? A Deployment, StatefulSet, or Pod can only use a PVC if there is a PV available.
If you're creating the PV as well, then there's a possibility of a different issue. Please share the logs of your Deployment and PVC.
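A few kubectl commands that may help narrow this down (object names are taken from the manifests above; note the Deployment has no namespace set, so it lands in the default namespace):
# Does the "standard" storage class exist and provision volumes dynamically?
kubectl get storageclass standard

# Is the PVC Bound or Pending, and what do its events say?
kubectl describe pvc timescale-pvc-1 -n my-namespace

# What are the Deployment and its pods complaining about?
kubectl describe deployment timescale
kubectl get events --sort-by=.metadata.creationTimestamp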

CreateContainerError while creating postgresql in k8s

I'm trying to run a PostgreSQL database in Kubernetes. There are no errors when creating all the objects from the file, but the deployment's pod can't create its container.
Here is my YAML:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: adminpassword
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10.18
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim
Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: postgres
After running:
kubectl create -f filename
I get:
configmap/postgres-config created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
deployment.apps/postgres created
service/postgres created
But when I run:
kubectl get pods
there is an error:
postgres-78496cc865-85kt7 0/1 CreateContainerError 0 13m
Here are the PV and PVC; there is no more space in the question to add them as code :)
If you describe the pod, you'll see a warning message like this:
Warning FailedScheduling 45s (x2 over 45s) default-scheduler persistentvolumeclaim "postgres-pv-claim" not found
On a high level, a database instance can run within a Kubernetes container. The instance stores its data in files, and those files live on a persistent volume that the pod obtains through a PersistentVolumeClaim, so a PersistentVolumeClaim must be created and made available to the PostgreSQL instance. To create the database instance as a container, you use a Deployment. To provide an access interface that is independent of the particular container, you create a Service; the Service remains unchanged even if a container (or pod) is moved to a different node.
In your case, create a PVC resource and bind it to a PV so the pod can use it; since the pod currently cannot find the claim, it stays in the Pending state. This can be achieved in multiple ways; for example, you can use hostPath as local storage. With the PV and PVC in place, the pod starts:
$ k get pods
NAME READY STATUS RESTARTS AGE
postgres-795cfcd67b-khfgn 1/1 Running 0 18s
Sample PV and PVC configs are below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nautilus
spec:
  storageClassName: manual
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/home/mohan"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
You can check the Persistent Volumes documentation for more details. Also, read more about storage classes and StatefulSets for deploying database applications in a Kubernetes cluster.
Thanks to all who tried to help me! The problem was in PersistentVolume.spec.hostPath.path: there was an invalid character in the path. I had tried to use "./path".
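For reference, hostPath requires an absolute path; a relative value like "./path" fails API validation. A corrected sketch of the PV, where the directory /mnt/data is just an example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data   # must be an absolute path, not "./path"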

Kubernetes - For Scale, pod is pending when attached the persistent volumes while scaling the pod (GKE)

I have created a deployment in the xyz-namespace namespace; it has a PVC. I can create the deployment and access it, and it works properly, but when I scale the deployment from the Kubernetes console, the new pod stays in the Pending state.
persistent_claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
  namespace: xyz-namespace
and the deployment object is like below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-service
  labels:
    k8s-app: db-service
    Name: db-service
    ServiceName: db-service
spec:
  selector:
    matchLabels:
      tier: data
      Name: db-service
      ServiceName: db-service
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkins
        tier: data
        Name: db-service
        ServiceName: db-service
    spec:
      hostname: jenkins
      initContainers:
      - command:
        - "/bin/sh"
        - "-c"
        - chown -R 1000:1000 /var/jenkins_home
        image: busybox
        imagePullPolicy: Always
        name: jenkins-init
        volumeMounts:
        - name: jenkinsvol
          mountPath: "/var/jenkins_home"
      containers:
      - image: jenkins/jenkins:lts
        name: jenkins
        ports:
        - containerPort: 8080
          name: jenkins1
        - containerPort: 8080
          name: jenkins2
        volumeMounts:
        - name: jenkinsvol
          mountPath: "/var/jenkins_home"
      volumes:
      - name: jenkinsvol
        persistentVolumeClaim:
          claimName: jenkins
      nodeSelector:
        nodegroup: xyz-testing
  namespace: xyz-namespace
  replicas: 1
The deployment is created fine and works as well, but when I try to scale it from the console, the new pod gets stuck in the Pending state.
If I remove the persistent volume and then scale, it works fine, but with the persistent volume it does not.
When using the standard storage class, I assume you are using the default GCEPersistentDisk volume plugin. In that case you cannot choose the access modes yourself, as they are determined by the storage provider (GCP in your case, since you are using GCE persistent disks); these disks only support the ReadWriteOnce (RWO) and ReadOnlyMany (ROX) access modes. If you try to create a ReadWriteMany (RWX) PV, it will never reach a ready state (your case, if the PVC were set with accessModes: ReadWriteMany).
Also, if any pod tries to attach a ReadWriteOnce volume on some other node, you'll get the following error:
FailedMount Failed to attach volume "pv0001" on node "xyz" with: googleapi: Error 400: The disk resource 'abc' is already being used by 'xyz'
The references quoted above are from this article.
As mentioned here and here, NFS is the easiest way to get ReadWriteMany, since all nodes need to be able to read and write to the storage device you are using for your pods.
I would therefore suggest using an NFS storage option. If you want to test it, there is a good guide by Google using its Filestore solution, which provides fully managed NFS file servers.
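A rough sketch of what an NFS-backed ReadWriteMany volume could look like once a Filestore (or any NFS) export is available; the server IP 10.0.0.2 and export path /vol1 are placeholders, and the claim name jenkins matches the one used by the deployment above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.2   # placeholder: Filestore instance IP
    path: /vol1        # placeholder: Filestore file share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: xyz-namespace
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""   # bind to the pre-created PV instead of dynamic provisioning
  resources:
    requests:
      storage: 5Gi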
Your PersistentVolumeClaim is set to:
accessModes:
- ReadWriteOnce
But it should be set to:
accessModes:
- ReadWriteMany
The ReadWriteOnce access mode means that
the volume can be mounted as read-write by a single node [1].
When you scale your deployment, the new pod is most likely scheduled onto a different node, therefore you need ReadWriteMany.
[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/

MongoDB not persistent in kubernetes

I'm trying to configure a MongoDB deployment in Kubernetes.
My mongo deployment file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
      - name: panel-admin-mongo-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: panel-admin-mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: panel-admin-mongo-storage
          mountPath: /data/db
Mongo service file:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
  - port: 27017
    targetPort: 27017
And my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
When I enter the mongodb container using:
kubectl exec -it panel-admin-mongo-deployment-6dcfc5b8c7-mk8d5 sh
and save some user emails and passwords in a collection (e.g. users), everything works fine. But when I shut down the pod and the container inside it, then boot it up again, the data is gone. Shouldn't the data be independent of the pod's life cycle? And if yes, what am I missing?
I am not a k8s expert, but your problem is that you are not using Kubernetes StatefulSets. Have a look here:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
AFAIK, for any persistent deployment you need to create your pods with StatefulSets.
First, make sure that your pod is actually using the PVC:
kubectl describe po/${POD_NAME}
and check the Volumes section:
Volumes:
  prometheus-operator-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-operator-db-0
    ReadOnly:   false
If the pod is using the PVC successfully, check the reclaim policy of your PV; the value of persistentVolumeReclaimPolicy should be Retain.
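To check and, if needed, change the reclaim policy (the PV name pvc-xyz below is a placeholder; use the name shown by kubectl get pv):
# Show PVs with their reclaim policy and which PVC they are bound to
kubectl get pv

# Switch a PV's reclaim policy from Delete to Retain
kubectl patch pv pvc-xyz -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'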

Minio data does not persist through reboot

I deployed Minio on Kubernetes on an Ubuntu desktop. It works fine, except that whenever I reboot the machine, everything that was stored in Minio mysteriously disappears (if I create several buckets with files in them, I come back to a completely blank slate after the reboot: the buckets, and all their files, are completely gone).
When I set up Minio, I created a persistent volume in Kubernetes which mounts to a folder (/mnt/minio/minio; I have a 4 TB HDD mounted at /mnt/minio with a folder named minio inside it). I noticed that this folder seems to be empty even when I store things in Minio, so perhaps Minio is ignoring the persistent volume and using the container storage? However, I don't know why this would be happening; I have both a PV and a PV claim, and kubectl shows that they are bound to each other.
Below are the yaml files I applied to deploy my minio installation:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/minio/minio"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 99Gi
---
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage # must match the volume name, above
          mountPath: "/mnt/minio/minio"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: minio
You need to point Minio's data directory at the path where the volume is mounted inside the container (/mnt/minio/minio/), for example:
args:
- server
- /mnt/minio/minio/storage
But consider deploying with a StatefulSet instead, so that when your pod restarts it retains the data of the previous pod.
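Alternatively, keep the server argument as /storage and mount the PVC there instead; a sketch of the relevant part of the Deployment (only the paths change, the volume and claim stay the same):
containers:
- name: minio
  image: minio/minio:latest
  args:
  - server
  - /storage               # Minio writes its data here
  volumeMounts:
  - name: storage          # the PVC-backed volume defined in the pod spec
    mountPath: /storage    # must match the path passed to `server`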