Kubernetes persistent volume not allowing read or write

I'm following this tutorial to create a Jenkins server on my Kubernetes cluster.
I've got a volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1000Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/t1/kubernetes/vol/"
and a volume claim
---
# PersistentVolumeClaim for Jenkins
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: ns-jenkins # PVs are not scoped to a namespace, but a PVC is
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
If I navigate to the mounted location inside Jenkins and run touch test, I get touch: cannot touch 'test': Permission denied
User looks right.
$ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
I've even gone so far as to give the host folder 777 permissions, with no luck. What's going on?
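A common workaround for hostPath permission problems like this one, not from the original thread, is to chown the mounted directory from an init container that runs as root before Jenkins starts. A minimal sketch; the volume name and mount path are illustrative and would need to match the Jenkins pod spec:

initContainers:
  - name: fix-permissions            # hypothetical init container added to the Jenkins pod
    image: busybox:1.36
    command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]  # 1000:1000 = the jenkins uid/gid shown by `id` above
    securityContext:
      runAsUser: 0                   # run as root so the chown can succeed
    volumeMounts:
      - name: jenkins-data           # illustrative volume name bound to the PVC
        mountPath: /var/jenkins_home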

Related

Cloned drive setup as one PV (or two mirroring PVs)

I'm running k3s clusters in two different locations. I currently have a PV in one of these locations, and I'm trying to develop a configuration that can be read as one PV but clones/mirrors that drive to the other location, all through k3s PVs and PVCs. Any clever ideas on how to achieve this?
My PV and PVC look like this:
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-data-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/dev-disk-by-uuid-********-****-****-****-************/kubernetes"
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi

MongoDB Community Kubernetes Operator and Custom Persistent Volumes

I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube.
I followed the instructions on the official GitHub, so:
Install the CRD
Install the necessary roles and role-bindings
Install the Operator
Deploy the replica set
By default, the operator creates three pods, each automatically linked to a new persistent volume claim bound to a new persistent volume, also created by the operator (so far so good).
However, I would like the data to be saved in a specific volume, mounted at a specific host path. So I would need to create three persistent volumes, each mounted at a specific host path, and then configure the replica set so that each pod connects to its respective persistent volume (perhaps using the matchLabels selector).
So I created three volumes by applying the following file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-00
  namespace: $NAMESPACE
  labels:
    type: local
    service: mongo
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongodata/00"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-01
  namespace: $NAMESPACE
  labels:
    type: local
    service: mongo
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongodata/01"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-02
  namespace: $NAMESPACE
  labels:
    type: local
    service: mongo
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongodata/02"
and then I set up the replica set configuration file in the following way, but it still fails to connect the pods to the volumes:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo-rs
  namespace: $NAMESPACE
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"
  persistent: true
  podSpec:
    persistence:
      single:
        labelSelector:
          matchLabels:
            type: local
            service: mongo
        storage: 5Gi
        storageClass: manual
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: [ "ReadWriteOnce", "ReadWriteMany" ]
            resources:
              requests:
                storage: 5Gi
            selector:
              matchLabels:
                type: local
                service: mongo
            storageClassName: manual
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - ...
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
I can't find any documentation online, except for mongodb.com_v1_custom_volume_cr.yaml. Has anyone faced this problem before? How could I make it work?
I think you might be interested in using the local volume type. It works like this:
First, you create a storage class for the local volumes. Something like the following:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Since it uses no-provisioner, it is usable only if you manually create local PVs. WaitForFirstConsumer, in turn, prevents binding a PV to the PVC of a Pod that cannot be scheduled on the host where the PV is available.
Second, you create the local PVs. Similarly to how you created them in your example, something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /path/on/the/host
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - the-node-hostname-on-which-the-storage-is-located
Notice the definition: it gives the path on the host and the capacity, and then it specifies (with nodeAffinity) on which node of the cluster the PV can be used. It also links the PV to the storage class we created earlier, so that if someone (a claim template) requests storage of that class, it will now find this PV.
You can create 3 PVs on 3 different nodes, or 3 PVs on the same node at different paths; organize things as you wish.
Third, you can now use the local-storage class in a claim template. The claim template could look like this:
volumeClaimTemplates:
  - metadata:
      name: the-name-of-the-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 5Gi
And each Pod of the StatefulSet will try to be scheduled on a node with a local-storage PV available.
Remember that with local storage or, in general, with volumes tied to host paths, you may want to spread the Pods of your app across different nodes, so that the app can survive the failure of a single node on its own.
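One hedged sketch of such spreading, not from the original answer, is soft pod anti-affinity on the StatefulSet's pod template; the app label here is illustrative and would have to match the labels actually set on the pods:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: mongo-rs          # illustrative label; must match the pods' labels
          topologyKey: kubernetes.io/hostname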
In case you want to decide which Pod links to which volume, the easiest way is to create one PV at a time, then wait for the Pod to bind to it before creating the next one. It's not optimal, but it's the easiest way.
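For example, a minimal sketch of that one-PV-at-a-time workflow, assuming files named pv-00.yaml, pv-01.yaml and the usual StatefulSet claim naming pattern <template>-<statefulset>-<ordinal>; note that kubectl wait --for=jsonpath requires kubectl 1.23 or newer:

# apply the first PV, then block until the first replica's claim binds
kubectl apply -f pv-00.yaml
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/data-volume-mongo-rs-0 --timeout=120s
# repeat with pv-01.yaml and data-volume-mongo-rs-1, and so on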

pod has unbound immediate persistentvolumeclaims after deleting namespace

I have configured a Postgres pod with static provisioning of a persistent volume in my local environment. It works fine the first time, but when I delete the namespace and
rerun the pod, its status is Pending and it gives me the error
pod has unbound immediate persistentvolumeclaims
I tried removing the storageClassName from the PersistentVolumeClaim, but that didn't work.
I also tried changing the storage class from manual to block storage, but the problem is the same.
My YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  namespace: manhattan
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/manhattan/current/pgdata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  namespace: manhattan
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  namespace: manhattan
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: dbr-postgres
      image: postgres-custome
      tty: true
      volumeMounts:
        - mountPath: "/var/lib/pgsql/9.3/data"
          name: task-pv-storage
  nodeSelector:
    kubernetes.io/hostname: k8s-master
I want my pod to keep running even when I delete the namespace and re-apply the pod.yaml file.
Data will be kept on the Kubernetes node, because hostPath uses the node's filesystem to store the data. The problem is that if you have multiple nodes, your pod can start on any other node. To solve this, you can either specify the node on which you want your pod to start or set up NFS or GlusterFS on your Kubernetes nodes. This might be the cause of your problem.
There is one more thing I can think of that might be your issue. When you remove a namespace, all the Kubernetes resources inside it are removed as well, and there is no easy way to recover them. This means that you have to recreate the pv, pvc and pod in the new namespace.
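One more detail, not in the original answer: the PV itself is cluster-scoped and survives the namespace deletion, but with the default Retain reclaim policy it moves to the Released phase and keeps a reference to the deleted claim, so a new PVC cannot bind to it. A hedged sketch of making it Available again by hand:

# after deleting the namespace, the PV typically shows STATUS=Released
kubectl get pv task-pv-volume
# clear the stale claim reference so the PV can bind to a fresh claim
kubectl patch pv task-pv-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'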
I solved this issue by setting persistentVolumeReclaimPolicy to Recycle. Now I can re-bind the persistent volume even after deleting the namespace and recreating it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/manhattan/current/pgdata"

Getting an error when trying to create a persistent volume

I am trying to create a persistent volume on my Kubernetes cluster running on an Amazon AWS EC2 instance (Ubuntu 18.04). I'm getting an error from kubectl when trying to create it.
I've tried looking up the error but I'm not getting any satisfactory search results.
Here is the pv.yaml file that I'm using.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
    storageClassName: manual
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
Here's the error that I am getting:
Error from server (BadRequest): error when creating "./mysql-pv.yaml":
PersistentVolume in version "v1" cannot be handled as a
PersistentVolume: v1.PersistentVolume.Spec:
v1.PersistentVolumeSpec.PersistentVolumeSource: HostPath: Capacity:
unmarshalerDecoder: quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte
of ...|":"manual"},"hostPat|..., bigger context ...|city":
{"storage":"1Gi","storageClassName":"manual"},"hostPath":
{"path":"/home/ubuntu/data/pv001"},"p|...
I cannot figure out from the message what the actual error is.
Any help appreciated.
Remove the storage class from the PV definition; a storage class is needed for dynamic provisioning of PVs.
In your case, you are using hostPath volumes; it should work without a storage class.
If you are on k8s 1.14, then look at local volumes; refer to the link below:
https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
I don't think it's related to having quotes in the path. It's more about using the right indentation for storageClassName. It should be this instead:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
You can remove it too, and it will use the default StorageClass
Try this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
storageClassName belongs under spec, at the same level as capacity (you had put storageClassName under capacity, which is wrong).
Read more: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

PersistentVolumeClaim in a namespace does not connect to a PersistentVolume

My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.
I have this PersistentVolume in monitoring-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
After I have done
kubectl apply -f monitoring-pv.yaml
I can check that it exists with kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   5m
My PersistentVolumeClaim in monitoring-pvc.yaml looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
When I do kubectl apply -f monitoring-pvc.yaml it gets created.
I can look at my new PersistentVolumeClaim with kubectl get pvc -n monitoring and I see
NAME               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
monitoring-claim   Pending                                      manual         31s
When I look at my PersistentVolume with kubectl get pv I can see that it's still available:
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
monitoring-volume   50Gi       RWO            Retain           Available                                   16m
I had expected the PersistentVolume to be Bound, but it isn't. When I use a PersistentVolumeClaim with the same name as this one, a new PersistentVolumeClaim is created that is written to /tmp and is therefore not very persistent.
When I do the same operations without a namespace for my PersistentVolumeClaim everything seems to work.
I'm on minikube on Ubuntu 18.04.
What do I need to change to be able to connect the volume with the claim?
When I reviewed my question and compared it to a working solution, I noticed that I had missed storageClassName, which was set to manual in a namespace-less example that I had gotten to work.
My updated PersistentVolume now looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
The only difference is
storageClassName: manual
My preliminary finding is that this was the silly mistake I had made.
PersistentVolume and PersistentVolumeClaim should be in the same namespace. You need to add namespace: monitoring. Now you can try the code below.
For the PersistentVolume:
monitoring-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
  namespace: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
For the PersistentVolumeClaim:
monitoring-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring