When configuring a local PV, I noticed there is no option to explicitly create the PV directory on the underlying node if it does not already exist. Unlike a hostPath PV, which provides the DirectoryOrCreate option (as shown in the doc here), a local PV requires you to first create the directory manually and give it the correct ownership and permissions.
I am trying to use a local PV with a directory as a mounted local storage device with this code:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  namespace: jenkins
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: "/var/jenkins_home"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
However, in order for the PV to be configured correctly, I first need to ssh into the worker-1 node and run the following commands:
sudo mkdir -p /var/jenkins_home
sudo chown -R 1000:1000 /var/jenkins_home
sudo chmod -R 755 /var/jenkins_home
Would it be possible to create the directory and give it the correct ownership and permissions when the PV configuration is applied?
No, you cannot skip this step.
However, you can write a script that runs in the background and executes the commands that create the directory for the local volume and assign the right ownership and permissions to it. Remember to give your script execute permission.
For example, create a script with the proper commands, store it in a ConfigMap, and run it from a workload that mounts that ConfigMap; the same set of manifests can also configure the PV.
See example: configmap-volume, configmap-scripts.
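A minimal sketch of that approach could look like the following (all names are illustrative; the DaemonSet mounts the node's /var via hostPath and runs as root, so adjust it to your security requirements):
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-dir-setup   # illustrative name
  namespace: jenkins
data:
  setup.sh: |
    #!/bin/sh
    # create the local PV directory on the host mount and fix ownership/permissions
    mkdir -p /host/var/jenkins_home
    chown -R 1000:1000 /host/var/jenkins_home
    chmod -R 755 /host/var/jenkins_home
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: jenkins-dir-setup
  namespace: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins-dir-setup
  template:
    metadata:
      labels:
        app: jenkins-dir-setup
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-1
      containers:
      - name: setup
        image: busybox
        # run the setup script once, then keep the pod alive
        command: ["sh", "-c", "sh /scripts/setup.sh && tail -f /dev/null"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: host-var
          mountPath: /host/var
      volumes:
      - name: scripts
        configMap:
          name: jenkins-dir-setup
      - name: host-var
        hostPath:
          path: /var
          type: Directory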
I'm just starting this trip into cloud native and Kubernetes (minikube for now), but I'm stuck because I cannot pass files into pod containers and persist them there.
I have Nginx, php-fpm and MariaDB containers. Now I just need to test the app in Kubernetes (docker-compose is running OK), doing the same things I was doing with docker-compose.
How can I mount volumes in this scenario?
Volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/docker/volumes/sylius-standard-mysql-sylius-dev-data/_data/sylius
Claim File:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Thank you for the guidance...
It depends on which Minikube driver you're using. Check out https://minikube.sigs.k8s.io/docs/handbook/mount/ for a full overview, but basically you have to make sure the host folder is shared with the guest VM, then hostPath volumes will work as normal. You may want to try Docker Desktop instead as it somewhat streamlines this process.
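For example (the paths here are illustrative), with a VM-based driver you could share a host folder into the minikube VM with:
minikube mount /home/me/mysql-data:/mnt/mysql-data
and then point the PV's hostPath at /mnt/mysql-data instead of the Docker volume path.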
Found on the minikube GitHub repo, in the feature request "mount host volumes into docker driver":
Currently we are mounting the /var directory as a docker volume, so
that's the current workaround.
i.e. use this host directory, for getting things into the container ?
See e.g. docker volume inspect minikube for the details on it
So you may want to try using the /var directory as a workaround.
If the previous solution doesn't meet your expectations and you still want to use Docker as your minikube driver: don't, because (as far as I know) you can't use extra mounts with the Docker driver. Use a VM driver instead.
Another option: if you don't like the idea of using a VM, use kind (Kubernetes in Docker).
kind supports extra mounts. To configure it, check the kind documentation on extra mounts:
Extra Mounts
Extra mounts can be used to pass through storage on the host to a kind node for persisting data, mounting through code etc.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /path/to/my/files/
    containerPath: /files
And the rest is like you described: You need to create a PV with the same hostPath as specified by containerPath.
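For example, a PV matching the extraMounts above might look like this sketch (the name, size and storage class are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kind-files-pv
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /files # must match containerPath from the kind config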
You could also use minikube without any driver by specifying --driver=none with minikube start, which can be useful in some cases, but check the minikube docs for more information.
I'm trying to set up RabbitMQ on Minikube using the RabbitMQ Cluster Operator:
When I try to attach a persistent volume, I get the following error:
$ kubectl logs -f rabbitmq-rabbitmq-server-0
Configuring logger redirection
20:04:40.081 [warning] Failed to write PID file "/var/lib/rabbitmq/mnesia/rabbit#rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default.pid": permission denied
20:04:40.264 [error] Failed to create Ra data directory at '/var/lib/rabbitmq/mnesia/rabbit#rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default/quorum/rabbit#rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default', file system operation error: enoent
20:04:40.265 [error] Supervisor ra_sup had child ra_system_sup started with ra_system_sup:start_link() at undefined exit with reason {error,"Ra could not create its data directory. See the log for details."} in context start_error
20:04:40.266 [error] CRASH REPORT Process <0.247.0> with 0 neighbours exited with reason: {error,"Ra could not create its data directory. See the log for details."} in ra_system_sup:init/1 line 43
20:04:40.267 [error] CRASH REPORT Process <0.241.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,ra_system_sup,{error,"Ra could not create its data directory. See the log for details."}}},{ra_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,ra,{{shutdown,{failed_to_start_child,ra_system_sup,{error,\"Ra could not create its data directory. See the log for details.\"}}},{ra_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,ra,{{shutdown,{failed_to_start_child,ra_system_sup,{error,"Ra could not create its data directory. See the log for details."}
Crash dump is being written to: erl_crash.dump...
The issue is that RabbitMQ is not able to set up its data files in the data directory /var/lib/rabbitmq/mnesia due to a lack of permission.
My initial guess was that I needed to specify the data directory as a volumeMount, but that doesn't seem to be configurable according to the documentation.
RabbitMQ's troubleshooting documentation on persistence results in a 404.
I tried to find other resources online with the same problem but none of them were using the RabbitMQ Cluster Operator. I plan on following that route if I'm not able to find a solution but I really would like to solve this issue somehow.
Does anyone have any ideas?
The setup that I have is as follows:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  replicas: 1
  service:
    type: NodePort
  persistence:
    storageClassName: local-storage
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  hostPath:
    path: /mnt/app/rabbitmq
    type: DirectoryOrCreate
To reproduce this issue on minikube:
Install the rabbitmq operator:
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
Apply the manifest file above:
kubectl apply -f rabbitmq.yml
Running kubectl get po displays a pod named rabbitmq-rabbitmq-server-0.
Running kubectl logs -f rabbitmq-rabbitmq-server-0 to view the logs displays the above error.
As I already suggested in the comments, you can solve it by running:
minikube ssh -- sudo chmod g+w /mnt/app/rabbitmq/
Answering to your question:
Is there a way I can add that to my manifest file rather than having to do it manually?
You can override the RabbitMQ StatefulSet manifest fields to change the last line of the initContainer command script from chgrp 999 /var/lib/rabbitmq/mnesia/ to chown 999:999 /var/lib/rabbitmq/mnesia/.
The complete RabbitmqCluster YAML manifest then looks like the following:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  replicas: 1
  service:
    type: NodePort
  persistence:
    storageClassName: local-storage
    storage: 20Gi
  override:
    statefulSet:
      spec:
        template:
          spec:
            containers: []
            initContainers:
            - name: setup-container
              command:
              - sh
              - -c
              - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && chown 999:999 /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf ;
                cp /tmp/rabbitmq/advanced.config /etc/rabbitmq/advanced.config && chown 999:999 /etc/rabbitmq/advanced.config ;
                cp /tmp/rabbitmq/rabbitmq-env.conf /etc/rabbitmq/rabbitmq-env.conf && chown 999:999 /etc/rabbitmq/rabbitmq-env.conf ;
                cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chown 999:999 /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ;
                cp /tmp/rabbitmq-plugins/enabled_plugins /etc/rabbitmq/enabled_plugins && chown 999:999 /etc/rabbitmq/enabled_plugins ;
                chown 999:999 /var/lib/rabbitmq/mnesia/ # <- CHANGED THIS LINE (was chgrp 999 /var/lib/rabbitmq/mnesia/)
I had the same issue when deploying RabbitMQ in kubernetes inside Vagrant (not minikube though). I was using this setup.
I tried running sudo chmod g+w /mnt/app/rabbitmq/ but had no luck...
Eventually I gave up and ended up running minikube inside Vagrant using this box, and everything worked perfectly fine out of the box! I didn't have to do anything special, not even manually creating the persistent volume inside my nodes.
I had this issue in a live environment, and minikube does not allow running SSH commands there. So what I did was run chmod on my hostPath provisioner directory and recreate my rabbitmq cluster:
chmod 777 /tmp/hostpath-provisioner/default/*
I found the answer to this issue. It happens when there are several nodes in the cluster.
The solution is to add securityContext: {} as described here:
https://github.com/rabbitmq/rabbitmq-website/blob/3ee8e72a7c4fe52e323ba1039eecbf3a67c554f7/site/kubernetes/operator/using-on-openshift.md#arbitrary-user-ids
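Based on that page, the empty securityContext goes into the statefulSet override of the RabbitmqCluster resource; a sketch (the cluster name and the rest of the spec are illustrative):
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
spec:
  override:
    statefulSet:
      spec:
        template:
          spec:
            containers: []
            securityContext: {} # removes the operator's default pod security context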
I am working on deploying the Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted at /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
      volumeMounts:
        - mountPath: "/data"
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine, and I was able to mount it inside minikube. I don't understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot is not present in test-shell:/data.
What am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully or not.
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
For example, in my case: $ showmount -e 172.16.10.161
Export list for 172.16.10.161:
/opt/share *
Use the $ df -hT command to see whether NFS is mounted or not; in my case it gives the output 172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, then use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, then check whether the firewall is allowing NFS or not:
$ sudo ufw status
If it is not, then allow it using the command:
$ sudo ufw allow from nfs-server-ip to any port nfs
I made the same setup and I don't face any issues; my Fabric cluster on k8s is running successfully. The Hyperledger Fabric k8s YAML files can be found at my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric as a dynamic multi-host blockchain network, which means you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the go in an existing running blockchain network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example, this default StorageClass behavior can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME PROVISIONER
standard (default) k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
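If an extra class ended up marked as default, the "Change the default StorageClass" task linked above removes the flag with an annotation patch, for example:
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'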
In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused a new PV to be created (without a StorageClass) and a new PVC to be created (with a reference to the existing default StorageClass). In this situation there is no reference between your custom PV and PVC, so no binding takes place. As an example, please take a look:
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs 3Gi RWX Retain Available 50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX Delete Bound default/pvc-nfs standard 50m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-nfs Bound pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX standard 50m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 103m
This example will not work due to:
a new persistentvolume/nfs has been created (without a reference to any PVC)
a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using the default StorageClass. In the CLAIM column we can see that this PV was created by dynamic provisioning using the default StorageClass, with a reference to the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
If you are able to mount this NFS share from inside the minikube host, i.e. you have mounted the NFS share on your minikube node, please try to use this example with a hostPath volume directly from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
    - name: pv
      hostPath:
        path: /path/shares # path to nfs mount point on minikube node
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "sleep 1000"]
      volumeMounts:
        - name: pv
          mountPath: /data
Solution 2.
If you are using the PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass:
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
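In practice this means the NFS client utilities must be installed on every node that may mount the volume; on Debian/Ubuntu nodes (an assumption about your node OS) that is typically:
sudo apt-get install -y nfs-common # provides /sbin/mount.nfs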
Please keep in mind that when you create a PVC, the Kubernetes persistent volume controller tries to bind the PVC to a suitable PV. During this process different factors are taken into account, such as storageClassName (default/custom), accessModes, claimRef and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to inspect the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events
I am testing a connection between a persistent volume and a Kubernetes pod by running busybox, but I am getting "can't open" and "no such file or directory" errors. In order to do further testing, I tried running
echo ls /mntpoint/filename
This is obviously not the correct command. I have tried a few other iterations - too many to list here.
I want to run ls of the mountpoint and print to the console. How do I do this?
EDIT
My code was closest to Rohit's suggestion (below), so I made the following edits, but the code still does not work. Please help.
Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
name: data
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: "/mnt/data"
storageClassName: test
Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: test
Pod
apiVersion: v1
kind: Pod
metadata:
  name: persistent-volume
spec:
  containers:
    - name: busybox
      command: ['tail', '-f', '/dev/null']
      image: busybox
      volumeMounts:
        - name: data
          mountPath: "/data"
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data
EDIT 2
So, after taking a day off, I came back to my (still running) pod and the command (ls) worked. It works as expected on any directory (e.g. "ls /" or "ls /data").
My current interpretation is that I did not wait long enough before running the command - although that does not seem to explain it since I had been monitoring with "kubectl describe pod ." Also I have run the same test several times with short latency between the "apply" and "exec" commands and the behavior has been consistent today.
I am going to keep playing with this, but I think the current problem has been solved. Thank you!
You cannot directly access the volume mount on the pod without creating a claim. You are missing some steps here:
Create a PersistentVolume. I think you have done this part.
Create a PersistentVolumeClaim. This will bind your claim to your volume.
Attach the PersistentVolumeClaim to the pod.
Once these steps are done, you can access the files from your volume in the pod.
Go through the link for detailed information and steps.
Steps you need to follow when dealing with volumes and Kubernetes resources:
Create a Persistent volume.
Create a Persistent volume claim and make sure the state is bound.
Once the PV and PVC are bound, try to use the PV from a pod/deployment through the PVC.
Check the logs of the pod/deployment. You might see the entry of command execution.
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
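As a quick check once the pod from the manifests above is Running, you can list the mount point from outside the container, e.g. (pod name and path taken from your manifests):
kubectl exec persistent-volume -- ls /data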
Hope this helps; please try to elaborate more and paste the logs of the above-mentioned steps.
I am trying to mount a local directory into a component deployed in kubeflow using ksonnet prototype.
There is no way to mount a local directory into a Kubernetes pod (after all kubeflow and ksonnet just create pods and other Kubernetes resources).
If you want your files to be available in Kubernetes, I can think of two options:
Create a custom docker image, copying the folder you want, and push it to a registry. Kubeflow has parameters to customize the images to be deployed.
Use NFS. That way you could mount the NFS volume in your local and also in the pods. To do that you should modify the ksonnet code, since in the last stable version it is not implemented.
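For the NFS option, the volume part of the component's pod spec could look roughly like this sketch (the server address, export path and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: kubeflow-component # illustrative name
spec:
  containers:
  - name: app
    image: my-registry/my-component:latest # illustrative image
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    nfs:
      server: 10.0.0.10 # illustrative NFS server
      path: /exported/path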
If you provide more information about which component you are trying to deploy and which cloud provider you're using, I can help you more.
If by local directory you mean a local directory on the node, then it is possible to mount a directory on the node's filesystem inside a pod using the hostPath or local volume features.
A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
For example:
# hostPath volume example
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
# local volume example (beta in v1.10)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
GlusterFS is also available as a Volume or as a PersistentVolume (access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany).
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be “handed off” between Pods. GlusterFS can be mounted by multiple writers simultaneously.
See the GlusterFS example for more details.
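A minimal sketch of a Pod using a glusterfs volume (the Endpoints object glusterfs-cluster and the Gluster volume kube_vol are illustrative and must already exist in your cluster):
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: glusterfsvol
      mountPath: /mnt/glusterfs
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster # Endpoints object listing the Gluster nodes
      path: kube_vol # GlusterFS volume name
      readOnly: true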