I tried creating a persistent volume using a host path. I could bind it to a specific node using node affinity, but I didn't provide that. My persistent volume YAML looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: fast
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/data
After this I created the PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
And finally I attached it to the pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: thinkingmonster/nettools
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Now the describe command for the PV or PVC does not tell me on which node the volume's /mnt/data directory was actually created, and I had to SSH into all the nodes to locate it.
The pod, however, is smart enough to be scheduled only on the node where Kubernetes mapped the host directory to the PV.
How can I find out on which node Kubernetes has created the persistent volume, without SSHing into the nodes or checking where the pod is running?
It's only when a volume is bound to a claim that it's associated with a particular node. HostPath volumes are a bit different than the regular sort, making it a little less clear. When you get the volume claim, the annotations on it should give you a bunch of information, including what you're looking for. In particular, look for the:
volume.kubernetes.io/selected-node: ${NODE_NAME}
annotation on the PVC. You can see the annotations, along with the other computed configuration, by asking the Kubernetes api server for that info:
kubectl get pvc -o yaml -n ${NAMESPACE} ${PVC_NAME}
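When present, the relevant part of the output looks something like this (the node name here is purely illustrative):
metadata:
  name: myclaim
  annotations:
    volume.kubernetes.io/selected-node: worker-node-1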
I have a homelab:
Windows host and VMware Workstation
1 master node
3 worker nodes
All nodes have the Windows drive mounted and available at /external.
I want to run multiple tools like Jenkins, Nexus, Nessus, etc. and want to use persistent volumes on the external drive, so that even if I create new EKS clusters the volumes stay there forever and I can reuse them.
So I want to know what's the best way to use it.
Can I create a single hostPath PV and then have each pod claim, for example, 20GB from it?
Or do I have to create a PV for each pod with hostPath and then claim it in the pod?
So is there a 1:1 relationship between PV and PVC? Or can one PV have multiple claims in different folders?
Also, if I recreate the cluster and create a PV from the same hostPath, will my data still be there?
You can use a local volume instead of hostPath to experiment with SC/PV/PVC. First, you create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then you provision a PersistentVolume on each node; here's an example for one node:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv-1
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: shared
  local:
    path: <path to the shared folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your node name>
And the claim that allows you to mount the provisioned volume in a pod:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pv-1
spec:
  storageClassName: shared
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
  accessModes:
    - ReadWriteOnce
Here's an example pod that mounts the volume and writes to it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1
spec:
  restartPolicy: Never
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pv-1
  containers:
    - name: busybox-1
      image: busybox
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: shared
          mountPath: /data
      command: ["ash","-c","while :; do echo \"$(date)\tmessage from busybox-1.\" >> /data/message.txt; sleep 1; done"]
For local volumes, the data written requires manual cleanup and deletion by default. That is a positive side effect in your case, since you want the content to persist. If you'd like to go further and experiment with a CSI-like local volume, you can use the Local Persistence Volume Static Provisioner.
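Note that because the StorageClass uses WaitForFirstConsumer, the claim stays Pending until a pod that uses it is scheduled. Once the example pod above is running, you can verify the binding and the node the volume is pinned to (the names match the examples above):
kubectl get pvc shared-pv-1
kubectl get pv shared-pv-1 -o jsonpath='{.spec.nodeAffinity}'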
I created a deployment YAML for a microservice.
I am using the hostPath volume type for the PersistentVolume, and I have to copy data to a path on the host. But I want to mount a directory from the container into the host, because the data is in the container and I need this data on the host.
My deployment yaml:
#create persistent volume
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/storage/app
#create persistent volume claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
#create Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      deploy: app
  template:
    metadata:
      labels:
        deploy: app
    spec:
      hostname: app
      hostNetwork: false
      containers:
        - name: app
          image: 192.168.10.10:2021/project/app:latest
          volumeMounts:
            - mountPath: /opt/app
              name: project-volume
      volumes:
        - name: project-volume
          persistentVolumeClaim:
            claimName: app-pv-claim
Due to information gaps, I am writing a general answer.
First of all, you should know:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
But the use of hostPath also offers a powerful escape hatch for some applications.
If you still want to use it, you should first check whether both pods (the one that created the data and the second one that wants to access the data) are on the same node. The following command will show you that:
kubectl get pods -o wide
All data created by either pod will stay in the hostPath directory and be available to every pod, as long as they are running on the same node.
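As an illustration only (a minimal sketch, not your deployment), a hostPath volume scoped to a single directory and mounted read-only, as the best practice above recommends, could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: data-reader              # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: host-data
          mountPath: /data
          readOnly: true         # mounted read-only, per the best practice above
  volumes:
    - name: host-data
      hostPath:
        path: /opt/storage/app   # scoped to only the required directory
        type: Directory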
See also this documentation about hostPath.
I have a Java API which exports data to an Excel file and generates that file on the pod where the request is served.
Now the next request (to download the file) might go to a different pod, and the download fails.
How do I get around this?
How do I generate the file on all the pods? Or how do I make sure the subsequent request goes to the same pod where the file was generated?
I can't give the direct pod URL, as it will not be accessible to clients.
Thanks.
You need to use a persistent volume to share the same files between your containers. You could use the node storage mounted on containers (the easiest way) or another distributed file system like NFS, EFS (AWS), GlusterFS, etc.
If you need the simplest way to share the file and your pods run on the same node, you can use hostPath to store the file and share the volume with other containers.
Assuming you have a Kubernetes cluster with only one node, and you want to share the path /mnt/data of your node with your pods:
Create a PersistentVolume:
A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Create a PersistentVolumeClaim:
Pods use PersistentVolumeClaims to request physical storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Look at the PersistentVolumeClaim:
kubectl get pvc task-pv-claim
The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.
NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
Create a deployment with 2 replicas for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/mnt/data"
              name: task-pv-storage
Now you can check that inside both containers the path /mnt/data holds the same files.
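For example, you can list the mounted path in each replica; the pod names below are placeholders, take the real ones from the first command:
kubectl get pods -l app=nginx
kubectl exec <first-nginx-pod> -- ls /mnt/data
kubectl exec <second-nginx-pod> -- ls /mnt/data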
If you have a cluster with more than one node, I recommend you consider the other types of persistent volumes.
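For instance, a minimal sketch of an NFS-backed PersistentVolume might look like this (the server address and export path are placeholders for your own NFS setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.10.100   # placeholder NFS server address
    path: /exports/shared    # placeholder exported directory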
References:
Configure persistent volumes
Persistent volumes
Volume Types
I have configured a Postgres pod with static provisioning of a persistent volume in my local environment. It works fine the first time, but when I delete the namespace and rerun the pod, its status is Pending and it gives me the error:
pod has unbound immediate persistentvolumeclaims
I tried removing the storageClassName from the PersistentVolumeClaim, but that didn't work.
I also tried changing the storage class from manual to block storage, but I get the same problem.
My YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  namespace: manhattan
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/manhattan/current/pgdata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  namespace: manhattan
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  namespace: manhattan
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: dbr-postgres
      image: postgres-custome
      tty: true
      volumeMounts:
        - mountPath: "/var/lib/pgsql/9.3/data"
          name: task-pv-storage
  nodeSelector:
    kubernetes.io/hostname: k8s-master
I want my pod to be running again after I delete the namespace and rerun the pod.yaml file.
Data will be kept on the Kubernetes node, because hostPath uses the node's filesystem to store the data. The problem is that if you have multiple nodes, your pod can start on any other node. To solve this, you can either specify the node where you want your pod to start or set up NFS or GlusterFS on your Kubernetes nodes. This might be the cause of your problem.
There is one more thing I can think of that might be your issue. When you remove a namespace, all the Kubernetes resources inside it are removed as well, and there is no easy way to recover them. This means that you have to recreate the PVC and the pod in the new namespace; the PV itself is cluster-scoped and survives, but it stays bound to the deleted claim.
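A quick way to confirm this is to inspect the PersistentVolume after deleting the namespace; if its STATUS shows Released, it still references the old claim and will not bind to a new one:
kubectl get pv task-pv-volume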
I solved this issue by setting persistentVolumeReclaimPolicy to Recycle. Now I can rebind the persistent volume even after deleting the namespace and recreating it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/manhattan/current/pgdata"
I'm starting with GKE (and kubernetes in general) and I want to mount a persistent volume on a pod using a gcePersistentDisk.
I first created a Persistent Disk (project-data) in Compute Engine, then created a PersistentVolume and a PersistentVolumeClaim like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: project-data
spec:
  storageClassName: standard
  capacity:
    storage: 20G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: project-data
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: project-data-claim
spec:
  storageClassName: standard
  volumeName: project-data
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20G
  selector:
    matchLabels:
      app: myapp
After applying this config, I see in GKE/Storage that my PVC is "Bound", but I can't find a way to access my volume in myapp.
I tried to edit the deployment YAML in the console by adding:
volumeMounts:
  - mountPath: /data
    name: project-data
...but this modification is refused by the console (it seems that this kind of edit is forbidden).
How can I finally see my PersistentVolume as a filesystem in my app?
First of all, the PVC should be referenced in the volumes section:
volumes:
  - name: project-data
    persistentVolumeClaim:
      claimName: project-data-claim
And if editing the pod directly is refused, you can edit the YAML file and then apply it:
$ kubectl apply -f your.yaml
Also, since you have a selector defined in your PVC configuration, you should have a matching label defined in your PV configuration.
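For example (a sketch built from your own manifest), the PV with a label matching that selector:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: project-data
  labels:
    app: myapp            # matches the selector in the PVC
spec:
  storageClassName: standard
  capacity:
    storage: 20G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: project-data
    fsType: ext4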