I am trying to configure hostPath as a volume in Kubernetes. I have logged into the VM server from which I usually run Kubernetes commands such as kubectl.
Below is the pod yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: helloworldanilhostpath
spec:
replicas: 1
template:
metadata:
labels:
run: helloworldanilhostpath
spec:
volumes:
- name: task-pv-storage
hostPath:
path: /home/openapianil/samplePV
type: Directory
containers:
- name: helloworldv1
image: ***/helloworldv1:v1
ports:
- containerPort: 9123
volumeMounts:
- name: task-pv-storage
mountPath: /mnt/sample
On the VM server, I have created the "/home/openapianil/samplePV" folder, and it contains a sample.txt file.
Once I try to create this deployment, it fails with this error:
Warning FailedMount 28s (x7 over 59s) kubelet, aks-nodepool1-39499429-1 MountVolume.SetUp failed for volume "task-pv-storage" : hostPath type check failed: /home/openapianil/samplePV is not a directory.
Can anyone please help me understand the problem here?
hostPath type volumes refer to directories on the Node (VM/machine) where your Pod is scheduled for running (aks-nodepool1-39499429-1 in this case). So you'd need to create this directory at least on that Node.
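For example, a minimal sketch of preparing the directory on the node (assuming you can SSH into it; the user name is just a placeholder for whatever your node actually uses):
ssh azureuser@aks-nodepool1-39499429-1
sudo mkdir -p /home/openapianil/samplePV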
To make sure your Pod is consistently scheduled on that specific Node you need to set spec.nodeSelector in the PodTemplate:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: helloworldanilhostpath
spec:
replicas: 1
template:
metadata:
labels:
run: helloworldanilhostpath
spec:
nodeSelector:
kubernetes.io/hostname: aks-nodepool1-39499429-1
volumes:
- name: task-pv-storage
hostPath:
path: /home/openapianil/samplePV
type: Directory
containers:
- name: helloworldv1
image: ***/helloworldv1:v1
ports:
- containerPort: 9123
volumeMounts:
- name: task-pv-storage
mountPath: /mnt/sample
In most cases it's a bad idea to use this type of volume; there are some special use cases, but chances are yours is not one of them!
If you need local storage for some reason, a slightly better solution is to use local PersistentVolumes.
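For reference, a minimal local PersistentVolume sketch (the name, storage class and node name are placeholders; local volumes require the nodeAffinity block, and the matching PVC would need the same storageClassName):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/openapianil/samplePV
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - aks-nodepool1-39499429-1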
I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with UID 1000.
On my first try, I used hostPath as the storage back end. With that, I had to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change the host directory's mode/ownership each time I deploy this deployment.
Here is my manifest
apiVersion: apps/v1
kind: Deployment
metadata:
name: lh-directus
namespace: lh-directus
spec:
replicas: 1
selector:
matchLabels:
app: lh-directus
template:
metadata:
labels:
app: lh-directus
spec:
nodeSelector:
kubernetes.io/os: linux
isGeneralDeployment: "true"
volumes:
- name: lh-directus-uploads-volume
persistentVolumeClaim:
claimName: lh-directus-uploads-pvc
- name: lh-directus-dbdata-volume
persistentVolumeClaim:
claimName: lh-directus-dbdata-pvc
containers:
# Redis Cache
- name: redis
image: redis:6
# Database
- name: database
image: postgres:12
volumeMounts:
- name: lh-directus-dbdata-volume
mountPath: /var/lib/postgresql/data
# Directus
- name: directus
image: directus/directus:latest
securityContext:
fsGroup: 1000
volumeMounts:
- name: lh-directus-uploads-volume
mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I have read about initContainers ....
But kindly tell me how to fix this problem without an initContainer and without manually setting/changing the host path's ownership/mode.
Sincerely
-bino-
Below is my Kubernetes file, and I need to do two things:
need to mount a folder with a file
need to mount a file with a startup script
I have both in my local /tmp/zoo folder, but the zoo folder's files never appear in /bitnami/zookeeper inside the pod.
Below are the updated Service, Deployment, PVC and PV.
kubernetes.yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.service.type: nodeport
creationTimestamp: null
labels:
io.kompose.service: zookeeper
name: zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
io.kompose.service: zookeeper
type: NodePort
status:
loadBalancer: {}
- apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.service.type: nodeport
creationTimestamp: null
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: zookeeper
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
resources: {}
volumeMounts:
- mountPath: /bitnami/zoo
name: bitnamidockerzookeeper-zookeeper-data
restartPolicy: Always
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
#hostPath:
#path: /tmp/tmp1
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: bitnamidockerzookeeper-zookeeper-data
type: local
name: bitnamidockerzookeeper-zookeeper-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: v1
kind: PersistentVolume
metadata:
name: foo
spec:
storageClassName: manual
claimRef:
name: bitnamidockerzookeeper-zookeeper-data
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
hostPath:
path: /tmp/tmp1
status: {}
kind: List
metadata: {}
A Service cannot be assigned a volume. In line 4 of your YAML, you specify "Service" when it should be "Pod", and every resource in Kubernetes must have a name, which you can add under metadata. That should fix the simple problem.
apiVersion: v1
items:
- apiVersion: v1
kind: Pod #POD
metadata:
name: my-pod #A RESOURCE NEEDS A NAME
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
resources: {}
volumeMounts:
- mountPath: /bitnami/zookeeper
name: bitnamidockerzookeeper-zookeeper-data
restartPolicy: Always
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
status: {}
Now, I don't know what you're using, but hostPath works exclusively on a local cluster like Minikube. In production things change drastically. If everything is local, you need to have the directory "/tmp/zoo" on the node; NOTE: not on your local PC but inside the node. For example, if you use Minikube, you run minikube ssh to enter the node and create "/tmp/zoo" there. An excellent guide to this is given in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
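For example, on Minikube that could look roughly like this (a sketch, not verified on your setup):
minikube ssh
sudo mkdir -p /tmp/zoo
exit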
There are a few potential issues in your YAML.
First, the accessModes of the PersistentVolume doesn't match the one of the PersistentVolumeClaim. One way to fix that is to list both ReadWriteMany and ReadWriteOnce in the accessModes of the PersistentVolume.
Then, the PersistentVolume doesn't specify a storageClassName. As a result, if you have a StorageClass configured to be the default StorageClass on your cluster (you can see that with kubectl get sc), it will automatically provision a PersistentVolume dynamically instead of using the PersistentVolume that you declared. So you need to specify a storageClassName. The StorageClass doesn't have to exist for real (since we're using static provisioning instead of dynamic anyway).
Next, the claimRef in PersistentVolume needs to mention the Namespace of the PersistentVolumeClaim. As a reminder: PersistentVolumes are cluster resources, so they don't have a Namespace; but PersistentVolumeClaims belong to the same Namespace as the Pod that mounts them.
Another thing is that the path used by Zookeeper data in the bitnami image is /bitnami/zookeeper, not /bitnami/zoo.
You will also need to initialize permissions in that volume, because by default, only root will have write access, and Zookeeper runs as non-root here, and won't have write access to the data subdirectory.
Here is an updated YAML that addresses all these points. I also rewrote the YAML to use the YAML multi-document syntax (resources separated by ---) instead of the kind: List syntax, and I removed a lot of fields that weren't used (like the empty status: fields and the labels that weren't strictly necessary). It works on my KinD cluster, I hope it will also work in your situation.
If your cluster has only one node, this will work fine, but if you have multiple nodes, you might need to tweak things a little bit to make sure that the volume is bound to a specific node (I added a commented out nodeAffinity section in the YAML, but you might also have to change the bind mode - I only have a one-node cluster to test it out right now; but the Kubernetes documentation and blog have abundant details on this; https://stackoverflow.com/a/69517576/580281 also has details about this binding mode thing).
One last thing: in this scenario, I think it might make more sense to use a StatefulSet. It would not make a huge difference but would more clearly indicate intent (Zookeeper is a stateful service) and in the general case (beyond local hostPath volumes) it would avoid having two Zookeeper Pods accessing the volume simultaneously.
apiVersion: v1
kind: Service
metadata:
name: zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
io.kompose.service: zookeeper
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: zookeeper
template:
metadata:
labels:
io.kompose.service: zookeeper
spec:
initContainers:
- image: alpine
name: chmod
volumeMounts:
- mountPath: /bitnami/zookeeper
name: bitnamidockerzookeeper-zookeeper-data
command: [ sh, -c, "chmod 777 /bitnami/zookeeper" ]
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
volumeMounts:
- mountPath: /bitnami/zookeeper
name: bitnamidockerzookeeper-zookeeper-data
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: bitnamidockerzookeeper-zookeeper-data
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: tmp-tmp1
spec:
storageClassName: manual
claimRef:
name: bitnamidockerzookeeper-zookeeper-data
namespace: default
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
- ReadWriteOnce
hostPath:
path: /tmp/tmp1
#nodeAffinity:
# required:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/hostname
# operator: In
# values:
# - kind-control-plane
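As mentioned above, a StatefulSet would express the intent better; a rough sketch that simply reuses the same PersistentVolumeClaim could look like this (untested; the chmod init container from the Deployment above would still be needed, and in the general case you would use volumeClaimTemplates instead of a fixed claim):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      containers:
        - image: bitnami/zookeeper:3
          name: zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
          volumeMounts:
            - mountPath: /bitnami/zookeeper
              name: bitnamidockerzookeeper-zookeeper-data
      volumes:
        - name: bitnamidockerzookeeper-zookeeper-data
          persistentVolumeClaim:
            claimName: bitnamidockerzookeeper-zookeeper-data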
I'm a little confused. If you want to use a file path on the node as a volume for the pod, you should do it like this:
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
but you need to make sure your pod will be scheduled on the same node that has the file path.
tl;dr: How do we mount an existing directory in a pod onto a PV so that the data it generates is persisted?
We are running K8s 1.16.7 at the moment, with Azure Disk and Azure File integration. We have an image that contains some directories we would like to have stored on a PV for persistence. In Docker, this could easily be handled, since the container would write the data to a host mount. Does anyone know how to solve this issue in Kubernetes? When we do this now, the container boots, but the directory (for example /etc/nginx/conf.d/ mounted onto a PV) is empty and therefore the pod crashes.
Example:
In the container below, /usr/src/app is filled with the hello-world application. After deployment of the file below, the container crashes because it cannot find anything in /usr/src/app (the directory is empty due to the PV mount).
---
apiVersion: v1
kind: Namespace
metadata:
name: testwebsite
labels:
environment: development
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: normal
namespace: testwebsite
provisioner: disk.csi.azure.com
parameters:
storageaccounttype: Standard_LRS
kind: Managed
resourceGroup: resourcegroup
cachingmode: None
mountOptions:
- dir_mode=0777
- file_mode=0777
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-azurefile
namespace: testwebsite
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: normal
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes
namespace: testwebsite
spec:
replicas: 1
selector:
matchLabels:
app: hello-kubernetes
template:
metadata:
labels:
app: hello-kubernetes
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.8
ports:
- containerPort: 8080
volumeMounts:
- name: azurefile01
mountPath: "/usr/src/app"
volumes:
- name: azurefile01
persistentVolumeClaim:
claimName: pvc-azurefile
Goal: Have the data that's in /usr/src/app within the container written to the PV.
Thx in advance!
As far as I understand your requirement, each time your Pod is created you want its /usr/src/app to contain both the data generated so far by your app and stored permanently in the PersistentVolume, as well as the original content of /usr/src/app that is an integral part of your paulbouwer/hello-kubernetes:1.8 image.
You can achieve this in Kubernetes by using an init container, which copies the original content of the /usr/src/app directory to the PersistentVolume during Pod startup; the volume may already contain data previously generated by your app. After such volume initialization, the main container will mount the PersistentVolume containing both the data previously generated by your app (if any) and the original content of the /usr/src/app directory from your image.
Your Deployment may look as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes
namespace: testwebsite
spec:
replicas: 1
selector:
matchLabels:
app: hello-kubernetes
template:
metadata:
labels:
app: hello-kubernetes
spec:
initContainers:
- name: init-hello-kubernetes
image: paulbouwer/hello-kubernetes:1.8
command: ['sh', '-c', 'cp -a /usr/src/app/* /mnt/pv-content/']
volumeMounts:
- name: azurefile01
mountPath: "/mnt/pv-content"
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.8
ports:
- containerPort: 8080
volumeMounts:
- name: azurefile01
mountPath: "/usr/src/app"
volumes:
- name: azurefile01
persistentVolumeClaim:
claimName: pvc-azurefile
In order to get the original data from /usr/src/app/ of the paulbouwer/hello-kubernetes:1.8 image, your init container must be also based on that image.
One caveat: paulbouwer/hello-kubernetes:1.8 image must contain cp binary to be able to perform the operation.
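A quick way to check that, as a sketch (assuming Docker is available locally):
docker run --rm --entrypoint sh paulbouwer/hello-kubernetes:1.8 -c 'command -v cp'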
As you can see, it's not a very "elegant" solution. Well, in fact it isn't. And that's why it is not recommended to mount your PersistentVolume under a directory that already contains important files required by your app to run properly. But there is no way to mount a volume under a certain mount point and preserve its original content at the same time. It simply doesn't work this way in Linux or other *nix-based systems. You either mount the whole volume or you don't mount it at all and preserve the original content of the specific directory. The original content isn't even overwritten; it's still there. It simply remains unavailable while this specific path is used as a mount point for a different volume.
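You can see this behaviour on any Linux box with a small experiment (a sketch; the paths are arbitrary and the mount needs root):
mkdir -p /tmp/demo && touch /tmp/demo/original.txt
sudo mount -t tmpfs tmpfs /tmp/demo
ls /tmp/demo     # empty: original.txt is hidden, not deleted
sudo umount /tmp/demo
ls /tmp/demo     # original.txt is visible again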
Below is my deployment YAML. After deployment, I can access the pod
and I can see the mountPath "/usr/share/nginx/html", but I cannot find
"/work-dir", which should have been created by the initContainer.
Could someone explain the reason to me?
Thanks and Rgds
apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
# These containers are run during pod initialization
initContainers:
- name: install
image: busybox
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://kubernetes.io
volumeMounts:
- name: workdir
mountPath: "/work-dir"
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
The volume is mounted at "/work-dir" by the init container, and that "/work-dir" location exists only in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is "gone" with it. The application (nginx) container mounts the same volume, too (albeit at a different location), providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you may think that you could just mount the path /, which would allow you to share all folders underneath. However, a mountPath does not work for /.
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (pvc and deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-fs-pvc
namespace: default
labels:
mojix.service: default-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: default
name: shared-fs
labels:
mojix.service: shared-fs
spec:
replicas: 1
selector:
matchLabels:
mojix.service: shared-fs
template:
metadata:
creationTimestamp: null
labels:
mojix.service: shared-fs
spec:
terminationGracePeriodSeconds: 3
containers:
- name: nginx-c
image: nginx:latest
volumeMounts:
- name: shared-fs-volume
mountPath: /var/www/static/
- name: alpine-c
image: alpine:latest
command: ["/bin/sleep", "10000s"]
lifecycle:
postStart:
exec:
command: ["/bin/mkdir", "-p", "/work-dir"]
volumeMounts:
- name: shared-fs-volume
mountPath: /work-dir/
volumes:
- name: shared-fs-volume
persistentVolumeClaim:
claimName: shared-fs-pvc
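To verify that the two containers really see the same data, something like this should work (a sketch; kubectl exec deploy/<name> picks one pod of the Deployment):
kubectl exec deploy/shared-fs -c alpine-c -- touch /work-dir/hello.txt
kubectl exec deploy/shared-fs -c nginx-c -- ls /var/www/static/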
I'm trying to use a persistent volume for my RethinkDB server, but I get this error:
Unable to mount volumes for pod "rethinkdb-server-deployment-6866f5b459-25fjb_default(efd90244-7d02-11e8-bffa-42010a8400b9)": timeout expired waiting for volumes to attach/mount for pod "default"/"rethinkdb-server-deployment-
Multi-Attach error for volume "pvc-f115c85e-7c42-11e8-bffa-42010a8400b9" Volume is already used by pod(s) rethinkdb-server-deployment-58f68c8464-4hn9x
I think that Kubernetes deploys a new pod without removing the old one, so the volume can't be shared between both because my PVC is ReadWriteOnce. This persistent volume must be created automatically, so I can't use a pre-provisioned persistent disk, format it, etc.
My configuration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: default
name: rethinkdb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
namespace: default
labels:
db: rethinkdb
role: admin
name: rethinkdb-server-deployment
spec:
replicas: 1
selector:
matchLabels:
app: rethinkdb-server
template:
metadata:
name: rethinkdb-server-pod
labels:
app: rethinkdb-server
spec:
containers:
- name: rethinkdb-server
image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 8080
name: admin-port
- containerPort: 28015
name: driver-port
- containerPort: 29015
name: cluster-port
volumeMounts:
- mountPath: /data/rethinkdb_data
name: rethinkdb-storage
volumes:
- name: rethinkdb-storage
persistentVolumeClaim:
claimName: rethinkdb-pvc
How do you manage this?
I see that you’ve added the PersistentVolumeClaim within a deployment. I also see that you are trying to scale the node pool.
A PersistentVolumeClaim will work with a Deployment, but only if you are not scaling the Deployment. This is why that error message showed up. The error you are seeing says that the volume is already in use by an existing pod when a new replica pod is created.
Because you are trying to scale the deployment, other replicas will try to mount and use the same volume.
Solution: Deploy the PersistentVolumeClaim in a statefulset object, not a deployment. Instructions on how to deploy a statefulset can be found in this article. With a statefulset, you will be able to attach a PersistentVolumeClaim to a pod, then scale the node pool.
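A rough sketch of that approach, adapted from your manifest (untested; it assumes a headless Service named rethinkdb-server exists, and the container env/ports are trimmed for brevity):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rethinkdb-server
  namespace: default
spec:
  serviceName: rethinkdb-server
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      labels:
        app: rethinkdb-server
    spec:
      containers:
        - name: rethinkdb-server
          image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
          volumeMounts:
            - mountPath: /data/rethinkdb_data
              name: rethinkdb-storage
  volumeClaimTemplates:
    - metadata:
        name: rethinkdb-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi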