I have a PVC like below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
It is provisioned by the ceph-csi plugin. As you can see, it will be presented as a raw block device in a Pod. The Pod definition is like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: nginx
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
I can find it at /dev/xvda inside the container.
But when I try to mount it on a directory with mount /dev/xvda /mnt, it fails with the following error:
mount: /mnt: cannot mount /dev/xvda read-only.
Could anyone tell me what the reason is?
When you set volumeMode: Block in a PVC, you get a raw block device: /dev/xvda is a block device just like your Linux hard disk. You can't mount a raw block device directly, because it is not formatted and has no filesystem on it.
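If you really do want to keep the raw block device, one option is to create a filesystem on it yourself before mounting; a minimal sketch, assuming the container runs with enough privileges and has mkfs.ext4 available (this is not part of the original question):
# WARNING: mkfs destroys any data already on the device
$ mkfs.ext4 /dev/xvda      # create an ext4 filesystem on the raw block device
$ mount /dev/xvda /mnt     # the mount now succeeds because a filesystem exists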
If you want to attach the storage to a directory, you should claim the Filesystem volumeMode instead.
Here is a sample; for more detail you can visit https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/
$ cat <<EOF > pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f pvc.yaml
$ cat <<EOF > pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
EOF
$ kubectl apply -f pod.yaml
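Once the Pod is running, a quick way to confirm that the RBD-backed filesystem is mounted where expected (output omitted here):
$ kubectl exec csi-rbd-demo-pod -- df -h /var/lib/www/html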
Related
The following is my Kubernetes/OpenShift deployment configuration template, along with its persistent volume and persistent volume claim:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pythonApp
  creationTimestamp: null
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pythonApp
      creationTimestamp: null
    spec:
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "backend"
      containers:
        - name: backend
          imagePullPolicy: IfNotPresent
          image: <img-name>
          command: ["sh", "-c"]
          args: ['python manage.py runserver']
          resources: {}
          volumeMounts:
            - mountPath: /pythonApp/configs
              name: configs
      restartPolicy: Always
      volumes:
        - name: configs
          persistentVolumeClaim:
            claimName: "configs-volume"
status: {}
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "configs-volume"
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/k8sMount/configs
    server: <server-ip>
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "configs-volume-claim"
  creationTimestamp: null
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: "configs-volume"
After the deployment, when I exec inside the container (using oc exec or kubectl exec) and check the /pythonApp/configs folder, it is empty, when it is really supposed to contain some configuration files from the image used.
Is this issue caused by the fact that /pythonApp/configs is mounted to the persistent NFS volume path /mnt/k8sMount/configs, which is initially empty?
How could this be solved? (A workaround sketch follows the environment details below.)
Environment
Kubernetes version: 1.11
Openshift version: 3.11
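For context, a volume mounted over a path always hides whatever the image shipped at that path, so an initially empty NFS export will show up as an empty directory. A minimal sketch of one common workaround, seeding the volume from the image's own files with an init container; the /seed mount path and the emptiness check are assumptions for illustration, not part of the original template:
# Sketch: an initContainer added to the pod template spec that copies the
# image's default /pythonApp/configs into the (initially empty) NFS volume
# before the main container starts.
initContainers:
  - name: seed-configs
    image: <img-name>              # same image as the backend container
    command: ["sh", "-c"]
    # Mount the PVC at /seed so the image's /pythonApp/configs is still visible,
    # then copy the defaults into the volume only if it is empty.
    args: ['[ -z "$(ls -A /seed)" ] && cp -r /pythonApp/configs/. /seed/ || true']
    volumeMounts:
      - mountPath: /seed
        name: configs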
Is there any way to connect to a volume attached to a Kubernetes cluster and explore the content inside it?
First, create a PersistentVolumeClaim and a Pod to mount the volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: <the-volume-that-you-want-to-explore>
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-inspector
spec:
  containers:
    - name: foo
      command: ["sleep", "infinity"]
      image: ubuntu:latest
      volumeMounts:
        - mountPath: "/tmp/mount"
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim
        readOnly: true
Then exec into the Pod and start exploring
$ kubectl exec -it volume-inspector -- bash
root@volume-inspector:/# ls /tmp/mount
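If you are unsure which name to put in volumeName above, listing the cluster's PersistentVolumes first shows the available volumes and which claims they are currently bound to:
$ kubectl get pv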
I have mapped a Windows folder into my Linux machine with
mount -t cifs //AUTOCHECK/OneStopShopWebAPI -o user=someuser,password=Aa1234 /xml_shared
and the following command
df -hk
gives me
//AUTOCHECK/OneStopShopWebAPI 83372028 58363852 25008176 71% /xml_shared
After that I created a YAML file with
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-jenkins-slave
spec:
  storageClassName: jenkins-slave-data
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-jenkins-slave
  labels:
    type: jenkins-slave-data2
spec:
  storageClassName: jenkins-slave-data
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.100.109
    path: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
This does not seem to work when I create a new Pod:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
  labels:
    label: jenkins-slave
spec:
  containers:
    - name: node
      image: node
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/jenkins_slave_shared
          name: jenkins-slave-vol
  volumes:
    - name: jenkins-slave-vol
      persistentVolumeClaim:
        claimName: pvc-nfs-jenkins-slave
Do I need to change the NFS configuration? What is wrong with my logic?
Mounting the CIFS share on the Linux machine the way you did is correct, but you need to take a different approach to mount a CIFS volume under Kubernetes. Let me explain:
NFS and CIFS are different protocols, so a CIFS share cannot simply be declared as an nfs: volume in a PersistentVolume.
This site explains the whole process step by step: Github CIFS Kubernetes.
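For orientation, that guide relies on a FlexVolume driver for CIFS installed on each node; roughly, the Pod volume then looks like the sketch below. The driver name fstab/cifs, the secret name, and the option keys follow that project's documentation and should be double-checked against it:
volumes:
  - name: jenkins-slave-vol
    flexVolume:
      driver: "fstab/cifs"          # CIFS FlexVolume driver installed on each node
      fsType: "cifs"
      secretRef:
        name: cifs-secret           # Secret holding the CIFS username/password
      options:
        networkPath: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"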
I really don't understand this issue. In my pod.yaml I set the persistentVolumeClaim; I copied it from my last application declaration together with its PVC & PV.
I've checked that the files are in the right place!
In my Deployment file I've just set the port and the spec for the containers.
apiVersion: v1
kind: Pod
metadata:
  name: ds-mg-cas-pod
  namespace: ds-svc
spec:
  containers:
    - name: karaf
      image: docker-all.xxxx.net/library/ds-mg-cas:latest
      env:
        - name: JAVA_APP_CONFIGS
          value: "/apps/ds-cas-webapp/context"
        - name: JAVA_EXTRA_PARAMS
          value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
      volumeMounts:
        - name: ds-cas-config
          mountPath: "/apps/ds-cas-webapp/context"
  volumes:
    - name: ds-cas-config
      persistentVolumeClaim:
        claimName: ds-cas-pvc
The PersistentVolume & PersistentVolumeClaim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ds-cas-pv
  namespace: ds-svc
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 5Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/ds-cas-webapp/context"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ds-cas-pvc
  namespace: ds-svc
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Mi
The error I get when I run the Pod:
java.io.FileNotFoundException: ./config/truststore.jks (No such file or directory)
I ran the same image manually with Docker and I didn't have this error. My question is where I could have made a mistake, because I really don't see it :(
I set everything:
the mount points
the ports
the variables
The docker command that I used to run the container:
docker run --name ds-mg-cas-manually \
  -e JAVA_APP=/apps/ds-cas-webapp/cas.war \
  -e JAVA_APP_CONFIGS=/apps/ds-cas-webapp/context \
  -e JAVA_EXTRA_PARAMS="-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402" \
  -p 8443:8443 \
  -p 6402:640 \
  -d \
  -v /apps/ds-cas-webapp/context:/apps/ds-cas-webapp/context \
  docker-all.attanea.net/library/ds-mg-cas \
  /bin/sh -c
Your PersistentVolumeClaim is probably bound to the wrong PersistentVolume.
PersistentVolumes exist cluster-wide; only PersistentVolumeClaims are attached to a namespace:
$ kubectl api-resources
NAME                      SHORTNAMES   APIGROUP   NAMESPACED   KIND
persistentvolumeclaims    pvc                     true         PersistentVolumeClaim
persistentvolumes         pv                      false        PersistentVolume
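To confirm which PersistentVolume the claim actually bound to, you can inspect it directly; for the names used in the question above that would be something like:
$ kubectl get pvc ds-cas-pvc -n ds-svc          # the VOLUME column shows the bound PV
$ kubectl describe pvc ds-cas-pvc -n ds-svc     # the Volume: field and Events give more detail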
I want to use a local volume that is mounted on my node at the path /mnts/drive.
So I created a StorageClass (as shown in the documentation for the local StorageClass), and created a PVC and a simple Pod which uses that volume.
These are the configurations used:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysampleclaim
spec:
  storageClassName: local-fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysamplepod
spec:
  containers:
    - name: frontend
      image: nginx:1.13
      volumeMounts:
        - mountPath: "/var/www/html"
          name: myvolume
  volumes:
    - name: myvolume
      persistentVolumeClaim:
        claimName: mysampleclaim
When I try to create this YAML it gives me an error, and I don't know what I am missing:
Unable to mount volumes for pod "mysamplepod_default(169efc06-3141-11e8-8e58-02d4a61b9de4)": timeout expired list of unattached/unmounted volumes=[myvolume]
If you want to use a local volume that is mounted on the node at the /mnts/drive path, you just need to use a hostPath volume in your Pod:
A hostPath volume mounts a file or directory from the host node’s
filesystem into your pod.
The final pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: mysamplepod
spec:
  containers:
    - name: frontend
      image: nginx:1.13
      volumeMounts:
        - mountPath: "/var/www/html"
          name: myvolume
  volumes:
    - name: myvolume
      hostPath:
        # directory location on host
        path: /mnts/drive
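If you would rather keep the local-fast StorageClass and the PVC from the question, note that a kubernetes.io/no-provisioner StorageClass does not create volumes automatically, so a matching local PersistentVolume must also exist. A sketch of such a PV; the node name worker-1 is a placeholder for your node's actual kubernetes.io/hostname label:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-drive
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-fast
  local:
    path: /mnts/drive               # must already exist on the node
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1          # placeholder node name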