Explore Azure Kubernetes Volume Content

Is there any way to connect to a volume attached to a Kubernetes cluster and explore the content inside it?

First, create a PersistentVolumeClaim and a Pod to mount the volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: <the-volume-that-you-want-to-explore>
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-inspector
spec:
  containers:
    - name: foo
      command: ["sleep","infinity"]
      image: ubuntu:latest
      volumeMounts:
        - mountPath: "/tmp/mount"
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim
        readOnly: true
Then exec into the Pod and start exploring:
$ kubectl exec -it volume-inspector -- bash
root@volume-inspector:/# ls /tmp/mount
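If you are not sure which PV name to put into volumeName, you can list the existing volumes and claims first (assuming you have read access to the cluster):
$ kubectl get pv
$ kubectl get pvc --all-namespaces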

Related

Single PV mount on all members of Replica Set for a K8s deployment

I would like to understand if there is a way to mount a single PV that I manually created with ReadWriteMany to all members of the ReplicaSet, using either a PodSpec or a StatefulSet.
Yes, you can, provided your PV supports ReadWriteMany.
Here is an example YAML.
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      run: busybox
  template:
    metadata:
      labels:
        run: busybox
    spec:
      containers:
        - args:
            - sh
          image: busybox
          name: busybox
          stdin: true
          tty: true
          volumeMounts:
            - name: pvc
              mountPath: "/mnt"
      restartPolicy: Always
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: test-claim
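To confirm that the volume is really shared, you can write a file from one replica and read it from the other. The pod names below are placeholders; use the ones reported by kubectl get pods:
$ kubectl get pods -l run=busybox
$ kubectl exec <busybox-pod-1> -- sh -c 'echo hello > /mnt/test.txt'
$ kubectl exec <busybox-pod-2> -- cat /mnt/test.txt
hello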

How to mount a ceph-csi provisioned block-volume?

I have a PVC like below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
It is provisioned by the ceph-csi plugin. As you can see, it will appear as a raw block device in a Pod. The Pod definition is like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: nginx
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
I can find it at /dev/xvda.
But when I try to mount it on a directory with mount /dev/xvda /mnt, it fails and shows the following:
mount: /mnt: cannot mount /dev/xvda read-only.
Could anyone tell me what the reason is?
When you claim volumeMode: Block in the PVC, you get a raw block device: /dev/xvda is a block device, just like a Linux hard disk. You can't mount a raw block device directly, because it is not formatted and has no filesystem on it.
If you want to attach the storage to a directory, you could claim the Filesystem volumeMode instead.
Here is a sample; for more detail, see https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/
$ cat <<EOF > pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f pvc.yaml
$ cat <<EOF > pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
EOF
$ kubectl apply -f pod.yaml
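If you really do need volumeMode: Block, a rough alternative (a sketch only, assuming the container image ships e2fsprogs, the Pod runs with a privileged securityContext, and the device holds no data you care about) is to create a filesystem on the raw device yourself and mount it from inside the Pod:
# run inside the pod; requires privileges to format and mount devices
mkfs.ext4 /dev/xvda      # WARNING: destroys any existing data on the device
mkdir -p /mnt/data
mount /dev/xvda /mnt/data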

Persistent volume to windows not working on kubernetes

I have mapped a Windows folder onto my Linux machine with
mount -t cifs //AUTOCHECK/OneStopShopWebAPI -o user=someuser,password=Aa1234 /xml_shared
and the following command
df -hk
gives me
//AUTOCHECK/OneStopShopWebAPI 83372028 58363852 25008176 71% /xml_shared
After that I created a YAML file with
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-jenkins-slave
spec:
  storageClassName: jenkins-slave-data
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-jenkins-slave
  labels:
    type: jenkins-slave-data2
spec:
  storageClassName: jenkins-slave-data
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.100.109
    path: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
This does not seem to work when I create a new Pod:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
  labels:
    label: jenkins-slave
spec:
  containers:
    - name: node
      image: node
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/jenkins_slave_shared
          name: jenkins-slave-vol
  volumes:
    - name: jenkins-slave-vol
      persistentVolumeClaim:
        claimName: pvc-nfs-jenkins-slave
Do I need to change the NFS settings? What is wrong with my logic?
Mounting the CIFS share on the Linux machine is correct, but you need to take a different approach to mount a CIFS volume in Kubernetes: NFS and CIFS are different protocols, so an nfs: PersistentVolume cannot point at a CIFS/SMB share.
This site explains the whole process step by step: Github CIFS Kubernetes.
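As a rough sketch of what that guide sets up (assuming the fstab/cifs flexVolume plugin is installed on every node; the secret name is a placeholder and the exact keys should be checked against the plugin's README), the Pod references the CIFS share directly instead of going through an NFS PersistentVolume:
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
type: fstab/cifs
stringData:
  username: someuser
  password: Aa1234
---
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
    - name: node
      image: node
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/jenkins_slave_shared
          name: jenkins-slave-vol
  volumes:
    - name: jenkins-slave-vol
      flexVolume:
        driver: "fstab/cifs"
        fsType: "cifs"
        secretRef:
          name: "cifs-secret"
        options:
          networkPath: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
          mountOptions: "dir_mode=0755,file_mode=0644"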

StorageClass of type local with a pvc but gives an error in kubernetes

I want to use a local volume that is mounted on my node at the path /mnts/drive,
so I created a StorageClass (as shown in the documentation for the local StorageClass),
and created a PVC and a simple Pod which uses that volume.
These are the configurations used:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysampleclaim
spec:
  storageClassName: local-fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysamplepod
spec:
  containers:
    - name: frontend
      image: nginx:1.13
      volumeMounts:
        - mountPath: "/var/www/html"
          name: myvolume
  volumes:
    - name: myvolume
      persistentVolumeClaim:
        claimName: mysampleclaim
When I try to create this, the YAML gives me an error, and I don't know what I am missing:
Unable to mount volumes for pod "mysamplepod_default(169efc06-3141-11e8-8e58-02d4a61b9de4)": timeout expired list of unattached/unmounted volumes=[myvolume]
If you want to use a local volume that is mounted on the node at the /mnts/drive path, you just need to use a hostPath volume in your pod:
A hostPath volume mounts a file or directory from the host node’s
filesystem into your pod.
The final pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: mysamplepod
spec:
  containers:
    - name: frontend
      image: nginx:1.13
      volumeMounts:
        - mountPath: "/var/www/html"
          name: myvolume
  volumes:
    - name: myvolume
      hostPath:
        # directory location on host
        path: /mnts/drive
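Alternatively, if you want to keep the local-fast StorageClass approach from the question, the claim also needs a manually created PersistentVolume of type local pinned to the node that holds /mnts/drive. A minimal sketch (the PV name and node name below are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysamplepv
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-fast
  local:
    path: /mnts/drive
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your-node-name>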

Multiple Volume mounts with Kubernetes: one works, one doesn't

I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
    - image: my-username/my-project
      name: my-project
      ports:
        - containerPort: 80
          name: nginx-http
        - containerPort: 443
          name: nginx-ssl-https
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /home/projects/my-project/media/upload
          name: pd-data
        - mountPath: /home/projects/my-project/backups
          name: pd2-data
  imagePullSecrets:
    - name: vpregistrykey
  volumes:
    - name: pd-data
      persistentVolumeClaim:
        claimName: pd-claim
    - name: pd2-data
      persistentVolumeClaim:
        claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, as such:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I have initially created my disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
Same goes for the second disk and second PV and PVC. Everything seems to work ok when I create the pod, no errors are thrown. Now comes the weird part: one of the paths is being mounted correctly (and is therefore persistent) and the other one is being erased every time I restart the pod...
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name:       my-project
...
Volumes:
  pd-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-claim
    ReadOnly:   false
  pd2-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd2-claim
    ReadOnly:   false
Any help is appreciated. Thanks.
The Kubernetes documentation states:
Volumes can not mount onto other volumes or have hard links to other
volumes
I had the same issue and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
They mounted without issues after fixing that.
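For illustration, this is the kind of nesting to avoid (the paths and volume names below are hypothetical, not taken from the question):
# Problematic: the second mountPath is nested inside the first
volumeMounts:
  - mountPath: /var/www          # outer mount
    name: content
  - mountPath: /var/www/uploads  # mounted inside the volume above
    name: uploads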
I do not see any direct cause for the behavior explained above, but what I can suggest is to use a Deployment instead of a Pod, as recommended by many here, especially when using PVs and PVCs. A Deployment takes care of many things to maintain the desired state. I have attached my code below for your reference; it works, and both volumes stay persistent even after deleting/terminating/restarting, because this is managed by the Deployment's desired state.
There are two differences you will find between my code and yours:
I have a Deployment object instead of a Pod.
I am using GlusterFS for my volume.
Deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
        - name: nginx
          image: vip-intOAM:5001/nginx:1.15.3
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/etc/nginx/conf.d/"
              name: nginx-confd
            - mountPath: "/var/www/"
              name: nginx-web-content
      volumes:
        - name: nginx-confd
          persistentVolumeClaim:
            claimName: glusterfsvol-nginx-confd-pvc
        - name: nginx-web-content
          persistentVolumeClaim:
            claimName: glusterfsvol-nginx-web-content-pvc
One of my PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
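To check that the claims are bound and both replicas are running (namespace and label taken from the YAML above):
$ kubectl get pvc -n platform
$ kubectl get pods -n platform -l component=nginx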