Pod/Container directory used to mount nfs path gets empty after deployment - kubernetes

The following is my kubernetes/openshift deployment configuration template, along with its persistent volumes and persistent volume claims:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pythonApp
  creationTimestamp: null
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pythonApp
      creationTimestamp: null
    spec:
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "backend"
      containers:
      - name: backend
        imagePullPolicy: IfNotPresent
        image: <img-name>
        command: ["sh", "-c"]
        args: ['python manage.py runserver']
        resources: {}
        volumeMounts:
        - mountPath: /pythonApp/configs
          name: configs
      restartPolicy: Always
      volumes:
      - name: configs
        persistentVolumeClaim:
          claimName: "configs-volume"
status: {}
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "configs-volume"
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/k8sMount/configs
    server: <server-ip>
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "configs-volume-claim"
  creationTimestamp: null
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: "configs-volume"
After the deployment, when I exec into the container (using oc exec or kubectl exec) and check the /pythonApp/configs folder, it is empty, even though it is supposed to contain configuration files from the image used.
Is this caused by the fact that /pythonApp/configs is mounted on the persistent NFS volume path /mnt/k8sMount/configs, which is initially empty?
How could this be solved?
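A volume mounted at /pythonApp/configs hides whatever the image ships at that path, so an initially empty NFS export shows up as an empty directory. One common workaround (a sketch only, not part of the original post; the staging path /mnt/seed and the copy command are illustrative) is an initContainer that copies the image's bundled configs onto the volume before the main container starts:
# Hypothetical initContainers entry for the pod template above: it copies the
# config files baked into <img-name> onto the NFS-backed volume, which is then
# mounted at /pythonApp/configs by the "backend" container as before.
initContainers:
- name: seed-configs
  image: <img-name>
  command: ["sh", "-c", "cp -a /pythonApp/configs/. /mnt/seed/"]
  volumeMounts:
  - mountPath: /mnt/seed
    name: configs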
Environment
Kubernetes version: 1.11
OpenShift version: 3.11

Related

Single PV mount on all members of Replica Set for a K8s deployment

I would like to understand if there is a way to mount a single PV that I manually created with ReadWriteMany to all the members of the ReplicaSet, either using a PodSpec or a StatefulSet.
Yes, you can do this if your PV supports ReadWriteMany.
Here is an example YAML:
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      run: busybox
  template:
    metadata:
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - sh
        image: busybox
        name: busybox
        stdin: true
        tty: true
        volumeMounts:
        - name: pvc
          mountPath: "/mnt"
      restartPolicy: Always
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: test-claim
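To confirm the sharing behaviour after applying both manifests, you can write a file from one replica and read it from the other (a sketch; the manifest file names are placeholders and the real pod name suffixes are generated):
kubectl apply -f test-claim.yaml -f busybox-deployment.yaml   # hypothetical file names
kubectl get pods -l run=busybox                               # note the two replica names
kubectl exec <busybox-pod-1> -- sh -c 'echo hello > /mnt/shared.txt'
kubectl exec <busybox-pod-2> -- cat /mnt/shared.txt           # should print "hello"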

Kubernetes Security Policy fsgroup not working

I am trying to mount a Kubernetes volume into a pod (running as non-root) with the fsGroup securityContext option, but the volume is still mounted as root and the pod gets permission denied when it tries to write to the filesystem.
I created the PersistentVolume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-demo
spec:
  storageClassName: nfs
  capacity:
    storage: 10Gi
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /nfs/data/demo
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-demo
StatefulSet for the application deployment. The container image starts as user 1001 (belonging to group 0):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  replicas: 1
  serviceName: demo-app
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      securityContext:
        fsGroup: 0
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
      - name: demo-app-container
        image: <theImage>
        volumeMounts:
        - mountPath: /store
          name: demo-vol
      volumes:
      - name: demo-vol
        persistentVolumeClaim:
          claimName: demo-vol
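One hedged aside, not stated in the question: the kubelet does not apply fsGroup ownership changes to NFS volumes, so a common workaround is a one-off root initContainer that fixes ownership on the export. A minimal sketch reusing the mount path and volume from the StatefulSet above (the busybox tag is an arbitrary choice, and it assumes the export does not squash root):
# Hypothetical initContainers entry for the pod template above: runs once as
# root and makes the NFS-backed /store writable by the app user 1001 (group 0).
initContainers:
- name: fix-permissions
  image: busybox:1.36
  securityContext:
    runAsUser: 0
  command: ["sh", "-c", "chown -R 1001:0 /store && chmod -R g+rwX /store"]
  volumeMounts:
  - mountPath: /store
    name: demo-vol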

How can I mount folder correctly in kubernetes

I'm trying to run nodered in my minikube Kubernetes cluster ("cluster", it's one node :D).
The Docker command to do this would be, for example:
docker run -it -p 1880:1880 -v /home/user/node_red_data:/data --name mynodered nodered/node-red
But I'm not running it in Docker, I'm trying to run it in minikube. The minikube documentation states that /data on the host is persisted, so what I wanted was /data/nodered on the host mounted as /data in the nodered container.
I started by adding a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Then I added the persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
Then a persistent volume claim for nodered:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And then the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  volumes:
  - name: nodered-claim
    persistentVolumeClaim:
      claimName: nodered-claim
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
      - name: nodered
        image: nodered/node-red:latest
        ports:
        - containerPort: 1880
        volumeMounts:
        - name: nodered-claim
          mountPath: "/data"
          subPath: "nodered"
I've checked the Kubernetes dashboard and everything is green and the volume is bound. I created a simple HTTP service in nodered and deployed it. It's running, but nothing is saved, so if the deployment goes down and gets redeployed it will be empty.
I've checked the /data and /data/nodered folders on the minikube instance running in Docker, but they are empty.
Your deployment spec should return an error; try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
      - name: nodered
        image: nodered/node-red:latest
        ports:
        - containerPort: 1880
        volumeMounts:
        - name: nodered-claim
          mountPath: /data/nodered
          # subPath: nodered <-- not needed in your case
      volumes:
      - name: nodered-claim
        persistentVolumeClaim:
          claimName: nodered-claim
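A quick way to check persistence after applying the corrected manifest (a sketch; the manifest file name and the pod name are placeholders):
kubectl apply -f nodered-deployment.yaml          # hypothetical file name
kubectl exec <nodered-pod> -- touch /data/nodered/test-file
minikube ssh                                      # then, inside the node: ls /data
# the PV's local path is /data, so test-file should appear there and survive a pod restart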

Minikube: use persistent volume (shared disk) and mount it to host

I am trying to mount a Linux directory as a shared directory for multiple containers in minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-share-storage
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  local:
    path: "/data/myapp/share/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-share-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: myapp-server
  name: myapp-server
spec:
  selector:
    matchLabels:
      io.kompose.service: myapp-server
  template:
    metadata:
      labels:
        io.kompose.service: myapp-server
    spec:
      containers:
      - name: myapp-server
        image: myregistry.com:5000/server-myapp:alpine
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /data/myapp/share
          name: myapp-share
        env:
        - name: storage__root_directory
          value: /data/myapp/share
      volumes:
      - name: myapp-share
        persistentVolumeClaim:
          claimName: myapp-share-claim
status: {}
It works, with pitfalls: StatefulSets are not supported; they lead to deadlock errors:
pending PVC: waiting for first consumer to be created before binding
pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a minikube PersistentVolumeClaim without a PersistentVolume (it will be created automatically). However:
The volume is created under /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim)
Minikube doesn't honor the mount request
How can I make it just work?
Using your YAML file I've managed to create the volumes and deploy it without issue, but I had to run the command minikube mount /mydir/:/data/myapp/share/ after starting minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working.
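Put together, the sequence looks roughly like this (a sketch; /mydir/ is the answer's placeholder host path and the manifest file name is hypothetical):
minikube start --insecure-registry="myregistry.com:5000"
minikube mount /mydir/:/data/myapp/share/ &       # keep this running while the pods use the share
kubectl apply -f myapp-volumes.yaml               # hypothetical file containing the manifests above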

Multiple Volume mounts with Kubernetes: one works, one doesn't

I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
  - image: my-username/my-project
    name: my-project
    ports:
    - containerPort: 80
      name: nginx-http
    - containerPort: 443
      name: nginx-ssl-https
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /home/projects/my-project/media/upload
      name: pd-data
    - mountPath: /home/projects/my-project/backups
      name: pd2-data
  imagePullSecrets:
  - name: vpregistrykey
  volumes:
  - name: pd-data
    persistentVolumeClaim:
      claimName: pd-claim
  - name: pd2-data
    persistentVolumeClaim:
      claimName: pd2-claim
I am using PersistentVolumes and PersistentVolumeClaims, as such:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I have initially created my disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
Same goes for the second disk and the second PV and PVC. Everything seems to work OK when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is being mounted correctly (and is therefore persistent), but the other one is being erased every time I restart the pod...
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name: my-project
...
Volumes:
  pd-data:
    Type:      PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: pd-claim
    ReadOnly:  false
  pd2-data:
    Type:      PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: pd2-claim
    ReadOnly:  false
Any help is appreciated. Thanks.
The Kubernetes documentation states:
Volumes can not mount onto other volumes or have hard links to other volumes.
I had the same issue, and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
They mounted without issues after fixing that.
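For illustration only (these names and paths are invented, not taken from the question above), "overlapping" here means one mountPath nested under another:
volumeMounts:
- name: app-data
  mountPath: /var/data              # parent path
- name: app-backups
  mountPath: /var/data/backups      # nested under the first mount; this is the
                                    # kind of overlap the answer above describes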
I do not see any direct cause for the behavior explained above, but what I can suggest is to use a Deployment instead of a bare Pod, as many here recommend, especially when using PVs and PVCs. A Deployment takes care of maintaining the desired state, and in the code attached below both volumes stay persistent even after deleting/terminating/restarting the pod, since that is managed by the Deployment's desired state.
Two differences you will find between my code and yours:
I have a Deployment object instead of a Pod.
I am using GlusterFS for my volumes.
Deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
      - name: nginx
        image: vip-intOAM:5001/nginx:1.15.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/etc/nginx/conf.d/"
          name: nginx-confd
        - mountPath: "/var/www/"
          name: nginx-web-content
      volumes:
      - name: nginx-confd
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-confd-pvc
      - name: nginx-web-content
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-web-content-pvc
One of my PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
PVC for the above:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
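To verify that both mounts survive a pod restart under this Deployment (a sketch; the pod names are placeholders and the marker file name is arbitrary):
kubectl -n platform exec <nginx-pod> -- sh -c 'touch /etc/nginx/conf.d/marker /var/www/marker'
kubectl -n platform delete pod <nginx-pod>                                  # the Deployment recreates it
kubectl -n platform exec <new-nginx-pod> -- ls /etc/nginx/conf.d /var/www   # markers should persist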