Kubernetes error while creating mount source path: file exists

After re-deploying my Kubernetes StatefulSet, the pod is now failing with the error:
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so the mount path cannot be created, but I thought that was the point of a StatefulSet: the data persists and you can simply mount it again. How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
        - name: foo
          image: blahblah
          imagePullPolicy: Always
          volumeMounts:
            - name: foo-data
              mountPath: "foo"
            - name: stuff
              mountPath: "here"
            - name: config
              mountPath: "somedata"
      volumes:
        - name: stuff
          persistentVolumeClaim:
            claimName: stuff-pvc
        - name: config
          configMap:
            name: myconfig
  volumeClaimTemplates:
    - metadata:
        name: foo-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "foo-storage"
        resources:
          requests:
            storage: 2Gi
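For reference, the StatefulSet controller turns the volumeClaimTemplates entry above into a standalone PersistentVolumeClaim that outlives the pod. A hedged sketch of roughly what that generated claim looks like (the name follows the <template>-<statefulset>-<ordinal> convention; other generated fields are omitted):

# Roughly what the controller creates from the template above (illustrative only).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-data-foo-statefulset-0   # <template name>-<StatefulSet name>-<ordinal>
  namespace: foo
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "foo-storage"
  resources:
    requests:
      storage: 2Gi

Because this claim is not deleted when the pod is recreated, the data itself should persist; the failing path in the error above sits under the kubelet's pod directory on the node rather than in the claim.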

Related

How to connect to samba server from container running in kubernetes?

I created a Kubernetes cluster on Amazon, then deployed my pod (container) and a volume into this cluster. Now I want to run a Samba server backed by that volume and connect my pod to the Samba server. Is there any tutorial on how to solve this? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      project: k8s
  template:
    metadata:
      labels:
        project: k8s
    spec:
      containers:
        - name: k8s-web
          image: mine/flask:latest
          volumeMounts:
            - mountPath: /test-ebs
              name: my-volume
          ports:
            - containerPort: 8080
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: [my-Id-volume]
You can check out the Samba container Docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
spec:
  type: LoadBalancer
  selector:
    app: smb-server
  ports:
    - port: 445
      name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      containers:
        - name: smb-server
          image: dperson/samba
          env:
            - name: PERMISSIONS
              value: "0777"
          args: ["-u", "username;test", "-s", "share;/smbshare/;yes;no;no;all;none", "-p"]
          volumeMounts:
            - mountPath: /smbshare
              name: data-volume
          ports:
            - containerPort: 445
      volumes:
        - name: data-volume
          hostPath:
            path: /smbshare
            type: DirectoryOrCreate
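As a side note (not part of the original question), pods in the same namespace can normally reach the Service above at its DNS name, i.e. smb-server on port 445. A minimal, hedged sketch of passing that address to the web Deployment through an environment variable, where the variable name SMB_ADDRESS is purely illustrative:

# Illustrative snippet for the k8s-web container in the first Deployment above;
# SMB_ADDRESS is a made-up name and the application would have to read it itself.
containers:
  - name: k8s-web
    image: mine/flask:latest
    env:
      - name: SMB_ADDRESS
        value: "smb-server:445"   # Service name resolves via cluster DNS in the same namespace
    ports:
      - containerPort: 8080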

Why am I getting "1 pod has unbound immediate PersistentVolumeClaims" when working with 2 deployments

I am trying to do a fairly simple red/green setup using Minikube where I want 1 pod running a red container, 1 pod running a green container, and a service to hit each. To do this my k8s file looks like...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/jackiegleason/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-red
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-red
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-green
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-green
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-red
  labels:
    app: express-app-red
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-red
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-green
  labels:
    app: express-app-green
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-green
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: DEPLOY_TYPE
              value: "Green"
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
But when I run it I get...
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
It worked fine without the second deployment, so what am I missing?
Thank you!
You cannot use the same PV with accessMode: ReadWriteOnce multiple times.
To do this you would need to use a volume that supports the ReadWriteMany access mode.
Check out the k8s documentation for the list of Volume Plugins that support this feature.
Additionally, as David already mentioned, it's much better to log to STDOUT.
You can also check out solutions like Fluent Bit/Fluentd or the ELK stack.
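For illustration, a minimal sketch of a ReadWriteMany volume that both Deployments could share, assuming an NFS export is available; the server address and export path below are placeholders, not values from the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany              # allows both pods to mount the volume
  nfs:
    server: 192.168.0.10         # placeholder NFS server
    path: /exports/shared        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

With this, the two Deployments can keep referencing main-volume-claim unchanged.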

stateful jupyter notebook in kubernetes

I am trying to deploy a stateful Jupyter notebook in Kubernetes, but I am not able to save the code written in a notebook: whenever the notebook pod goes down, all the code is deleted. I tried to use a persistent volume but was unable to achieve the expected result.
UPDATE
Changed the mount path to "/home/jovyan" as Jupyter saves the .ipynb files in this location. But now I am getting PermissionError: [Errno 13] Permission denied: '/home/jovyan/.local' while deploying the pod.
kind: Ingress
metadata:
  name: jupyter-ingress
spec:
  backend:
    serviceName: jupyter-notebook-service
    servicePort: 8888
---
kind: Service
apiVersion: v1
metadata:
  name: jupyter-notebook-service
spec:
  clusterIP: None
  selector:
    app: jupyter-notebook
  ports:
    - protocol: TCP
      port: 8888
      targetPort: 8888
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jupyter-notebook
  labels:
    app: jupyter-notebook
spec:
  replicas: 1
  serviceName: "jupyter-notebook-service"
  selector:
    matchLabels:
      app: jupyter-notebook
  template:
    metadata:
      labels:
        app: jupyter-notebook
    spec:
      serviceAccountName: dsx-spark
      volumes:
        - name: jupyter-pv-storage
          persistentVolumeClaim:
            claimName: jupyter-pv-claim
      containers:
        - name: minimal-notebook
          image: jupyter/pyspark-notebook:latest
          ports:
            - containerPort: 8888
          command: ["start-notebook.sh"]
          args: ["--NotebookApp.token=''"]
          volumeMounts:
            - mountPath: "/home/jovyan"
              name: jupyter-pv-storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jupyter-pv-claim
spec:
  storageClassName: jupyter-pv-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jupyter-pv-volume
  labels:
    type: local
spec:
  storageClassName: jupyter-pv-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/jovyan"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jupyternotebook-pv-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
The Pod with the Jupyter notebook runs as a non-root user and so is unable to use the mounted volume, so we are
using initContainers to change the user/permissions of the Persistent Volume Claim before creating the Pod.
kind: StatefulSet
metadata:
  name: jupyter-notebook
  labels:
    app: jupyter-notebook
spec:
  replicas: 1
  serviceName: "jupyter-notebook-service"
  selector:
    matchLabels:
      app: jupyter-notebook
  template:
    metadata:
      labels:
        app: jupyter-notebook
    spec:
      serviceAccountName: dsx-spark
      volumes:
        - name: ci-jupyter-storage-def
          persistentVolumeClaim:
            claimName: my-jupyter-pv-claim
      containers:
        - name: minimal-notebook
          image: jupyter/pyspark-notebook:latest
          ports:
            - containerPort: 8888
          command: ["start-notebook.sh"]
          args: ["--NotebookApp.token=''"]
          volumeMounts:
            - mountPath: "/home/jovyan/work"
              name: ci-jupyter-storage-def
      initContainers:
        - name: jupyter-data-permission-fix
          image: busybox
          command: ["/bin/chmod", "-R", "777", "/data"]
          volumeMounts:
            - name: ci-jupyter-storage-def
              mountPath: /data
As I have already mentioned in the comments, you need to make sure that:
1. The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin. The volumeClaimTemplates will provide stable storage using PersistentVolumes. PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods or the StatefulSet are deleted.
2. The container is running as a user that has the permissions to access that volume. This can be done by changing the permissions to 777 or, as you already noticed, by using a proper initContainer.
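For completeness, a hedged sketch of the first point: letting the StatefulSet request its storage through volumeClaimTemplates instead of a pre-created hostPath claim. The claim and storage class names below are placeholders and assume a provisioner exists for that class:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jupyter-notebook
spec:
  replicas: 1
  serviceName: "jupyter-notebook-service"
  selector:
    matchLabels:
      app: jupyter-notebook
  template:
    metadata:
      labels:
        app: jupyter-notebook
    spec:
      containers:
        - name: minimal-notebook
          image: jupyter/pyspark-notebook:latest
          volumeMounts:
            - mountPath: "/home/jovyan/work"
              name: jupyter-data                  # matches the claim template below
  volumeClaimTemplates:
    - metadata:
        name: jupyter-data                        # illustrative name
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "some-provisioned-class"   # placeholder class
        resources:
          requests:
            storage: 1Gi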

How to mount a secret to kubernetes StatefulSet

So, looking at the Kubernetes API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulsetspec-v1-apps), it appears that I can indeed have a volume, because the StatefulSet spec uses a PodSpec and the PodSpec does have a volumes field, so I should be able to list the secret and then mount it like in a Deployment or any other pod.
The problem is that Kubernetes seems to think that volumes are not actually in the PodSpec for a StatefulSet. Is this right? If so, how do I mount a secret into my StatefulSet?
error: error validating "mysql-stateful-set.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
              name: database
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: mysql
              mountPath: /run/secrets/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD_FILE
              value: /run/secrets/mysql/root-pass
          volumes:
            - name: mysql
              secret:
                secretName: mysql
                items:
                  - key: root-pass
                    path: root-pass
                    mode: 511
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: do-block-storage
        resources:
          requests:
            storage: 10Gi
The volumes field should come inside the template spec, not inside the container (as done in your template). Refer to https://godoc.org/k8s.io/api/apps/v1#StatefulSetSpec for the exact structure; go to PodTemplateSpec and you will find the volumes field.
The template below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
              name: database
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: mysql
              mountPath: /run/secrets/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD_FILE
              value: /run/secrets/mysql/root-pass
      volumes:
        - name: mysql
          secret:
            secretName: mysql
            items:
              - key: root-pass
                path: root-pass
                mode: 511
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: do-block-storage
        resources:
          requests:
            storage: 10Gi
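For completeness, a hedged sketch of the Secret that both manifests reference; the password value is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
stringData:
  root-pass: "changeme"   # placeholder; stringData avoids manual base64 encoding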

Kubernetes nfs define path in claim

I want to create a common persistent volume with NFS.
PV(nfs):
common-data-pv 1500Gi RWO Retain
192.168.0.24 /home/common-data-pv
I want a claim, or a pod that mounts the claim, bound to common-data-pv to be able to define a path, for example:
/home/common-data-pv/www-site-1 (50Gi)
/home/common-data-pv/www-site-2 (50Gi)
But I have not found in the documentation how I can define this.
My current config for the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: common-data-pv
  labels:
    type: common
spec:
  capacity:
    storage: 1500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.122.1
    path: "/home/pv/common-data-pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: common-data-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: common
Example use:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-1
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web-2
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc
To use the claim you just need to add a volumeMounts section and volumes to your manifest. Here's an example replication controller for nginx that would use your claim. Note the very last line that uses the same PVC name.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
  namespace: kube-system
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: common-data-pvc
More examples can be found in the Kubernetes repo under examples.
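The answer above mounts the whole claim at a single path. If the goal from the question is per-site subdirectories such as /home/common-data-pv/www-site-1, one option (not covered in the answer) is the subPath field of volumeMounts; a minimal sketch, assuming the subdirectories already exist on the NFS export:

# Illustrative only: each controller mounts a different subdirectory of the same claim.
volumeMounts:
  - name: nfs
    mountPath: "/usr/share/nginx/html"
    subPath: www-site-1            # www-site-2 for the second controller
volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: common-data-pvc

Note that subPath does not enforce per-directory size limits; the 50Gi figures from the question would have to be enforced on the NFS server side.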