I am running Node-RED on Kubernetes locally. This is my deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          resources:
            requests:
              memory: 256Mi
              cpu: "0.2"
            limits:
              memory: 512Mi
              cpu: "1"
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: Service
metadata:
  name: node-red-service
spec:
  type: LoadBalancer
  selector:
    app: nodered
  ports:
    - port: 1880
      targetPort: 1880
I am trying to feed a local text file in my directory into Node-RED, but I am getting the error: "Error: ENOENT: no such file or directory, open 'D:\SHRDC\abss\test.txt'"
What could be a solution to this?
I have tried \data\test.txt but that hasn't worked either. I am expecting Node-RED to read the file contents.
The first thing to know is that, in most situations, Kubernetes pods run Linux-based containers. This means that even if your Kubernetes node is running on a Windows machine, it will be running inside a Linux virtual machine. This has two consequences:
All paths will need to use the Unix style.
The pod will have no direct access to files on the host Windows machine.
Next, while you have requested a volume to be mounted into the pod at /data/nodered and defined both a PersistentVolume and a PersistentVolumeClaim (which I'm not sure will actually bind to each other in this case), they live inside the Linux virtual machine, not on the underlying Windows machine, so they will not have access to files on the host.
Even if you had managed to copy the file into the directory that backs the volume mount, the correct path to give to Node-RED would be something like /data/nodered/test.txt, based on the volume mount.
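For illustration, one way to get the file somewhere the pod can actually see it (a sketch, assuming a single-node minikube profile named "minikube" and that the node's /data path really backs the claim) is to copy it onto the node or into the running pod, then use the Unix-style path in the Node-RED file node:

# copy the file from the Windows host into the node's /data directory,
# which the pod sees at /data/nodered because of the volume mount
minikube cp D:\SHRDC\abss\test.txt /data/test.txt

# or copy it straight into the running pod (the pod name is a placeholder)
kubectl cp test.txt <nodered-pod-name>:/data/nodered/test.txt

In the Node-RED file-in node you would then reference /data/nodered/test.txt.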
Related
I want to mount a folder that is located on my local hard disk (e.g. /home/logon/volumes/algovens/ids/app_data/) into a pod:
/home/logon/volumes/algovens/ids/app_data/ should be mounted at /app/app_data/ in the pod.
I create a PersistentVolume and a PersistentVolumeClaim, then reference the PVC in my pod's .yaml file.
I apply all three YAML files (PV, PVC, pod) to the Kubernetes cluster.
When I access the container with an interactive bash shell, I can see the folder that I configured to be mounted from my local hard disk (/app/app_data/ in the pod), but it is still empty.
However, there are files and folders in that local folder on my local hard disk (/home/logon/volumes/algovens/ids/app_data/).
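For context, a quick way to see where the data actually lives (a sketch; the pod name below is a placeholder) is to compare the node's view of the PV path with the pod's view of the mount path:

# is the claim bound to the intended volume?
kubectl get pv identityserver-pv
kubectl get pvc identityserver-pvc

# what does the node (the minikube VM) see at the PV path?
minikube ssh -- ls -la /home/logon/volumes/algovens/ids/app_data/

# what does the pod see at the mount path?
kubectl exec -it <pod-name> -- ls -la /app/app_data/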
My configuration files:
identityserver-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: identityserver-pv
  labels:
    name: identityserver-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /home/logon/volumes/algovens/ids/app_data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
identityserver-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: identityserver-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
identityserver-deployment.yaml:
# Deployment for identityserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityserver-deployment
  labels:
    app: identityserver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identityserver
  template:
    metadata:
      labels:
        app: identityserver # pod label
    spec:
      volumes:
        - name: identityserver-pvc
          persistentVolumeClaim:
            claimName: identityserver-pvc
      containers:
        - name: identityserver-container
          image: localhost:5000/algovens-identityserver:v1.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: identityserver-pvc
              mountPath: "/app/app_data/"
          env:
            - name: ALGOVENS_CORS_ORIGINS
              valueFrom:
                configMapKeyRef:
                  name: identityserver-config
                  key: ALGOVENS_CORS_ORIGINS
---
# Service for identityserver
apiVersion: v1
kind: Service
metadata:
  name: identityserver-service
spec:
  type: NodePort # External service. default value is ClusterIP
  selector:
    app: identityserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30100
I'm trying to run Node-RED in my minikube Kubernetes cluster ("cluster", it's one node :D).
The Docker command to do this would be, for example:
docker run -it -p 1880:1880 -v /home/user/node_red_data:/data --name mynodered nodered/node-red
But I'm not running it in Docker, I'm trying to run it in minikube. The minikube documentation states that /data on the host is persisted. So what I wanted was /data/nodered on the host to be mounted as /data in the Node-RED container.
I started by adding a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Then added persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
Then a persistent volume claim for Node-RED:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And then the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  volumes:
    - name: nodered-claim
      persistentVolumeClaim:
        claimName: nodered-claim
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: "/data"
              subPath: "nodered"
I've checked the Kubernetes dashboard and everything is green and the volume is bound. I created a simple HTTP service in Node-RED and deployed it. It's running, but nothing is saved, so if the deployment goes down and gets redeployed it will be empty.
I've checked the /data and /data/nodered folders on the minikube instance running in Docker, but they are empty.
Your deployment spec is invalid and should return an error: volumes must be defined in the pod template's spec, not directly under the Deployment spec. Try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
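To confirm the mount is doing what you expect after applying this (a sketch; the pod name is a placeholder), you can write a file through the pod and look for it on the node, since the PV's local path /data backs the claim:

# anything written under /data/nodered inside the pod...
kubectl exec -it <nodered-pod-name> -- touch /data/nodered/persistence-test

# ...should show up under /data inside the minikube VM
minikube ssh -- ls -la /data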
I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume so that the actual data is retained in a local macOS directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db as if nothing is set up in terms of persistent volumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g. a small Micronaut-based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. It seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and the PersistentVolume.
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: Not setting the node affinity, and using hostPath instead of local.path, both in the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
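As a quick sanity check after applying the updated descriptors (a sketch, assuming Docker Desktop's default file sharing of /Users makes the path visible inside the docker-desktop node):

# the claim should now be Bound to fo-persons-mongodb-volume via the local-storage class
kubectl get pvc persons-mongodb-pvc -n fo
kubectl get pv fo-persons-mongodb-volume

# the MongoDB data files should start appearing in the macOS directory backing the volume
ls /Users/mike/kubernetes/fo/persons/mongodb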
I am trying to mount a Linux directory as a shared directory for multiple containers in minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-share-storage
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  local:
    path: "/data/myapp/share/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-share-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: myapp-server
  name: myapp-server
spec:
  selector:
    matchLabels:
      io.kompose.service: myapp-server
  template:
    metadata:
      labels:
        io.kompose.service: myapp-server
    spec:
      containers:
        - name: myapp-server
          image: myregistry.com:5000/server-myapp:alpine
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /data/myapp/share
              name: myapp-share
          env:
            - name: storage__root_directory
              value: /data/myapp/share
      volumes:
        - name: myapp-share
          persistentVolumeClaim:
            claimName: myapp-share-claim
status: {}
It works, with pitfalls: StatefulSets are not supported, they run into deadlock errors:
pending PVC: waiting for first consumer to be created before binding
pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a minikube PersistentVolumeClaim without a PersistentVolume (it will be created automatically). However:
The volume is created in /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim)
minikube doesn't honor the mount request
How can I make it just work?
Using your YAML file I've managed to create the volumes and deploy it without issue, but I had to run the command minikube mount /mydir/:/data/myapp/share/ after starting minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working.
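Putting that together, the sequence would look roughly like this (a sketch; the manifest file name is a placeholder, and the minikube mount command has to stay running in its own terminal while the pods use the volume):

minikube start --insecure-registry="myregistry.com:5000"

# in a separate terminal: keep the host directory mounted into the minikube node
minikube mount /tmp/myapp/k8s/:/data/myapp/share/

# then apply the StorageClass, PV, PVC and Deployment
kubectl apply -f myapp-share.yaml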
The following is my Kubernetes/OpenShift deployment configuration template, along with its persistent volume and persistent volume claim:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pythonApp
  creationTimestamp: null
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pythonApp
      creationTimestamp: null
    spec:
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "backend"
      containers:
        - name: backend
          imagePullPolicy: IfNotPresent
          image: <img-name>
          command: ["sh", "-c"]
          args: ['python manage.py runserver']
          resources: {}
          volumeMounts:
            - mountPath: /pythonApp/configs
              name: configs
      restartPolicy: Always
      volumes:
        - name: configs
          persistentVolumeClaim:
            claimName: "configs-volume"
status: {}
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "configs-volume"
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/k8sMount/configs
    server: <server-ip>
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "configs-volume-claim"
  creationTimestamp: null
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: "configs-volume"
After the deployment, when I exec into the container (using oc exec or kubectl exec) and check the /pythonApp/configs folder, it is empty, when it is really supposed to contain some configuration files from the image that was used.
Is this issue caused by the fact that /pythonApp/configs is mounted onto the persistent NFS volume path /mnt/k8sMount/configs, which is initially empty?
How could this be solved?
Environment
Kubernetes version: 1.11
OpenShift version: 3.11