Node-RED Kubernetes Deployment

I am trying to deploy a simple Node-RED container on Kubernetes locally so that I can monitor its resource usage, and at the same time give it persistent volume storage so that my Node-RED work is saved. However, I can't get it to deploy to Kubernetes. I have created a Deployment.yaml file with the following code:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
      - name: nodered
        image: nodered/node-red:latest
        limits:
          memory: 512Mi
          cpu: "1"
        requests:
          memory: 256Mi
          cpu: "0.2"
        ports:
        - containerPort: 1880
        volumeMounts:
        - name: nodered-claim
          mountPath: /data/nodered
          # subPath: nodered <-- not needed in your case
      volumes:
      - name: nodered-claim
        persistentVolumeClaim:
          claimName: nodered-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
I am getting this error in PowerShell:
kubectl apply -f ./Deployment.yaml
persistentvolume/small-pv unchanged
persistentvolumeclaim/nodered-claim unchanged
storageclass.storage.k8s.io/local-storage unchanged
Error from server (BadRequest): error when creating "./Deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.containers[0].cpu", unknown field "spec.template.spec.containers[0].limits", unknown field "spec.template.spec.containers[0].memory", unknown field "spec.template.spec.requests"
I want it to deploy to Kubernetes so that I can monitor the memory usage of the Node-RED container.

The requests and limits sections need to be under a resources heading, as follows:
spec:
  containers:
  - name: nodered
    image: nodered/node-red:latest
    resources:
      limits:
        memory: 512Mi
        cpu: "1"
      requests:
        memory: 256Mi
        cpu: "0.2"
    ports:
    - containerPort: 1880
    volumeMounts:
    - name: nodered-claim
      mountPath: /data/nodered
      # subPath: nodered <-- not needed in your case

Node-RED, because of its base platform (Node.js), is essentially single-threaded. Even if you run it on Kubernetes with a huge number of pods, somewhere down the road the whole app will be blocked at the CPU limit of one of the pods.
Neither vertical nor horizontal scaling is really feasible here.
If your load is heavy, use a proper middleware.

Your Deployment has a syntax error. Let's fix it so it uses the right syntax.
It should look like this:
containers:
- name: nodered
  image: nodered/node-red:latest
  resources:
    requests:
      memory: "256Mi"
      cpu: "0.2"
    limits:
      memory: "512Mi"
      cpu: "1"

Error: ENOENT: no such file or directory, open

I am running node-red on kubernetes locally. This is my deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
      - name: nodered
        image: nodered/node-red:latest
        resources:
          requests:
            memory: 256Mi
            cpu: "0.2"
          limits:
            memory: 512Mi
            cpu: "1"
        ports:
        - containerPort: 1880
        volumeMounts:
        - name: nodered-claim
          mountPath: /data/nodered
          # subPath: nodered <-- not needed in your case
      volumes:
      - name: nodered-claim
        persistentVolumeClaim:
          claimName: nodered-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: Service
metadata:
  name: node-red-service
spec:
  type: LoadBalancer
  selector:
    app: nodered
  ports:
  - port: 1880
    targetPort: 1880
I am trying to feed a local text file from my directory into Node-RED, but I am getting the error "Error: ENOENT: no such file or directory, open 'D:\SHRDC\abss\test.txt'".
What could be a solution to this?
I have tried \data\test.txt but that hasn't worked either. I am expecting nodered to read the file contents.
The first thing to know is that in most situations Kubernetes pods run Docker containers that are Linux based. This means that even if you are running your Kubernetes node on a Windows machine, the containers will be running in a Linux virtual machine. This means two things:
All paths used will need to use the Unix style.
The container will have no direct access to files on the host Windows machine.
Next, while you have requested a volume to be mounted into the pod at /data/nodered and defined both a PersistentVolume and a PersistentVolumeClaim (which I'm not sure will actually map to each other in this case), they will live in the Linux virtual machine, not on the host Windows machine, so they will not have access to files on the host.
Even if you had managed to copy the file into the directory that is backing the volume mount in the pod, the correct path to give to Node-RED would be something like /data/nodered/test.txt, based on the volume mount.
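If the file is small and static, one practical way to get it into the container (a sketch, not part of the original answer; the ConfigMap name and mount path below are made up for illustration) is to wrap it in a ConfigMap and mount that into the pod:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-file   # hypothetical; could also be created with: kubectl create configmap test-file --from-file=test.txt
data:
  test.txt: |
    example file contents
Then add a matching volume and volumeMount to the Deployment's pod spec, for example:
      containers:
      - name: nodered
        volumeMounts:
        - name: test-file
          mountPath: /data/files      # Node-RED would then read /data/files/test.txt
      volumes:
      - name: test-file
        configMap:
          name: test-file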

Accessing a PVC created by a StatefulSet pod from another pod created by a DaemonSet, using Azure disk

I have app-1 pods created by a StatefulSet, and in it I am creating PVCs as well:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-1
  serviceName: "app-1"
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app-1
        image: registry.k8s.io/nginx-slim:0.8
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 4567
        volumeMounts:
        - name: app-1-state-volume-claim
          mountPath: /app1Data
        - name: app-2-data-volume-claim
          mountPath: /app2Data
  volumeClaimTemplates:
  - metadata:
      name: app-1-state-volume-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-csi-premium"
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: app-2-data-volume-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-csi-premium"
      resources:
        requests:
          storage: 1Gi
The state of app-1 is maintained in the PVC app-1-state-volume-claim.
app-1 is also creating data for app-2 in the PVC app-2-data-volume-claim.
I want to access app-2-data-volume-claim in another pod, created by the DaemonSet described below:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: app-2
spec:
  selector:
    matchLabels:
      name: app-2
  template:
    metadata:
      labels:
        name: app-2
    spec:
      containers:
      - name: app-2
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: app2Data
          mountPath: /app2Data
      volumes:
      - name: app2Data
        persistentVolumeClaim:
          claimName: app-2-data-volume-claim
This is failing with the output below:
persistentvolumeclaim "app-2-data-volume-claim" not found
How can I do that? I cannot use an Azure file share due to an app-1 limitation.
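For context (this is not from the original thread, just a clarification of why the claim is "not found"): PVCs created through volumeClaimTemplates are named <template-name>-<pod-name>, so the claims that actually exist are app-2-data-volume-claim-app-1-0, app-2-data-volume-claim-app-1-1 and app-2-data-volume-claim-app-1-2. A sketch of what the DaemonSet would have to reference is shown below, with the caveat that a ReadWriteOnce managed-csi-premium disk can only be attached to one node at a time:
      volumes:
      - name: app2Data
        persistentVolumeClaim:
          # hypothetical: the claim generated for the first StatefulSet replica
          claimName: app-2-data-volume-claim-app-1-0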

WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding. Auto provisioning does not work

Hi, I know this might be a duplicate, but I cannot get the answer from that question.
I have a Prometheus Deployment and would like to give it a persistent volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        args:
        - "--storage.tsdb.retention.time=60d"
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus/"
        ports:
        - containerPort: 9090
        resources:
          requests:
            cpu: 500m
            memory: 500M
          limits:
            cpu: 1
            memory: 1Gi
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-server-conf
      - name: prometheus-storage-volume
        persistentVolumeClaim:
          claimName: prometheus-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Now neither the PVC nor the Deployment can be scheduled, because the PVC waits for the Deployment and the other way around. As far as I know, we have a cluster with automatic provisioning, so I cannot just create a PV. How can I solve this problem? Other deployments do use PVCs in the same style and it works.
It's because of the namespace. A PVC is a namespaced object, you can look here. Your PVC is in the default namespace; moving it to the monitoring namespace should work.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
  namespace: monitoring
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Is your PVC deployed to the correct namespace, the same one as the Deployment?
And make sure the StorageClass has volumeBindingMode: WaitForFirstConsumer:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
...
volumeBindingMode: WaitForFirstConsumer

Kubernetes "shared" persistent volume on DigitalOcean

I have a persistent volume claim defined as
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-cms-content
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
and a deployment defined as
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ghost-cms
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: ghost-cms
      tier: frontend
  template:
    metadata:
      labels:
        app: ghost-cms
        tier: frontend
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/region
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: ghost-cms
            tier: frontend
      containers:
      - name: ghost-cms
        image: ghost:4.6-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 2368
        volumeMounts:
        - mountPath: /var/lib/ghost/content
          name: content
        env:
        - name: url
          value: https://ghost.site
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
      volumes:
      - name: content
        persistentVolumeClaim:
          claimName: ghost-cms-content
but each replica appears to have a unique volume that is not shared with the rest of the replicas. For instance, when I create a text file inside /var/lib/ghost/content in one of the pods, I don't see it in the volume of the other pods. What am I doing wrong?
Your PVC has the access mode
accessModes:
- ReadWriteOnce
so the volume can only be mounted read-write by a single node, and your replicas will not all share it.
If you want a volume shared across replicas, you can use NFS with the access mode ReadWriteMany:
accessModes:
- ReadWriteMany
Read more at : https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
Example : https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3
You can also use Minio or GlusterFS to create the NFS share, or any managed service providing NFS such as GCP Filestore, and attach that to the pod.
GKE example : https://medium.com/#Sushil_Kumar/readwritemany-persistent-volumes-in-google-kubernetes-engine-a0b93e203180
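As a rough illustration (not from the original answer; the NFS server address and export path below are placeholders), a ReadWriteMany, NFS-backed PV/PVC pair that all the ghost-cms replicas could share might look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ghost-cms-content-nfs        # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10                # placeholder NFS server address
    path: /exports/ghost-content     # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-cms-content
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""               # bind to the pre-created PV instead of do-block-storage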

How to deploy a single-instance MongoDB with a persistent volume using NFS

I have a microservice that is working on my laptop; however, I am using Docker Compose. I am working to deploy to a Kubernetes cluster which I have already set up, but I am stuck on making data persistent. E.g. here is my mongodb service in docker-compose:
systemdb:
  container_name: system-db
  image: mongo:4.4.1
  restart: always
  ports:
    - '9000:27017'
  volumes:
    - ./system_db:/data/db
  networks:
    - backend
Since it is an on-premise solution, I went with an NFS server. I have created a PersistentVolume and a PersistentVolumeClaim (pvc-nfs-pv1), which seem to work well when testing with nginx. However, I don't know how to deploy a mongodb StatefulSet that uses the PVC. I am not implementing a replica set.
Here is my yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 1
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongod-container
        image: mongo
        resources:
          requests:
            cpu: "0.2"
            memory: 200Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: pvc-nfs-pv1
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: pvc-nfs-pv1
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 500Mi
How do I do it?
volumeClaimTemplates are used for dynamic volume provisioning. So you're defining one volume claim template, which will be used to create a PersistentVolumeClaim for each pod.
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
So for your use case you would need to create a StorageClass with an NFS provisioner. NFS Subdir External Provisioner is an automatic provisioner that uses your existing and already configured NFS server to support dynamic provisioning of Kubernetes PersistentVolumes via PersistentVolumeClaims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.
Here's an example of how to define the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
  onDelete: delete
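With such a StorageClass in place, the StatefulSet's volumeClaimTemplates would reference it by storageClassName instead of the volume.beta.kubernetes.io/storage-class annotation, roughly like this (a sketch based on the manifests above, not a tested configuration):
  volumeClaimTemplates:
  - metadata:
      name: pvc-nfs-pv1
    spec:
      storageClassName: managed-nfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 500Mi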
OK, I have a solution. It works simply by selecting the volume using a matchLabels selector.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-data-volume
  labels:
    app: moderetic
    type: mongodb
    role: data
spec:
  storageClassName: hcloud-volumes
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    volumeHandle: "11099996"
    driver: csi.hetzner.cloud
    fsType: ext4
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: system-mongodb
  labels:
    app: moderetic
    type: mongodb
spec:
  members: 1
  type: ReplicaSet
  version: "4.2.6"
  logLevel: INFO
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: moderetic
    db: moderetic
    passwordSecretRef:
      name: mongodb-secret
    roles:
    - name: clusterAdmin
      db: moderetic
    - name: userAdminAnyDatabase
      db: moderetic
    scramCredentialsSecretName: moderetic-scram-secret
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  persistent: true
  statefulSet:
    spec:
      template:
        spec:
          containers:
          - name: mongod
            resources:
              requests:
                cpu: 1
                memory: 1Gi
              limits:
                memory: 8Gi
          - name: mongodb-agent
            resources:
              requests:
                memory: 50Mi
              limits:
                cpu: 500m
                memory: 256Mi
      volumeClaimTemplates:
      - metadata:
          name: data-volume
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: hcloud-volumes
          resources:
            requests:
              storage: 10Gi
          selector:
            matchLabels:
              app: moderetic
              type: mongodb
              role: data
      - metadata:
          name: logs-volume
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: hcloud-volumes
          resources:
            requests:
              storage: 10Gi
          selector:
            matchLabels:
              app: moderetic
              type: mongodb
              role: logs
Your question is how the mongo StatefulSet is going to use the PVC you have created? By default it won't. It will create a number of new PVCs automatically via the volumeClaimTemplates (depending on the number of replicas), and they will be named like so: pvc-nfs-pv1-mongod-0, pvc-nfs-pv1-mongod-1, etc.
So if you want to use the PVC you created, change its name to match pvc-nfs-pv1-mongod-0,
something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    role: mongo
  name: pvc-nfs-pv1-mongod-0
  namespace: default
spec:
  ...
  volumeName: nfs-pv1
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  ...
However, I don't recommend using this method (the issue: when you have many other replicas, do you have to create all the PVCs and the corresponding PVs manually?). Similar questions are asked here and also here; I recommend using dynamic NFS provisioning instead.
Hope I helped.
I do not use NFS but volumes at hetzner.com, where my dev server is running. But I have exactly the same problem: as it is my dev system, I destroy and rebuild it regularly, and in doing so I want the data on my volumes to survive the deletion of the whole cluster. And when I rebuild it, all the volumes shall be mounted to the right pods.
For my Postgres this works just fine. But using the MongoDB Kubernetes operator I am not able to get this running. The one mongodb pod stays forever in the state "Pending", because the PVC I created and bound manually to the volume is already bound to a volume. Or so it seems to me.
I am thankful for any help,
Tobias
The exact message I can see is:
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims
PVC and PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-system-mongodb-0
  labels:
    app: moderetic
    type: mongodb
spec:
  storageClassName: hcloud-volumes
  volumeName: mongodb-data-volume
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-data-volume
  labels:
    app: moderetic
    type: mongodb
spec:
  storageClassName: hcloud-volumes
  claimRef:
    name: data-volume-system-mongodb-0
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    volumeHandle: "11099996"
    driver: csi.hetzner.cloud
    fsType: ext4
And the mongodb StatefulSet:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: system-mongodb
  labels:
    app: moderetic
    type: mongodb
spec:
  members: 1
  type: ReplicaSet
  version: "4.2.6"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: moderetic
    db: moderetic
    passwordSecretRef:
      name: mongodb-secret
    roles:
    - name: clusterAdmin
      db: moderetic
    - name: userAdminAnyDatabase
      db: moderetic
    scramCredentialsSecretName: moderetic-scram-secret
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  persistent: true
  statefulSet:
    spec:
      template:
        spec:
          containers:
          - name: mongod
            resources:
              requests:
                cpu: 1
                memory: 1Gi
              limits:
                memory: 8Gi
          - name: mongodb-agent
            resources:
              requests:
                memory: 50Mi
              limits:
                cpu: 500m
                memory: 256Mi
      volumeClaimTemplates:
      - metadata:
          name: data-volume
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: hcloud-volumes
          resources:
            requests:
              storage: 10Gi
      - metadata:
          name: logs-volume
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: hcloud-volumes
          resources:
            requests:
              storage: 10Gi