What is the main problem in the YAML file?

When I want to run the following YAML file, I get the following error:
error: error parsing pod2.yaml: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    name: wp
     image: wordpress
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"

You need to fix the indentation, and containers is a list:
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: wp
      image: wordpress
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"

Related

Node-red Kubernetes Deployment

I am trying to deploy a simple Node-RED container on Kubernetes locally to monitor its usage and, at the same time, create a persistent volume storage so that my Node-RED work is saved. However, I can't get it to deploy to Kubernetes. I have created a Deployment.yaml file with the following code:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          limits:
            memory: 512Mi
            cpu: "1"
          requests:
            memory: 256Mi
            cpu: "0.2"
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
I am getting this error in PowerShell:
kubectl apply -f ./Deployment.yaml
persistentvolume/small-pv unchanged
persistentvolumeclaim/nodered-claim unchanged
storageclass.storage.k8s.io/local-storage unchanged
Error from server (BadRequest): error when creating "./Deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.containers[0].cpu", unknown field "spec.template.spec.containers[0].limits", unknown field "spec.template.spec.containers[0].memory", unknown field "spec.template.spec.requests"
I want it to deploy to Kubernetes so that I can monitor the memory usage of the Node-RED container.
The requests & limits sections need to be under a resources heading, as follows:
spec:
  containers:
    - name: nodered
      image: nodered/node-red:latest
      resources:
        limits:
          memory: 512Mi
          cpu: "1"
        requests:
          memory: 256Mi
          cpu: "0.2"
      ports:
        - containerPort: 1880
      volumeMounts:
        - name: nodered-claim
          mountPath: /data/nodered
          # subPath: nodered <-- not needed in your case
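If you're ever unsure where a field belongs, kubectl explain prints the schema for any path, for example:
kubectl explain deployment.spec.template.spec.containers.resources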
Node-RED, because of its base platform (Node.js), is entirely single-threaded. Even if you put this on Kubernetes with a huge number of pods, somewhere down the road the whole app will be blocked at the CPU limit of one of the pods.
Both vertical and horizontal scaling are infeasible.
If your load is heavy, use a proper middleware.
Your deployment had a syntax error. Let's fix it to the right syntax.
It should be like this:
containers:
  - name: nodered
    image: nodered/node-red:latest
    resources:
      requests:
        memory: "256Mi"
        cpu: "0.2"
      limits:
        memory: "512Mi"
        cpu: "1"

Resource Limits to be thrice of requests

In my values.yaml files (of which there are more than 100) I only have resource requests specified. I added logic to my Helm deployment template that sets the limits to three times the requests. I am facing an issue with the units of memory and CPU: in some values.yaml files memory is given in Mi and in others in Gi, and CPU as 1 or 1000m. I tried trimming the unit to perform the multiplication and then adding "m" back. That works when the unit is m, but how can I do it for the other units? I know this is not the best way to do this, hence I am looking for a better approach.
You can use a regex to parse your value, assuming the value contains only a float (with or without a dot) and a text suffix: multiply the float part, then append the suffix afterwards. Example, with 2x multiplication:
values.yaml
limit: 1.6Gi
pod.yaml
{{- $limit_value := .Values.limit | toString | regexFind "[0-9.]+" -}}
{{- $limit_suffix := .Values.limit | toString | regexFind "[^0-9.]+" -}}
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: app
      image: images.my-company.example/app:v4
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: {{ mulf $limit_value 2 }}{{ $limit_suffix }}
    - name: log-aggregator
      image: images.my-company.example/log-aggregator:v6
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Result of helm template:
# Source: regex/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: app
      image: images.my-company.example/app:v4
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: 3.2Gi
    - name: log-aggregator
      image: images.my-company.example/log-aggregator:v6
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Notice the usage of mulf instead of mul; it's required for float multiplication. The toString function fixes a type error if the value is specified without a suffix.
The regexes are simple enough for a proof of concept; you should make them stricter.
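One way to make them stricter (a sketch; the suffix list here covers only the common milli and binary-SI units, extend it as needed) is to validate the whole value with regexMatch and abort the render with fail when it doesn't look like a quantity:
{{- $raw := .Values.limit | toString -}}
{{- if not (regexMatch "^[0-9]+(\\.[0-9]+)?(m|Ki|Mi|Gi|Ti)?$" $raw) -}}
{{- fail (printf "unexpected quantity: %s" $raw) -}}
{{- end -}}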
Also, please don't add images of code to your questions; paste the code directly. See Why should I not upload images of code/data/errors when asking a question?

Argo Workflows pods missing cpu/memory resources

I'm running into a missing resources issue when submitting a Workflow. The Kubernetes namespace my-namespace has a quota enabled, and for whatever reason the pods being created after submitting the workflow are failing with:
pods "hello" is forbidden: failed quota: team: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
I'm submitting the following Workflow:
apiVersion: "argoproj.io/v1alpha1"
kind: "Workflow"
metadata:
  name: "hello"
  namespace: "my-namespace"
spec:
  entrypoint: "main"
  templates:
    - name: "main"
      container:
        image: "docker/whalesay"
        resources:
          requests:
            memory: 0
            cpu: 0
          limits:
            memory: "128Mi"
            cpu: "250m"
Argo is running on Kubernetes 1.19.6 and was deployed with the official Helm chart version 0.16.10. Here are my Helm values:
controller:
  workflowNamespaces:
    - "my-namespace"
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
  pdb:
    enabled: true
  # See https://argoproj.github.io/argo-workflows/workflow-executors/
  # docker container runtime is not present in the TKGI clusters
  containerRuntimeExecutor: "k8sapi"
workflow:
  namespace: "my-namespace"
  serviceAccount:
    create: true
  rbac:
    create: true
server:
  replicas: 2
  secure: false
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
  pdb:
    enabled: true
executer:
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
Any ideas on what I may be missing? Thanks, Weldon
Update 1: I tried another namespace without quotas enabled and got past the missing resources issue. However, I now see: Failed to establish pod watch: timed out waiting for the condition. Here's what the spec looks like for this pod. You can see the wait container is missing resources. This is the container causing the issue reported by this question.
spec:
  containers:
    - command:
        - argoexec
        - wait
      env:
        - name: ARGO_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: ARGO_CONTAINER_RUNTIME_EXECUTOR
          value: k8sapi
      image: argoproj/argoexec:v2.12.5
      imagePullPolicy: IfNotPresent
      name: wait
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /argo/podmetadata
          name: podmetadata
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-v4jlb
          readOnly: true
    - image: docker/whalesay
      imagePullPolicy: Always
      name: main
      resources:
        limits:
          cpu: 250m
          memory: 128Mi
        requests:
          cpu: "0"
          memory: "0"
Try deploying the workflow in another namespace if you can, and verify whether it works.
If you can, try removing the quota for the respective namespace.
Instead of a quota, you can also use a LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range
spec:
  limits:
    - default:
        memory: 512Mi
        cpu: 250m
      defaultRequest:
        cpu: 50m
        memory: 64Mi
      type: Container
With this, any container that has no resource request or limit mentioned will get this default config of 50m CPU and 64Mi memory.
https://kubernetes.io/docs/concepts/policy/limit-range/
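To see which quota is rejecting the pods, and which defaults a LimitRange would inject, you can inspect both in the namespace (my-namespace in this question):
kubectl describe resourcequota -n my-namespace
kubectl describe limitrange -n my-namespace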

did not find expected key error when trying to limit memory

I am trying to run a test container on Kubernetes, but I get an error. Below is my deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  containers:
    requests:
      storage: 2Gi
      cpu: 0.5
      memory: "128M"
    limits:
      cpu: 0.5
      memory: "128M" # This is the line that throws the error
    - type: PersistentVolumeClaim
      max:
        storage: 2Gi
      min:
        storage: 1Gi
.
.
.
When I run kubectl apply -k ., I get this error:
error: accumulating resources: accumulation err='accumulating resources from 'deploy.yaml': yaml: line 19: did not find expected key': got file 'deploy.yaml', but '/home/kus/deploy.yaml' must be a directory to be a root
I tried to read the Kubernetes documentation, but I cannot understand why I'm getting this error.
Edit 1
I changed my deploy.yaml file as @whites11 said; now it's like:
.
.
.
spec:
  limits:
    memory: "128Mi"
    cpu: 0.5
.
.
.
Now I'm getting this error:
resourcequota/storagequota unchanged
service/nginx configured
error: error validating ".": error validating data: ValidationError(Deployment.spec): unknown field "limits" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
The file you shared is not valid YAML.
limits:
  cpu: 0.5
  memory: "128M" # This is the line that throws the error
- type: PersistentVolumeClaim
  max:
    storage: 2Gi
  min:
    storage: 1Gi
The limits field has a mixed type (hash and collection) and is obviously wrong.
It seems like you took the syntax for specifying pod resource limits and mixed it with the one used to limit storage consumption.
In your Deployment spec you might want to set limits simply like this:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - ...
          resources:
            limits:
              cpu: 0.5
              memory: "128M"
And deal with storage limits at the namespace level as described in the documentation.
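For instance, a namespace-scoped storage quota might look like this (a sketch; the object name and the values are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 10Gi
    persistentvolumeclaims: "5"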
You should change your yaml file to this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"

heapster-controller.yaml error -choose one of: [heapster eventer heapster-nanny eventer-nanny]

Trying to deploy heapster-controller to get Heapster + Grafana + InfluxDB working for Kubernetes. I'm getting error messages while trying to deploy using the heapster-controller.yaml file:
heapster-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-v1.1.0-beta1
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
  template:
    metadata:
      labels:
        k8s-app: heapster
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - image: gcr.io/google_containers/heapster:v1.1.0-beta1
          name: heapster
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 200m
            requests:
              cpu: 100m
              memory: 200m
          command:
            - /heapster
            - --source=kubernetes.summary_api:''
            - --sink=influxdb:http://monitoring-influxdb:8086
            - --metric_resolution=60s
        - image: gcr.io/google_containers/heapster:v1.1.0-beta1
          name: eventer
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 200m
            requests:
              cpu: 100m
              memory: 200m
          command:
            - /eventer
            - --source=kubernetes:''
            - --sink=influxdb:http://monitoring-influxdb:8086
        - image: gcr.io/google_containers/addon-resizer:1.0
          name: heapster-nanny
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - /pod_nanny
            - --cpu=100m
            - --extra-cpu=0m
            - --memory=200
            - --extra-memory=200Mi
            - --threshold=5
            - --deployment=heapster-v1.1.0-beta1
            - --container=heapster
            - --poll-period=300000
        - image: gcr.io/google_containers/addon-resizer:1.0
          name: eventer-nanny
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - /pod_nanny
            - --cpu=100m
            - --extra-cpu=0m
            - --memory=200
            - --extra-memory=200Ki
            - --threshold=5
            - --deployment=heapster-v1.1.0-beta1
            - --container=eventer
            - --poll-period=300000
The deployment goes through, but then I get an error:
[root@node236 influxdb]# kubectl get pods -o wide --namespace=kube-system
NAME                                     READY   STATUS              RESTARTS   AGE   NODE
heapster-v1.1.0-beta1-3082378092-t6inb   2/4     RunContainerError   0          1m    node262.local.net
[root@node236 influxdb]#
Displaying the log for the failed container:
[root@node236 influxdb]# kubectl logs --namespace=kube-system heapster-v1.1.0-beta1-3082378092-t6inb
Error from server: a container name must be specified for pod heapster-v1.1.0-beta1-3082378092-t6inb, choose one of: [heapster eventer heapster-nanny eventer-nanny]
[root@node236 influxdb]#
Where am I possibly going wrong?
Any feedback appreciated!!
Alex
The correct syntax is kubectl logs <pod> <container>.
In your example, kubectl logs heapster-v1.1.0-beta1-3082378092-t6inb heapster --namespace=kube-system will show the logs of the "heapster" container within the named pod.
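On recent kubectl versions you can also select the container with the -c flag, or dump all containers in the pod at once:
kubectl logs heapster-v1.1.0-beta1-3082378092-t6inb -c heapster --namespace=kube-system
kubectl logs heapster-v1.1.0-beta1-3082378092-t6inb --all-containers=true --namespace=kube-system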
Thanks a lot for the feedback. I think my problem lies around tls-certs. Need to dig deeper.
Thanks so much yet again!!