Setting resource limits for multiple services based on an environment [duplicate] - kubernetes

I want to select different resource limits/requests depending on the environment (which is given as input).
This is my values.yaml file inside my chart:
resources:
  dev:
    limits:
      cpu: 100m
      memory: 100Mi
    requests:
      cpu: 20m
      memory: 10Mi
  prod:
    limits:
      cpu: 1000m
      memory: 1000Mi
    requests:
      cpu: 200m
      memory: 100Mi
I deploy the chart using this command:
helm upgrade --install --values=global_values.yaml
and inside global_values.yaml:
global:
  environmentSuffix: prod
What I want is to select the right resources based on environmentSuffix (dev through prod; four environments in total).
Something like this (which of course does not work):
resources:
  limits:
    cpu: {{ .Values.resources[.Values.global.environmentSuffix].limits.cpu }}
    memory: {{ .Values.resources[.Values.global.environmentSuffix].limits.memory }}
  requests:
    cpu: {{ .Values.resources[.Values.global.environmentSuffix].requests.cpu }}
    memory: {{ .Values.resources[.Values.global.environmentSuffix].requests.memory }}
How can I achieve this?

You can use the index function from Go text/template to store the environment resources in a variable and then access its values.
{{ $envResources := index .Values.resources .Values.global.environmentSuffix }}
resources:
  limits:
    cpu: {{ $envResources.limits.cpu }}
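If you want to render the whole block rather than addressing each field, a shorter variant (a sketch, assuming the values layout shown in the question) hands the selected map to toYaml:

{{- $envResources := index .Values.resources .Values.global.environmentSuffix }}
resources:
  {{- toYaml $envResources | nindent 2 }}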

Related

CPU and Memory Stress in Kubernetes

I created a simple pod with resource limits. The expectation is that the pod gets evicted when it uses more memory than the specified limit. To test this, how do I artificially fill the pod's memory? I can stress the CPU with dd if=/dev/zero of=/dev/null, but not memory. Can someone help me with this, please? I tried the stress utility, but no luck.
apiVersion: v1
kind: Pod
metadata:
  name: nginx # Name of our pod
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.7.1 # Image version
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi
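One possible trick (a sketch, not from the thread): tail /dev/zero makes tail buffer one endless "line" in memory, so its memory usage grows until the container hits its limit. Note that exceeding the memory limit gets the container OOM-killed rather than the pod evicted:

# memory usage grows until the 256Mi limit is hit
kubectl exec nginx -- tail /dev/zero

Once usage crosses the limit, kubectl describe pod nginx should report the container's last state as OOMKilled.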

Resource Limits to be thrice of requests

In my values.yaml files (there are more than 100 of them) I only have resource requests specified. I added logic to my Helm deployment template that sets the limits to three times the requests. I am facing an issue with the units for memory and CPU: in some values.yaml files memory is given in Mi and in others in Gi, and CPU as 1 or as 1000m. I tried trimming the unit off to perform the multiplication and then appending "m" back. That works when the unit is m, but how can I handle the other units? I know this is not the best way to do this, hence I am looking for a better approach.
You can use a regex to parse your value, assuming the value contains only a float (with or without a dot) and a text suffix: multiply the float part, then append the suffix afterwards. Example, with 2x multiplication:
values.yaml
limit: 1.6Gi
pod.yaml
{{- $limit_value := .Values.limit | toString | regexFind "[0-9.]+" -}}
{{- $limit_suffix := .Values.limit | toString | regexFind "[^0-9.]+" -}}
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: {{ mulf $limit_value 2 }}{{ $limit_suffix }}
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Result of helm template
# Source: regex/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: 3.2Gi
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Notice the usage of mulf instead of mul; it's required for float multiplication. The toString function fixes a type error when the value is specified without a suffix.
The regexes are simple enough for a proof of concept; you should make them stricter.
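Applied to the original question (limits as three times the requests), a hedged sketch along the same lines, assuming the requests live under .Values.resources.requests:

{{- $mem := .Values.resources.requests.memory | toString }}
{{- $cpu := .Values.resources.requests.cpu | toString }}
resources:
  requests:
    memory: {{ $mem }}
    cpu: {{ $cpu }}
  limits:
    memory: {{ mulf (regexFind "[0-9.]+" $mem) 3 }}{{ regexFind "[^0-9.]+" $mem }}
    cpu: {{ mulf (regexFind "[0-9.]+" $cpu) 3 }}{{ regexFind "[^0-9.]+" $cpu }}

A bare CPU value such as 1 simply gets an empty suffix, so 1 becomes 3 and 1000m becomes 3000m.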
Also, please don't attach images of code to your questions; paste the code directly. See "Why should I not upload images of code/data/errors when asking a question?"

Argo Workflows pods missing cpu/memory resources

I'm running into a missing resources issue when submitting a Workflow. The Kubernetes namespace my-namespace has a quota enabled, and for whatever reason the pods being created after submitting the workflow are failing with:
pods "hello" is forbidden: failed quota: team: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
I'm submitting the following Workflow:
apiVersion: "argoproj.io/v1alpha1"
kind: "Workflow"
metadata:
name: "hello"
namespace: "my-namespace"
spec:
entrypoint: "main"
templates:
- name: "main"
container:
image: "docker/whalesay"
resources:
requests:
memory: 0
cpu: 0
limits:
memory: "128Mi"
cpu: "250m"
Argo is running on Kubernetes 1.19.6 and was deployed with the official Helm chart version 0.16.10. Here are my Helm values:
controller:
  workflowNamespaces:
  - "my-namespace"
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
  pdb:
    enabled: true
  # See https://argoproj.github.io/argo-workflows/workflow-executors/
  # docker container runtime is not present in the TKGI clusters
  containerRuntimeExecutor: "k8sapi"
workflow:
  namespace: "my-namespace"
  serviceAccount:
    create: true
  rbac:
    create: true
server:
  replicas: 2
  secure: false
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
  pdb:
    enabled: true
executer:
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
Any ideas on what I may be missing? Thanks, Weldon
Update 1: I tried another namespace without quotas enabled and got past the missing resources issue. However, I now see: Failed to establish pod watch: timed out waiting for the condition. Here's what the spec looks like for this pod. You can see the wait container is missing resources. This is the container causing the issue reported in this question.
spec:
  containers:
  - command:
    - argoexec
    - wait
    env:
    - name: ARGO_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ARGO_CONTAINER_RUNTIME_EXECUTOR
      value: k8sapi
    image: argoproj/argoexec:v2.12.5
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /argo/podmetadata
      name: podmetadata
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-v4jlb
      readOnly: true
  - image: docker/whalesay
    imagePullPolicy: Always
    name: main
    resources:
      limits:
        cpu: 250m
        memory: 128Mi
      requests:
        cpu: "0"
        memory: "0"
Try deploying the workflow in another namespace if you can, and verify whether it works.
If you can, try removing the quota for the respective namespace.
Instead of a quota, you can also use a LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 250m
    defaultRequest:
      cpu: 50m
      memory: 64Mi
    type: Container
This way, any container that has no resource requests or limits specified gets a default request of 50m CPU and 64Mi memory, and a default limit of 250m CPU and 512Mi memory.
https://kubernetes.io/docs/concepts/policy/limit-range/
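If changing the namespace policy isn't an option, Argo can also patch the pods it generates. A hedged sketch using the Workflow's podSpecPatch field (a strategic merge patch supported by recent Argo versions) to give the injected wait container explicit resources:

spec:
  entrypoint: main
  # merged into every pod this workflow creates
  podSpecPatch: |
    containers:
      - name: wait
        resources:
          requests:
            memory: 64Mi
            cpu: 50m
          limits:
            memory: 128Mi
            cpu: 100m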

What is the main problem in the YAML file?

When I want to run the following YAML file, I get the following error:
error: error parsing pod2.yaml: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
You need to fix the indentation, and containers is also a list:
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

helm: how to remove newline after toYaml function

From official documentation:
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is. The curly brace syntax of template declarations can be modified with special characters to tell the template engine to chomp whitespace. {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
But I've tried all the variations with no success. Does anyone have a solution for placing YAML inside YAML? I don't want to use range.
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: test
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
When I use this code without -}}, it adds a newline:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi

  volumes:
  - name: test
    emptyDir: {}
but when I use -}}, it concatenates with the next section:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi
      volumes: <- should be at indent 2
      - name: test
        emptyDir: {}
values.yaml is
pod:
  resources:
    requests:
      cpu: 20m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
This worked for me:
{{ toYaml .Values.pod.resources | trim | indent 6 }}
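An equivalent and arguably more idiomatic spelling uses nindent (available in Helm's Sprig function set), which prepends its own newline and indents every line, letting the action chomp the whitespace before it:

    resources:
      {{- toYaml .Values.pod.resources | nindent 6 }}

trim | indent and {{- ... | nindent }} produce the same output here; nindent just avoids juggling the surrounding newlines by hand.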
The below variant is correct:
{{ toYaml .Values.pod.resources | indent 6 }}
The added newline doesn't cause any issue here.
I've tried your pod.yaml and got the following error:
$ helm install .
Error: release pilfering-pronghorn failed: Pod "app" is invalid: spec.containers[0].volumeMounts[0].mountPath: Invalid value: "test": must be an absolute path
which means that the mountPath of volumeMounts should be something like /mnt.
So, the following pod.yaml works pretty well and creates a pod with the exact resources we defined in values.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: /mnt
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
{{- toYaml .Values.pod.resources | indent 6 -}}
This removes the newline.
@Nickolay, it is not a valid YAML file, according to helm - at least helm barfs and says:
error converting YAML to JSON: yaml: line 51: did not find expected key
For me, line 51 is the empty space, and whatever follows should not be indented to the same level.