How does one pass an override file to a specific Service YAML file using Helm?

I am trying to pass a toleration when deploying a chart located in stable. The toleration should be applied in a specific YAML file in the templates directory, NOT in the values.yaml file, as it is by default.
I've applied the change using patch, and I can see that the change I need would work if it were applied to the right resource, which is a DaemonSet.
Currently I'm trying "helm install -f tolerations.yaml --name release_here".
This simply creates a one-off entry when running get chart release_here, and it does not end up in the correct service YAML.

Quoting your requirement:

The toleration should be applied to a specific YAML file in the templates directory

First, in order to make that happen, your particular Helm chart needs to allow such end-user customization.
Here is an example based on the stable/kiam chart.
Definition of kiam/templates/server-daemonset.yaml:
{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: {{ template "kiam.name" . }}
    chart: {{ template "kiam.chart" . }}
    component: "{{ .Values.server.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kiam.fullname" . }}-server
spec:
  selector:
    matchLabels:
      app: {{ template "kiam.name" . }}
      component: "{{ .Values.server.name }}"
      release: {{ .Release.Name }}
  template:
    metadata:
      {{- if .Values.server.podAnnotations }}
      annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
      {{- end }}
      labels:
        app: {{ template "kiam.name" . }}
        component: "{{ .Values.server.name }}"
        release: {{ .Release.Name }}
      {{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
      {{- end }}
    spec:
      serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
      hostNetwork: {{ .Values.server.useHostNetwork }}
      {{- if .Values.server.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
      {{- end }}
      tolerations:              # <---- TOLERATIONS !
{{ toYaml .Values.server.tolerations | indent 8 }}
      {{- if .Values.server.affinity }}
      affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
      {{- end }}
      volumes:
        - name: tls
Override the default values.yaml with your custom values to set the toleration in the Pod spec of the DaemonSet:
server:
  enabled: true
  tolerations: ## Agent container resources
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: foo.bar.com/role
            operator: In
            values:
            - master
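As a side note, actual toleration entries would go under the same server.tolerations key. A hypothetical entry for a tainted node could look like the sketch below (the key and effect are placeholders, and it is not part of the custom values rendered further down):

server:
  tolerations:
    - key: "node-role.kubernetes.io/master"   # placeholder taint key
      operator: "Exists"
      effect: "NoSchedule"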
Render the resulting manifest file to see how it would look when overriding the default values with a helm install/upgrade command using the --values/--set argument:
helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
Rendered file (output truncated):
---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: kiam
    chart: kiam-2.5.1
    component: "server"
    heritage: Tiller
    release: my-release
  name: my-release-kiam-server
spec:
  selector:
    matchLabels:
      app: kiam
      component: "server"
      release: my-release
  template:
    metadata:
      labels:
        app: kiam
        component: "server"
        release: my-release
    spec:
      serviceAccountName: my-release-kiam-server
      hostNetwork: false
      tolerations:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: foo.bar.com/role
                operator: In
                values:
                - master
      volumes:
...
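Side note: the helm template command above uses Helm 2 syntax (--name, -x). On Helm 3 (which has no Tiller) the rough equivalent should be the --show-only flag; adjust to your Helm version:
helm template my-release . --show-only templates/server-daemonset.yaml --values custom-values.yaml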
I hope this helps you solve your problem.

Related

HorizontalPodAutoscaler scales up pods but then terminates them instantly

So I have a HorizontalPodAutoscaler set up for my backend (an fpm-server and an Nginx server for a Laravel application).
The problem is that when the HPA is under load, it scales up the pods but it terminates them instantly, not even letting them get into the Running state.
The metrics are good and the scale-up behavior is as expected; the only problem is that the pods get terminated right after scaling.
What could be the problem?
Edit: The same HPA is used on the frontend and it's working as expected, the problem seems to be only on the backend.
Edit 2: I have Cluster Autoscaler enabled and it does its job: nodes get added when they are needed and then cleaned up, so it's not an issue of available resources.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fpm-server
  labels:
    tier: backend
    layer: fpm
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      layer: fpm
  template:
    metadata:
      labels:
        tier: backend
        layer: fpm
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: fpm
          image: "{{ .Values.fpm.image.repository }}:{{ .Values.fpm.image.tag }}"
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          env:
            {{- range $name, $value := .Values.env }}
            - name: {{ $name }}
              value: "{{ $value }}"
            {{- end }}
          envFrom:
            - secretRef:
                name: backend-secrets
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: fpm-server-hpa
  labels:
    tier: backend
    layer: fpm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fpm-server
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
It seems that the problem was with the replicas: {{ .Values.replicaCount }} definition. If you are using an HPA, replicas should not be set in the Deployment spec, because any re-apply of the manifest (for example a helm upgrade) resets the replica count back to that value and terminates the pods the HPA had added. I removed this line and the HPA started scaling.
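For reference, the scaffold generated by helm create avoids this by rendering replicas only when autoscaling is disabled. Assuming you add an autoscaling.enabled flag to your values (it is not in the values shown above), a minimal sketch for the Deployment template would be:

  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}

With that in place the HPA remains the only thing that changes the replica count after the initial install.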

In a Helm Chart, how can I block the upgrade until a deployment in it is complete?

I am using Rancher Pipelines and catalogs to run Helm Charts like this:
.rancher-pipeline.yml
stages:
  - name: Deploy app-web
    steps:
      - applyAppConfig:
          catalogTemplate: cattle-global-data:chart-web-server
          version: 0.4.0
          name: ${CICD_GIT_REPO_NAME}-${CICD_GIT_BRANCH}-serv
          targetNamespace: ${CICD_GIT_REPO_NAME}
          answers:
            pipeline.sequence: ${CICD_EXECUTION_SEQUENCE}
            ...
  - name: Another chart needs to wait until the previous one success
    ...
And the chart-web-server app has a deployment:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-dpy
  labels:
    {{- include "labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
      {{- include "labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
        {{- include "labels" . | nindent 8 }}
    spec:
      containers:
        - name: "web-server-{{ include "numericSafe" .Values.git.commitID }}"
          image: "{{ .Values.harbor.host }}/{{ .Values.web.imageTag }}"
          imagePullPolicy: Always
          env:
            ...
          ports:
            - containerPort: {{ .Values.web.port }}
              protocol: TCP
          resources:
            {{- .Values.resources | toYaml | nindent 12 }}
Now, I need the pipeline to be blocked until the deployment is upgraded since I want to do some server testing in the following stages.
My idea is to use a Helm hook: if I can create a Job that hooks into post-install and post-upgrade and waits for the deployment to complete, I can block the whole pipeline until the deployment (a web server) is updated.
Does this idea work? If so, how can I write such a blocking and detecting Job?
This does not appear to be supported, from what I can find of their code. It would appear they just shell out to helm upgrade; it would need to use the --wait mode.
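That said, the hook idea from the question can still be sketched. Helm waits for post-install/post-upgrade hook Jobs to finish before it marks the release as deployed, so a Job like the one below should block the pipeline step until the Deployment has rolled out. The image, service account name and timeout are assumptions (the service account needs RBAC permission to read deployments):

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-wait-for-rollout"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: {{ .Release.Name }}-hook-sa   # hypothetical SA with get/watch on deployments
      containers:
        - name: wait-for-rollout
          image: bitnami/kubectl:latest                 # any image that ships kubectl
          command:
            - kubectl
            - rollout
            - status
            - deployment/{{ .Release.Name }}-dpy        # the Deployment name used in the chart above
            - --timeout=300s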

Kubernetes write once, deploy anywhere

Consider that I have built a complex Kubernetes deployment/workload consisting of deployments, stateful sets, services, operators, CRDs with specific configuration etc… The workload/deployment was created by individual commands (kubectl create, helm install…)…
1-) Is there a way to dynamically (not manually) generate a script or a special file that describes the deployment and that could be used to redeploy/reinstall my deployment without going through each command one by one again?
2-) Is there a domain-specific language (DSL) or something similar through which one can describe a Kubernetes deployment independently from the final target Kubernetes cluster, whether GKE, AWS, Azure, or on premises… kind of write once, deploy anywhere?
Thanks.
I think Kustomize and Helm are your best bet. You can write the Helm chart as a template and decide what gets rendered with Go templating and conditions.
For example, look at the configuration file below, which has a few conditions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      annotations:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.hasSecretVolume }}
          volumeMounts:
            - name: {{ .Values.appName }}-volume-sec
              mountPath: {{ .Values.secretVolumeMountPath }}
          {{- end}}
          {{- if or .Values.env.configMap .Values.env.secrets }}
          envFrom:
            {{- if .Values.env.configMap }}
            - configMapRef:
                name: {{ .Values.appName }}-env-configmap
            {{- end }}
            {{- if .Values.env.secrets }}
            - secretRef:
                name: {{ .Values.appName }}-env-secret
            {{- end }}
          {{- end }}
          ports:
            - containerPort: {{ .Values.containerPort }}
              protocol: TCP
          {{- if .Values.railsContainerHealthChecks}}
{{ toYaml .Values.railsContainerHealthChecks | indent 8 }}
          {{- end}}
      {{- if .Values.hasSecretVolume }}
      volumes:
        - name: {{ .Values.appName }}-volume-sec
          secret:
            secretName: {{ .Values.appName }}-volume-sec
      {{- end}}
      {{- if .Values.imageCredentials}}
      imagePullSecrets:
        - name: {{ .Values.imageCredentials.secretName }}
      {{- end}}
For instance, this condition checks for a Secret volume and mounts it:
{{- if .Values.hasSecretVolume }}
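For completeness, a values.yaml driving this template could look roughly like the sketch below; every key is simply mirrored from the template above, with illustrative values:

appName: my-app
namespace: default
replicaCount: 2
image:
  repository: registry.example.com/my-app   # illustrative registry/repo
  tag: "1.0.0"
  pullPolicy: IfNotPresent
containerPort: 8080
hasSecretVolume: false            # flips the volume/volumeMount blocks on
secretVolumeMountPath: /etc/secrets
env:
  configMap: true                 # renders the -env-configmap envFrom entry
  secrets: false
railsContainerHealthChecks: {}
# imageCredentials:
#   secretName: my-pull-secret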
In case you are interested in helm and generic templating, you can refer to this medium blog: https://medium.com/srendevops/helm-generic-spring-boot-templates-c9d9800ddfee
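On the Kustomize side (mentioned above but not shown), the usual pattern for "write once, deploy anywhere" is a cluster-agnostic base plus one overlay per target. A minimal sketch with illustrative file names:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml

# overlays/gke/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml      # GKE-specific patch, e.g. a different replica count

Each target (GKE, AWS, on-prem, ...) gets its own overlay and is deployed with kubectl apply -k overlays/<target>, while the base stays untouched.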

Using single helm chart for deployment of multiple services

I am new to helm and kubernetes.
My current requirement is to set up multiple services using a common Helm chart.
Here is the scenario.
I have a common Docker image for all of the services.
For each of the services there is a different command to run. In total there are more than 40 services.
Example
pipenv run python serviceA.py
pipenv run python serviceB.py
pipenv run python serviceC.py
and so on...
The current state of the Helm chart I have is:
demo-helm
|- Chart.yaml
|- templates
|  |- deployment.yaml
|  |- _helpers.tpl
|- values
|  |- values-serviceA.yaml
|  |- values-serviceB.yaml
|  |- values-serviceC.yaml
and so on ...
Now, since I want to use the same Helm chart to deploy multiple services, how should I do it?
I used the following command: helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml, but it only does a deployment for the values file provided at the end.
Here is my deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm.fullname" . }}
  labels:
    {{- include "helm.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "helm.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "helm.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: {{- toYaml .Values.command | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: secrets
              mountPath: "/usr/src/app/config.ini"
              subPath: config.ini
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        - name: secrets
          secret:
            secretName: sample-application
            defaultMode: 0400
Update.
Since my requirement has been updated to have all the values for the services in a single file, I am able to do it as follows.
deployment.yaml
{{- range $service, $val := .Values.services }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $service }}
  labels:
    app: {{ .nameOverride }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    matchLabels:
      app: {{ .nameOverride }}
  template:
    metadata:
      labels:
        app: {{ .nameOverride }}
    spec:
      imagePullSecrets:
        - name: aws-ecr
      containers:
        - name: {{ $service }}
          image: "image-latest-v3"
          imagePullPolicy: IfNotPresent
          command: {{- toYaml .command | nindent 12 }}
          resources:
            {{- toYaml .resources | nindent 12 }}
          volumeMounts:
            - name: secrets
              mountPath: "/usr/src/app/config.ini"
              subPath: config.ini
      volumes:
        - name: secrets
          secret:
            secretName: {{ .secrets }}
            defaultMode: 0400
{{- end }}
and values.yaml
services:
  #Services for region1
  serviceA-region1:
    nameOverride: "serviceA-region1"
    fullnameOverride: "serviceA-region1"
    command: ["bash", "-c", "python serviceAregion1.py"]
    secrets: vader-search-region2
    resources: {}
    replicaCount: 5
  #Services for region2
  serviceA-region2:
    nameOverride: "serviceA-region2"
    fullnameOverride: "serviceA-region2"
    command: ["bash", "-c", "python serviceAregion2.py"]
    secrets: vader-search-region2
    resources: {}
    replicaCount: 5
Now I want to know whether the following configuration will work with the changes I am posting below, for both values.yaml
services:
  region:
    #Services for region1
    serviceA-region1:
      nameOverride: "serviceA-region1"
      fullnameOverride: "serviceA-region1"
      command: ["bash", "-c", "python serviceAregion1.py"]
      secrets: vader-search-region2
      resources: {}
      replicaCount: 5
  region2:
    #Services for region2
    serviceA-region2:
      nameOverride: "serviceA-region2"
      fullnameOverride: "serviceA-region2"
      command: ["bash", "-c", "python serviceAregion2.py"]
      secrets: vader-search-region2
      resources: {}
      replicaCount: 5
and deployment.yaml
{{- range $region, $val := .Values.services.region }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $region }}-{{ .nameOverride }}
  labels:
    app: {{ .nameOverride }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    matchLabels:
      app: {{ .nameOverride }}
  template:
    metadata:
      labels:
        app: {{ .nameOverride }}
    spec:
      imagePullSecrets:
        - name: aws-ecr
      containers:
        - name: {{ $region }}-{{ .nameOverride }}
          image: "image-latest-v3"
          imagePullPolicy: IfNotPresent
          command: {{- toYaml .command | nindent 12 }}
          resources:
            {{- toYaml .resources | nindent 12 }}
          volumeMounts:
            - name: secrets
              mountPath: "/usr/src/app/config.ini"
              subPath: config.ini
      volumes:
        - name: secrets
          secret:
            secretName: {{ .secrets }}
            defaultMode: 0400
{{- end }}
I can recommend you try a helmfile-based approach. I prefer a 3-file approach.
What you'll need :
helmfile-init.yaml: contains YAML instructions that you might need to use for creating and configuring namespaces etc.
helmfile-backend.yaml: contains all the releases you need to deploy (service1, service2 ...)
helmfile.yaml: paths to the above-mentioned (helmfile-init, helmfile-backend YAML files)
a deployment spec file (app_name.json): a specification file that contains all the information regarding the release (release-name, namespace, helm chart version, application-version, etc.)
Helmfile has made deploying multiple applications a breeze for me. I will edit this answer with a couple of examples in a few minutes.
Meanwhile, you can refer to the official docs here or the Blue Books if you have Github access on your machine.
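For illustration, a minimal helmfile-backend.yaml for the chart layout in the question might look roughly like this (release names, namespace and paths are assumptions):

releases:
  - name: demo-helm-serviceA
    namespace: demo
    chart: ./demo-helm              # the common chart
    values:
      - values/values-serviceA.yaml
  - name: demo-helm-serviceB
    namespace: demo
    chart: ./demo-helm
    values:
      - values/values-serviceB.yaml

A single helmfile apply (or helmfile sync) then installs or upgrades all of the listed releases in one go.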
helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml
When you do it like this, the serviceB values will override the serviceA values. You need to run the command separately, with a different release name, as follows:
helm install demo-helm-A . -f values/values-serviceA.yaml
helm install demo-helm-B . -f values/values-serviceB.yaml
Is there any other approach, like running everything in a loop, since the only difference in each of my values.yaml files is the command section? So I can include the commands in the same file like this:
command:
  - ["bash", "-c", "python serviceA.py"]
  - ["bash", "-c", "python serviceB.py"]
  - ["bash", "-c", "python serviceC.py"]
– whoami
Yes, you can write a fairly simple bash script which will run everything in a loop:
for i in {A..Z}; do sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml | helm install demo-helm-$i . -f - ; done
Instead of command: ["bash", "-c", "python serviceAregion1.py"] in your values/values-service-template.yaml file, just put command: {{COMMAND}}, as it will be substituted with the actual command on every iteration of the loop.
As for {A..Z}, put there whatever you need in your case. It might be {A..K} if you only have services named from A to K, or {1..40} if you prefer numeric values instead of letters.
The following sed command will substitute {{COMMAND}} fragment in your original values/values-service-template.yaml with the actual command e.g. ["bash", "-c", "python serviceA.py"], ["bash", "-c", "python serviceB.py"] and so on.
sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml
Then it will be piped ( | symbol ) to:
helm install demo-helm-$i . -f -
where demo-helm-$i will be expanded to e.g. demo-helm-A, but the key element here is the - character, which means: read from standard input instead of from a file, which is what is normally expected after the -f flag.
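For reference, the values/values-service-template.yaml used by the loop can be as small as the sketch below; only the {{COMMAND}} placeholder matters, the remaining keys just mirror whatever your chart already expects:

# values/values-service-template.yaml (sketch)
nameOverride: "service"
replicaCount: 1
resources: {}
command: {{COMMAND}}    # replaced by sed with e.g. ["bash", "-c", "python serviceA.py"]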

Does helm support Endpoints object type?

I've created the following two objects:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
  ports:
    - port: {{ .Values.postgres.port }}
  selector: {}
for a service and its endpoint:
kind: Endpoints
apiVersion: v1
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
subsets:
  - addresses:
      - ip: "{{ .Values.external.ip }}"
    ports:
      - name: "db"
        port: {{ .Values.external.port }}
When I use Helm, even in dry-run mode, I can see the Service object but can't see the Endpoints object.
Why? Doesn't Helm support all k8s objects?
Helm is just a "templating" tool, so technically it supports everything that your underlying k8s cluster supports.
In your case, please check that both files are in the templates directory.
Actually, it does work. The problem was that the Service and the Endpoints MUST have the same name (which I knew) and MUST have exactly the same port names.
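To make that constraint concrete, here is a plain (non-templated) sketch of a matching pair, with placeholder name, IP and port; note the identical metadata.name and the identical port name:

apiVersion: v1
kind: Service
metadata:
  name: external-db            # must equal the Endpoints name
spec:
  ports:
    - name: db                 # port name...
      port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db            # same name as the Service
subsets:
  - addresses:
      - ip: 10.0.0.5
    ports:
      - name: db               # ...must match exactly
        port: 5432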