Consider that I have built a complex Kubernetes deployment/workload consisting of Deployments, StatefulSets, Services, operators, CRDs with specific configuration, etc. The workload was created by individual commands (kubectl create, helm install, ...).
1) Is there a way to dynamically (not manually) generate a script or a special file that describes the deployment and that could be used to redeploy/reinstall it without running each command one by one again?
2) Is there a domain-specific language (DSL) or something similar through which one can describe a Kubernetes deployment independently of the target cluster, whether GKE, AWS, Azure, or on premises? In other words, a kind of write once, deploy anywhere.
Thanks.
I think Kustomize and Helm are your best bet. You can write the Helm chart as a set of templates and decide which parts to render using Go templating and conditions.
For example, look at the configuration file below, which has a few conditions.
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.appName }}
namespace: {{ .Values.namespace }}
spec:
selector:
matchLabels:
app: {{ .Values.appName }}
replicas: {{ .Values.replicaCount }}
template:
metadata:
annotations:
labels:
app: {{ .Values.appName }}
spec:
containers:
- name: {{ .Values.appName }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.hasSecretVolume }}
volumeMounts:
- name: {{ .Values.appName }}-volume-sec
mountPath: {{ .Values.secretVolumeMountPath }}
{{- end}}
{{- if or .Values.env.configMap .Values.env.secrets }}
envFrom:
{{- if .Values.env.configMap }}
- configMapRef:
name: {{ .Values.appName }}-env-configmap
{{- end }}
{{- if .Values.env.secrets }}
- secretRef:
name: {{ .Values.appName }}-env-secret
{{- end }}
{{- end }}
ports:
- containerPort: {{ .Values.containerPort }}
protocol: TCP
{{- if .Values.railsContainerHealthChecks}}
{{ toYaml .Values.railsContainerHealthChecks | indent 8 }}
{{- end}}
{{- if .Values.hasSecretVolume }}
volumes:
- name: {{ .Values.appName }}-volume-sec
secret:
secretName: {{ .Values.appName }}-volume-sec
{{- end}}
{{- if .Values.imageCredentials}}
imagePullSecrets:
- name: {{.Values.imageCredentials.secretName}}
{{- end}}
For instance, this condition checks whether a Secret volume is configured and mounts it:
{{- if .Values.hasSecretVolume }}
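For reference, a minimal values.yaml that would drive the template above might look like this (all names and values are illustrative assumptions, not part of the original chart):
appName: my-app                  # illustrative
namespace: default
replicaCount: 2
containerPort: 8080
image:
  repository: myregistry/my-app  # illustrative
  tag: "1.0.0"
  pullPolicy: IfNotPresent
hasSecretVolume: false
secretVolumeMountPath: /etc/secrets
env:
  configMap: true    # render envFrom with a configMapRef
  secrets: false     # render envFrom with a secretRef
# railsContainerHealthChecks and imageCredentials are optional and can be added the same way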
In case you are interested in Helm and generic templating, you can refer to this Medium blog: https://medium.com/srendevops/helm-generic-spring-boot-templates-c9d9800ddfee
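Since Kustomize was mentioned as the other option: here is a minimal sketch of a kustomization.yaml that bundles already-exported manifests so the whole stack can be re-applied in one step (the file names and namespace are illustrative):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app          # illustrative target namespace
resources:
  - deployment.yaml
  - statefulset.yaml
  - service.yaml
  - crds.yaml
Applying the whole stack is then a single kubectl apply -k ., and the same directory can be committed to version control and re-applied on any cluster.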
We configured the CSI driver in our cluster for secret management and used the SecretProviderClass template below to automatically assign secrets to the deployment's environment variables. This setup is working fine.
But there are two things I have issues with. Whenever changes are made to the secrets, say adding a new secret to the YAML and to the key vault, the next release fails on the helm upgrade command, stating that the specified secret is not found.
To work around this, I have to uninstall all Helm releases and install them again, which means downtime. How can I handle this scenario without any downtime?
Secondly, is there any recommended way to restart the Pods when the secret template changes?
values.yaml for MyAppA
keyvault:
name: mykv
tenantId: ${tenantId}$
clientid: "#{spid}#"
clientsecret: "#{spsecret}#"
secrets:
- MyAPPA_SECRET1_NAME1
- MyAPPA_SECRET2_NAME2
- MyAPPA_SECRET3_NAME3
deployment.yaml, the env part is as below:
{{- if eq .Values.keyvault.enabled true }}
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
- name: {{ . }}
valueFrom:
secretKeyRef:
name: {{ $.Release.Name }}-kvsecret
key: {{ . }}
{{- end }}
{{- end }}
volumeMounts:
- name: {{ $.Release.Name }}-volume
mountPath: '/mnt/secrets-store'
readOnly: true
volumes:
- name: {{ $.Release.Name }}-volume
csi:
driver: 'secrets-store.csi.k8s.io'
readOnly: true
volumeAttributes:
secretProviderClass: {{ $.Release.Name }}-secretproviderclass
nodePublishSecretRef:
name: {{ $.Release.Name }}-secrets-store-creds
The SecretProviderClass YAML file is as below.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: {{ $.Release.Name }}-secretproviderclass
labels:
app: {{ $.Release.Name }}
chart: "{{ $.Release.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
provider: azure
secretObjects:
- data:
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
- key: {{ . }}
objectName: {{ $.Release.Name | upper }}-{{ . }}
{{- end }}
secretName: {{ $.Release.Name }}-kvsecret
type: opaque
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "false"
userAssignedIdentityID: ""
keyvaultName: {{ .Values.keyvault.name | default "mydev-kv" }}
objects: |
array:
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
- |
objectName: {{ $.Release.Name | upper }}-{{ . }}
objectType: secret
{{- end }}
tenantId: {{ .Values.keyvault.tenantid }}
{{- end }}
{{- end -}}
{{- define "commonobject.secretproviderclass" -}}
{{- template "commonobject.util.merge" (append . "commonobject.secretproviderclass.tpl") -}}
{{- end -}}
The problem is not in the helm upgrade command. I discovered this is a limitation of the CSI driver or SecretProviderClass: when the deployment has already been created, the SecretProviderClass resource is updated but the SecretProviderClassPodStatuses are not, so the secrets are not updated.
Two potential ways to update the secrets:
delete the secret and restart/recreate the pod => this works, but it sounds more like a workaround than an actual solution
set enableSecretRotation to true => this has been implemented in the CSI driver recently and is in an 'alpha' version
https://secrets-store-csi-driver.sigs.k8s.io/topics/secret-auto-rotation.html
Edited:
In the end, I used this command to enable automatic secret rotation in Azure Kubernetes Service:
az aks addon update -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 0.5m
You can use the following command to check if this option is enabled:
az aks addon show -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider
More info here:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation
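On the second part of the question (restarting Pods when the secret template changes), a common pattern is to add a checksum annotation to the pod template so that any change in the rendered SecretProviderClass rolls the Deployment. A minimal sketch, assuming the chart renders the SecretProviderClass from a template file named secretproviderclass.yaml (that file name is an assumption):
  template:
    metadata:
      annotations:
        # any change in the rendered SecretProviderClass changes this hash and triggers a rolling restart
        checksum/secretproviderclass: {{ include (print $.Template.BasePath "/secretproviderclass.yaml") . | sha256sum }}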
We've just had an increase in traffic to our Kubernetes cluster, and I've noticed that of our 6 application pods, 2 of them are seemingly not being used very much. kubectl top pods returns the following.
You can see that of the 6 pods, 4 of them are using more than 50% of the CPU (on 2 vCPU nodes), but two of them aren't really doing much at all.
Our cluster is set up on AWS, using the ALB ingress controller. The load balancer is configured to use least outstanding requests rather than round robin in an attempt to balance things out a bit more, but we're still seeing this imbalance.
Is there any way of determining why this is happening, or whether it actually is a problem? I'm hoping it's normal behaviour rather than an issue, but I'd rather investigate it.
App deployment config
This is the configuration of the main application pods. Nothing fancy really
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "app.fullname" . }}
labels:
{{- include "app.labels" . | nindent 4 }}
app.kubernetes.io/component: web
spec:
replicas: {{ .Values.app.replicaCount }}
selector:
matchLabels:
app: {{ include "app.fullname" . }}-web
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/config_maps/app-env-vars.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 10 }}
{{- end }}
labels:
{{- include "app.selectorLabels" . | nindent 8 }}
app.kubernetes.io/component: web
app: {{ include "app.fullname" . }}-web
spec:
serviceAccountName: {{ .Values.serviceAccount.web }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ include "app.fullname" . }}-web
topologyKey: failure-domain.beta.kubernetes.io/zone
containers:
- name: {{ .Values.image.name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- bundle
args: ["exec", "puma", "-p", "{{ .Values.app.containerPort }}"]
ports:
- name: http
containerPort: {{ .Values.app.containerPort }}
protocol: TCP
readinessProbe:
httpGet:
path: /healthcheck
port: {{ .Values.app.containerPort }}
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
envFrom:
- configMapRef:
name: {{ include "app.fullname" . }}-cm-env-vars
- secretRef:
name: {{ include "app.fullname" . }}-secret-rails-master-key
- secretRef:
name: {{ include "app.fullname" . }}-secret-db-credentials
- secretRef:
name: {{ include "app.fullname" . }}-secret-db-url-app
- secretRef:
name: {{ include "app.fullname" . }}-secret-redis-credentials
That's a known issue with Kubernetes.
In short, Kubernetes doesn't load balance long-lived TCP connections.
This excellent article covers it in detail.
The load distribution your service is showing matches that case exactly.
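If you want to confirm that long-lived connections are the cause, one quick (hedged) check is to compare the number of established connections each pod is holding. This assumes ss is available in the container image and that the app listens on port 3000; substitute your actual containerPort:
# count established connections to the app port in each web pod (illustrative port 3000)
for pod in $(kubectl get pods -l app.kubernetes.io/component=web -o name); do
  echo -n "$pod: "
  kubectl exec "$pod" -- sh -c 'ss -Htn state established "( sport = :3000 )" | wc -l'
done
If the busy pods hold far more established connections than the idle ones, the imbalance is coming from connection reuse rather than from the load balancer's request algorithm.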
I am new to Helm and Kubernetes.
My current requirement is to set up multiple services using a common Helm chart.
Here is the scenario.
I have a common Docker image for all of the services.
For each of the services there is a different command to run. In total there are more than 40 services.
Example
pipenv run python serviceA.py
pipenv run python serviceB.py
pipenv run python serviceC.py
and so on...
The current state of my Helm chart is:
demo-helm
|- Chart.yaml
|- templates
|- deployment.yaml
|- _helpers.tpl
|- values
|- values-serviceA.yaml
|- values-serviceB.yaml
|- values-serviceC.yaml
and so on ...
Now, since I want to use the same Helm chart to deploy multiple services, how should I do it?
I used the following command: helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml, but it only creates a deployment for the values file provided last.
Here is my deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "helm.fullname" . }}
labels:
{{- include "helm.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "helm.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "helm.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: {{- toYaml .Values.command |nindent 12}}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: secrets
secret:
secretName: sample-application
defaultMode: 0400
Update:
Since my requirement has been updated to put all the values for the services in a single file, I am able to do it as follows.
deployment.yaml
{{- range $service, $val := .Values.services }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $service }}
labels:
app: {{ .nameOverride }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app: {{ .nameOverride }}
template:
metadata:
labels:
app: {{ .nameOverride }}
spec:
imagePullSecrets:
- name: aws-ecr
containers:
- name: {{ $service }}
image: "image-latest-v3"
imagePullPolicy: IfNotPresent
command: {{- toYaml .command |nindent 12}}
resources:
{{- toYaml .resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
volumes:
- name: secrets
secret:
secretName: {{ .secrets }}
defaultMode: 0400
{{- end }}
and values.yaml
services:
#Services for region1
serviceA-region1:
nameOverride: "serviceA-region1"
fullnameOverride: "serviceA-region1"
command: ["bash", "-c", "python serviceAregion1.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
#Services for region2
serviceA-region2:
nameOverride: "serviceA-region2"
fullnameOverride: "serviceA-region2"
command: ["bash", "-c", "python serviceAregion2.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
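To sanity-check that the range loop renders one Deployment per entry in services, you can render the chart locally before installing (Helm 3 syntax assumed; the release name and values file name are illustrative):
# render all Deployments generated by the range loop without touching the cluster
helm template demo-helm . -f values.yaml

# or count how many Deployment documents are produced
helm template demo-helm . -f values.yaml | grep -c "kind: Deployment"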
Now I want to know whether the following configuration will work with the changes I am posting below, for both values.yaml
services:
region:
#Services for region1
serviceA-region1:
nameOverride: "serviceA-region1"
fullnameOverride: "serviceA-region1"
command: ["bash", "-c", "python serviceAregion1.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
region2:
#Services for region2
serviceA-region2:
nameOverride: "serviceA-region2"
fullnameOverride: "serviceA-region2"
command: ["bash", "-c", "python serviceAregion2.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
and deployment.yaml
{{- range $region, $val := .Values.services.region }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $region }}-{{ .nameOverride }}
labels:
app: {{ .nameOverride }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app: {{ .nameOverride }}
template:
metadata:
labels:
app: {{ .nameOverride }}
spec:
imagePullSecrets:
- name: aws-ecr
containers:
- name: {{ $region }}-{{ .nameOverride }}
image: "image-latest-v3"
imagePullPolicy: IfNotPresent
command: {{- toYaml .command |nindent 12}}
resources:
{{- toYaml .resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
volumes:
- name: secrets
secret:
secretName: {{ .secrets }}
defaultMode: 0400
{{- end }}
I can recommend trying a helmfile-based approach. I prefer a 3-file approach.
What you'll need:
helmfile-init.yaml: contains YAML instructions that you might need for creating and configuring namespaces, etc.
helmfile-backend.yaml: contains all the releases you need to deploy (service1, service2, ...)
helmfile.yaml: paths to the above-mentioned helmfile-init and helmfile-backend YAML files
a deployment spec file (app_name.json): a specification file that contains all the information regarding the release (release name, namespace, Helm chart version, application version, etc.)
Helmfile has made deploying multiple applications much easier for me; a minimal example is sketched below.
You can also refer to the official docs here or the Blue Books if you have GitHub access on your machine.
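For illustration, a minimal sketch of what helmfile.yaml and helmfile-backend.yaml could look like for this use case (the chart path, release names, and values file locations are assumptions based on the chart layout shown in the question):
# helmfile.yaml
helmfiles:
  - path: helmfile-init.yaml
  - path: helmfile-backend.yaml

# helmfile-backend.yaml
releases:
  - name: demo-helm-service-a
    namespace: default
    chart: ./demo-helm
    values:
      - values/values-serviceA.yaml
  - name: demo-helm-service-b
    namespace: default
    chart: ./demo-helm
    values:
      - values/values-serviceB.yaml
Running helmfile -f helmfile.yaml apply then installs or upgrades every release in one go.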
helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml
When you do it like this, the serviceB values override the serviceA values. You need to run the command separately with a different release name for each service, as follows:
helm install demo-helm-A . -f values/values-serviceA.yaml
helm install demo-helm-B . -f values/values-serviceB.yaml
Is there any other approach, like running everything in a loop, since the only difference in each of my values.yaml files is the command section? So I can include the command in the same file like this:
command:
  - ["bash", "-c", "python serviceA.py"]
  - ["bash", "-c", "python serviceB.py"]
  - ["bash", "-c", "python serviceC.py"]
– whoami 20 hours ago
Yes, you can write a fairly simple bash script which will run everything in a loop:
for i in {A..Z}; do sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml | helm install demo-helm-$i . -f - ; done
Instead of command: ["bash", "-c", "python serviceAregion1.py"] in your values/values-service-template.yaml file, just put command: {{COMMAND}}, as it will be substituted with the exact command on every iteration of the loop.
As to {A..Z}, put there whatever you need in your case. It might be {A..K} if you only have services named from A to K, or {1..40} if you prefer numeric values instead of letters.
The following sed command will substitute the {{COMMAND}} fragment in your original values/values-service-template.yaml with the actual command, e.g. ["bash", "-c", "python serviceA.py"], ["bash", "-c", "python serviceB.py"] and so on:
sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml
Then it will be piped ( | symbol ) to:
helm install demo-helm-$i . -f -
where demo-helm-$i will be expanded to e.g. demo-helm-A. The key element here is the - character, which means: read from standard input instead of from a file, which is what is normally expected after the -f flag.
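For completeness, the values/values-service-template.yaml used by the loop could look roughly like this, matching the flat keys the deployment.yaml above expects (the image details and secret name are illustrative assumptions):
replicaCount: 1
image:
  repository: my-registry/common-image   # illustrative
  tag: "latest"
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: aws-ecr
podAnnotations: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
# {{COMMAND}} is not Helm syntax here; sed replaces it with the real command
# before the file ever reaches helm install
command: {{COMMAND}}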
I am trying to deploy the following service:
{{- if .Values.knativeDeploy }}
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
{{- if .Values.service.name }}
name: {{ .Values.service.name }}
{{- else }}
name: {{ template "fullname" . }}
{{- end }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
template:
spec:
containers:
- image: quay.io/keycloak/keycloak-gatekeeper:9.0.3
name: gatekeeper-sidecar
ports:
- containerPort: {{ .Values.keycloak.proxyPort }}
env:
- name: KEYCLOAK_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ template "keycloakclient" . }}
key: secret
args:
- --resources=uri=/*
- --discovery-url={{ .Values.keycloak.url }}/auth/realms/{{ .Values.keycloak.realm }}
- --client-id={{ template "keycloakclient" . }}
- --client-secret=$(KEYCLOAK_CLIENT_SECRET)
- --listen=0.0.0.0:{{ .Values.keycloak.proxyPort }} # listen on all interfaces
- --enable-logging=true
- --enable-json-logging=true
- --upstream-url=http://127.0.0.1:{{ .Values.service.internalPort }} # To connect with the main container's port
resources:
{{ toYaml .Values.gatekeeper.resources | indent 12 }}
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
{{- range $pkey, $pval := .Values.env }}
- name: {{ $pkey }}
value: {{ quote $pval }}
{{- end }}
envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: {{ .Values.probePath }}
port: {{ .Values.service.internalPort }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
readinessProbe:
httpGet:
path: {{ .Values.probePath }}
port: {{ .Values.service.internalPort }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
resources:
{{ toYaml .Values.resources | indent 12 }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
Which fails with the following error:
Error from server (BadRequest): error when creating "/tmp/helm-template-workdir-290082188/jx/output/namespaces/jx-staging/env/charts/docs/templates/part0-ksvc.yaml": admission webhook "webhook.serving.knative.dev" denied the request: mutation failed: expected exactly one, got both: spec.template.spec.containers'
Now, if I read the specs (https://knative.dev/v0.15-docs/serving/getting-started-knative-app/), I can see this example:
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
name: helloworld-go # The name of the app
namespace: default # The namespace the app will use
spec:
template:
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
env:
- name: TARGET # The environment variable printed out by the sample app
value: "Go Sample v1"
Which has exactly the same structure. Now, my questions are:
How can I validate my YAML without waiting for a deployment? IntelliJ has a Kubernetes plugin, but I can't find CRD schemas for serving.knative.dev/v1 that are machine consumable. (https://knative.dev/docs/serving/spec/knative-api-specification-1.0/)
Is it allowed with Knative to have multiple containers? (That configuration works perfectly with apiVersion: apps/v1, kind: Deployment.)
Multi-container support is an alpha feature in Knative version 0.16.
This feature needs to be enabled by setting multi-container to enabled in the config-features ConfigMap. So edit the ConfigMap using
kubectl edit cm config-features -n knative-serving and enable that feature.
apiVersion: v1
kind: ConfigMap
metadata:
name: config-features
namespace: knative-serving
labels:
serving.knative.dev/release: devel
annotations:
knative.dev/example-checksum: "983ddf13"
data:
_example: |
...
# Indicates whether multi container support is enabled
multi-container: "enabled"
...
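If you prefer a non-interactive way to flip the flag, the same change can be applied with a patch (equivalent to editing the ConfigMap by hand):
kubectl patch configmap config-features -n knative-serving \
  --type merge -p '{"data":{"multi-container":"enabled"}}'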
What version of Knative are you using?
Support for multiple containers was added as an alpha feature in 0.16. If you're not using 0.16 or later or don't have the alpha flag enabled, the request will probably be blocked.
There were a number of edge cases to define for multi-container support in Knative, so the default was to be conservative and only allow one container until the constraints had been explored.
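A quick way to check which Knative Serving version you are running is to read the serving.knative.dev/release label, which is present on the resources in the knative-serving namespace (it is shown on the ConfigMap above), for example:
# prints the installed release, e.g. v0.16.0 (or "devel" for a development build)
kubectl get cm config-features -n knative-serving \
  -o jsonpath='{.metadata.labels.serving\.knative\.dev/release}'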
I am trying to pass a toleration when deploying a chart located in stable. The toleration should be applied in a specific YAML file in the templates directory, NOT in the values.yaml file, as happens by default.
I've applied the change using patch, and I can see that it would work if it were applied to the right service, which is a DaemonSet.
Currently I'm trying "helm install -f tolerations.yaml --name release_here"
This simply creates a one-off entry when running get chart release_here, and it is not in the correct service YAML.
Quoting your requirement
The toleration should be applied to a specific YAML file in the
templates directory
First, in order to make this happen, your particular Helm chart needs to allow such end-user customization.
Here is an example based on the stable/kiam chart:
Definition of kiam/templates/server-daemonset.yaml
{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
labels:
app: {{ template "kiam.name" . }}
chart: {{ template "kiam.chart" . }}
component: "{{ .Values.server.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kiam.fullname" . }}-server
spec:
selector:
matchLabels:
app: {{ template "kiam.name" . }}
component: "{{ .Values.server.name }}"
release: {{ .Release.Name }}
template:
metadata:
{{- if .Values.server.podAnnotations }}
annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
{{- end }}
labels:
app: {{ template "kiam.name" . }}
component: "{{ .Values.server.name }}"
release: {{ .Release.Name }}
{{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
hostNetwork: {{ .Values.server.useHostNetwork }}
{{- if .Values.server.nodeSelector }}
nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
{{- end }}
tolerations: <---- TOLERATIONS !
{{ toYaml .Values.server.tolerations | indent 8 }}
{{- if .Values.server.affinity }}
affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
{{- end }}
volumes:
- name: tls
Override the default values.yaml with your custom values to set the toleration in the Pod spec of the DaemonSet:
server:
enabled: true
tolerations: ## Agent container resources
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: foo.bar.com/role
operator: In
values:
- master
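Note that the override above only sets affinity; to actually set a toleration (which is what the question asks for), add entries under server.tolerations in the same custom-values.yaml, since the chart renders that value straight into the Pod spec. The key, value, and effect below are purely illustrative:
server:
  enabled: true
  tolerations:
    - key: "dedicated"        # illustrative taint key
      operator: "Equal"
      value: "master"
      effect: "NoSchedule"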
Render the resulting manifest file to see how it would look when overriding the default values with the helm install/upgrade command using the --values/--set argument:
helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
Rendered file (output truncated):
---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
labels:
app: kiam
chart: kiam-2.5.1
component: "server"
heritage: Tiller
release: my-release
name: my-release-kiam-server
spec:
selector:
matchLabels:
app: kiam
component: "server"
release: my-release
template:
metadata:
labels:
app: kiam
component: "server"
release: my-release
spec:
serviceAccountName: my-release-kiam-server
hostNetwork: false
tolerations:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: foo.bar.com/role
operator: In
values:
- master
volumes:
...
I hope this will help you to solve your problem.