In Kubernetes, is it possible to use a Configmap for the value of an annotation? The reason I want to do this is to reuse an IP whitelist across multiple ingresses. Alternatively, is there another way to approach this problem?
Unfortunately, there is no Kubernetes feature that does this. But as iomv wrote, you could try to use Helm.
Helm allows you to use variables in your charts, for example:
metadata:
{{- if .Values.controller.service.annotations }}
  annotations:
{{ toYaml .Values.controller.service.annotations | indent 4 }}
{{- end }}
  labels:
{{- if .Values.controller.service.labels }}
{{ toYaml .Values.controller.service.labels | indent 4 }}
{{- end }}
    app: {{ template "nginx-ingress.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.controller.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "nginx-ingress.controller.fullname" . }}
This snippet is from the nginx-ingress chart.
As you can see, you can fetch that chart and update the values as you need.
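Applied to the original question, a minimal sketch of sharing an IP whitelist across several ingresses could look like this (the commonIngressAnnotations key and the CIDR ranges are made up for illustration; the whitelist-source-range annotation is specific to the NGINX ingress controller):

# values.yaml
commonIngressAnnotations:
  nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"

# in each ingress template that needs the whitelist
metadata:
  annotations:
    {{- toYaml .Values.commonIngressAnnotations | nindent 4 }}

This way the list is defined once in values.yaml and every ingress template simply renders it.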
I have a container image that is loading multiple large files on startup. When restarting the container, all files have to be loaded again.
What I want to do now is to start six instances which each load only one file, selected via an environment variable. Now my question is how to configure this. What I could do is create a new deployment+service for each file, but that seems wrong because 99% of the content is the same; only the environment variable differs. Another option would be to have one pod with multiple containers plus one gateway-like container. But then when the pod restarts, all files are loaded again.
What's the best strategy to do this?
Ideally, you should keep the deployment+service approach and create 5-6 different Secrets or ConfigMaps, as needed, each storing the environment variables your application requires.
Inject these Secrets or ConfigMaps one by one into the different deployments.
Another option would be to have one pod with multiple containers and
one gateway-like container.
That doesn't look like a scalable approach if you are running 5-6 containers plus a gateway container inside a single pod.
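For the deployment+ConfigMap route, a minimal sketch could look like the following (the names loader-file1 and FILE_TO_LOAD, the image, and the file path are made up for illustration; you would create one such pair per file):

# configmap-file1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loader-file1
data:
  FILE_TO_LOAD: /data/file1.bin

# deployment-file1.yaml (pod template excerpt)
    spec:
      containers:
        - name: loader
          image: registry.example.com/loader:1.0
          envFrom:
            - configMapRef:
                name: loader-file1

Each deployment then differs only in the ConfigMap it references; the next answer shows how to template that repetition away.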
Although this does not directly address the question, I am adding an answer on how you could create multiple container instances from one deployment that differ only in environment variables, because this question pops up when googling that approach.
For your question, Harsh Manvar's answer is the correct, Kubernetes-approved way of handling it.
I had the same problem and found a solution which needs to be refined a bit more.
With Helm you can specify a key-value pair array inside your values.yaml, which would look like this:
envVars:
  key1: value1
  key2: value2
Now you need to modify your deployment.yaml to loop over this array and inject the values as environment variables into your containers:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "chart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        {{- range $key, $value := .Values.envVars }}
        - name: {{ $.Chart.Name }}-{{ $key }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
        {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
There are some notes that need to be considered, though:
We are using {{- range $key, $value := .Values.envVars }} inside the containers spec to create multiple containers inside one pod. This may not be what you want due to resource restrictions.
You can reference the key-value pairs with $key and $value as specified in the range command.
As the scope inside the range block is set to .Values.envVars, every call to something outside of that scope needs to be made from the root scope. In Helm the root scope of the values file is reached with the $ sign, e.g. {{ $.Chart.Name }}.
You need to incorporate your key or value into the container name, or you would be creating duplicates, hence the - name: {{ $.Chart.Name }}-{{ $key }} portion.
It should be possible to use the same range command to create multiple deployments based on one file if put at the top. Just keep in mind to fix the scope of other calls.
It is NOT possible to create multiple (different) pods out of one deployment, because a deployment's spec only accepts a single template section. Multiple pods would simply be replicas. (The template inside deployment.spec is of type PodTemplateSpec.)
Anyway, this took me quite some time to crack, I hope it is helpful to someone.
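To illustrate the multiple-deployments variant mentioned above, a rough sketch (reusing the envVars values and assuming a chart.fullname helper as in a default chart; the image values are placeholders) would move the range to the top of the file and emit one Deployment per entry, separated by ---:

{{- range $key, $value := .Values.envVars }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" $ }}-{{ $key }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ include "chart.fullname" $ }}-{{ $key }}
  template:
    metadata:
      labels:
        app: {{ include "chart.fullname" $ }}-{{ $key }}
    spec:
      containers:
        - name: app
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
{{- end }}

Note that every call inside the loop goes through $ (or include ... $) because the range rescopes the dot, exactly as described in the notes above.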
I'm trying to update the Helm ingress for Jira (Atlassian Software).
I have this ingress template:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "atlassian-jira-software.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app: {{ template "atlassian-jira-software.name" . }}
    chart: {{ template "atlassian-jira-software.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
  {{- end }}
{{- end }}
I execute this command:
helm upgrade --dry-run -n atlassian jira .
The output of this command:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "jira-atlassian-jira-software" in namespace "atlassian" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "jira"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "atlassian"
kubectl version --short
The output:
Client Version: v1.19.12
Server Version: v1.19.13-eks-8df270
Please, help me!
Was the ingress originally installed by Helm? Check out its "labels" section:
kubectl get ingress jira-atlassian-jira-software -o json
If you don't find the expected values (as described in the error messages) and you are sure you know what you are doing, you can try adding the labels yourself by editing the ingress:
kubectl edit ingress jira-atlassian-jira-software
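As a non-interactive alternative, the missing metadata from the error message can be added with kubectl label and kubectl annotate (the keys and values below are taken directly from the error output above):

kubectl -n atlassian label ingress jira-atlassian-jira-software \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n atlassian annotate ingress jira-atlassian-jira-software \
  meta.helm.sh/release-name=jira \
  meta.helm.sh/release-namespace=atlassian --overwrite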
If you do this, make sure that you run a diff before you do the helm upgrade again (to ensure that you see what is going to happen in advance and that you don't blow away anything you did not intend to):
helm diff upgrade -n atlassian jira .
I'm trying to install a chart to my cluster but I'm getting an error:
Error: template: go-api/templates/deployment.yaml:18:24: executing "go-api/templates/deployment.yaml"
at <.Values.deployment.container.name>: nil pointer evaluating interface {}.name
However, I executed the same commands for two other charts and it worked fine.
The template file I'm using is this:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  labels: {{- include "chart.labels" . | nindent 4 }}
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels: {{ include "chart.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.deployment.container.name }}
          image: {{ .Values.deployment.container.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.deployment.container.port }}
This can happen if the Helm values you're using to install don't have that particular block:
namespace: default
deployment:
  name: a-name
  replicas: 1
  # but no container:
To avoid this specific error in the template code, it's useful to pick out the parent dictionary into a variable; then if the parent is totally absent, you can decide what to do about it. This technique is a little more useful if there are optional fields or sensible defaults:
{{- $deployment := .Values.deployment | default dict }}
metadata:
  name: {{ $deployment.name | default (include "chart.fullname" .) }}
spec:
{{- if $deployment.replicas }}
  replicas: {{ $deployment.replicas }}
{{- end }}
If you really can't work without the value, Helm has a required function that can print a more specific error message.
{{- $deployment := .Values.deployment | required "deployment configuration is required" }}
(My experience has been that required values are somewhat frustrating as an end user, particularly if you're trying to run someone else's chart, and I would try to avoid this if possible.)
Given what you show, it's also possible you're making the chart too configurable. The container name, for example, is mostly a detail that only appears if you have a multi-container pod (or are using Istio); the container port is a fixed attribute of the image you're running. You can safely fix these values in the Helm template file, and then it's reasonable to provide defaults for things like the replica count or image name (consider setting the repository name, image name, and tag as separate variables).
{{- $deployment := .Values.deployment | default dict }}
{{- $registry := $deployment.registry | default "docker.io" }}
{{- $image := $deployment.image | default "my/image" }}
{{- $tag := $deployment.tag | default "latest" }}
containers:
  - name: app # fixed
    image: {{ printf "%s/%s:%s" $registry $image $tag }}
    {{- with .Values.imagePullPolicy }}
    imagePullPolicy: {{ . }}
    {{- end }}
    ports:
      - name: http
        containerPort: 3000 # fixed
If the value is defined in your values file and you're still getting the error, then the issue could be that you are accessing the value inside range or a similar function that changes the context.
For example, to use a named template mySuffix that references .Values inside a range, we need to pass $ to the template function instead of the usual .:
{{- define "mySuffix" -}}
{{- .Values.suffix }}
{{- end }}
...
{{- range .Values.listOfValues }}
echo {{ template "mySuffix" $ }}
{{- end }}
When I do
helm create my-app
I get default labels like below in generated templates (deployment, service, ingress yaml files):
app.kubernetes.io/name: {{ include "my-app.name" . }}
helm.sh/chart: {{ include "my-app.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
Can I remove all of them and just use my own labels? Will it affect Helm features like rollback, etc.?
Yes, they can all be removed; as the Helm documentation puts it:
Helm itself never requires that a particular label be present.
Probably in ./templates/_helpers.tpl there is a section similar to this
{{/*
Common labels
*/}}
{{- define "{CHART_NAME}.labels" -}}
helm.sh/chart: {{ include "{CHART_NAME}.chart" . }}
{{ include "{CHART_NAME}.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "{CHART_NAME}.selectorLabels" -}}
app.kubernetes.io/name: {{ include "{CHART_NAME}.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
Also, in the labels section of each object that is being labeled there should be something like the following, which references the helper:
labels:
  {{- include "{CHART_NAME}.labels" . | nindent 4 }}
If you just want to remove those labels, you can remove that whole block, or delete the individual lines that set the labels you don't want.
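For example, a trimmed-down helper that keeps only your own labels might look like this (team and environment are made-up label keys; keep the selector labels stable across upgrades, since a Deployment's selector is immutable once created):

{{/*
Common labels
*/}}
{{- define "{CHART_NAME}.labels" -}}
team: my-team
environment: {{ .Values.environment }}
{{ include "{CHART_NAME}.selectorLabels" . }}
{{- end }}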
I'm trying to define n StatefulSets where n is the number of nodes required, set in values.yaml as nodeCount. I get an error that looks to be scope related, but I can't seem to get the scope sorted out. Am I missing something here?
The relevant content in my StatefulSet .yaml file:
{{ range $k, $v := until ( .Values.nodeCount | int) }}
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ $.Release.Name }}
  labels:
    app: {{ $.Release.Name }}
    chart: {{ template "myapp-on-kube.chart" . }} # here's my call to _helpers
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
The relevant content in _helpers.tpl:
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "myapp-on-kube.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
The error I get:
Error: render error in "myapp-on-kube/templates/statefulset.yaml": template: myapp-on-kube/templates/_helpers.tpl:31:25: executing "myapp-on-kube.chart" at <.Chart.Name>: can't evaluate field Chart in type int
Several of the Go templating constructs change the meaning of . to be the thing that's being looped over, and you need to use $ to refer to the initial value. Most of your template correctly refers to e.g. $.Release.Name, but when you invoke the helper template, it's using the current context rather than the root value. Change:
chart: {{ template "myapp-on-kube.chart" $ }}
(Note that the template as you have it will declare several StatefulSets all with the same name, which won't go well. I might create just one StatefulSet with replicas: {{ .Values.nodeCount }}.)
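A minimal sketch of that single-StatefulSet alternative (the serviceName and image are placeholders; with no loop, plain . works again for the helper call):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: {{ template "myapp-on-kube.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: {{ .Release.Name }}
  replicas: {{ .Values.nodeCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:1.0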