Starting container multiple times with different environment variables - kubernetes

I have a container image that is loading multiple large files on startup. When restarting the container, all files have to be loaded again.
What I want to do now is to start six instances which each load only one file, selected by an environment variable. Now my question is how to configure this. What I could do is create a new deployment+service for each file, but that seems incorrect because 99% of the content is the same; only the environment variable is different. Another option would be to have one pod with multiple containers and one gateway-like container. But then whenever the pod restarts, all files are loaded again.
What's the best strategy to do this?

Ideally, you should keep the deployment+service approach and create 5-6 different Secrets or ConfigMaps as needed, each storing the environment variables your application requires.
Inject these Secrets or ConfigMaps one by one into the different deployments.
Another option would be to have one pod with multiple containers and
one gateway-like container.
That doesn't look like a scalable approach if you are running the 5-6 containers inside a single pod plus one gateway container.
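As a minimal sketch of that approach (the names loader-file-1, the FILE_TO_LOAD variable and the image are placeholders I made up, not anything from the original setup), each deployment references its own ConfigMap while everything else stays identical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: loader-file-1            # one ConfigMap per deployment
data:
  FILE_TO_LOAD: "file-1.bin"     # hypothetical file name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loader-file-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: loader-file-1
  template:
    metadata:
      labels:
        app: loader-file-1
    spec:
      containers:
        - name: loader
          image: my-registry/loader:latest   # same image for all six deployments
          envFrom:
            - configMapRef:
                name: loader-file-1          # only this reference differs per deployment

Repeat (or template) this pair six times, changing only the ConfigMap contents; a Service per deployment then lets a gateway route traffic to each instance.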

Although this does not directly address the question, I am providing an answer on how you could create multiple container instances from one deployment that differ only in environment variables, because this question pops up when googling that approach.
For your question, the answer from Harsh Manvar is the correct, Kubernetes-approved way of handling it.
I had the same problem and found a solution which needs to be refined a bit more.
With Helm you can specify a key-value pair array inside your values.yaml, which would look like this:
envVars:
  key1: value1
  key2: value2
Now you need to modify your deployment.yaml to loop over this array and inject the values as environment variables into your containers:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "chart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        {{- range $key, $value := .Values.envVars }}
        - name: {{ $.Chart.Name }}-{{ $key }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
        {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
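To sanity-check the loop before installing, you can render the chart locally (the release name my-release is just an example):

helm template my-release . --values values.yaml

The output should contain one container per entry in envVars, each with its own envVarName value.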
There are some notes that need to be considered, though:
We are using {{- range $key, $value := .Values.envVars }} inside the containers spec to create multiple containers inside one pod. This may not be wanted due to resource restrictions.
You can reference the key-value pairs with $key and $value as specified in the range command.
As the scope (.) inside the range block is changed to the current element of .Values.envVars, every other call inside the loop needs to be made from the root scope. In Helm the root scope is referenced with the $ sign, e.g. {{ $.Chart.Name }}.
You need to incorporate your key or value into the container name, or you would be creating duplicates; hence the - name: {{ $.Chart.Name }}-{{ $key }} portion.
It should be possible to use the same range command to create multiple deployments from one file if you put it at the top (see the sketch right after these notes). Just keep in mind to fix the scope of the other calls.
It should NOT be possible to create multiple (different) pods out of one deployment, because Kubernetes does not expect multiple template sections within the spec of a deployment; multiple pods would simply be replicas. (The template inside deployment.spec is of type PodTemplate.)
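A rough sketch of that multiple-deployments variant, assuming the same envVars map from values.yaml (the names are illustrative, not from the original chart): wrapping the whole manifest in the range and separating the documents with --- produces one Deployment per entry instead of one container per entry:

{{- range $key, $value := .Values.envVars }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $.Chart.Name }}-{{ $key }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $.Chart.Name }}-{{ $key }}
  template:
    metadata:
      labels:
        app: {{ $.Chart.Name }}-{{ $key }}
    spec:
      containers:
        - name: {{ $.Chart.Name }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
{{- end }}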
Anyway, this took me quite some time to crack, I hope it is helpful to someone.

Related

Kubernetes cluster unable to pull images from DigitalOcean Registry

My DigitalOcean kubernetes cluster is unable to pull images from the DigitalOcean registry. I get the following error message:
Failed to pull image "registry.digitalocean.com/XXXX/php:1.1.39": rpc error: code = Unknown desc = failed to pull and unpack image
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to resolve reference
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
I have added the kubernetes cluster using DigitalOcean Container Registry Integration, which shows as successfully connected both on the registry and in the settings for the kubernetes cluster.
I can confirm the above address `registry.digitalocean.com/XXXX/php:1.1.39` matches the one in the registry. I wonder if I'm misunderstanding how the token / login integration works with the registry, but I'm under the impression that this was a "one click" thing and that the cluster would automatically get the connection to the registry after that.
I have tried logging helm into the registry before pushing, but this did not work (and I wouldn't really expect it to; the cluster should be pulling the image).
It's not completely clear to me how the image pull secrets are supposed to be used.
My helm deployment chart is basically the default for API Platform:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "api-platform.fullname" . }}
  labels:
    {{- include "api-platform.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "api-platform.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "api-platform.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "api-platform.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}-caddy
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.caddy.image.repository }}:{{ .Values.caddy.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.caddy.image.pullPolicy }}
          env:
            - name: SERVER_NAME
              value: :80
            - name: PWA_UPSTREAM
              value: {{ include "api-platform.fullname" . }}-pwa:3000
            - name: MERCURE_PUBLISHER_JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "api-platform.fullname" . }}
                  key: mercure-publisher-jwt-key
            - name: MERCURE_SUBSCRIBER_JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "api-platform.fullname" . }}
                  key: mercure-subscriber-jwt-key
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: admin
              containerPort: 2019
              protocol: TCP
          volumeMounts:
            - mountPath: /var/run/php
              name: php-socket
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: admin
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: admin
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
        - name: {{ .Chart.Name }}-php
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.php.image.repository }}:{{ .Values.php.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.php.image.pullPolicy }}
          env:
            {{ include "api-platform.env" . | nindent 12 }}
          volumeMounts:
            - mountPath: /var/run/php
              name: php-socket
          readinessProbe:
            exec:
              command:
                - docker-healthcheck
            initialDelaySeconds: 120
            periodSeconds: 3
          livenessProbe:
            exec:
              command:
                - docker-healthcheck
            initialDelaySeconds: 120
            periodSeconds: 3
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: php-socket
          emptyDir: {}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
How do I authorize the Kubernetes cluster to pull from the registry? Is this a Helm thing or a Kubernetes-only thing?
Thanks!
The problem is that you do not have an image pull secret for your cluster to use when pulling from the registry.
You will need to add one to give your cluster a way to authorize its requests to the registry.
Using the DigitalOcean Kubernetes Integration for Container Registry
DigitalOcean provides a way to add image pull secrets to a Kubernetes cluster in your account. You can link the registry to the cluster in the settings of the registry: under "DigitalOcean Kubernetes Integration", select edit, then select the cluster you want to link the registry to.
This action adds an image pull secret to all namespaces within the cluster, and it will be used by default (unless you specify otherwise).
The issue was that API Platform automatically has a default value for imagePullSecrets in the helm chart, which is
imagePullSecrets: []
in values.yaml
So this seems to prevent Kubernetes from picking up the image pull secret in the way that I expected. The solution was to add the name of the image pull secret directly to the helm deployment command, like this:
--set "imagePullSecrets[0].name=registry-secret-name-goes-here"
You can view the name of your secret using kubectl get secrets like this:
kubectl get secrets
And the output should look something like this:
NAME                             TYPE                                  DATA   AGE
default-token-lz2ck              kubernetes.io/service-account-token   3      38d
registry-secret-name-goes-here   kubernetes.io/dockerconfigjson        1      2d16h
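If you ever need to create such a pull secret by hand instead of relying on the DigitalOcean integration, the standard kubectl command looks like this (the secret name and token are placeholders; DigitalOcean generally accepts an API token as both the username and the password, but double-check the current registry docs):

kubectl create secret docker-registry registry-secret-name-goes-here \
  --docker-server=registry.digitalocean.com \
  --docker-username=<your-do-api-token> \
  --docker-password=<your-do-api-token>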

nil pointer evaluating interface when installing a helm chart

I'm trying to install a chart to my cluster but I'm getting an error:
Error: template: go-api/templates/deployment.yaml:18:24: executing "go-api/templates/deployment.yaml"
at <.Values.deployment.container.name>: nil pointer evaluating interface {}.name
However I executed the same commands for another 2 charts and it worked fine.
The template file I'm using is this:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  labels: {{- include "chart.labels" . | nindent 4 }}
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels: {{ include "chart.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.deployment.container.name }}
          image: {{ .Values.deployment.container.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.deployment.container.port }}
This can happen if the Helm values you're using to install don't have that particular block:
namespace: default
deployment:
  name: a-name
  replicas: 1
  # but no container:
To avoid this specific error in the template code, it's useful to pick out the parent dictionary into a variable; then if the parent is totally absent, you can decide what to do about it. This technique is a little more useful if there are optional fields or sensible defaults:
{{- $deployment := .Values.deployment | default dict }}
metadata:
  name: {{ $deployment.name | default (include "chart.fullname" .) }}
spec:
  {{- if $deployment.replicas }}
  replicas: {{ $deployment.replicas }}
  {{- end }}
If you really can't work without the value, Helm has a required function that can print a more specific error message.
{{- $deployment := .Values.deployment | required "deployment configuration is required" }}
(My experience has been that required values are somewhat frustrating as an end user, particularly if you're trying to run someone else's chart, and I would try to avoid this if possible.)
Given what you show, it's also possible you're making the chart too configurable. The container name, for example, is mostly a detail that only appears if you have a multi-container pod (or are using Istio); the container port is a fixed attribute of the image you're running. You can safely fix these values in the Helm template file, and then it's reasonable to provide defaults for things like the replica count or image name (consider setting the repository name, image name, and tag as separate variables).
{{- $deployment := .Values.deployment | default dict }}
{{- $registry := $deployment.registry | default "docker.io" }}
{{- $image := $deployment.image | default "my/image" }}
{{- $tag := $deployment.tag | default "latest" }}
containers:
  - name: app # fixed
    image: {{ printf "%s/%s:%s" $registry $image $tag }}
    {{- with .Values.imagePullPolicy }}
    imagePullPolicy: {{ . }}
    {{- end }}
    ports:
      - name: http
        containerPort: 3000 # fixed
If the value is defined in your values file and you're still getting the error, the issue could be that you are accessing that value inside range or a similar function which changes the context.
For example, to use a named template mySuffix that references .Values and call it inside range via the template function, we need to pass $ to the template function instead of the usual .:
{{- define "mySuffix" -}}
{{- .Values.suffix }}
{{- end }}
...
{{- range .Values.listOfValues }}
echo {{ template "mySuffix" $ }}
{{- end }}
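When chasing these nil pointer errors, it also helps to render the chart locally with the exact values you plan to install (the chart path ./go-api and values.yaml are assumptions based on the error message, adjust to your layout):

helm template ./go-api --values values.yaml --debug
# or, validated against the cluster without installing anything:
helm install go-api ./go-api --values values.yaml --dry-run --debug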

can I remove all the helm generated labels and use my own labels?

When I do
helm create my-app
I get default labels like below in generated templates (deployment, service, ingress yaml files):
app.kubernetes.io/name: {{ include "my-app.name" . }}
helm.sh/chart: {{ include "my-app.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
Can I remove all of them and just use my own labels? Will it affect Helm features like rollback, etc.?
Yes, they can all be removed. From the Helm documentation:
Helm itself never requires that a particular label be present.
Probably in ./templates/_helpers.tpl there is a section similar to this:
{{/*
Common labels
*/}}
{{- define "{CHART_NAME}.labels" -}}
helm.sh/chart: {{ include "{CHART_NAME}.chart" . }}
{{ include "{CHART_NAME}.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "{CHART_NAME}.selectorLabels" -}}
app.kubernetes.io/name: {{ include "{CHART_NAME}.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
Also, in the labels section of each object that is being labeled, there should be something like the following, which references that helper:
labels:
  {{- include "{CHART_NAME}.labels" . | nindent 4 }}
If you just want to remove those labels, you can remove that block from the helper, or replace the lines that set each label with the values that you want.
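For instance, a replacement helper with purely custom labels could look like this (the label keys and the my-app chart name are just examples, not anything Helm prescribes):

{{/*
Custom labels
*/}}
{{- define "my-app.labels" -}}
team: platform
environment: {{ .Values.environment | default "dev" }}
app: {{ include "my-app.name" . }}
{{- end }}

Just keep in mind that the Deployment's spec.selector.matchLabels and the pod template labels still have to match each other, and a Deployment's selector is immutable once it has been created, so changing selector labels means recreating the object.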

Helm - how to call helper functions in a loop?

I'm trying to define n StatefulSets where n is the number of nodes required, set in values.yaml as nodeCount. I get an error that looks to be scope related, but I can't seem to get the scope sorted out. Am I missing something here?
The relevant content in my StatefulSet .yaml file:
{{ range $k, $v := until ( .Values.nodeCount | int) }}
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ $.Release.Name }}
  labels:
    app: {{ $.Release.Name }}
    chart: {{ template "myapp-on-kube.chart" . }}  # here's my call to _helpers
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
The relevant content in _helpers.tpl:
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "myapp-on-kube.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
The error I get:
Error: render error in "myapp-on-kube/templates/statefulset.yaml": template: myapp-on-kube/templates/_helpers.tpl:31:25: executing "myapp-on-kube.chart" at <.Chart.Name>: can't evaluate field Chart in type int
Several of the Go templating constructs change the meaning of . to be the thing that's being looped over, and you need to use $ to refer to the initial value. Most of your template correctly refers to e.g. $.Release.Name, but when you invoke the helper template, it's using the current context rather than the root value. Change:
chart: {{ template "myapp-on-kube.chart" $ }}
(Note that the template as you have it will declare several StatefulSets all with the same name, which won't go well. I might create just one StatefulSet with replicas: {{ .Values.nodeCount }}.)
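A rough sketch of that single-StatefulSet alternative (the serviceName, container name, and image are illustrative placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: {{ template "myapp-on-kube.chart" $ }}
spec:
  serviceName: {{ .Release.Name }}
  replicas: {{ .Values.nodeCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: myapp
          image: my/image:latest   # placeholder image

Each replica then gets a stable, unique name ({{ .Release.Name }}-0, -1, ...) instead of several identically named StatefulSets colliding.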

Using a Configmap to define whitelist annotations across multiple Ingresses

In Kubernetes, is it possible to use a Configmap for the value of an annotation? The reason I want to do this is to reuse an IP whitelist across multiple ingresses. Alternatively, is there another way to approach this problem?
Unfortunately, there is no feature in Kubernetes that does this. But as iomv wrote, you could try to use Helm.
Helm allows you to use variables in your charts, for example:
metadata:
  {{- if .Values.controller.service.annotations }}
  annotations:
    {{ toYaml .Values.controller.service.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- if .Values.controller.service.labels }}
    {{ toYaml .Values.controller.service.labels | indent 4 }}
    {{- end }}
    app: {{ template "nginx-ingress.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.controller.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "nginx-ingress.controller.fullname" . }}
This part of the code is from the nginx-ingress chart.
As you see, you can fetch this chart and update the values as you need.
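Applied to the whitelist use case, you could define the shared annotation once in values.yaml and reference it from every Ingress template (the value names and the CIDR list are made-up examples; the annotation key shown is the ingress-nginx whitelist annotation, adjust it for your controller):

# values.yaml
ingress:
  commonAnnotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"

# templates/ingress-a.yaml (and likewise in every other Ingress template)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-a
  annotations:
    {{- toYaml .Values.ingress.commonAnnotations | nindent 4 }}

This way the IP whitelist lives in one place, and changing it only requires a helm upgrade rather than editing every Ingress.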