Create multiple containers with templating - Kubernetes

I have a running k8s deployment with one container.
I want to deploy 10 more containers, with a few differences in the deployment manifest (e.g. the command launched, the container name, ...).
Rather than create 10 more .yml files with the whole deployment, I would prefer to use templating. What can I do to achieve this?
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myname
  labels:
    app.kubernetes.io/name: myname
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: myname
        spec:
          serviceAccountName: myname
          containers:
            - name: myname
              image: 'mynameimage'
              imagePullPolicy: IfNotPresent
              command: ["/my/command/to/launch"]
          restartPolicy: OnFailure

Kustomize seems to be the go-to tool for templating, composition, multi-environment overriding, etc., in Kubernetes configs, and it's built directly into kubectl now as well.
Specifically, I think you can achieve what you want by using the bases and overlays feature: set up a base which contains the common structure, and overlays which contain the specific overrides.
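As a minimal sketch of that layout (the file names, the -job-a suffix, and the patched command below are illustrative, not from the original post), the base holds the shared CronJob and each overlay tweaks only what differs:
File: ./base/kustomization.yaml
resources:
  - cronjob.yaml
File: ./overlays/job-a/kustomization.yaml
resources:
  - ../../base
nameSuffix: -job-a # gives this variant a unique name
patches:
  - target:
      kind: CronJob
      name: myname
    patch: |-
      - op: replace
        path: /spec/jobTemplate/spec/template/spec/containers/0/command
        value: ["/my/other/command"]
Deploying a variant is then kubectl apply -k overlays/job-a; repeat with one small overlay per job instead of ten full manifests.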

You can either specify the set of containers to be created directly, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container1
          image: your-image
        - name: container2
          image: your-image
        - name: container3
          image: your-image
and you can repeat that container definition as many times as you want.
The other way is to use a templating engine like Helm or Kustomize, as mentioned above.

Using Helm, which is a templating engine for Kubernetes manifests, you can create your own template by following along below.
If you have never worked with Helm, you can check the official docs.
In order to follow along, make sure you have Helm already installed!
- Create a new chart:
helm create cowboy-app
This will generate a new project for you.
- Delete everything within the templates directory.
- Remove all of the values.yaml content.
- Create a new file deployment.yaml in the templates directory and paste this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  labels:
    chart: {{ .Values.appName }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
{{ toYaml .Values.images | indent 8 }}
- In values.yaml paste this:
appName: cowboy-app
images:
  - name: app-1
    image: image-1
  - name: app-2
    image: image-2
  - name: app-3
    image: image-3
  - name: app-4
    image: image-4
  - name: app-5
    image: image-5
  - name: app-6
    image: image-6
  - name: app-7
    image: image-7
  - name: app-8
    image: image-8
  - name: app-9
    image: image-9
  - name: app-10
    image: image-10
So if you are familiar with Helm, you can tell that {{ toYaml .Values.images | indent 8 }} in deployment.yaml renders the data specified in values.yaml as YAML. Running helm install release-name /path/to/chart will then generate and deploy a manifest, i.e. deployment.yaml, that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cowboy-app
  labels:
    chart: cowboy-app
spec:
  selector:
    matchLabels:
      app: cowboy-app
  replicas: 1
  template:
    metadata:
      labels:
        app: cowboy-app
    spec:
      containers:
        - image: image-1
          name: app-1
        - image: image-2
          name: app-2
        - image: image-3
          name: app-3
        - image: image-4
          name: app-4
        - image: image-5
          name: app-5
        - image: image-6
          name: app-6
        - image: image-7
          name: app-7
        - image: image-8
          name: app-8
        - image: image-9
          name: app-9
        - image: image-10
          name: app-10
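If you want to inspect the rendered manifest without installing anything, helm template renders the chart locally (the release name and chart path are placeholders, as above):
helm template release-name /path/to/chart
Once the output looks right, helm install release-name /path/to/chart deploys it.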

You can use either Helm or Kustomize. Both are templating tools and will help you achieve your goal.

Related

kubernetes set env variable

My requirement: inside the pod there is a flat file at
location: /mnt/secrets-store/environment
In the Kubernetes manifest I would like to set an environment variable whose value is the contents of that file.
Please share your thoughts on how to achieve that.
I have tried the option below in the k8s YAML file, but it is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
        - name: sample-api
          image: sample.azurecr.io/sample:11129
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          env:
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Kubernetes"
            - name: NRIA_DISPLAY_NAME
              value: $("/usr/bin/cat" "/mnt/secrets-store/environment")
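Note that Kubernetes does not perform shell command substitution inside an env value, so the $(cat ...) form above cannot work as written. A commonly used workaround, sketched here with a hypothetical entrypoint (/app/start stands in for the container's real start command), is to read the file in a shell wrapper at startup:
      containers:
        - name: sample-api
          image: sample.azurecr.io/sample:11129
          command: ["/bin/sh", "-c"]
          # read the file at startup, export the variable, then exec the real entrypoint
          args:
            - export NRIA_DISPLAY_NAME="$(cat /mnt/secrets-store/environment)" && exec /app/start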

Kubernetes: Is there a way to retrieve or inject local env vars into configmap.yaml? [duplicate]

I am setting up Kubernetes for a Django webapp.
I am passing an environment variable while creating the deployment, as below:
kubectl create -f deployment.yml -l key1=value1
I am getting the error below:
error: no objects passed to create
I am able to create the deployment successfully if I remove -l key1=value1 from the command.
deployment.yaml is as below:
#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: sigma-service
  name: $key1
What could be the reason for the above error while creating the deployment?
I used envsubst (https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) for this. Create a deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $NAME
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Then:
export NAME=my-test-nginx
envsubst < deployment.yaml | kubectl apply -f -
Not sure what OS you are using to run this. On macOS, envsubst can be installed like this:
brew install gettext
brew link --force gettext
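One caveat worth noting (general envsubst behavior, not specific to this manifest): by default envsubst replaces every $VARIABLE it finds, which can clobber literal $ sequences elsewhere in a file. Passing an explicit shell-format string restricts substitution to the variables you list:
export NAME=my-test-nginx
# only $NAME is substituted; any other $ sequence passes through untouched
envsubst '$NAME' < deployment.yaml | kubectl apply -f -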
This isn't the right way to use the deployment; you can't provide half of the details in YAML and half in kubectl commands. If you want to pass environment variables in your deployment, you should add those details under spec.template.spec in the deployment.
You should add the following block to your deployment.yaml:
spec:
  containers:
    - env:
        - name: var1
          value: val1
This will export your environment variables inside the container.
The other way to set environment variables is to use kubectl run (not advisable, as it is going to be deprecated very soon). You can use the following command:
kubectl run nginx --image=nginx --restart=Always --replicas=1 --env=var1=val1
The above command will create a deployment named nginx with 1 replica and the environment variable var1=val1.
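If you only want kubectl to produce a starting manifest rather than deploy imperatively, a client-side dry run is a common alternative (a sketch; the nginx names are illustrative):
# generate a deployment manifest without creating anything, then edit and apply it
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml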
You cannot pass variables to kubectl create -f; YAML files should be complete manifests without variables. You also cannot use the -l flag with kubectl create -f.
If you want to pass environment variables to a pod, you should do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          env:
            - name: MY_VAR
              value: MY_VALUE
          ports:
            - containerPort: 80
Read more here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Follow the steps below.
Create test-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Using the sed command, you can replace the deployment name at deploy time:
sed -e 's|MYAPP|my-nginx|g' test-deploy.yaml | kubectl apply -f -
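The same approach extends to several placeholders by chaining -e expressions (MYIMAGE below is an illustrative placeholder, not part of the file above):
sed -e 's|MYAPP|my-nginx|g' \
    -e 's|MYIMAGE|nginx:1.7.9|g' \
    test-deploy.yaml | kubectl apply -f -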
File: ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
File: ./service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MYAPP
  labels:
    app: nginx
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: nginx
File: ./kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
If you're using https://kustomize.io/, you can do this trick in CI:
sh '( echo "images:" ; echo "  - name: $IMAGE" ; echo "    newTag: $VERSION" ) >> ./kustomization.yaml'
sh "kubectl apply --kustomize ."
I chose yq since it is YAML-aware and gives precise control over where the text substitutions happen.
To set an image from a bash env var:
export IMAGE="your_image:latest"
yq eval '.spec.template.spec.containers[0].image = "'$IMAGE'"' manifests/daemonset.yaml | kubectl apply -f -
yq is available on MacPorts (as of 2021-04-19, v4.4.1); install it with sudo port install yq
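With yq v4 you can also avoid the quote splicing by reading the variable through yq's built-in env() function (same effect as the command above):
export IMAGE="your_image:latest"
# env(IMAGE) reads the exported variable directly, so no shell quoting gymnastics are needed
yq eval '.spec.template.spec.containers[0].image = env(IMAGE)' manifests/daemonset.yaml | kubectl apply -f -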
I was facing the same problem, so I created a Python script to change simple or complex values in a YAML file, or to add new ones.
This became very handy in a situation similar to the one you describe, and switching to the Python domain allows for more complex scenarios.
The code and instructions for using it are available at this gist:
https://gist.github.com/washraf/f81153270c80b0b4ecf90a53872abde7
Please try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kdpd00201
  name: frontend
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: frontend
          image: ifccncf/nginx:1.14.2
          ports:
            - containerPort: 8001
          env:
            - name: NGINX_PORT
              value: "8001"
My solution is then
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: frontend
  name: frontend
  namespace: kdpd00201
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: frontend
    spec:
      containers:
        - env: # modified level
            - name: NGINX_PORT
              value: "8080"
          image: lfccncf/nginx:1.13.7
          name: nginx
          ports:
            - containerPort: 8080

Fetching docker image and tag as key/value pairs from values.yaml in helm k8s

I have a list of docker images which I want to pass as an environment variable to deployment.yaml
values.yaml
contributions_list:
  - image: flogo-aws
    tag: 36
  - image: flogo-awsec2
    tag: 37
  - image: flogo-awskinesis
    tag: 18
  - image: flogo-chargify
    tag: 19
deployment.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: container-image-extractor
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
        - name: container-image-extractor
          image: reldocker.tibco.com/stratosphere/container-image-extractor
          imagePullPolicy: IfNotPresent
          env:
            - name: SOURCE_DOCKER_IMAGE
              value: "<docker_image>:<docker_tag>" # docker image from which contents to be copied
My questions are as follows:
Is this the correct way to pass an array of docker images and tags as an argument to deployment.yaml?
How would I replace <docker_image> and <docker_tag> in deployment.yaml with the values from values.yaml, so that a job is triggered for each docker image and tag?
This is how I would do it, creating a job for every image in your list (note the --- separator, so that each iteration renders as a separate YAML document):
{{- range .Values.contributions_list }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: "container-image-extractor-{{ .image }}-{{ .tag }}"
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
        - name: container-image-extractor
          image: reldocker.tibco.com/stratosphere/container-image-extractor
          imagePullPolicy: IfNotPresent
          env:
            - name: SOURCE_DOCKER_IMAGE
              value: "{{ .image }}:{{ .tag }}" # docker image from which contents to be copied
{{- end }}
If you use a value from outside of this contributions list (release name, env, whatever), do not forget to change the scope, e.g. {{ $.Values.myjob.limits.cpu | quote }}. The $. is important :)
Edit: if you don't change the name at each iteration of the loop, each iteration will override the previous configuration. With different names, you will get multiple jobs created.
You need to fix deployment.yaml as below; note that the job name must vary per iteration, otherwise each rendered document overrides the previous one:
{{- range $contribution := .Values.contributions_list }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: "container-image-extractor-{{ $contribution.image }}-{{ $contribution.tag }}"
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
        - name: container-image-extractor
          image: reldocker.tibco.com/stratosphere/container-image-extractor
          imagePullPolicy: IfNotPresent
          env:
            - name: SOURCE_DOCKER_IMAGE
              value: "{{ $contribution.image }}:{{ $contribution.tag }}"
{{- end }}
If you want to learn more about Helm template syntax, you can see this document.

Error in creating Deployment YAML on kubernetes spec.template.spec.containers[1].image: Required value

I have created an EC2 instance and installed EKS on it. Then I created a cluster and installed a docker image on it.
Now I'm trying to deploy this image to a docker container using the given YAML, and I am getting this error:
Error in creating Deployment YAML on kubernetes
spec.template.spec.containers[1].image: Required value
spec.template.spec.containers[2].image: Required value
I can see the image in docker on the EC2 instance.
My YAML is like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: "mp3-image1:latest"
        name: premiumservice
        ports:
        - containerPort: 80
        env:
      - name: type1
        value: "xyz"
      - name: type2
        value: "abc"
The deployment YAML has an indentation problem near the env section; it should look like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
        - image: mp3-image1:latest
          name: premiumservice
          ports:
            - containerPort: 80
          env:
            - name: type1
              value: "xyz"
            - name: type2
              value: "abc"
This may be totally unrelated, but I had the same issue with a k8s deployment file that used variable substitution in the image, where the env variable being referenced wasn't defined:
...
spec:
  containers:
    - name: indexing-queue
      image: ${K8S_IMAGE} #<--- here
Basically this error means Kubernetes "can't find/understand" the image you've set.

imagePullSecrets not working with Kind deployment

I'm trying to create a deployment with 3 replicas which will pull an image from a private registry. I have stored the credentials in a secret and am using imagePullSecrets in the deployment file. I'm getting the error below when I deploy it:
error: error validating "private-reg-pod.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "containers" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "imagePullSecrets" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "template" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Any help on this?
Below is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
spec:
  containers:
  - name: test-pod
    image: <private-registry>
  imagePullSecrets:
  - name: regcred
Thanks,
Sundar
The image section should be placed in the container specification, and imagePullSecrets should be placed in the pod spec section, so the proper YAML file looks like this (please note the indentation):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: <private-registry>
      imagePullSecrets:
      - name: regcred
This is a very common issue with Kubernetes Deployments.
The valid format for pulling an image from a private repository in your Kubernetes Deployment file is:
spec:
  imagePullSecrets:
  - name: <your secret name>
  containers:
Please make sure you have created the secret, then try to make it like the below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: nginx
      imagePullSecrets:
      - name: regcred
Both @Jakub-Bujny and @itmaven are correct. The indentation is really important in creating and using a .yaml (or .yml) file, because the YAML file is parsed based on that indentation. So, both of these are correct:
1)
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: test-pod
    image:
2)
spec:
  containers:
  - name: test-pod
    image: <private-registry>
  imagePullSecrets:
  - name: regcred
Note: before you use imagePullSecrets you have to create it, using the following code:
kubectl create secret docker-registry <private-registry> \
  --docker-server=<cluster_CA_domain>:[some port] \
  --docker-username=<user_name> \
  --docker-password=<user_password> \
  --docker-email=<user_email>
Also check that the imagePullSecret was created successfully, using the following code:
kubectl get secret
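To take a closer look at what was stored (regcred being the secret name used above), you can also dump the secret itself; the .dockerconfigjson field holds the base64-encoded registry credentials:
kubectl get secret regcred --output=yaml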