Fetching docker image and tag as key/value pairs from values.yaml in Helm (Kubernetes)

I have a list of docker images which I want to pass as an environment variable to deployment.yaml
values.yaml
contributions_list:
  - image: flogo-aws
    tag: 36
  - image: flogo-awsec2
    tag: 37
  - image: flogo-awskinesis
    tag: 18
  - image: flogo-chargify
    tag: 19
deployment.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: container-image-extractor
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
        - name: container-image-extractor
          image: reldocker.tibco.com/stratosphere/container-image-extractor
          imagePullPolicy: IfNotPresent
          env:
            - name: SOURCE_DOCKER_IMAGE
              value: "<docker_image>:<docker_tag>" # docker image from which contents to be copied
My questions are as follows:
1. Is this the correct way to pass an array of docker images and tags as an argument to deployment.yaml?
2. How would I replace <docker_image> and <docker_tag> in deployment.yaml with the values from values.yaml, so that a job is triggered for each docker image and tag?

This is how I would do it, creating a job for every image in your list:
{{- range .Values.contributions_list }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: "container-image-extractor-{{ .image }}-{{ .tag }}"
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
        - name: container-image-extractor
          image: reldocker.tibco.com/stratosphere/container-image-extractor
          imagePullPolicy: IfNotPresent
          env:
            - name: SOURCE_DOCKER_IMAGE
              value: "{{ .image }}:{{ .tag }}" # docker image from which contents are to be copied
{{- end }}
Note the --- separator, which turns each iteration into its own YAML document; without it the concatenated Jobs are not valid YAML.
If you use a value outside of this contributions list (release name, env, whatever), do not forget to change the scope, e.g. {{ $.Values.myjob.limits.cpu | quote }}. The $. is important :)
Edit: if you don't change the name at each iteration of the loop, each iteration will override the previous configuration. With different names, you will get multiple jobs created.
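For illustration, here is a minimal fragment (the HELM_RELEASE variable is made up) showing the two scopes side by side:
{{- range .Values.contributions_list }}
env:
  - name: SOURCE_DOCKER_IMAGE
    value: "{{ .image }}:{{ .tag }}"   # "." is the current list element
  - name: HELM_RELEASE
    value: "{{ $.Release.Name }}"      # "$" reaches back to the root context
{{- end }}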

You need to fix deployment.yaml as below (the job name must vary per iteration, as noted above, and each document needs a --- separator):
{{- range $contribution := .Values.contributions_list }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: "container-image-extractor-{{ $contribution.image }}-{{ $contribution.tag }}"
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
        - name: container-image-extractor
          image: reldocker.tibco.com/stratosphere/container-image-extractor
          imagePullPolicy: IfNotPresent
          env:
            - name: SOURCE_DOCKER_IMAGE
              value: "{{ $contribution.image }}:{{ $contribution.tag }}"
{{- end }}
If you want to learn the Helm template syntax, see the Helm documentation on templates.
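To sanity-check the loop before installing, you can render the chart locally; a minimal usage sketch (release name and chart path are placeholders):
helm template my-release ./my-chart
Each entry in contributions_list should come out as its own Job document.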

Related

Create multiple containers with templating

I have a running k8s deployment with one container.
I want to deploy 10 more containers, with a few differences in the deployment manifest (i.e. command launched, container name, ...).
Rather than create 10 more .yml files with the whole deployment, I would prefer to use templating. What can I do to achieve this?
---
apiVersion: batch/v1 # CronJob is in the batch API group, not core v1
kind: CronJob
metadata:
  name: myname
  labels:
    app.kubernetes.io/name: myname
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: myname
        spec:
          serviceAccountName: myname
          containers:
            - name: myname
              image: 'mynameimage'
              imagePullPolicy: IfNotPresent
              command: ["/my/command/to/launch"]
          restartPolicy: OnFailure
Kustomize seems to be the go-to tool for templating, composition, multi-environment overriding, etc., in Kubernetes configs. It's built directly into kubectl now as well.
Specifically, I think you can achieve what you want by using the bases and overlays feature: set up a base which contains the common structure, and overlays which contain the specific overrides.
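As a rough sketch of that layout (the file names and the override are illustrative, not a drop-in config):
# base/kustomization.yaml -- the shared structure
resources:
  - cronjob.yaml
---
# overlays/variant-2/kustomization.yaml -- one overlay per variant
resources:
  - ../../base
nameSuffix: -variant-2
patchesStrategicMerge:
  - patch.yaml
---
# overlays/variant-2/patch.yaml -- only the fields that differ
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myname
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: myname
              command: ["/my/other/command"]
Then kubectl apply -k overlays/variant-2 builds and applies that variant.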
Alternatively, you can specify a set of containers to be created directly, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container1
          image: your-image
        - name: container2
          image: your-image
        - name: container3
          image: your-image
and you can repeat that container definition as many times as you want.
The other way around is to use a templating engine like Helm or Kustomize, as mentioned above.
Using Helm, which is a templating engine for Kubernetes manifests, you can create your own template by following along.
If you have never worked with Helm, you can check the official docs.
In order to follow along, make sure you have Helm already installed!
- create a new chart:
helm create cowboy-app
this will generate a new project for you.
- DELETE EVERYTHING WITHIN THE templates DIR
- REMOVE ALL values.yaml content
- create a new file deployment.yaml in the templates directory and paste this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  labels:
    chart: {{ .Values.appName }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
{{ toYaml .Values.images | indent 8 }}
- in values.yaml paste this:
appName: cowboy-app
images:
  - name: app-1
    image: image-1
  - name: app-2
    image: image-2
  - name: app-3
    image: image-3
  - name: app-4
    image: image-4
  - name: app-5
    image: image-5
  - name: app-6
    image: image-6
  - name: app-7
    image: image-7
  - name: app-8
    image: image-8
  - name: app-9
    image: image-9
  - name: app-10
    image: image-10
If you are familiar with Helm, you can tell that {{ toYaml .Values.images | indent 8 }} in the deployment.yaml renders the data specified in values.yaml as YAML. Running helm install release-name /path/to/chart will generate and deploy a manifest file, which is deployment.yaml, that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cowboy-app
  labels:
    chart: cowboy-app
spec:
  selector:
    matchLabels:
      app: cowboy-app
  replicas: 1
  template:
    metadata:
      labels:
        app: cowboy-app
    spec:
      containers:
        - image: image-1
          name: app-1
        - image: image-2
          name: app-2
        - image: image-3
          name: app-3
        - image: image-4
          name: app-4
        - image: image-5
          name: app-5
        - image: image-6
          name: app-6
        - image: image-7
          name: app-7
        - image: image-8
          name: app-8
        - image: image-9
          name: app-9
        - image: image-10
          name: app-10
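A nice side effect of this setup: adding an eleventh container is just another entry in values.yaml followed by an upgrade of the release, e.g.:
helm upgrade release-name /path/to/chart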
You can use either Helm or Kustomize; both are templating tools that help you achieve your goal.

Creating Kubernetes Pod per Kubernetes Job and Cleanup

I'm trying to create a Kubernetes job with the following requirements:
Only one pod can be created for each job at most
If the pod failed - the job will fail
Max run time of the pod will be 1 hour
If the job finished successfully - delete the job
I tried the following configurations:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: {{ .Values.image }}
          env:
            - name: ARG1
              value: {{ required "ARG1 is mandatory" .Values.ENV.ARG1 }}
            - name: GITLAB_USER_EMAIL
              value: {{ .Values.ENV.GITLAB_USER_EMAIL }}
          envFrom:
            - secretRef:
                name: {{ .Release.Name }}
      restartPolicy: Never
  backoffLimit: 1
  activeDeadlineSeconds: 3600
But it's not working as expected; any ideas?
Thanks!
Only one pod can be created for each job at most
The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased.
For CronJobs, successfulJobsHistoryLimit: 0 and failedJobsHistoryLimit: 0 could be helpful: finished Pods are removed whether they fail or succeed, so no history or Pods stay around and at most one Pod is created or running at a time.
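For reference, a minimal sketch of where those fields sit on a CronJob (the name, schedule and image are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "0 * * * *"
  successfulJobsHistoryLimit: 0   # keep no successful Jobs (or their Pods) around
  failedJobsHistoryLimit: 0       # keep no failed Jobs either
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-cronjob
              image: my-image     # placeholder
          restartPolicy: Never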
If the pod failed - the job will fail
That is the default behavior, and with restartPolicy: Never the container won't get restarted either.
Max run time of the pod will be 1 hour
activeDeadlineSeconds: 3600, which you have already added.
If the job finished successfully - delete the job
ttlSecondsAfterFinished: 100 will solve your issue (the TTL-after-finished controller is enabled by default since Kubernetes 1.21; older clusters need the TTLAfterFinished feature gate).
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: {{ .Values.image }}
          env:
            - name: ARG1
              value: {{ required "ARG1 is mandatory" .Values.ENV.ARG1 }}
            - name: GITLAB_USER_EMAIL
              value: {{ .Values.ENV.GITLAB_USER_EMAIL }}
          envFrom:
            - secretRef:
                name: {{ .Release.Name }}
      restartPolicy: Never
  backoffLimit: 1
  ttlSecondsAfterFinished: 100
  activeDeadlineSeconds: 3600

How can I start a job automatically after a successful deployment in kubernetes?

I have a deployment .yaml file that basically creates a pod with mariadb, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: {{ .Release.Name }}-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        pod: {{ .Release.Name }}-pod
    spec:
      containers:
        - env:
            - name: MYSQL_ROOT_PASSWORD
              value: {{ .Values.db.password }}
          image: {{ .Values.image.repository }}
          name: {{ .Release.Name }}
          ports:
            - containerPort: 3306
          resources:
            requests:
              memory: 2048Mi
              cpu: 0.5
            limits:
              memory: 4096Mi
              cpu: 1
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: dbsvr-claim
            - mountPath: /etc/mysql/conf.d/my.cnf
              name: conf
              subPath: my.cnf
            - mountPath: /docker-entrypoint-initdb.d/init.sql
              name: conf
              subPath: init.sql
      restartPolicy: Always
      volumes:
        - name: dbsvr-claim
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-claim
        - name: conf
          configMap:
            name: {{ .Release.Name }}-configmap
status: {}
Upon success of
helm install abc ./abc/ -f values.yaml
I have a job that generates a mysqldump backup file and completes successfully (just showing the relevant code):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    metadata:
      name: {{ .Release.Name }}-job
    spec:
      containers:
        - name: {{ .Release.Name }}-dbload
          image: {{ .Values.image.repositoryRoot }}/{{ .Values.image.imageName }}
          command: ["/bin/sh", "-c"]
          args:
            - mysqldump -p$(PWD) -h{{ .Values.db.source }} -u$(USER) --databases xyz > $(FILE);
              echo "done!";
          imagePullPolicy: Always
      # Do not restart containers after they exit
      restartPolicy: Never
So, here's my question: is there a way to automatically start the job after helm install abc ./ -f values.yaml finishes successfully?
You can use the kubectl wait command to run the job once the Deployment reports the desired condition.
The article wait-for-condition demonstrates a quite similar situation.
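A minimal sketch of that approach, assuming the release is named abc (so the Deployment is abc-pod, per the template above) and the Job manifest has been rendered to job.yaml:
# Block until the Deployment reports the Available condition, then create the Job
kubectl wait --for=condition=available --timeout=300s deployment/abc-pod
kubectl apply -f job.yaml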

SchedulerPredicates failed due to PersistentVolumeClaim is not bound

I am using Helm with Kubernetes on Google Cloud Platform.
I get the following error for my postgres deployment:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound
It looks like it can't connect to the persistent storage, but I don't understand why, because the persistent storage loaded fine.
I have tried deleting the Helm release completely, then on google-cloud-console > compute-engine > disks I deleted all persistent disks, and finally tried to install from the Helm chart again, but the postgres deployment still doesn't bind to the PVC.
My database configuration:
{{- $serviceName := "db-service" -}}
{{- $deploymentName := "db-deployment" -}}
{{- $pvcName := "db-disk-claim" -}}
{{- $pvName := "db-disk" -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ $serviceName }}
  labels:
    name: {{ $serviceName }}
    env: production
spec:
  type: LoadBalancer
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: http
  selector:
    name: {{ $deploymentName }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ $deploymentName }}
  labels:
    name: {{ $deploymentName }}
    env: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: {{ $deploymentName }}
        env: production
    spec:
      containers:
        - name: postgres-database
          image: postgres:alpine
          imagePullPolicy: Always
          env:
            - name: POSTGRES_USER
              value: test-user
            - name: POSTGRES_PASSWORD
              value: test-password
            - name: POSTGRES_DB
              value: test_db
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: "/var/lib/postgresql/data/pgdata"
              name: {{ $pvcName }}
      volumes:
        - name: {{ $pvcName }}
          persistentVolumeClaim:
            claimName: {{ $pvcName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $pvcName }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: {{ $pvName }}
      env: production
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.gcePersistentDisk }}
  labels:
    name: {{ $pvName }}
    env: production
  annotations:
    volume.beta.kubernetes.io/mount-options: "discard"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    fsType: "ext4"
    pdName: {{ .Values.gcePersistentDisk }}
Is this config for Kubernetes correct? I have read the documentation and it looks like this should work. I'm new to Kubernetes and Helm, so any advice is appreciated.
EDIT:
I have added a PersistentVolume and linked it to the PersistentVolumeClaim to see if that helps, but it seems that when I do this, the PersistentVolumeClaim gets stuck in the "Pending" status (resulting in the same issue as before).
You don't have a bound PV for this claim. What storage are you using for this claim? You need to specify it in the PVC file.
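For instance, a hedged sketch of the claim: on GKE the default StorageClass would otherwise try to dynamically provision a brand-new disk for the claim (label selectors are ignored during dynamic provisioning), so pinning storageClassName to the empty string lets the claim bind the pre-created gcePersistentDisk PV instead (the PV must likewise carry no storage class):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-disk-claim
spec:
  storageClassName: ""        # "" = bind a pre-provisioned PV, skip dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # match the PV's declared capacity
  selector:
    matchLabels:
      name: db-disk
      env: production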

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get predictable hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}/{{ .Values.image.version }}"
          imagePullPolicy: '{{ .Values.image.pullPolicy }}'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: {{ .Values.baseport | add 80 }}
              name: app
          volumeMounts:
            - mountPath: /NAS/$(POD_NAME)
              name: store
              readOnly: true
      volumes:
        - name: store
          hostPath:
            path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume path, I'd like to have some kind of dynamic variable as the path. I don't mind using helm or the downward API for this, but ideally it would work when I scale the stateful set outwards.
Is there any way of doing this? All my docs reading suggests it's not... :(
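For what it's worth, on newer clusters (Kubernetes 1.17+, where subPathExpr is GA) a per-pod directory under a fixed hostPath is expressible; a hedged sketch of just the relevant pod-spec fragment (the image is a placeholder):
spec:
  containers:
    - name: app
      image: my-image                # placeholder
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - mountPath: /NAS            # fixed mount point inside the container
          name: store
          readOnly: true
          subPathExpr: $(POD_NAME)   # expands container env vars; hostPath.path cannot
  volumes:
    - name: store
      hostPath:
        path: /NAS                   # fixed parent dir on the host; the pod name picks the subdir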