How can I start a job automatically after a successful deployment in Kubernetes?

I have a deployment .yaml file that basically creates a pod with MariaDB, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: {{ .Release.Name }}-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        pod: {{ .Release.Name }}-pod
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: {{ .Values.db.password }}
        image: {{ .Values.image.repository }}
        name: {{ .Release.Name }}
        ports:
        - containerPort: 3306
        resources:
          requests:
            memory: 2048Mi
            cpu: 0.5
          limits:
            memory: 4096Mi
            cpu: 1
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: dbsvr-claim
        - mountPath: /etc/mysql/conf.d/my.cnf
          name: conf
          subPath: my.cnf
        - mountPath: /docker-entrypoint-initdb.d/init.sql
          name: conf
          subPath: init.sql
      restartPolicy: Always
      volumes:
      - name: dbsvr-claim
        persistentVolumeClaim:
          claimName: {{ .Release.Name }}-claim
      - name: conf
        configMap:
          name: {{ .Release.Name }}-configmap
status: {}
Upon success of
helm install abc ./abc/ -f values.yaml
I have a job that generates a mysqldump backup file, and it completes successfully (only the relevant code is shown):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    metadata:
      name: {{ .Release.Name }}-job
    spec:
      containers:
      - name: {{ .Release.Name }}-dbload
        image: {{ .Values.image.repositoryRoot }}/{{.Values.image.imageName}}
        command: ["/bin/sh", "-c"]
        args:
          - mysqldump -p$(PWD) -h{{.Values.db.source}} -u$(USER) --databases xyz > $(FILE);
            echo "done!";
        imagePullPolicy: Always
      # Do not restart containers after they exit
      restartPolicy: Never
So, here's my question: is there a way to automatically start the job after helm install abc ./ -f values.yaml finishes successfully?

You can use the kubectl wait command (see kubectl wait -h) to run the job once the deployment reaches the desired condition (for a Deployment, --for=condition=available).
The article wait-for-condition demonstrates a quite similar situation.
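As a minimal sketch (assuming the release is named abc, so the Deployment is abc-pod, and that the rendered Job manifest is kept in a separate, hypothetical file job.yaml), the commands could be chained like this:
# Install the chart, then block until the MariaDB Deployment is available (up to 5 minutes).
helm install abc ./abc/ -f values.yaml
kubectl wait --for=condition=available --timeout=300s deployment/abc-pod
# Only then create the backup Job.
kubectl apply -f job.yaml
Alternatively, helm install --wait blocks until the chart's resources are ready, so the Job could simply be created after that command returns.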

Related

Mount (add) files to existing directory using configmap volume mount

I have a ConfigMap with multiple files and want to add these files to an already existing directory. The tricky part is that the filenames (keys) can change, so I can't mount them individually using subPath.
Is there any way this can be achieved from the Deployment manifest?
Configmap:
config-files-configmap
└── newFile1.yml
└── newFile2.yml
Existing directory after adding files from configmap:
config/
└── existingFile1.yml
└── existingFile2.yml
└── newFile1.yml
└── newFile2.yml
PS: I have tried mounting the ConfigMap as a directory, but that overrides the existing contents of the directory.
Thanks
You can use an init container with the ConfigMap as a volume mount.
I'm not sure about your actual deployment architecture, but I would suggest mounting the ConfigMap files into a separate directory and copying them into the target directory when the main container starts, using an init container or the pod's lifecycle hooks.
Since we can't go with subPath, this is the one option I see as of now.
Example Helm template from RabbitMQ:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}-rabbitmq
  labels: &RabbitMQDeploymentLabels
    app.kubernetes.io/name: {{ .Release.Name }}
    app.kubernetes.io/component: rabbitmq-server
spec:
  selector:
    matchLabels: *RabbitMQDeploymentLabels
  serviceName: {{ .Release.Name }}-rabbitmq-discovery
  replicas: {{ .Values.rabbitmq.replicas }}
  updateStrategy:
    # https://www.rabbitmq.com/upgrade.html
    # https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps
    type: RollingUpdate
  template:
    metadata:
      labels: *RabbitMQDeploymentLabels
    spec:
      serviceAccountName: {{ .Values.rabbitmq.serviceAccount }}
      terminationGracePeriodSeconds: 180
      initContainers:
      # This init container copies the config files from the read-only ConfigMap to a writable location.
      - name: copy-rabbitmq-config
        image: {{ .Values.rabbitmq.initImage }}
        imagePullPolicy: Always
        command:
        - /bin/bash
        - -euc
        - |
          # Remove cached erlang cookie since we are always providing it,
          # that opens the way to recreate the application and access to existing data
          # as a new erlang will be regenerated again.
          echo ${RABBITMQ_ERLANG_COOKIE} > /var/lib/rabbitmq/.erlang.cookie
          chmod 600 /var/lib/rabbitmq/.erlang.cookie
          # Copy the mounted configuration to both places.
          cp /rabbitmqconfig/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
          # Change permission to allow to add more configurations via variables
          chown :999 /etc/rabbitmq/rabbitmq.conf
          chmod 660 /etc/rabbitmq/rabbitmq.conf
          cp /rabbitmqconfig/enabled_plugins /etc/rabbitmq/enabled_plugins
        volumeMounts:
        - name: configmap
          mountPath: /rabbitmqconfig
        - name: config
          mountPath: /etc/rabbitmq
        - name: {{ .Release.Name }}-rabbitmq-pvc
          mountPath: /var/lib/rabbitmq
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-rabbitmq-secret
              key: rabbitmq-erlang-cookie
      containers:
      - name: rabbitmq
        image: "{{ .Values.rabbitmq.image.repo }}:{{ .Values.rabbitmq.image.tag }}"
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: RABBITMQ_USE_LONGNAME
          value: 'true'
        - name: RABBITMQ_NODENAME
          value: 'rabbit@$(MY_POD_NAME).{{ .Release.Name }}-rabbitmq-discovery.{{ .Release.Namespace }}.svc.cluster.local'
        - name: K8S_SERVICE_NAME
          value: '{{ .Release.Name }}-rabbitmq-discovery'
        - name: K8S_HOSTNAME_SUFFIX
          value: '.{{ .Release.Name }}-rabbitmq-discovery.{{ .Release.Namespace }}.svc.cluster.local'
        # User name to create when RabbitMQ creates a new database from scratch.
        - name: RABBITMQ_DEFAULT_USER
          value: '{{ .Values.rabbitmq.user }}'
        # Password for the default user.
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-rabbitmq-secret
              key: rabbitmq-pass
        ports:
        - name: clustering
          containerPort: 25672
        - name: amqp
          containerPort: 5672
        - name: amqp-ssl
          containerPort: 5671
        - name: prometheus
          containerPort: 15692
        - name: http
          containerPort: 15672
        volumeMounts:
        - name: config
          mountPath: /etc/rabbitmq
        - name: {{ .Release.Name }}-rabbitmq-pvc
          mountPath: /var/lib/rabbitmq
        livenessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 60
          timeoutSeconds: 30
        readinessProbe:
          exec:
            command:
            - rabbitmqctl
            - status
          initialDelaySeconds: 20
          timeoutSeconds: 30
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/bash
              - -c
              - |
                # Wait for the RabbitMQ to be ready.
                until rabbitmqctl node_health_check; do
                  sleep 5
                done
                # By default, RabbitMQ does not have Highly Available policies enabled,
                # using the following command to enable it.
                rabbitmqctl set_policy ha-all "." '{"ha-mode":"all", "ha-sync-mode":"automatic"}' --apply-to all --priority 0
      {{ if .Values.metrics.exporter.enabled }}
      - name: prometheus-to-sd
        image: {{ .Values.metrics.image }}
        ports:
        - name: profiler
          containerPort: 6060
        command:
        - /monitor
        - --stackdriver-prefix=custom.googleapis.com
        - --source=rabbitmq:http://localhost:15692/metrics
        - --pod-id=$(POD_NAME)
        - --namespace-id=$(POD_NAMESPACE)
        - --monitored-resource-type-prefix=k8s_
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      {{ end }}
      volumes:
      - name: configmap
        configMap:
          name: {{ .Release.Name }}-rabbitmq-config
          items:
          - key: rabbitmq.conf
            path: rabbitmq.conf
          - key: enabled_plugins
            path: enabled_plugins
      - name: config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: {{ .Release.Name }}-rabbitmq-pvc
      labels: *RabbitMQDeploymentLabels
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: {{ .Values.rabbitmq.persistence.storageClass }}
      resources:
        requests:
          storage: {{ .Values.rabbitmq.persistence.size }}
Example reference: https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/rabbitmq/chart/rabbitmq/templates/statefulset.yaml

Fetching docker image and tag as key/value pairs from values.yaml in helm k8s

I have a list of docker images which I want to pass as an environment variable to deployment.yaml
values.yaml
contributions_list:
  - image: flogo-aws
    tag: 36
  - image: flogo-awsec2
    tag: 37
  - image: flogo-awskinesis
    tag: 18
  - image: flogo-chargify
    tag: 19
deployment.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: container-image-extractor
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
      - name: container-image-extractor
        image: reldocker.tibco.com/stratosphere/container-image-extractor
        imagePullPolicy: IfNotPresent
        env:
        - name: SOURCE_DOCKER_IMAGE
          value: "<docker_image>:<docker_tag>" # docker image from which contents to be copied
My questions are as follows:
Is this the correct way to pass an array of Docker images and tags as an argument to deployment.yaml?
How would I replace <docker_image> and <docker_tag> in deployment.yaml with the values from values.yaml, so that a job is triggered for each Docker image and tag?
This is how I would do it, creating a job for every image in your list:
{{- range .Values.contributions_list }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: "container-image-extractor-{{ .image }}-{{ .tag }}"
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
      - name: container-image-extractor
        image: reldocker.tibco.com/stratosphere/container-image-extractor
        imagePullPolicy: IfNotPresent
        env:
        - name: SOURCE_DOCKER_IMAGE
          value: "{{ .image }}:{{ .tag }}" # docker image from which contents to be copied
{{ end }}
If you use a value outside of this contributions list (release name, env, whatever), do not forget to change the scope, e.g. {{ $.Values.myjob.limits.cpu | quote }}. The $. is important :)
Edit: if you don't change the name at each iteration of the loop, it will override the configuration every time. With different names, you will have multiple jobs created.
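As a small sketch of the scoping rule (myjob.limits.cpu is a hypothetical value used only for illustration): inside the range block, . refers to the current list item, while $ still points at the chart's root scope:
{{- range .Values.contributions_list }}
# .image / .tag come from the current list item;
# $.Release and $.Values refer to the root scope.
- image: "{{ .image }}:{{ .tag }}"
  release: {{ $.Release.Name }}
  cpuLimit: {{ $.Values.myjob.limits.cpu | quote }}
{{- end }}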
You need to fix deployment.yaml as below:
{{- range $contribution := .Values.contributions_list }}
apiVersion: batch/v1
kind: Job
metadata:
  name: container-image-extractor
  namespace: local-tibco-tci
  labels:
    app.cloud.tibco.com/name: container-image-extractor
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.cloud.tibco.com/name: container-image-extractor
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Never
      containers:
      - name: container-image-extractor
        image: reldocker.tibco.com/stratosphere/container-image-extractor
        imagePullPolicy: IfNotPresent
        env:
        - name: SOURCE_DOCKER_IMAGE
          value: "{{ $contribution.image }}:{{ $contribution.tag }}"
{{- end }}
If you want to learn more about Helm template syntax, you can see this document.

How can I iteratively create pods from a list using Helm?

I'm trying to create a number of pods from a YAML loop in Helm. If I run with --debug --dry-run the output matches my expectations, but when I actually deploy to a cluster, only the last iteration of the loop is present.
Some YAML for you:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
  - name: {{ . }}
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
{{- end }}
{{ end }}
When I run helm upgrade --install --set componentTests="{a,b,c}" --debug --dry-run,
I get the following output:
# Source: <path-to-file>.yaml
apiVersion: v1
kind: Pod
metadata:
  name: a
  labels:
    app: a
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: content-tests
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/a:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: b
  labels:
    app: b
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: b
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/b:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: c
  labels:
    app: users-tests
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: c
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/c:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
---
(some parts have been edited/removed due to sensitivity/irrelevance)
This looks to me like it does what I want, namely create a pod for a, another for b, and a third for c.
However, when actually installing this into a cluster, I always end up with only the pod corresponding to the last element in the list (in this case, c). It's almost as if they overwrite each other, but given that they have different names I don't think they should. Even running with --debug but not --dry-run, the output tells me I should have 3 pods, but using kubectl get pods I can see only one.
How can I iteratively create pods from a list using Helm?
Found it!
So apparently, Helm uses --- as a separator between specifications of pods/services/what have you.
Specifying the same fields multiple times in a single chart is valid; it will use the last specified value for any given field. To avoid overwriting values and instead have multiple pods created, simply add the separator at the end of the loop:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
  - name: {{ . }}
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
---
{{- end }}
{{ end }}

SchedulerPredicates failed due to PersistentVolumeClaim is not bound

I am using Helm with Kubernetes on Google Cloud Platform.
I get the following error for my Postgres deployment:
SchedulerPredicates failed due to PersistentVolumeClaim is not bound
It looks like it can't connect to the persistent storage, but I don't understand why, because the persistent storage loaded fine.
I have tried deleting the Helm release completely, then on Google Cloud Console > Compute Engine > Disks I deleted all persistent disks, and finally tried to install from the Helm chart again, but the Postgres deployment still doesn't connect to the PVC.
My database configuration:
{{- $serviceName := "db-service" -}}
{{- $deploymentName := "db-deployment" -}}
{{- $pvcName := "db-disk-claim" -}}
{{- $pvName := "db-disk" -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ $serviceName }}
  labels:
    name: {{ $serviceName }}
    env: production
spec:
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: http
  selector:
    name: {{ $deploymentName }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ $deploymentName }}
  labels:
    name: {{ $deploymentName }}
    env: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: {{ $deploymentName }}
        env: production
    spec:
      containers:
      - name: postgres-database
        image: postgres:alpine
        imagePullPolicy: Always
        env:
        - name: POSTGRES_USER
          value: test-user
        - name: POSTGRES_PASSWORD
          value: test-password
        - name: POSTGRES_DB
          value: test_db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: {{ $pvcName }}
      volumes:
      - name: {{ $pvcName }}
        persistentVolumeClaim:
          claimName: {{ $pvcName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $pvcName }}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: {{ $pvName }}
      env: production
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.gcePersistentDisk }}
  labels:
    name: {{ $pvName }}
    env: production
  annotations:
    volume.beta.kubernetes.io/mount-options: "discard"
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    fsType: "ext4"
    pdName: {{ .Values.gcePersistentDisk }}
Is this config for Kubernetes correct? I have read the documentation and it looks like this should work. I'm new to Kubernetes and Helm, so any advice is appreciated.
EDIT:
I have added a PersistentVolume and linked it to the PersistentVolumeClaim to see if that helps, but it seems that when I do this, the PersistentVolumeClaim status becomes stuck in "Pending" (resulting in the same issue as before).
You don't have a bound PV for this claim. What storage are you using for this claim? You need to mention it in the PVC file.
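For example, a minimal sketch of the claim with an explicit storage class (standard is assumed here as GKE's default class; adjust to whatever your cluster provides):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-disk-claim
spec:
  accessModes:
  - ReadWriteOnce
  # With a storage class set, GKE can dynamically provision the backing disk and PV.
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
Alternatively, if the claim is meant to bind to the manually defined PersistentVolume, its selector, access mode, and requested size all have to match an existing PV.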

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get obvious hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example Helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}/{{ .Values.image.version}}"
        imagePullPolicy: '{{ .Values.image.pullPolicy }}'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: {{ .Values.baseport | add 80 }}
          name: app
        volumeMounts:
        - mountPath: /NAS/$(POD_NAME)
          name: store
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume, I'd like to have some kind of dynamic variable as the path. I don't mind using Helm or the downward API for this, but ideally it would work when I scale the stateful set outwards.
Is there any way of doing this? All my reading of the docs seems to suggest it's not... :(
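For what it's worth, here is a sketch of one approach that may work, assuming each pod's data lives under /NAS/<pod-name> on the host and the cluster supports subPathExpr (beta in Kubernetes 1.15, GA in 1.17); it parameterizes the mount rather than the hostPath itself:
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: store
          mountPath: /NAS               # path seen inside the container
          subPathExpr: $(POD_NAME)      # per-pod subdirectory of the host volume
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS                    # parent directory on the host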