how to shut down cloud-sql-proxy in a helm chart pre-install hook - kubernetes

I am running a Django application in a Kubernetes cluster on Google Cloud. I implemented the database migration as a Helm pre-install hook that launches my app container and runs the migration. I use cloud-sql-proxy in a sidecar pattern as recommended in the official tutorial: https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
Basically this launches my app and cloud-sql-proxy containers within the pod described by the job. The problem is that cloud-sql-proxy never terminates after my app has completed the migration, causing the pre-install job to time out and cancel my deployment. How do I gracefully exit the cloud-sql-proxy container after my app container completes, so that the job can complete?
Here is my Helm pre-install hook template definition:
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migration-job
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
  activeDeadlineSeconds: 230
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: db-migrate
          image: {{ .Values.my-project.docker_repo }}{{ .Values.backend.image }}:{{ .Values.my-project.image.tag }}
          imagePullPolicy: {{ .Values.my-project.image.pullPolicy }}
          env:
            - name: DJANGO_SETTINGS_MODULE
              value: "{{ .Values.backend.django_settings_module }}"
            - name: SENDGRID_API_KEY
              valueFrom:
                secretKeyRef:
                  name: sendgrid-api-key
                  key: sendgrid-api-key
            - name: DJANGO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: django-secret-key
                  key: django-secret-key
            - name: DB_USER
              value: {{ .Values.postgresql.postgresqlUsername }}
            - name: DB_PASSWORD
              {{- if .Values.postgresql.enabled }}
              value: {{ .Values.postgresql.postgresqlPassword }}
              {{- else }}
              valueFrom:
                secretKeyRef:
                  name: database-password
                  key: database-pwd
              {{- end }}
            - name: DB_NAME
              value: {{ .Values.postgresql.postgresqlDatabase }}
            - name: DB_HOST
              {{- if .Values.postgresql.enabled }}
              value: "postgresql"
              {{- else }}
              value: "127.0.0.1"
              {{- end }}
          workingDir: /app-root
          command: ["/bin/sh"]
          args: ["-c", "python manage.py migrate --no-input"]
        {{- if eq .Values.postgresql.enabled false }}
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - "/cloud_sql_proxy"
            - "-instances=<INSTANCE_CONNECTION_NAME>=tcp:<DB_PORT>"
            - "-credential_file=/secrets/service_account.json"
          securityContext:
            #fsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
          volumeMounts:
            - name: db-con-mnt
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: db-con-mnt
          secret:
            secretName: db-service-account-credentials
      {{- end }}
Funnily enough, if I kill the job with "kubectl delete jobs database-migration-job" after the migration is done, the helm upgrade completes and my new app version gets installed.

Well, I have a solution which will work but might be hacky. First of all, this is a feature Kubernetes is lacking, which is under discussion in this issue.
Since Kubernetes v1.17, containers in the same Pod can share the process namespace. This enables us to kill the proxy process from the app container. Since this is a Kubernetes Job, the simplest place to do that is at the end of the app container's own command, right after the migration finishes (container lifecycle hooks only offer postStart and preStop, and preStop does not run when a process exits on its own).
With this solution, when your app finishes and exits normally (or abnormally), the dying container runs one last command which kills the other process. This results in the Job completing with success or failure depending on how you kill the process: the process exit code becomes the container exit code, which in turn becomes the Job result, basically.
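A minimal sketch of what the hook Job's pod spec could look like with this approach (the proxy command and image are taken from the question; the app image name is a placeholder, the app image must have pkill available, and the app container must be allowed to signal the proxy process, e.g. by running as root):
spec:
  shareProcessNamespace: true   # lets containers in the Pod see and signal each other's processes (stable since v1.17)
  restartPolicy: Never
  containers:
    - name: db-migrate
      image: my-django-image          # placeholder for your app image
      command: ["/bin/sh", "-c"]
      args:
        - |
          python manage.py migrate --no-input
          migration_status=$?
          # Ask the proxy to shut down so its container exits and the Job can finish.
          pkill -TERM cloud_sql_proxy || true
          exit $migration_status
    - name: cloud-sql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.17
      command:
        - "/cloud_sql_proxy"
        - "-instances=<INSTANCE_CONNECTION_NAME>=tcp:<DB_PORT>"
        - "-credential_file=/secrets/service_account.json"
The migration's exit code is preserved in migration_status, so the Job still fails if the migration fails, while the proxy is shut down either way.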

Related

helm - run pods and dependencies with a predefined flow order

I am using K8S with helm.
I need to run pods and dependencies with a predefined flow order.
How can I create Helm dependencies that run a pod only once (i.e. populate the database for the first time) and exit after the first success?
Also, I have several pods, and I want to run a pod only when certain conditions occur, and only after another pod has been created.
I need to build 2 pods, as described in the following:
I have a database.
1st step is to create the database.
2nd step is to populate the db.
Once I populate the db, this job needs to finish.
3rd step is another pod (not the db pod) that uses that database, and is always in listen mode (never stops).
Can I define in which order the dependencies run (and not always in parallel)?
What I see from the helm create command is that there are templates for deployment.yaml and service.yaml; maybe pod.yaml is a better choice?
What are the best chart types for this scenario?
Also, I need to know what the chart hierarchy is.
i.e.: when having a chart of type listener, one pod for database creation, and one pod for the database population (that is deleted when finished), I may have a chart tree hierarchy that explains the flow.
The main chart uses the populated data (after all the sub-charts and templates have run properly - BTW, can I have several templates for the same chart?).
What is the correct tree flow?
Thanks.
There is a fixed order in which Helm creates resources, which you cannot influence apart from hooks.
Helm hooks can cause more problems than they solve, in my experience. This is because most often they actually rely on resources which are only available after the hooks are done. For example, configmaps, secrets and service accounts / rolebindings. This leads you to move more and more things into the hook lifecycle, which isn't idiomatic IMO. It also leaves them dangling when uninstalling a release.
I tend to use jobs and init containers that block until the jobs are done.
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
---
apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: bitnami/kubectl
          args:
            - wait
            - pod/mysql
            - --for=condition=ready
            - --timeout=120s
      containers:
        - name: migration
          image: myapp
          args: [--migrate]
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: wait-for-migration
          image: bitnami/kubectl
          args:
            - wait
            - job/migration
            - --for=condition=complete
            - --timeout=120s
      containers:
        - name: myapp
          image: myapp
          args: [--server]
Moving the migration into its own job is beneficial if you want to scale your application horizontally. Your migration needs to run only once, so it doesn't make sense to run it for each deployed replica.
Also, in case a pod crashes and restarts, the migration doesn't need to run again, so having it in a separate one-time job makes sense.
The main chart structure would look like this.
.
├── Chart.lock
├── charts
│   └── mysql-8.8.26.tgz
├── Chart.yaml
├── templates
│   ├── deployment.yaml    # waits for db migration job
│   └── migration-job.yaml # waits for mysql statefulset master pod
└── values.yaml
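The mysql chart under charts/ would typically be pulled in as a dependency of the main chart; a minimal sketch of the matching Chart.yaml entry (the chart name and repository URL are assumptions based on the Bitnami chart version shown above):
apiVersion: v2
name: my-app            # illustrative chart name
version: 0.1.0
dependencies:
  - name: mysql
    version: 8.8.26
    repository: https://charts.bitnami.com/bitnami
Running helm dependency update then downloads the packaged chart into charts/ and pins it in Chart.lock.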
You can achieve this using Helm hooks and K8s Jobs; below is the same setup defined for a Rails application.
The first step is to define a K8s Job that creates and populates the db:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "my-chart.name" . }}-db-prepare
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ template "my-chart.name" . }}-db-prepare
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["rake", "db:extensions", "db:migrate", "db:seed"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      initContainers:
        - name: init-wait-for-dependencies
          image: wshihadeh/wait_for:v1.2
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecretName }}
      restartPolicy: Never
Note the following:
1- The Job definition has Helm hooks so that it runs on each deployment and as the first task:
  "helm.sh/hook": pre-install,pre-upgrade
  "helm.sh/hook-weight": "-1"
  "helm.sh/hook-delete-policy": hook-succeeded
2- The container command will take care of preparing the db:
  command: ["/docker-entrypoint.sh"]
  args: ["rake", "db:extensions", "db:migrate", "db:seed"]
3- The Job will not start until the db connection is up (this is achieved via an initContainer):
  args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
The second step is to define the application Deployment object. This can be a regular Deployment (make sure you don't use Helm hooks here). Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "my-chart.name" . }}-web
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.webReplicaCount }}
  selector:
    matchLabels:
      app: {{ template "my-chart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
        service: web
    spec:
      imagePullSecrets:
        - name: {{ .Values.imagePullSecretName }}
      containers:
        - name: {{ template "my-chart.name" . }}-web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["web"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: {{ .Values.restartPolicy }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
If I understand correctly, you want to build a dependency chain into your deployment strategy to ensure certain things are prepared before any of your applications starts. In your case, you want a deployed and pre-populated database before your app starts.
I propose not to build a dependency chain like this, because it complicates your deployment pipeline and prevents proper scaling of your deployment processes once you start deploying more than a couple of apps. In highly dynamic environments like Kubernetes, every deployment should be able to check the prerequisites it needs to start on its own, without depending on an order of deployments.
This can be achieved with a combination of initContainers and probes. Both can be specified per deployment to prevent it from failing if certain prerequisites are not met and/or to fulfill certain prerequisites before a service starts routing traffic to your deployment (in your case the database).
In short:
to populate a database volume before the database starts, use an initContainer
to let the database serve traffic only after its initialization and prepopulation, define probes to check for these conditions. Your database will only start to serve traffic after its livenessProbe and readinessProbe have succeeded. If it needs extra time, protect the pod from being terminated with a startupProbe.
to ensure the deployment of your app does not start and fail before the database is ready, use an initContainer to check if the database is ready to serve traffic before your app starts (a rough sketch follows after the links below).
check out
https://12factor.net/ (general guideline for dynamic systems)
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
for more information.
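As a rough sketch of that last point, the app's Deployment could gate its start on the database like this (assuming a Postgres service reachable as "db" on port 5432 and an image that ships pg_isready; all names are illustrative):
spec:
  initContainers:
    - name: wait-for-db
      image: postgres:13            # any image with pg_isready works
      command: ["/bin/sh", "-c"]
      args:
        - until pg_isready -h db -p 5432; do echo waiting for db; sleep 2; done
  containers:
    - name: myapp
      image: myapp                  # illustrative app image
The pod stays in the Init phase until pg_isready succeeds, so the app never races the database.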

DB Migration with helm

I am trying to migrate our Cassandra tables to use Liquibase. Basically the idea is trivial: have a pre-install and pre-upgrade job that runs some Liquibase scripts and manages our database upgrade.
For that purpose I have created a custom Docker image that contains the actual Liquibase CLI, which I can then invoke from the job. For example:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-update-job"
  namespace: spring-k8s
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-cassandra-update-job"
      namespace: spring-k8s
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: pre-install-upgrade-job
          image: "lq/liquibase-cassandra:1.0.0"
          command: ["/bin/bash"]
          args:
            - "-c"
            - "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.password }} --url {{ .Values.liquibase.url | squote }} --file {{ .Values.liquibase.file }}"
Where .Values.liquibase.file == databaseChangelog.json.
So this image lq/liquibase-cassandra:1.0.0 basically has a script liquibase-cassandra.sh that, when passed some arguments, can do its magic and update the DB schema (not going to go into the details).
The problem is that last argument: --file {{ .Values.liquibase.file }}. This file obviously does not reside in the image, but in each microservice's repository.
I need a way to "copy" that file into the image so that I can invoke it. One way would be to rebuild this lq/liquibase-cassandra image all the time (with the same lifecycle as the project itself) and copy the file into it, but that takes time and seems cumbersome at best. What am I missing?
It turns out that Helm hooks can be used for other resources, not only Jobs. As such, I can load this file into a ConfigMap before the Job even starts (the file I care about resides in resources/databaseChangelog.json):
apiVersion: v1
kind: ConfigMap
metadata:
  name: "liquibase-changelog-config-map"
  namespace: spring-k8s
  annotations:
    helm.sh/hook: pre-install, pre-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
    helm.sh/hook-weight: "1"
data:
{{ (.Files.Glob "resources/*").AsConfig | indent 2 }}
And then just reference it inside the job:
.....
spec:
  restartPolicy: Never
  volumes:
    - name: liquibase-changelog-config-map
      configMap:
        name: liquibase-changelog-config-map
        defaultMode: 0755
  containers:
    - name: pre-install-upgrade-job
      volumeMounts:
        - name: liquibase-changelog-config-map
          mountPath: /liquibase-changelog-file
      image: "lq/liquibase-cassandra:1.0.0"
      command: ["/bin/bash"]
      args:
        - "-c"
        - "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.password }} --url {{ .Values.liquibase.url | squote }} --file {{ printf "/liquibase-changelog-file/%s" .Values.liquibase.file }}"

In a Helm Chart, how can I block the upgrade until a deployment in it is complete?

I am using Rancher Pipelines and catalogs to run Helm Charts like this:
.rancher-pipeline.yml
stages:
  - name: Deploy app-web
    steps:
      - applyAppConfig:
          catalogTemplate: cattle-global-data:chart-web-server
          version: 0.4.0
          name: ${CICD_GIT_REPO_NAME}-${CICD_GIT_BRANCH}-serv
          targetNamespace: ${CICD_GIT_REPO_NAME}
          answers:
            pipeline.sequence: ${CICD_EXECUTION_SEQUENCE}
  ...
  - name: Another chart needs to wait until the previous one succeeds
    ...
And the chart-web-server app has a deployment:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-dpy
  labels:
    {{- include "labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
      {{- include "labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
        {{- include "labels" . | nindent 8 }}
    spec:
      containers:
        - name: "web-server-{{ include "numericSafe" .Values.git.commitID }}"
          image: "{{ .Values.harbor.host }}/{{ .Values.web.imageTag }}"
          imagePullPolicy: Always
          env:
            ...
          ports:
            - containerPort: {{ .Values.web.port }}
              protocol: TCP
          resources:
            {{- .Values.resources | toYaml | nindent 12 }}
Now, I need the pipeline to be blocked until the deployment is upgraded, since I want to do some server testing in the following stages.
My idea is to use a Helm hook: if I can create a Job that hooks post-install and post-upgrade and waits for the deployment to complete, I can then block the whole pipeline until the deployment (a web server) is updated.
Does this idea work? If so, how can I write such a blocking and detecting Job?
This does not appear to be supported, from what I can find of their code. It would appear they just shell out to helm upgrade; you would need to use the --wait mode.
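For reference, this is what the wait behaviour looks like when Helm is invoked directly (the release and chart names are placeholders):
# Block until the Deployments, Jobs, PVCs, etc. created by the release are ready,
# or fail the upgrade after the timeout.
helm upgrade --install my-release ./chart-web-server --wait --timeout 10m
If the pipeline cannot pass that flag through, a similar effect can be approximated in a later stage with kubectl rollout status deployment/<name>, which blocks until the rollout finishes.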

Knative service with Keycloak gatekeeper sidecar

I am trying to deploy the following service:
{{- if .Values.knativeDeploy }}
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
{{- if .Values.service.name }}
  name: {{ .Values.service.name }}
{{- else }}
  name: {{ template "fullname" . }}
{{- end }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  template:
    spec:
      containers:
        - image: quay.io/keycloak/keycloak-gatekeeper:9.0.3
          name: gatekeeper-sidecar
          ports:
            - containerPort: {{ .Values.keycloak.proxyPort }}
          env:
            - name: KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ template "keycloakclient" . }}
                  key: secret
          args:
            - --resources=uri=/*
            - --discovery-url={{ .Values.keycloak.url }}/auth/realms/{{ .Values.keycloak.realm }}
            - --client-id={{ template "keycloakclient" . }}
            - --client-secret=$(KEYCLOAK_CLIENT_SECRET)
            - --listen=0.0.0.0:{{ .Values.keycloak.proxyPort }} # listen on all interfaces
            - --enable-logging=true
            - --enable-json-logging=true
            - --upstream-url=http://127.0.0.1:{{ .Values.service.internalPort }} # To connect with the main container's port
          resources:
{{ toYaml .Values.gatekeeper.resources | indent 12 }}
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
          {{- range $pkey, $pval := .Values.env }}
            - name: {{ $pkey }}
              value: {{ quote $pval }}
          {{- end }}
          envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: {{ .Values.probePath }}
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
            successThreshold: {{ .Values.livenessProbe.successThreshold }}
            timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
          readinessProbe:
            httpGet:
              path: {{ .Values.probePath }}
              port: {{ .Values.service.internalPort }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
            successThreshold: {{ .Values.readinessProbe.successThreshold }}
            timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
Which fails with the following error:
Error from server (BadRequest): error when creating "/tmp/helm-template-workdir-290082188/jx/output/namespaces/jx-staging/env/charts/docs/templates/part0-ksvc.yaml": admission webhook "webhook.serving.knative.dev" denied the request: mutation failed: expected exactly one, got both: spec.template.spec.containers'
Now, if I read the specs (https://knative.dev/v0.15-docs/serving/getting-started-knative-app/), I can see this example:
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"
Which has exactly the same structure. Now, my questions are:
How can I validate my YAML without waiting for a deployment? IntelliJ has a Kubernetes plugin, but I can't find a machine-consumable CRD schema for serving.knative.dev/v1. (https://knative.dev/docs/serving/spec/knative-api-specification-1.0/)
Is it allowed with Knative to have multiple containers? (That configuration works perfectly with apiVersion: apps/v1, kind: Deployment.)
Multi-container support is an alpha feature in Knative version 0.16.
This feature needs to be enabled by setting multi-container to enabled in the config-features ConfigMap. So edit the ConfigMap using
kubectl edit cm config-features -n knative-serving and enable that feature:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
  labels:
    serving.knative.dev/release: devel
  annotations:
    knative.dev/example-checksum: "983ddf13"
data:
  _example: |
    ...
    # Indicates whether multi container support is enabled
    multi-container: "enabled"
    ...
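If you prefer a non-interactive change, the same flag can be set with a patch (the key and namespace come from the ConfigMap above):
# Set the feature flag directly instead of editing the ConfigMap by hand.
kubectl patch configmap config-features -n knative-serving \
  --type merge -p '{"data":{"multi-container":"enabled"}}'
Note that the flag takes effect as a top-level key under data; the _example block only documents the available options.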
What version of Knative are you using?
Support for multiple containers was added as an alpha feature in 0.16. If you're not using 0.16 or later or don't have the alpha flag enabled, the request will probably be blocked.
There were a number of edge cases to define for multi-container support in Knative, so the default was to be conservative and only allow one container until the constraints had been explored.

Error running script in Helm pre-install hook - script not found

I have defined a Helm pre-install hook in one of my microservices, to create a user, database and several tables in PostgreSQL before the service is started for the first time. My pre-install.yaml file is:
apiVersion: batch/v1
kind: Job
metadata:
  name: psql-init-job
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
    # hook-succeeded, hook-failed
spec:
  template:
    metadata:
      name: psql-init-job
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: psql-init-job
          image: registry.local:5000/sandbox/init-pgsql:1.0.0
          imagePullPolicy: "IfNotPresent"
          command: ["/bin/sh", "-c", "/k8s-init.sh"]
          env:
            - name: DATABASE_HOST
              value: psql-postgresql.svc.cluster.local
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_WAIT_TIME
              value: "60s"
  backoffLimit: 0
The Dockerfile includes:
# Copy the SQL script.
COPY ./docker/pgsql/psql/k8s-init.sh /k8s-init.sh
# Copy the SQL files.
COPY ./docker/pgsql/psql/k8s-init-postgres.sql /k8s-init-postgres.sql
COPY ./docker/pgsql/psql/k8s-init-test.sql /k8s-init-test.sql
The script that it's trying to run is:
#!/bin/sh
# Import the SQL files.
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/postgres --jobs 4 /k8s-init-postgres.sql
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/test --jobs 4 /k8s-init-test.sql
When I install the Helm chart that contains the pre-install hook, I get a startError from the microservice's pod:
/bin/sh: /k8s-init.sh: not found
Any idea why it can't find the k8s-init.sh script?