Command is executed before env mount - kubernetes

I'm very new to Kubernetes, so sorry if I'm not explaining my problem well.
I'm trying to spin up 3 replicas of a pod that run a PHP command. After a while the command is expected to crash and restart.
The problem is that the pods start with the local .env the first few times; after a few restarts, the mounted .env is used. When the command fails and restarts, it launches with the wrong local env again.
I suspect the command is run before the mount. What should I try so the volume is mounted before my entrypoint command starts?
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: project
        app.kubernetes.io/instance: project-release
    spec:
      imagePullSecrets: {{ toYaml .Values.gitlab.secrets | nindent 8 }}
      containers:
        - name: project
          image: {{ .Values.gitlab.image }}
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh", "-c" ]
          args: [ "bin/console php:command:name" ]
          volumeMounts:
            - name: env
              mountPath: /var/www/deploy/env
      volumes:
        - name: env
          secret:
            secretName: project-env

Related

Injecting environment variables to Postgres pod from Hashicorp Vault

I'm trying to set the POSTGRES_PASSWORD, POSTGRES_USER and POSTGRES_DB environment variables in a Kubernetes pod running the official postgres Docker image, with values injected from Hashicorp Vault.
The issue I experience is that the Postgres pod will not start, and it provides no logs as to what might have caused it to stop.
I'm trying to source the injected secrets on startup using the args /bin/bash -c source /vault/secrets/backend. Nothing seems to happen once this command is reached. If I add an echo statement in front of source, it is displayed in the kubectl logs.
Steps taken so far include removing the args part of the configuration and setting the required POSTGRES_PASSWORD variable directly with a test value. When I do that, the pod starts, and I can exec into it and verify that the secrets are indeed injected and that I'm able to source them. Running cat on the injected file gives me the following output:
export POSTGRES_PASSWORD="jiasjdi9u2easjdu##djasj#!-d2KDKf"
export POSTGRES_USER="postgres"
export POSTGRES_DB="postgres"
To me this indicates that the Vault injection is working as expected and that this part is configured according to my needs.
*edit: commands after sourcing are indeed run (tested with an echo command).
My configuration is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-db
  namespace: planet9-demo
  labels:
    app: postgres-db
    environment: development
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres-db
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-backend: secret/data/backend
        vault.hashicorp.com/agent-inject-template-backend: |
          {{ with secret "secret/backend/database" -}}
          export POSTGRES_PASSWORD="{{ .Data.data.adminpassword }}"
          export POSTGRES_USER="{{ .Data.data.postgresadminuser }}"
          export POSTGRES_DB="{{ .Data.data.postgresdatabase }}"
          {{- end }}
        vault.hashicorp.com/role: postgresDB
      labels:
        app: postgres-db
        tier: backend
    spec:
      containers:
        - args:
            - /bin/bash
            - -c
            - source /vault/secrets/backend
          name: postgres-db
          image: postgres:latest
          resources:
            requests:
              cpu: 300m
              memory: 1Gi
            limits:
              cpu: 400m
              memory: 2Gi
          volumeMounts:
            - name: postgres-pvc
              mountPath: /mnt/data
              subPath: postgres-data/planet9-demo
          env:
            - name: PGDATA
              value: /mnt/data
      restartPolicy: Always
      serviceAccount: sa-postgres-db
      serviceAccountName: sa-postgres-db
      volumes:
        - name: postgres-pvc
          persistentVolumeClaim:
            claimName: postgres-pvc

Vault secrets into kubernetes secrets or environment variable

I'm using an external Vault with Kubernetes, and I want all my secrets to end up either in the pod env or in Kubernetes secrets.
I tried to use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orgchart
  labels:
    app: orgchart
spec:
  selector:
    matchLabels:
      app: orgchart
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "devwebapp"
        vault.hashicorp.com/agent-inject-secret-config: "kv/secret/devwebapp/config"
        # Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "kv/secret/devwebapp/config" -}}
          export user="{{ .Data.username }}"
          export pass="{{ .Data.password }}"
          {{- end }}
      labels:
        app: orgchart
    spec:
      serviceAccountName: devwebapp123
      containers:
        - name: orgchart
          image: jweissig/app:0.0.1
          args: ["sh", "-c", "source /vault/secrets/config"]
but when I exec into the pod and check its environment, there are no secrets in it:
kubectl exec -it orgchart-659b57dc47-2dwdf -c orgchart -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm
HOSTNAME=orgchart-659b57dc47-2dwdf
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
KUBERNETES_SERVICE_HOST=10.233.0.1
HOME=/root
The files in the pod at /vault/secrets/config do exist. So I have two questions: why is it not working, and is there any way to inject the values into Kubernetes secrets?
You should use this syntax instead:
args: ["sh", "-c", "source /vault/secrets/config && <entry-point script>"]
to inject the environment variables into the application environment.
If I found the right Docker image, the entrypoint should be /app/web.
It may be necessary to overwrite the default one:
image:
  name: jweissig/app:0.0.1
  entrypoint: [""]
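The reason the && chaining matters can be shown without a cluster: a `sh -c` invocation that only sources a file exports its variables into a throwaway shell that exits immediately, so the container's main process never sees them. A minimal sketch (the /tmp path and the variable values are made up to stand in for the Vault-injected file):

```shell
# Fake "injected" secrets file, standing in for /vault/secrets/config.
cat > /tmp/secrets.env <<'EOF'
export user="alice"
export pass="s3cret"
EOF

# Wrong: the variables die with this throwaway shell; nothing is printed.
sh -c '. /tmp/secrets.env'

# Right: source the file and start the app in the SAME shell invocation,
# so the app process inherits the exported variables.
sh -c '. /tmp/secrets.env && env | grep "^user="'   # prints: user=alice
```

The `&& <entry-point>` in the answer above plays the role of the `env | grep` here: it runs inside the shell that did the sourcing, so the variables are present.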

Kubernetes copying jars into a pod and restart

I have a Kubernetes problem where I need to copy 2 jars (each jar > 1 MB) into a pod after it is deployed. So ideally we cannot use a ConfigMap (the > 1 MB size rules it out); instead we need to use wget in an initContainer and download the jars.
Below is my Kubernetes template, which I have modified. The original is available at https://github.com/dremio/dremio-cloud-tools/blob/master/charts/dremio/templates/dremio-executor.yaml
{{ if not .Values.DremioAdmin }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dremio-executor
spec:
  serviceName: "dremio-cluster-pod"
  replicas: {{ .Values.executor.count }}
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: dremio-executor
  template:
    metadata:
      labels:
        app: dremio-executor
        role: dremio-cluster-pod
      annotations:
        dremio-configmap/checksum: {{ (.Files.Glob "config/*").AsConfig | sha256sum }}
    spec:
      terminationGracePeriodSeconds: 5
      {{- if .Values.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.nodeSelector }}
        {{ $key }}: {{ $value }}
        {{- end }}
      {{- end }}
      containers:
      - name: dremio-executor
        image: {{ .Values.image }}:{{ .Values.imageTag }}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          requests:
            memory: {{ .Values.executor.memory }}M
            cpu: {{ .Values.executor.cpu }}
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        ##################### START added this section #####################
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        #################### END added this section ##########################
        - name: dremio-config
          mountPath: /opt/dremio/conf
        env:
        - name: DREMIO_MAX_HEAP_MEMORY_SIZE_MB
          value: "{{ template "HeapMemory" .Values.executor.memory }}"
        - name: DREMIO_MAX_DIRECT_MEMORY_SIZE_MB
          value: "{{ template "DirectMemory" .Values.executor.memory }}"
        - name: DREMIO_JAVA_EXTRA_OPTS
          value: >-
            -Dzookeeper=zk-hs:2181
            -Dservices.coordinator.enabled=false
            {{- if .Values.extraStartParams }}
            {{ .Values.extraStartParams }}
            {{- end }}
        command: ["/opt/dremio/bin/dremio"]
        args:
        - "start-fg"
        ports:
        - containerPort: 45678
          name: server
      initContainers:
      ################ START added this section ######################
      - name: installjars
        image: {{ .Values.image }}:{{ .Values.imageTag }}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-connector
          mountPath: /opt/dremio/jars
        command: ["/bin/sh", "-c"]
        args: ["wget --no-check-certificate -O /dir/connector.jar https://<some nexus repo URL>/connector.jar; sleep 10;"]
      ################ END added this section ###############
      - name: wait-for-zk
        image: busybox
        command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]
      # since we're mounting a separate volume, reset permission to
      # dremio uid/gid
      - name: chown-data-directory
        image: {{ .Values.image }}:{{ .Values.imageTag }}
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: dremio-executor-volume
          mountPath: /opt/dremio/data
        command: ["chown"]
        args:
        - "dremio:dremio"
        - "/opt/dremio/data"
      volumes:
      - name: dremio-config
        configMap:
          name: dremio-config
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
      {{- end}}
      #################### START added this section ########################
      - name: dremio-connector
        emptyDir: {}
      #################### END added this section ########################
  volumeClaimTemplates:
  - metadata:
      name: dremio-executor-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      {{- if .Values.storageClass }}
      storageClassName: {{ .Values.storageClass }}
      {{- end }}
      resources:
        requests:
          storage: {{ .Values.executor.volumeSize }}
{{ end }}
So the above is NOT working, and I don't see any jars being downloaded when I exec into the pod. I don't understand what is wrong with it. Note, however, that if I run the same wget command inside the pod, it does download the jar, which baffles me. So the URL works and the directory is writable, but still the jar is not downloaded?
If you can remove the need for wget altogether, it will make life easier...
Option 1
Using your own Docker image will save some pain, if that's an option.
Dockerfile
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.1 .
# docker push ghcr.io/yourOrg/projectId/dockerImageName:0.0.1
FROM nginx:1.19.10-alpine
# Use local copies of config
COPY files/some1.jar /dir/
COPY files/some2.jar /dir/
The files will be ready in the container, with no need for cryptic commands in your pod definition. Alternatively, if you need to download the files at runtime, you could copy a script into the Docker image to do that work and run it on startup via the Docker CMD directive.
Option 2
Alternatively, you could do a two-stage deployment...
Create a persistent volume.
Mount the volume to a pod (busybox as a base?) that runs long enough for the files to be copied across from your local machine (or downloaded, if you continue to use wget).
kubectl cp the files you need onto the (Retained) PersistentVolume.
Now mount the PV into your pod's container(s) so the files are readily available when the pod fires up.
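The kubectl cp step might look roughly like this (the namespace, helper pod name, and /data mount path are invented for illustration; the helper pod is assumed to mount the retained PVC at /data):

```shell
# Copy the jars from the local machine into the helper pod's PV mount.
kubectl cp ./some1.jar default/pv-loader:/data/some1.jar
kubectl cp ./some2.jar default/pv-loader:/data/some2.jar

# The helper pod is no longer needed; the Retained volume keeps the files.
kubectl delete pod pv-loader
```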
Your approach seems right.
Another solution could be to include the jar in the Docker image, but I take it that's not possible, right?
You could just use an emptyDir instead of a VolumeClaim.
Lastly, I would download the jar before waiting for ZooKeeper, to gain some time.

How can I start a job automatically after a successful deployment in kubernetes?

I have a deployment .yaml file that basically creates a pod with MariaDB, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: {{ .Release.Name }}-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        pod: {{ .Release.Name }}-pod
    spec:
      containers:
        - env:
            - name: MYSQL_ROOT_PASSWORD
              value: {{ .Values.db.password }}
          image: {{ .Values.image.repository }}
          name: {{ .Release.Name }}
          ports:
            - containerPort: 3306
          resources:
            requests:
              memory: 2048Mi
              cpu: 0.5
            limits:
              memory: 4096Mi
              cpu: 1
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: dbsvr-claim
            - mountPath: /etc/mysql/conf.d/my.cnf
              name: conf
              subPath: my.cnf
            - mountPath: /docker-entrypoint-initdb.d/init.sql
              name: conf
              subPath: init.sql
      restartPolicy: Always
      volumes:
        - name: dbsvr-claim
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-claim
        - name: conf
          configMap:
            name: {{ .Release.Name }}-configmap
status: {}
Upon success on
helm install abc ./abc/ -f values.yaml
I have a job that generates a mysqldump backup file, and it completes successfully (showing just the relevant code):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    metadata:
      name: {{ .Release.Name }}-job
    spec:
      containers:
        - name: {{ .Release.Name }}-dbload
          image: {{ .Values.image.repositoryRoot }}/{{ .Values.image.imageName }}
          command: ["/bin/sh", "-c"]
          args:
            - mysqldump -p$(PWD) -h{{ .Values.db.source }} -u$(USER) --databases xyz > $(FILE);
              echo "done!";
          imagePullPolicy: Always
      # Do not restart containers after they exit
      restartPolicy: Never
So, here's my question: is there a way to automatically start the job after helm install abc ./ -f values.yaml finishes successfully?
You can use the kubectl wait command (see kubectl wait -h) to run the Job once condition=Available is met on the Deployment.
The article wait-for-condition demonstrates quite a similar situation.
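A rough sketch of that idea, assuming the Deployment from the template above renders as abc-pod and the Job manifest sits in a separate job.yaml (both names are illustrative):

```shell
# Block until the Deployment reports Available (or time out), then create the Job.
kubectl wait --for=condition=available --timeout=300s deployment/abc-pod \
  && kubectl apply -f job.yaml
```

This keeps the Job out of the chart's normal install path, so it only runs once the Deployment is actually up.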

How to copy a local file into a helm deployment

I'm trying to deploy several pods in Kubernetes using a Mongo image with an initialization script in them. I'm using helm for the deployment. Since I'm starting from the official Mongo docker image, I'm trying to add a script under /docker-entrypoint-initdb.d so it is executed right at startup to initialize some parameters of my Mongo.
What I don't know is how to insert my script, which lives on, say, my local machine, into /docker-entrypoint-initdb.d using helm.
I'm looking for something like docker run -v hostfile:mongofile, but I need the helm equivalent, so that it is done in all the pods of the deployment.
You can use a ConfigMap. As an example, let's put an nginx configuration file into a container via a ConfigMap. We have a directory called nginx at the same level as values.yaml; inside it is the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-config-file
            items:
              - key: nginx.conf
                path: nginx.conf
      containers:
        - name: ...
          image: ...
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
You can also check the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
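If the script can't live inside the chart (so .Files.Get doesn't apply), the initContainers route works too: fetch or copy the script into a shared emptyDir, then mount that directory into the Mongo container. A rough, untested sketch of the pod spec; the busybox image, the example.com URL, and the /scripts path are all placeholders:

```yaml
spec:
  volumes:
    - name: initdb
      emptyDir: {}
  initContainers:
    - name: fetch-init-script
      image: busybox
      # Placeholder URL; anything that writes the script into /scripts works.
      command: ["sh", "-c", "wget -O /scripts/init.js https://example.com/init.js"]
      volumeMounts:
        - name: initdb
          mountPath: /scripts
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: initdb
          mountPath: /docker-entrypoint-initdb.d
```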