Can I use env in postStart command - kubernetes

Can I use an environment variable in lifecycle.postStart.exec.command?
I have a script that has to be run in the postStart command.
The command contains a secret; can I use valueFrom to load the secret into an env variable, and then use that env variable in the postStart command?

Yes, it is possible.
Using the example from this post on creating hooks, let's read a secret, pass it to the container as an environment variable, and then read it in the postStart hook.
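The manifest below assumes a Secret named supersecret with a password key already exists. A minimal way to create one (the value matches the sample output further down):
kubectl create secret generic supersecret --from-literal=password=my-secret-password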
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: loap
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: loap
    spec:
      containers:
        - command:
            - sh
            - "-c"
            - "echo $(date +%s): START >> /loap/timing; sleep 10; echo $(date +%s): END >> /loap/timing;"
          image: busybox
          env:
            - name: SECRET_THING
              valueFrom:
                secretKeyRef:
                  name: supersecret
                  key: password
          lifecycle:
            postStart:
              exec:
                command:
                  - sh
                  - "-c"
                  - "echo ${SECRET_THING} $(date +%s): POST-START >> /loap/timing"
            preStop:
              exec:
                command:
                  - sh
                  - "-c"
                  - "echo $(date +%s): PRE-HOOK >> /loap/timing"
          livenessProbe:
            exec:
              command:
                - sh
                - "-c"
                - "echo $(date +%s): LIVENESS >> /loap/timing"
          name: main
          readinessProbe:
            exec:
              command:
                - sh
                - "-c"
                - "echo $(date +%s): READINESS >> /loap/timing"
          volumeMounts:
            - mountPath: /loap
              name: timing
      initContainers:
        - command:
            - sh
            - "-c"
            - "echo $(date +%s): INIT >> /loap/timing"
          image: busybox
          name: init
          volumeMounts:
            - mountPath: /loap
              name: timing
      volumes:
        - hostPath:
            path: /tmp/loap
          name: timing
If you examine the contents of /tmp/loap/timing on the node, you can see the secret being shown:
my-secret-password 1515415872: POST-START
1515415873: READINESS
1515415879: LIVENESS
1515415882: END
1515415908: START
my-secret-password 1515415908: POST-START
1515415909: LIVENESS
1515415913: READINESS
1515415918: END
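The timing file can also be read without node access by exec'ing into the running container, for example (deploy/loap and the main container name are taken from the manifest above):
kubectl exec deploy/loap -c main -- cat /loap/timing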

Related

Bash script in postStart is not executing

I'm trying to run a StatefulSet with my own scripts. I'm able to run the first script, which spins up mongodb and sets up some users etc., but the second script in the postStart block, named configure.sh, is never executed for some reason.
Here's the StatefulSet manifest yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    component: mongo
spec:
  selector:
    matchLabels:
      component: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        component: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:latest
          command: [ "/bin/bash", "-c", "+m" ]
          workingDir: /mongo/scripts
          args:
            - /mongo/scripts/mongo-start.sh
          livenessProbe:
            exec:
              command:
                - "/bin/bash"
                - "-c"
                - mongo -u $MONGO_USER -p $MONGO_PASSWORD --eval db.adminCommand\(\"ping\"\)
            failureThreshold: 3
            successThreshold: 1
            periodSeconds: 10
            timeoutSeconds: 5
          ports:
            - containerPort: 27017
          imagePullPolicy: Always
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "/mongodb/configure.sh"]
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
            - name: mongo-scripts
              mountPath: /mongo/scripts
            - name: mongo-config
              mountPath: /mongodb/configure.sh
              subPath: configure.sh
          env:
            - name: MONGO_USER_APP_NAME
              valueFrom:
                configMapKeyRef:
                  key: MONGO_USER_APP_NAME
                  name: mongo-auth-env
            - name: MONGO_USER_APP_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: MONGO_USER_APP_PASSWORD
                  name: mongo-auth-env
            - name: MONGO_USER
              valueFrom:
                configMapKeyRef:
                  key: MONGO_USER
                  name: mongo-auth-env
            - name: MONGO_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: MONGO_PASSWORD
                  name: mongo-auth-env
            - name: MONGO_BIND_IP
              valueFrom:
                configMapKeyRef:
                  key: MONGO_BIND_IP
                  name: mongo-config-env
      restartPolicy: Always
      volumes:
        - name: mongo-scripts
          configMap:
            name: mongo-scripts
            defaultMode: 0777
        - name: mongo-config
          configMap:
            name: mongo-config
            defaultMode: 0777
        - name: mongo-config-env
          configMap:
            name: mongo-config-env
            defaultMode: 0777
        - name: mongo-auth-env
          configMap:
            name: mongo-auth-env
            defaultMode: 0777
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
mongo-start.sh, which is in the /scripts folder with the other scripts, is executed, but after the Pod is up and running, configure.sh never runs. The logs are not helpful; kubectl describe pod shows the hook as recognized, but it never runs. The ConfigMaps are all deployed, and their content and paths are ok. Is there another way to run one script after another, or am I doing something wrong? I've been searching on SO and the official docs; those are the only examples I found. Tnx
EDIT
It did start somehow, but failed with:
Exec lifecycle hook ([/bin/bash -c /mongodb/mongodb-config.sh]) for Container "mongo" in Pod "mongo-0_test(e9db216d-c1c2-4f19-b85e-19b210a22bbb)" failed - error: command '/bin/bash -c /mongodb/mongodb-config.sh' exited with 1: , message: "MongoDB shell version v4.2.12\nconnecting to: mongodb://mongo:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb\n2021-11-24T22:16:50.520+0000 E QUERY [js] Error: couldn't connect to server mongo:27017, connection attempt failed: SocketException: Error connecting to mongo:27017 (172.20.3.3:27017) :: caused by :: Connection refused :\nconnect#src/mongo/shell/mongo.js:353:17\n#(connect):2:6\n2021-11-24T22:16:50.522+0000
Content of the configure.sh:
#!/bin/bash
mongo --username $MONGO_USER_ROOT_NAME --password "$MONGO_USER_ROOT_PASSWORD" --authenticationDatabase "$MONGO_AUTH_SOURCE" --host mongo --port "$MONGO_PORT" < create.js
If I remove the postStart part and exec into the container instead, I can successfully run the script.
There is no guarantee that the postStart hook will be called after the container entrypoint. Also, the postStart hook can be called more than once. The error occurred because, by the time configure.sh was executed, mongodb was not up and running yet. If your configure.sh script is idempotent, you can wait for mongodb before proceeding to the next step:
#!/bin/bash
# Wait until mongod accepts connections, then run the setup script.
until mongo --nodb --disableImplicitSessions --host mongo --username "$MONGO_USER_ROOT_NAME" --password "$MONGO_USER_ROOT_PASSWORD" --eval 'db.adminCommand("ping")'; do
  sleep 1
done
mongo --username "$MONGO_USER_ROOT_NAME" --password "$MONGO_USER_ROOT_PASSWORD" --authenticationDatabase "$MONGO_AUTH_SOURCE" --host mongo --port "$MONGO_PORT" < create.js
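If the hook should not block forever when mongod never comes up, a bounded variant of the same idea is possible. This is only a sketch: the 60-attempt limit is arbitrary, and the connection details are reused from the script above:
#!/bin/bash
# Retry the ping up to 60 times (~1 minute), then give up, so a broken
# deployment surfaces as a failed hook instead of hanging indefinitely.
for i in $(seq 1 60); do
  mongo --host mongo --port "$MONGO_PORT" --username "$MONGO_USER_ROOT_NAME" \
        --password "$MONGO_USER_ROOT_PASSWORD" --eval 'db.adminCommand("ping")' && break
  [ "$i" -eq 60 ] && echo "mongod never became ready" >&2 && exit 1
  sleep 1
done
mongo --username "$MONGO_USER_ROOT_NAME" --password "$MONGO_USER_ROOT_PASSWORD" --authenticationDatabase "$MONGO_AUTH_SOURCE" --host mongo --port "$MONGO_PORT" < create.js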

Helm lifecycle commands in deployment

I have a deployment.yaml template.
I have AWS EKS 1.8 and the same kubectl version.
I'm using Helm 3.3.4.
When I deploy the same template directly through kubectl apply -f deployment.yaml, everything is good: the init containers and the main container in the pod work fine.
But if I try to start the deployment through Helm, I get this error:
OCI runtime exec failed: exec failed: container_linux.go:349: starting
container process caused "process_linux.go:101: executing setns
process caused \"exit status 1\"": unknown\r\n"
(from kubectl describe pods osad-apidoc-6b74c9bcf9-tjnrh)
It looks like I have missed something in the annotations, or I'm using the wrong syntax in the command description.
Some unimportant parameters are omitted in this example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "apidoc.fullname" . }}
  labels:
    {{- include "apidoc.labels" . | nindent 4 }}
spec:
  template:
    metadata:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - |
                    echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
                    a2enmod rewrite
                    a2enmod headers
                    a2enmod ssl
                    a2dissite default
                    a2ensite 000-default
                    a2enmod rewrite
                    service apache2 start
I have tried invoking a simple command like:
lifecycle:
  postStart:
    exec:
      command: ['/bin/sh', '-c', 'printenv']
Unfortunately, I got the same error.
If I delete this lifecycle section, everything works fine through Helm.
But I need to invoke these commands; I can't omit these steps.
I also checked the Helm template with lint, and it looks good.
Original deployment.yaml before the move to Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apidoc
  namespace: apidoc
  labels:
    app: apidoc
    stage: dev
    version: v-1
spec:
  selector:
    matchLabels:
      app: apidoc
      stage: dev
  replicas: 1
  template:
    metadata:
      labels:
        app: apidoc
        stage: dev
        version: v-1
    spec:
      initContainers:
        - name: git-clone
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
          volumeMounts:
            - name: repos
              mountPath: /var/repos
          workingDir: /var/repos
          command:
            - sh
            - '-c'
            - >-
              git clone --single-branch --branch k8s
              git#github.com:examplerepo/apidoc.git -qv
        - name: copy-data
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
          volumeMounts:
            - name: web-data
              mountPath: /var/www/app
            - name: repos
              mountPath: /var/repos
          workingDir: /var/repos
          command:
            - sh
            - '-c'
            - >-
              if cp -r apidoc/* /var/www/app/; then echo 'Success!!!' && exit 0;
              else echo 'Failed !!!' && exit 1; fi;
      containers:
        - name: apache2
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/apache2:2.2'
          tty: true
          volumeMounts:
            - name: web-data
              mountPath: /var/www/app
            - name: configfiles
              mountPath: /etc/apache2/sites-available/000-default.conf
              subPath: 000-default.conf
          ports:
            - name: http
              protocol: TCP
              containerPort: 80
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - |
                    echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
                    a2enmod rewrite
                    a2enmod headers
                    a2enmod ssl
                    a2dissite default
                    a2ensite 000-default
                    a2enmod rewrite
                    service apache2 start
      volumes:
        - name: web-data
          emptyDir: {}
        - name: repos
          emptyDir: {}
        - name: configfiles
          configMap:
            name: apidoc-config

Initcontainer not initializing in kubernetes

I'm trying to retrieve some code from GitLab in my YAML.
Unfortunately, the job fails to initialize the pod. I have checked all the logs, and it fails with the following message:
0 container "filter-human-genome-and-create-mapping-stats" in pod "create-git-container-5lhp2" is waiting to start: PodInitializing
Here is the yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: create-git-container
  namespace: diag
spec:
  template:
    spec:
      initContainers:
        - name: bash-script-git-downloader
          image: alpine/git
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
          command: ["/bin/sh","-c"]
          args: ["git", "clone", "https://.......#gitlab.com/scripts.git" ]
      containers:
        - name: filter-human-genome-and-create-mapping-stats
          image: myimage
          env:
            - name: OUTPUT
              value: "/output"
          command: ["ls"]
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
            - mountPath: /output
              name: output
      volumes:
        - name: bash-script-source
          emptyDir: {}
        - name: output
          persistentVolumeClaim:
            claimName: output
      restartPolicy: Never
If you use sh -c (or bash -c), it expects only one argument: the script itself. Anything after that argument becomes a positional parameter instead of part of the command, so you have to pass your args as a single argument. There are several ways to do it (a short demonstration of why follows the alternatives):
command: ["/bin/sh","-c"]
args: ["git clone https://.......#gitlab.com/scripts.git"]
or
command: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
args: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
command:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
or
args:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
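The reason: with sh -c, only the word immediately after -c is treated as the script; any following words fill the positional parameters $0, $1, and so on. A quick demonstration:
# only 'echo script got $0 and $1' is the script; git and clone become $0 and $1
sh -c 'echo script got $0 and $1' git clone
# prints: script got git and clone
So in the original manifest the shell executed just "git" (which prints its usage and exits non-zero), with "clone" and the URL left as unused parameters.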

Kubernetes | Any hooks available for Pod restarts?

Are there any hooks available for Pod lifecycle events? Specifically, I want to run a command to upload logs on pod restart.
Edit: PreStop hook doesn't work for container restart - please see rest of answer below
As stated in the documentation, there are preStop and postStart events, and you can attach handlers to them.
Example from docs:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: lifecycle-demo-container
      image: nginx
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
        preStop:
          exec:
            command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"]
Edit:
So I checked with the following POC whether the preStop hook is executed on container crash, and the conclusion is: it is NOT.
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: lifecycle-demo-container
      volumeMounts:
        - mountPath: /data
          name: test-volume
      image: nginx
      command: ["/bin/sh"]
      args: ["-c", "sleep 5; exit 1"]
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /data/postStart"]
        preStop:
          exec:
            command: ["/bin/sh","-c","echo preStop handler! > /data/preStop"]
  volumes:
    - name: test-volume
      hostPath:
        path: /data
        type: Directory
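One way to verify this (a sketch, assuming access to the node backing the /data hostPath): after the container has crashed at least once, only the postStart marker exists:
ls /data
# shows "postStart" only; there is no "preStop" file, so the hook never ran on the crash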
As a solution, I would recommend overriding the command section of your container this way:
command: ["/bin/sh"]
args: ["-c", "your-application-executable; your-logs-upload"]
so the your-logs-upload executable will be executed after your-application-executable crashes or ends.
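A slightly more careful variant of the same wrapper (a sketch; the executable names are placeholders, as above) preserves the application's exit code, so Kubernetes still reports the container as failed when the app crashed:
command: ["/bin/sh"]
# run the app, remember its exit status, upload logs, then re-raise the status
args: ["-c", "your-application-executable; rc=$?; your-logs-upload; exit $rc"]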

Can I use env in preStop command

Can I use an environment variable in lifecycle.preStop.exec.command? I have a script that has to be run in the preStop command. The answer above (Can I use env in postStart command) states that it's possible to use env variables in postStart. It doesn't work with preStop, though. Is it a bug, or am I doing something wrong?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: loap
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: loap
    spec:
      containers:
        - command:
            - sh
            - "-c"
            - "echo $(date +%s): START >> /loap/timing; sleep 10; echo $(date +%s): END >> /loap/timing;"
          image: busybox
          env:
            - name: secretThing
              valueFrom:
                secretKeyRef:
                  name: supersecret
                  key: password
          lifecycle:
            preStop:
              exec:
                command:
                  - sh
                  - "-c"
                  - "echo ${secretThing} $(date +%s): PRE-HOOK >> /loap/timing"
          livenessProbe:
            exec:
              command:
                - sh
                - "-c"
                - "echo $(date +%s): LIVENESS >> /loap/timing"
          name: main
          readinessProbe:
            exec:
              command:
                - sh
                - "-c"
                - "echo $(date +%s): READINESS >> /loap/timing"
          volumeMounts:
            - mountPath: /loap
              name: timing
      initContainers:
        - command:
            - sh
            - "-c"
            - "echo $(date +%s): INIT >> /loap/timing"
          image: busybox
          name: init
          volumeMounts:
            - mountPath: /loap
              name: timing
      volumes:
        - hostPath:
            path: /tmp/loap
          name: timing