Kubernetes initContainers to copy a file and execute it as part of a postStart lifecycle hook

I am trying to execute a script as part of a StatefulSet deployment. The script is stored in a ConfigMap, mounted into the pod as a volume, and invoked from a lifecycle postStart exec command. It fails with a permission error.
Based on certain articles, I found that the file should be copied in an initContainer and then used from there (I am not sure why we should do this or what difference it makes).
Still, I tried it, and that also gives the same error.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap-initscripts
data:
  poststart.sh: |
    #!/bin/bash
    echo "It's done"
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  ....
  serviceName: postgres-service
  replicas: 1
  template:
    ...
    spec:
      initContainers:
        - name: "postgres-ghost"
          image: alpine
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      containers:
        - name: postgres
          image: postgres
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "/scripts/poststart.sh"]
          ports:
            - containerPort: 5432
              name: dbport
          ....
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      volumes:
        - name: postgres-scripts
          configMap:
            name: postgres-configmap-initscripts
            items:
              - key: poststart.sh
                path: poststart.sh
The error I am getting:

A postStart hook will be called at least once, but may be called more than once, so it is not a good place to run a script.
The poststart.sh file mounted from the ConfigMap will not have the execute bit set, hence the permission error.
It is better to run the script from an initContainer. Here is a quick example that does a simple chmod; in your case you could execute the script instead:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo "It's done"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
    - name: scripts
      configMap:
        name: busybox
        items:
          - key: test.sh
            path: test.sh
    - name: runnable
      emptyDir: {}
  initContainers:
    - name: prepare
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["cp /scripts/test.sh /runnable/test.sh && chmod +x /runnable/test.sh"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
  containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["while :; do . /runnable/test.sh; sleep 1; done"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
EOF
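The copy-and-chmod step the initContainer performs can be simulated outside the cluster. In this sketch the /tmp paths are stand-ins for the read-only ConfigMap mount and the writable emptyDir volume:

```shell
# Stand-ins for the ConfigMap mount and the emptyDir volume
mkdir -p /tmp/scripts /tmp/runnable
printf '#!/bin/sh\necho "It is done"\n' > /tmp/scripts/test.sh
chmod 444 /tmp/scripts/test.sh      # mimic the non-executable ConfigMap file
/tmp/scripts/test.sh 2>/dev/null || echo "direct execution fails"

# What the initContainer does: copy to the writable volume, then add the execute bit
cp /tmp/scripts/test.sh /tmp/runnable/test.sh
chmod +x /tmp/runnable/test.sh
/tmp/runnable/test.sh
```

The copy is needed because the ConfigMap volume itself is read-only, so you cannot chmod the file in place.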

Related

How to concatenate secret files mounted in a volume in kubernetes

I have several secrets that are mounted and need to be read as a properties file. It seems Kubernetes can't mount them as a single file, so I'm trying to concatenate the files after the pod starts. I tried running a cat command in a postStart handler, but it seems to execute before the secrets are mounted, as I get this error:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword": stat cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword: no such file or directory: unknown
Then here is the yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: K8S_ID
spec:
  selector:
    matchLabels:
      app: K8S_ID
  replicas: 1
  template:
    metadata:
      labels:
        app: K8S_ID
    spec:
      containers:
        - name: K8S_ID
          image: IMAGE_NAME
          ports:
            - containerPort: 8080
          env:
            - name: PROPERTIES_FILE
              value: "/properties/dbPassword"
          volumeMounts:
            - name: secret-properties
              mountPath: "/properties"
          lifecycle:
            postStart:
              exec:
                command: ["cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
      volumes:
        - name: secret-properties
          secret:
            secretName: secret-properties
            items:
              - key: SECRET_ITEM
                path: dbPassword
              - key: S3Key
                path: S3Key
              - key: S3Secret
                path: S3Secret
You need a shell session for your command, like this:
...
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh","-c","cat /properties/S3Secret /properties/S3Key >> /properties/dbPassword"]
...
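The reason: an exec hook runs its argv directly, without a shell, so the whole string is looked up as a single executable name and the `>>` redirection is never interpreted. A local illustration, with files under /tmp standing in for the mounted Secret items:

```shell
# Stand-ins for the mounted Secret files
cd /tmp
rm -f dbPassword
printf 'secret\n' > S3Secret
printf 'key\n'    > S3Key

# Without a shell, nothing interprets ">>"; wrapping in "sh -c" provides one:
sh -c 'cat S3Secret S3Key >> dbPassword'
cat dbPassword
```

The same applies to pipes, globs, `&&`, and environment variable expansion inside exec commands: they all require a `sh -c` (or equivalent) wrapper.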

Running bash script in a kubernetes pod

I am trying to run an external bash script using the YAML file below.
The script is at /scripts/run.sh. I have also set defaultMode: 0777.
This is the error I get:
sh: 0: Can't open /scripts/run.sh
apiVersion: v1
data:
  script.sh: |-
    echo "Hello world!"
    kubectl get pods
kind: ConfigMap
metadata:
  name: script-configmap
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: script-job
  name: script-job
spec:
  backoffLimit: 2
  template:
    spec:
      containers:
        - command:
            - sh
            - /scripts/run.sh
          image: 'bitnami/kubectl:1.12'
          name: script
          volumeMounts:
            - name: script-configmap
              mountPath: /scripts
              subPath: run.sh
              readOnly: false
      restartPolicy: Never
      volumes:
        - name: script-configmap
          configMap:
            name: script-configmap
            defaultMode: 0777
The file name is script.sh, not run.sh.
Try
containers:
  - command:
      - sh
      - /scripts/script.sh
Note that the subPath: run.sh on the volume mount refers to the same non-existent name; update it to script.sh or remove it as well.

How can I check whether K8s volume was mounted correctly?

I'm testing whether I can mount data from S3 using an initContainer. What I intended and expected was the same volume being mounted to both the initContainer and the container: data from S3 is downloaded by the initContainer to the mountPath /s3-data, and since the container runs after the initContainer, it can read from the path where the volume was mounted.
However, the container doesn't show any logs and just says 'stream closed'. The initContainer's logs show that the data was successfully downloaded from S3.
What am I doing wrong? Thanks in advance.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-job
spec:
  template:
    spec:
      initContainers:
        - name: data-download
          image: <My AWS-CLI Image>
          command: ["/bin/sh", "-c"]
          args:
            - aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data
          volumeMounts:
            - mountPath: /s3-data
              name: s3-data
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
        - name: check-proper-data-mount
          image: <My Image>
          command: ["/bin/sh", "-c"]
          args:
            - cd /s3-data
            - echo "Just s3-data dir"
            - ls
            - echo "After making a sample file"
            - touch sample.txt
            - ls
          volumeMounts:
            - mountPath: /s3-data
              name: s3-data
      volumes:
        - name: s3-data
          emptyDir: {}
      restartPolicy: OnFailure
  backoffLimit: 6
You can try the args form below: pass all the commands as a single string, because with /bin/sh -c only the first args entry is executed as the script.
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    purpose: demonstrate-command
  name: command-demo
spec:
  containers:
    - args:
        - cd /s3-data;
          echo "Just s3-data dir";
          ls;
          echo "After making a sample file";
          touch sample.txt;
          ls;
      command:
        - /bin/sh
        - -c
      image: "<My Image>"
      name: containername
for reference:
How to set multiple commands in one yaml file with Kubernetes?
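The difference can be reproduced locally. With `sh -c`, only the first argument after `-c` is treated as the script; any further arguments become the positional parameters and are never executed, which is why the container above printed nothing:

```shell
# Several separate entries after -c: only the FIRST string runs,
# the rest become $0, $1, ... and are silently ignored
sh -c 'echo "only this runs"' 'echo "never runs"' 'ls'

# One joined string: every command runs in sequence
sh -c 'cd /tmp; echo "Just s3-data dir"; touch sample.txt; ls sample.txt'
```

This maps directly to the YAML: each `args:` list item is one argv entry, so the six commands must be joined into a single string (with `;` separators) to all execute.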

How to run a script as command in Kubernetes yaml file

I have this spec. A Pod will have two containers, one for the main application and the other for logging. I want the logging container to sleep first, to help me debug an issue.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: codingjediweb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: codingjediweb
  template:
    metadata:
      labels:
        app: codingjediweb
    spec:
      volumes:
        - name: shared-logs
          emptyDir: {}
      containers:
        - name: codingjediweb
          image: docker.io/manuchadha25/codingjediweb:03072020v2
          volumeMounts:
            - name: shared-logs
              mountPath: /deploy/codingjediweb-1.0/logs/
          env:
            - name: db.cassandraUri
              value: cassandra://xx.yy.xxx.yyy:9042
            - name: db.password
              value: 9__
            - name: db.keyspaceName
              value: somei
            - name: db.username
              value: supserawesome
          ports:
            - containerPort: 9000
        - name: logging
          image: busybox
          volumeMounts:
            - name: shared-logs
              mountPath: /deploy/codingjediweb-1.0/logs/
          command: ["tail -f /deploy/codingjediweb-1.0/logs/*.log"]
Before running tail -f ..., I want to add a sleep/delay to avoid a race condition: the application takes some time before it starts logging, and in the meantime tail -f fails because the log file doesn't exist yet. Alternatively, I am OK with running a script like while true; do sleep 86400; done.
How can I do that?
got it - had to do command: ['sh', '-c', "while true; do sleep 86400; done"]
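Rather than a fixed sleep, the race can be avoided by polling until a log file actually exists before tailing it. A local sketch of that loop, with /tmp/demo-logs standing in for the shared log directory and cat standing in for tail -f so the example terminates:

```shell
LOGDIR=/tmp/demo-logs                  # stand-in for /deploy/codingjediweb-1.0/logs
mkdir -p "$LOGDIR"

# Simulate the app writing its first log line only after a delay
( sleep 1; echo "app started" > "$LOGDIR/app.log" ) &

# Poll until at least one *.log file exists, then read it
until ls "$LOGDIR"/*.log >/dev/null 2>&1; do sleep 1; done
cat "$LOGDIR"/*.log
```

In the Deployment this would become something like command: ["sh", "-c", "until ls /deploy/codingjediweb-1.0/logs/*.log >/dev/null 2>&1; do sleep 1; done; exec tail -f /deploy/codingjediweb-1.0/logs/*.log"], which starts tailing as soon as the first log appears instead of after an arbitrary delay.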

Initcontainer not initializing in kubernetes

I'm trying to retrieve some code from GitLab in my YAML.
Unfortunately, the job fails to initialize the pod. I have checked all the logs, and it fails with the following message:
0 container "filter-human-genome-and-create-mapping-stats" in pod "create-git-container-5lhp2" is waiting to start: PodInitializing
Here is the yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: create-git-container
  namespace: diag
spec:
  template:
    spec:
      initContainers:
        - name: bash-script-git-downloader
          image: alpine/git
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
          command: ["/bin/sh","-c"]
          args: ["git", "clone", "https://.......#gitlab.com/scripts.git"]
      containers:
        - name: filter-human-genome-and-create-mapping-stats
          image: myimage
          env:
            - name: OUTPUT
              value: "/output"
          command: ["ls"]
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
            - mountPath: /output
              name: output
      volumes:
        - name: bash-script-source
          emptyDir: {}
        - name: output
          persistentVolumeClaim:
            claimName: output
      restartPolicy: Never
If you use sh -c (or bash -c), it expects only one argument: the script string. So you have to pass your whole command as one argument. There are several ways to do it:
command: ["/bin/sh","-c"]
args: ["git clone https://.......#gitlab.com/scripts.git"]
or
command: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
args: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
command:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
or
args:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
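What actually happens with the original args: after `-c`, the first following word ("git") is taken as the script, and "clone" and the URL bind to $0 and $1, so no clone ever runs. This can be shown locally (the example.com URL is a placeholder):

```shell
# Equivalent to command: ["/bin/sh","-c"] plus args: ["git","clone",URL],
# but with an echo script so the $0/$1 binding is visible:
sh -c 'echo "arg0=$0 arg1=$1"' clone https://example.com/scripts.git
```

With the real args, the lone `git` command prints its usage text and exits non-zero, which is why the initContainer never completes and the pod stays in PodInitializing.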