initContainer not initializing in Kubernetes

I'm trying to retrieve some code from GitLab in my YAML.
Unfortunately, the job fails to initialize the pod. I have checked all the logs and it fails with the following message:
0 container "filter-human-genome-and-create-mapping-stats" in pod "create-git-container-5lhp2" is waiting to start: PodInitializing
Here is the yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: create-git-container
  namespace: diag
spec:
  template:
    spec:
      initContainers:
        - name: bash-script-git-downloader
          image: alpine/git
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
          command: ["/bin/sh","-c"]
          args: ["git", "clone", "https://.......#gitlab.com/scripts.git" ]
      containers:
        - name: filter-human-genome-and-create-mapping-stats
          image: myimage
          env:
            - name: OUTPUT
              value: "/output"
          command: ["ls"]
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
            - mountPath: /output
              name: output
      volumes:
        - name: bash-script-source
          emptyDir: {}
        - name: output
          persistentVolumeClaim:
            claimName: output
      restartPolicy: Never

If you use sh -c (or bash -c), it expects only one argument, so you have to pass your args[] as a single argument. There are several ways to do it:
command: ["/bin/sh","-c"]
args: ["git clone https://.......#gitlab.com/scripts.git"]
or
command: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
args: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
command:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
or
args:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
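Once the init container runs a real shell command, you can confirm the clone actually happened by reading the init container's logs; a quick check, assuming the pod name from the error message above:
# logs of the init container named in the Job spec
kubectl -n diag logs create-git-container-5lhp2 -c bash-script-git-downloader

# overall pod status, including the init container's state
kubectl -n diag describe pod create-git-container-5lhp2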

Related

Show Pod IP Address using environment variable

I want to display the pod IP address in an nginx pod. Currently I am using an init container to initialize the pod by writing to a volume.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox:1.28
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      command:
        - echo
        - $(POD_IP) >> /work-dir/index.html
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
This should work in theory, but the file redirect doesn't work and the mounted file in the nginx container is blank. There's probably an easier way to do this, but I'm curious why this doesn't work.
Nothing is changed except how the command is passed in the init container. Redirection (>>) is a shell feature, so without a shell wrapper echo simply receives ">> /work-dir/index.html" as a literal argument instead of writing to the file.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox:1.28
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      command:
        - 'sh'
        - '-c'
        - 'echo $(POD_IP) > /work-dir/index.html'
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
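To verify that the init container actually wrote the file, you can read it back through the nginx container, since both share the workdir volume; a quick check, assuming the pod is named init-demo as above:
# the volume is mounted at nginx's web root, so index.html should contain the pod IP
kubectl exec init-demo -c nginx -- cat /usr/share/nginx/html/index.html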

K8s postgres backup failed during S3 copy

I am trying to back up a Postgres database on RDS using a K8s CronJob.
I have created the CronJob in my EKS cluster and the credentials are in Secrets.
When it tries to copy the backup file into the AWS S3 bucket, the pod fails with this error:
aws: error: argument command: Invalid choice, valid choices are:
I tried different options but it's not working.
Could anybody please help me resolve this issue?
Here is some brief info:
The K8s cluster is on AWS EKS
The DB is on RDS
I am using the following config for my CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "*/3 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          initContainers:
            - name: dump
              image: postgres:12.1-alpine
              volumeMounts:
                - name: data
                  mountPath: /backup
              args:
                - pg_dump
                - "-Fc"
                - "-f"
                - "/backup/redash-postgres.pgdump"
                - "-Z"
                - "9"
                - "-v"
                - "-h"
                - "postgress.123456789.us-east-2.rds.amazonaws.com"
                - "-U"
                - "postgress"
                - "-d"
                - "postgress"
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      # Retrieve postgres password from a secret
                      name: postgres
                      key: POSTGRES_PASSWORD
          containers:
            - name: save
              image: amazon/aws-cli
              volumeMounts:
                - name: data
                  mountPath: /backup
              args:
                - aws
                - "--version"
              envFrom:
                - secretRef:
                    # Must contain AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION
                    name: s3-backup-credentials
          restartPolicy: Never
          volumes:
            - name: data
              emptyDir: {}
Try this:
...
containers:
  - name: save
    image: amazon/aws-cli
    ...
    args:
      - "--version" # <-- the image entrypoint already calls "aws"; you only need to specify the arguments here.
    ...
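Once that is fixed, the same save container can perform the actual upload instead of just printing the version; a minimal sketch, where the bucket name is a placeholder you would substitute:
containers:
  - name: save
    image: amazon/aws-cli
    volumeMounts:
      - name: data
        mountPath: /backup
    args:
      # "aws" is implied by the image entrypoint; pass only the subcommand and its arguments
      - s3
      - cp
      - "/backup/redash-postgres.pgdump"
      - "s3://<your-backup-bucket>/redash-postgres.pgdump"  # <-- placeholder bucket/key
    envFrom:
      - secretRef:
          name: s3-backup-credentials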

Kubernetes initContainers to copy file and execute as part of Lifecycle Hook PostStart

I am trying to execute some scripts as part of a StatefulSet deployment. I have added the script as a ConfigMap and I mount it as a volume inside the pod definition. I use the lifecycle postStart exec command to execute this script, and it fails with a permission issue.
Based on certain articles, I found that we should copy this file in an initContainer and then use that (I am not sure why we should do this or what difference it makes).
Still, I tried it and that also gives the same error.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap-initscripts
data:
  poststart.sh: |
    #!/bin/bash
    echo "It`s done"
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  ....
  serviceName: postgres-service
  replicas: 1
  template:
    ...
    spec:
      initContainers:
        - name: "postgres-ghost"
          image: alpine
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      containers:
        - name: postgres
          image: postgres
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "/scripts/poststart.sh" ]
          ports:
            - containerPort: 5432
              name: dbport
          ....
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      volumes:
        - name: postgres-scripts
          configMap:
            name: postgres-configmap-initscripts
            items:
              - key: poststart.sh
                path: poststart.sh
The error I am getting:
The postStart hook will be called at least once, but may be called more than once, so it is not a good place to run a script.
The poststart.sh file mounted from the ConfigMap will not have the execute bit set, hence the permission error.
It is better to run the script in initContainers. Here's a quick example that does a simple chmod; in your case you can execute the script instead:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo "It's done"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
    - name: scripts
      configMap:
        name: busybox
        items:
          - key: test.sh
            path: test.sh
    - name: runnable
      emptyDir: {}
  initContainers:
    - name: prepare
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["cp /scripts/test.sh /runnable/test.sh && chmod +x /runnable/test.sh"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
  containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["while :; do . /runnable/test.sh; sleep 1; done"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
EOF
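As a side note, if the only problem is the missing execute bit, it may also be possible to skip the copy step entirely by setting a file mode on the ConfigMap volume itself; a minimal sketch based on the original volume definition (defaultMode is a standard field of the configMap volume source):
volumes:
  - name: postgres-scripts
    configMap:
      name: postgres-configmap-initscripts
      defaultMode: 0755  # mount poststart.sh with the execute bit set
      items:
        - key: poststart.sh
          path: poststart.sh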

How can I check whether K8s volume was mounted correctly?

I'm testing out whether I can mount data from S3 using an initContainer. What I intended and expected was the same volume being mounted to both the initContainer and the Container. The data from S3 gets downloaded by the initContainer to the mountPath /s3-data, and since the Container runs after the initContainer, it should be able to read from the path the volume was mounted to.
However, the Container doesn't show me any logs and just says 'stream closed'. The initContainer's logs show that the data was successfully downloaded from S3.
What am I doing wrong? Thanks in advance.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-job
spec:
  template:
    spec:
      initContainers:
        - name: data-download
          image: <My AWS-CLI Image>
          command: ["/bin/sh", "-c"]
          args:
            - aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data
          volumeMounts:
            - mountPath: /s3-data
              name: s3-data
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
        - name: check-proper-data-mount
          image: <My Image>
          command: ["/bin/sh", "-c"]
          args:
            - cd /s3-data
            - echo "Just s3-data dir"
            - ls
            - echo "After making a sample file"
            - touch sample.txt
            - ls
          volumeMounts:
            - mountPath: /s3-data
              name: s3-data
      volumes:
        - name: s3-data
          emptyDir: {}
      restartPolicy: OnFailure
  backoffLimit: 6
You can try changing the args part as shown below, so all the commands run in one shell:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    purpose: demonstrate-command
  name: command-demo
spec:
  containers:
    - args:
        - cd /s3-data;
          echo "Just s3-data dir";
          ls;
          echo "After making a sample file";
          touch sample.txt;
          ls;
      command:
        - /bin/sh
        - -c
      image: "<My Image>"
      name: containername
for reference:
How to set multiple commands in one yaml file with Kubernetes?
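With the commands chained into a single shell string, the container's logs should show the directory listing, which is the easiest way to confirm the volume was mounted and populated correctly; a quick check, where the pod name is a placeholder looked up from the Job:
# find the pod created by the Job
kubectl get pods -l job-name=train-job

# logs of the init container (download) and the main container (listing)
kubectl logs <train-job-pod> -c data-download
kubectl logs <train-job-pod> -c check-proper-data-mount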

Kubernetes: Perform command shell only container

I'm using this spec that contains one init container and two containers.
The init container creates a file at /etc/secrets/secrets.env that the first container has to source: source /etc/secrets/secrets.env.
I'm trying to do that using this spec:
spec:
  containers:
    - name: source-envs
      image: ????
      command: ["/bin/sh", "-c", "source /etc/secrets/secrets.env"]
  volumes:
    - name: sidekick-backend-volume
      emptyDir: {}
I can't quite figure out how to do that.
Any ideas?
It should work by sharing a volume between the init container and the first container, mounted on /etc/secrets:
spec:
  initContainers:
    - name: create-envs
      image: ????
      command: ["/bin/sh", "-c", "touch /etc/secrets/secrets.env"]
      volumeMounts:
        - mountPath: /etc/secrets/
          name: sidekick-backend-volume
  containers:
    - name: source-envs
      image: ????
      command: ["/bin/sh", "-c", "source /etc/secrets/secrets.env"]
      volumeMounts:
        - mountPath: /etc/secrets/
          name: sidekick-backend-volume
  volumes:
    - name: sidekick-backend-volume
      emptyDir: {}
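One caveat: variables set by source only exist in the shell process that sources the file, so the application has to be started from that same shell to see them; a sketch, where my-app is a placeholder for the real entrypoint:
containers:
  - name: source-envs
    image: ????
    # source the env file, then start the app in the same shell ("my-app" is a placeholder)
    command: ["/bin/sh", "-c", ". /etc/secrets/secrets.env && exec my-app"]
    volumeMounts:
      - mountPath: /etc/secrets/
        name: sidekick-backend-volume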