Running sh shell in BusyBox - Kubernetes

Hope all is well. I am stuck on a Pod that executes a shell script using the BusyBox image. The one below works:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: loop
  name: busybox-loop
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - |-
      for i in 1 2 3 4 5 6 7 8 9 10; \
      do echo "Welcome $i times"; done
    image: busybox
    name: loop
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
But this one doesn't work when I use "- >" as the operator:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox-loop
  name: busybox-loop
spec:
  containers:
  - image: busybox
    name: busybox-loop
    args:
    - /bin/sh
    - -c
    - >
      - for i in {1..10};
      - do
      - echo ("Welcome $i times");
      - done
  restartPolicy: Never
Is it because the syntax "for i in {1..10};" will not work in the sh shell (as far as I know BusyBox has no other shells)? Or is the "- >" operator incorrect? I don't think so, because it works for other shell scripts.
Also, when should I use the "- |" multiline operator (I hope the term is correct) and when the "- >" operator? I know the inline syntax below is easy to use, but the problem is that when the script contains double quotes, the escaping gets confusing and never works.
args: ["-c", "while true; do echo hello; sleep 10;done"]
Appreciate your support.

...But this one doesn't work when I use "- >" as the operator...
You don't need '-' after '>' in this case, try:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - ash
    - -c
    - >
      for i in 1 2 3 4 5 6 7 8 9 10;
      do
      echo "hello";
      done
kubectl logs busybox will print hello 10 times.
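If you need the counter from your original script, brace expansion isn't required; a small sketch in plain POSIX sh (which BusyBox ash also accepts) would be:
i=1
while [ "$i" -le 10 ]; do
  # same output as the bash {1..10} version, one line per iteration
  echo "Welcome $i times"
  i=$((i + 1))
done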

Brace expansion in a for loop is not supported by the ash shell (which is what /bin/sh is in BusyBox). Another solution is to replace it with the seq command and update the formatting like this:
spec:
  containers:
  - image: busybox
    name: busybox-loop
    args:
    - /bin/sh
    - -c
    - >
      for i in `seq 1 10`;
      do echo "Welcome $i times";
      done
  restartPolicy: Never
In this case it doesn't matter which operator you use, | or >; both will work fine.
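If you are curious about the difference anyway, it is only in how newlines are treated; a minimal illustration (generic YAML, not specific to this pod):
# '|' keeps newlines: the value is "line one\nline two\n"
literal: |
  line one
  line two
# '>' folds newlines into spaces: the value is "line one line two\n"
folded: >
  line one
  line two
Since every shell statement above is terminated with ;, the command runs the same whether the newlines are kept or folded.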

I think you can use this command to create the pod:
$ kubectl run busybox --image=busybox --dry-run=client -o yaml --command -- /bin/sh -c "for i in {1..10}; do echo 'Welcome $i times'; done" | kubectl apply -f -
pod/busybox created
$ kubectl logs busybox
Welcome 10 times
Note that you can take a look at the YAML created by the dry-run command as well:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    command:
    - /bin/sh
    - -c
    - for i in {1..10}; do echo 'Welcome $i times'; done
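As a side check (a sketch that assumes Docker is available locally and uses the official busybox and bash images), you can see that BusyBox sh does not brace-expand while bash does:
docker run --rm busybox sh -c 'echo {1..3}'   # prints: {1..3}
docker run --rm bash bash -c 'echo {1..3}'    # prints: 1 2 3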

Related

2 Kubernetes CronJobs that mount the same directory, the mounted file deleting the existing files

I created 2 Kubernetes CronJobs:
one CronJob backs up the MariaDB and the second CronJob backs up the PostgreSQL.
mariadb
apiVersion: batch/v1
kind: CronJob
metadata:
  name: deploy-cron-job-mariadb
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cj2job: cronjob
        spec:
          serviceAccountName: cron-user
          volumes:
          - name: deploy-script
            configMap:
              name: deploy-script
          containers:
          - name: cron-job-mariadb-1
            image: bitnami/kubectl
            env:
            - name: mariadb-user
              value: "bla"
            - name: mariadb-password
              value: "bla"
            command: ["bash", "/var/backups/deployScript.sh"]
            volumeMounts:
            - name: deploy-script
              mountPath: /var/backups
          restartPolicy: OnFailure
      ttlSecondsAfterFinished: 60
postgres
apiVersion: batch/v1
kind: CronJob
metadata:
  name: deploy-cron-job-postgresql
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cj2job: cronjob
        spec:
          serviceAccountName: cron-user
          volumes:
          - name: deploy-script
            configMap:
              name: deploy-script
          containers:
          - name: cron-job-postgresql-1
            image: bitnami/kubectl
            env:
            - name: postgresql-user
              value: "bla"
            - name: postgresql-password
              value: "bla"
            command: ["bash", "/var/backups/deployScript.sh"]
            volumeMounts:
            - name: deploy-script
              mountPath: /var/backups
          restartPolicy: OnFailure
      ttlSecondsAfterFinished: 60
I have an Ansible playbook that deploys these 2 CronJob files.
Only the second CronJob deployed actually gets its backups into the /var/backups directory, in this case the PostgreSQL one; if MariaDB were deployed second, its backups would end up in the directory instead.
If I run the kubectl apply command manually on the file it works, but I want my Ansible playbook to do it automatically.
Solution:
backups.sh (the backup script):
#!/bin/bash
date=$(date '+%Y-%m-%d')
# dump PostgreSQL inside the pod, stamp the dump with today's date, then copy it out
kubectl exec -it postgres-0 -n blabla -- bash -c "PGPASSWORD=postgres pg_dump -U postgres -h 127.0.0.1 -p 5432 dealer > /var/backups/postgresql.sql1"
kubectl exec -it postgres-0 -n blabla -- bash -c "mv /var/backups/postgresql.sql1 /var/backups/postgresql-$date.sql1"
kubectl cp postgres-0:/var/backups/postgresql-$date.sql1 /var/backups/postgresql-$date.sql1 -n blabla
# same steps for MariaDB
kubectl exec -it mariadb-0 -n blabla -- bash -c "mysqldump monitor -u root -p123456 > /var/backups/mysql.dump"
kubectl exec -it mariadb-0 -n blabla -- bash -c "mv /var/backups/mysql.dump /var/backups/mysql-$date.dump"
kubectl cp mariadb-0:/var/backups/mysql-$date.dump /var/backups/mysql-$date.dump -n blabla
cronjobs.sh (keeps only the 4 newest files in the folder):
#!/bin/bash
cd /var/backups
# `ls -lt` lists newest first with a leading "total" line, so lines 6+ are
# everything except the 4 newest files; delete those
ls -lt | tail -n +6 | awk '{print $9}' | xargs rm -rf
skipper.sh (scheduler):
#!/bin/bash
mv docker-playbook/files/cronjobs.sh /var/
mv docker-playbook/files/backups.sh /var/
# install the schedule: backups at midnight, cleanup of old backups an hour later
cat <<EOF | crontab -
0 0 * * * bash /var/backups.sh
0 1 * * * bash /var/cronjobs.sh
EOF
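To sanity-check that the schedule was installed, you can list the crontab afterwards:
crontab -l
# 0 0 * * * bash /var/backups.sh
# 0 1 * * * bash /var/cronjobs.sh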

How do I run a command while starting a Pod in Kubernetes?

I want to execute a command during the creation of the pod.
I see two options available:
kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
What is the difference between the above two commands ?
In short, there is no difference in the outcome if you want to run "sleep 3600".
Both perform the same operation.
To understand the behaviour of those options, add the dry-run option to them.
The first one passes "sleep" and "3600" as container args. The busybox image defines no ENTRYPOINT (its default command is just "sh"), so those args are executed directly as the container's command:
kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml -- sleep 3600
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
The second one passes "/bin/sh", "-c", and "sleep 3600" as args, so the container starts a shell which then runs "sleep 3600" inside it:
kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml -- /bin/sh -c "sleep 3600"
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
As mentioned at the beginning, it makes no difference to the outcome of "sleep 3600". But this method is useful when you want the container to run multiple commands, for example "sleep 3600" and "echo boo". The syntax would be:
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600;echo boo"
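If you want to verify the image defaults behind this explanation yourself, a quick check (assuming Docker is available locally) is:
# prints the image's ENTRYPOINT and CMD; for busybox, expect: [] [sh]
docker image inspect busybox --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'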

Multi-container pod with command sleep k8s

I am trying out mock exams on Udemy and have created a multi-container pod, but the exam result says the command is not set correctly on container test2. I am not able to identify the issue.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: test1
    env:
    - name: type
      value: demo1
  - image: busybox
    name: test2
    env:
    - name: type
       value: demo2
    command: ["sleep", "4800"]
An easy way to do this is to use an imperative kubectl command to generate the YAML for a single container, then edit the YAML to add the other container:
kubectl run nginx --image=nginx --command -oyaml --dry-run=client -- sh -c 'sleep 1d' > nginx.yaml
In this example sleep 1d is the command.
The generated YAML looks like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
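After hand-editing, the multi-container manifest might look like this (a sketch; the second container's name, image, and command are taken from the question):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx
    name: nginx
  - command: ["sleep", "4800"]
    image: busybox
    name: test2
  restartPolicy: Always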
The issue is with your YAML at line 19.
Please keep in mind that YAML syntax is very sensitive to spaces and tabs.
Your issue:
- image: busybox
  name: test2
  env:
  - name: type
     value: demo2 ### Issue is in this line, you have one extra space
  command: ["sleep", "4800"]
Solution:
Remove the extra space; it will look like this:
env:
- name: type
  value: demo2
For validation of YAML you can use external validators like yamllint.
If you paste your YAML into the mentioned validator, you will receive this error:
(<unknown>): mapping values are not allowed in this context at line 19 column 14
After removing this extra space you will get:
Valid YAML!
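The yamllint CLI can run the same check locally (this assumes it is installed; its exact message wording differs slightly from the web validator):
yamllint multi-pod.yaml
# expect a syntax error pointing at line 19:
# "mapping values are not allowed here"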

Creating a Docker container that runs forever using bash

I'm trying to create a Pod with a container in it, for testing purposes, that runs forever using the K8s API. I have the following YAML spec for the Pod, which runs a container that exits straight away:
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: ubuntu
    image: ubuntu:trusty
    command: ["echo"]
    args: ["Hello World"]
I can't find much documentation on the command: field, but ideally I'd like to put a while loop in there somewhere, printing out numbers forever.
If you want to keep printing Hello every few seconds you can use:
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    ports:
    - containerPort: 80
    command: ["/bin/sh", "-c", "while :; do echo 'Hello'; sleep 5 ; done"]
You can see the output using kubectl logs <pod-name>
Another option, to keep a container running without printing anything, is to use the sleep command on its own, for example:
command: ["/bin/sh", "-ec", "sleep 10000"]

Serialize creation of Pods in a deployment manifest using Helm charts

I have a Helm chart that deploys a pod; the next task is to create another pod once the first pod is running.
I created a simple pod.yaml in chart/templates which creates a simple pod-b, so the next step is to only create pod-b after pod-a is running.
I was looking at Helm hooks, but I don't think they care about pod status.
Another idea is to use an init container like the one below, but I am not sure how to write a command that checks whether a pod is running:
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
Another idea is a simple script that checks the pod status, something like:
i=0
while [ "$i" -le 5 ]
do
  # re-read the phase on every iteration
  y=$(kubectl get po -l app=am -o 'jsonpath={.items[0].status.phase}')
  if [ "$y" = "Running" ]; then
    break
  fi
  i=$((i + 1))
  sleep 5
done
Any advice would be great.
If you want your post-install/post-upgrade chart hooks to work, you should add readiness probes to your first pod and use the --wait flag:
helm upgrade --install -n test --wait mychart .
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-exec
  labels:
    test: readiness
spec:
  containers:
  - name: readiness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; touch /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 10
hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "post-deploy"
  annotations:
    "helm.sh/hook": post-upgrade,post-install
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 1
  template:
    metadata:
      name: "post-deploy"
    spec:
      restartPolicy: Never
      containers:
      - name: post-deploy
        image: k8s.gcr.io/busybox
        args:
        - /bin/sh
        - -c
        - echo "executed only after previous pod is ready"