Serialize creation of Pods in a deployment manifest using Helm charts - kubernetes

I have a Helm chart that deploys a pod; the next task is to create another pod once the first pod is running.
I created a simple pod.yaml in chart/templates which creates a simple pod-b, so the next step is to only create pod-b after pod-a is running.
I looked at Helm hooks, but I don't think they take pod status into account.
Another idea is to use an init container like the one below, but I'm not sure how to write a command that checks whether a pod is running:
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
Another idea is a simple script that polls the pod status, something like:
i=0
while [ $i -le 5 ]
do
  y=$(kubectl get po -l app=am -o 'jsonpath={.items[0].status.phase}')
  if [[ "$y" == "Running" ]]; then
    break
  fi
  i=$((i+1))
  sleep 5
done
Any advice would be great.

If you want your post-install/post-upgrade chart hooks to work, add readiness probes to your first pod and use the --wait flag. With --wait, Helm blocks until all Pods are Ready (i.e. their readiness probes pass) before it marks the release successful and runs the post-install/post-upgrade hooks:
helm upgrade --install -n test --wait mychart .
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-exec
  labels:
    test: readiness
spec:
  containers:
  - name: readiness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; touch /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 10
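You can watch the probe flip the pod to Ready before the hook fires (the -n test namespace matches the helm command above):
kubectl get pod readiness-exec -n test -w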
hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "post-deploy"
  annotations:
    "helm.sh/hook": post-upgrade,post-install
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 1
  template:
    metadata:
      name: "post-deploy"
    spec:
      restartPolicy: Never
      containers:
      - name: post-deploy
        image: k8s.gcr.io/busybox
        args:
        - /bin/sh
        - -c
        - echo "executed only after previous pod is ready"

Related

Running sh shell in Busybox

Hope all is well. I am stuck with a Pod that executes a shell script using the BusyBox image. The one below works:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: loop
  name: busybox-loop
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - |-
      for i in 1 2 3 4 5 6 7 8 9 10; \
      do echo "Welcome $i times"; done
    image: busybox
    name: loop
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
But this one doesn't work, as I am using "- >" as the operator:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox-loop
  name: busybox-loop
spec:
  containers:
  - image: busybox
    name: busybox-loop
    args:
    - /bin/sh
    - -c
    - >
      - for i in {1..10};
      - do
      - echo ("Welcome $i times");
      - done
  restartPolicy: Never
Is it because the syntax "for i in {1..10};" does not work in the sh shell (as we know, BusyBox has no other shells), or is the "- >" operator incorrect? I don't think the operator is the problem, because it works for other shell scripts.
Also, when should I use the "- |" multiline operator (I hope the term is correct) versus the "- >" operator? I know the syntax below is easy to use, but the problem is that when the script contains double quotes, the escaping gets confusing and never works.
args: ["-c", "while true; do echo hello; sleep 10; done"]
Appreciate your support.
...But this one doesn't work as I am using "- >" as the operator...
You don't need '-' after '>' in this case, try:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - ash
    - -c
    - >
      for i in 1 2 3 4 5 6 7 8 9 10;
      do
      echo "hello";
      done
kubectl logs busybox will print hello 10 times.
Brace expansion in a for loop is not supported by the ash shell. Another solution is to replace it with the seq command and fix the formatting like this:
spec:
  containers:
  - image: busybox
    name: busybox-loop
    args:
    - /bin/sh
    - -c
    - >
      for i in `seq 1 10`;
      do echo "Welcome $i times";
      done
  restartPolicy: Never
In this case it doesn't matter which operator you use, "- |" or "- >"; both will work fine.
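The difference shows up only when the newlines matter: "- |" (literal) keeps line breaks, while "- >" (folded) joins lines with single spaces. A minimal illustration of what the container shell actually receives in each case:
# literal: sh receives "echo one\necho two\n" (two lines)
args:
- /bin/sh
- -c
- |
  echo one
  echo two
# folded: sh receives "echo one; echo two" (one line)
args:
- /bin/sh
- -c
- >
  echo one;
  echo two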
I think you can use this command to create the pod:
$ kubectl run busybox --image=busybox --dry-run=client -o yaml --command -- /bin/sh -c "for i in {1..10}; do echo 'Welcome $i times'; done" | kubectl apply -f -
pod/busybox created
$ kubectl logs busybox
Welcome 10 times
Note that you can take a look at the YAML created by the dry-run command as well:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    command:
    - /bin/sh
    - -c
    - for i in {1..10}; do echo 'Welcome $i times'; done
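One caveat: BusyBox sh does not perform brace expansion, so inside the container the loop runs only once, with i set to the literal {1..10}; on top of that, the $i inside the double-quoted outer string is expanded by your local shell before it ever reaches the container. A variant that really iterates ten times, assuming the same dry-run approach, swaps in seq and uses single outer quotes:
$ kubectl run busybox --image=busybox --dry-run=client -o yaml --command -- /bin/sh -c 'for i in $(seq 1 10); do echo "Welcome $i times"; done' | kubectl apply -f -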

Kubernetes delete deployment after script execution

I'm working on creating a distributed Locust service for benchmarking and REST API testing in the platform. The architecture is as follows:
A first pod running a docker image with the master flag, controlling the whole process
A collection of pods running a docker image with the worker flag, which do the actual work (their number can vary depending on the requirements)
Deployment and Service files are:
01-locust-master.yaml
apiVersion: v1
kind: Service
metadata:
  name: locust-master
  labels:
    name: locust
spec:
  type: LoadBalancer
  selector:
    name: locust
    role: master
  ports:
  - port: 8089
    protocol: TCP
    name: master-web
  - port: 5557
    protocol: TCP
    name: master-port1
  - port: 5558
    protocol: TCP
    name: master-port2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-master
spec:
  replicas: 1
  selector:
    matchLabels:
      name: locust
      role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
      - name: locust
        image: locust-image:latest
        imagePullPolicy: Always
        env:
        - name: LOCUST_MODE
          value: master
        - name: LOCUST_LOCUSTFILE_PATH
          value: "/locust-tasks/locustfiles/the_file.py"
        - name: LOCUST_TARGET_HOST
          value: "the_endpoint"
        - name: LOCUST_USERS
          value: "300"
        - name: LOCUST_SPAWN_RATE
          value: "100"
        - name: LOCUST_TEST_TIME
          value: "5m"
        - name: LOCUST_OUTPUT_DIR
          value: "/locust-tasks/locust-output"
        - name: LOCUST_TEST_API_TOKEN
          value: "some_api_token"
        - name: LOCUST_S3_OUTPUT_BUCKET
          value: "s3-bucket"
        ports:
        - containerPort: 8089
        - containerPort: 5557
        - containerPort: 5558
        resources:
          limits:
            cpu: 2000m
            memory: 2048Mi
02-locust-worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      name: locust
  template:
    metadata:
      labels:
        name: locust
        role: worker
    spec:
      containers:
      - name: locust
        image: locust:latest
        imagePullPolicy: Always
        env:
        - name: LOCUST_MODE
          value: worker
        - name: LOCUST_MASTER_NODE_HOST
          value: locust-master
        - name: LOCUST_LOCUSTFILE_PATH
          value: "/locust-tasks/locustfiles/the_file.py"
        - name: LOCUST_TARGET_HOST
          value: "the_endpoint"
        - name: LOCUST_TEST_API_TOKEN
          value: "the_api_token"
        - name: LOCUST_S3_OUTPUT_BUCKET
          value: "s3_bucket"
        resources:
          limits:
            cpu: 1500m
            memory: 850Mi
          requests:
            cpu: 1200m
            memory: 768Mi
Dockerfile
FROM python:3.7.3
# Install packages
COPY requirements.txt /tmp/
RUN pip install --upgrade pip
RUN pip install --requirement /tmp/requirements.txt
RUN pip install awscli
# Add locustfiles
COPY common/ /locust-tasks/common/
COPY templates/ /locust-tasks/templates/
COPY locustfiles/ /locust-tasks/locustfiles/
# Set the entrypoint
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5557 5558 8089
docker-entrypoint.sh
#!/bin/bash -x
LOCUST_MODE=${LOCUST_MODE:="standalone"}
LOCUST_MASTER=${LOCUST_MASTER:=""}
LOCUST_LOCUSTFILE_PATH=${LOCUST_LOCUSTFILE_PATH:="/locust-tasks/locustfiles/the_file.py"}
LOCUST_TARGET_HOST=${LOCUST_TARGET_HOST:="the_endpoint"}
LOCUST_OUTPUT_DIR=${LOCUST_OUTPUT_DIR:="/locust-tasks/locust-output"}
LOCUST_TEST_API_TOKEN=${LOCUST_TEST_API_TOKEN:="the_token"}
LOCUST_S3_OUTPUT_BUCKET=${LOCUST_S3_OUTPUT_BUCKET:="s3_bucket"}
cd /locust-tasks
if [[ ! -e $LOCUST_OUTPUT_DIR ]]; then
  mkdir $LOCUST_OUTPUT_DIR
elif [[ ! -d $LOCUST_OUTPUT_DIR ]]; then
  echo "$LOCUST_OUTPUT_DIR already exists but is not a directory" 1>&2
fi
LOCUST_PATH="/usr/local/bin/locust"
LOCUST_FLAGS="-f $LOCUST_LOCUSTFILE_PATH --host=$LOCUST_TARGET_HOST --csv=$LOCUST_OUTPUT_DIR/locust-${LOCUST_MODE}"
if [[ "$LOCUST_MODE" = "master" ]]; then
LOCUST_FLAGS="$LOCUST_FLAGS --master --headless -u $LOCUST_USERS -r $LOCUST_SPAWN_RATE -t $LOCUST_TEST_TIME"
elif [[ "$LOCUST_MODE" = "worker" ]]; then
LOCUST_FLAGS="$LOCUST_FLAGS --worker --master-host=$LOCUST_MASTER_NODE_HOST"
fi
auth_token=$LOCUST_TEST_API_TOKEN $LOCUST_PATH $LOCUST_FLAGS
# Copy test output files to S3
today=$(date +"%Y/%m/%d")
S3_OUTPUT_DIR="s3://${LOCUST_S3_OUTPUT_BUCKET}/${today}/${HOSTNAME}"
echo "Copying locust output files from [$LOCUST_OUTPUT_DIR] to S3 [$S3_OUTPUT_DIR]"
aws s3 cp --recursive $LOCUST_OUTPUT_DIR $S3_OUTPUT_DIR
retVal=$?
if [ $retVal -ne 0 ]; then
  echo "Something went wrong, exit code is ${retVal}"
fi
exit $retVal
So my requirement / idea is to run the script above and delete the whole thing afterwards. But instead of that, I'm getting endless pod restarts:
NAME READY STATUS RESTARTS AGE
locust-master-69b4547ddf-7fl4d 1/1 Running 4 23m
locust-worker-59b9689857-l5jhw 1/1 Running 4 23m
locust-worker-59b9689857-l5nd2 1/1 Running 4 23m
locust-worker-59b9689857-lwqbb 1/1 Running 4 23m
How can I delete both deployments after the shell script ends?
I think you are looking for Jobs.
As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (i.e., the Job) is complete. Deleting a Job will clean up the Pods it created.
You can use the TTL mechanism to clean up finished Jobs automatically by specifying the .spec.ttlSecondsAfterFinished field of the Job:
https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs
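A minimal sketch of the master converted to a Job, assuming the same image and entrypoint as the Deployment above (env, ports and resources elided for brevity):
apiVersion: batch/v1
kind: Job
metadata:
  name: locust-master
spec:
  ttlSecondsAfterFinished: 120  # Job and its Pods are deleted ~2 minutes after it finishes
  backoffLimit: 0               # do not retry a failed test run
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      restartPolicy: Never      # Jobs require Never or OnFailure; Never stops the endless restarts
      containers:
      - name: locust
        image: locust-image:latest
        env:
        - name: LOCUST_MODE
          value: master
        # ...same env, ports and resources as in the Deployment...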

How do I run a command while starting a Pod in Kubernetes?

I want to execute a command during the creation of the pod.
I see two options available:
kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
What is the difference between the above two commands?
In short, there is no difference in the outcome if you want to run "sleep 3600"; both perform the same operation.
To understand the behaviour of the two options, add the dry-run option to each.
The first one passes "sleep" and "3600" as the container args. Since the busybox image defines no entrypoint (only a default command, which the args override), they are executed directly as the container's command, without a shell:
kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml -- sleep 3600
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
The second one passes "/bin/sh", "-c" and "sleep 3600" as the container args, so a new shell is started inside the container to run "sleep 3600":
kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml -- /bin/sh -c "sleep 3600"
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - sleep 3600
    image: busybox
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
As mentioned at the beginning, this makes no difference for the outcome of "sleep 3600". But the shell form is useful when you want the container to run multiple commands, for example "sleep 3600" and "echo boo"; the syntax would be:
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600; echo boo"

Creating a Docker container that runs forever using bash

I'm trying to create a Pod with a container in it for testing purposes that runs forever, using the K8s API. I have the following YAML spec for the Pod, which runs a container that exits straight away:
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: ubuntu
    image: ubuntu:trusty
    command: ["echo"]
    args: ["Hello World"]
I can't find any documentation around the command: tag, but ideally I'd like to put a while loop in there somewhere, printing out numbers forever.
If you want to keep printing Hello every few seconds you can use:
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    ports:
    - containerPort: 80
    command: ["/bin/sh", "-c", "while :; do echo 'Hello'; sleep 5; done"]
You can see the output using kubectl logs <pod-name>.
Another option, to keep a container running without printing anything, is to use the sleep command on its own, for example:
command: ["/bin/sh", "-ec", "sleep 10000"]

Kubernetes Pod is changing status from Running to Completed very soon, how do I prevent that

I created a pod using yaml, and once the pod is created I run kubectl exec to start my Gatling perf test code:
kubectl exec gradlecommandfromcommandline -- ./gradlew gatlingRun-simulations.RuntimeParameters -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
but this ends at the kubectl console with the message below:
command terminated with exit code 137
On investigation I found that the pod changes status from Running to Completed.
How do I increase the lifespan of the pod so that it waits for my command to finish? Here is the pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  restartPolicy: OnFailure
Here is a yaml file that keeps the pod running:
apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  volumes:
  - name: docker-sock
    hostPath:
      path: /home/vagrant/k8s/pods/gatling/user-files/simulations # a file or directory on the node to mount into the Pod
  # command: [ "git clone https://github.com/TarunKDas2k18/PerfGatl.git" ]
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500", "-DRAMP_DURATION=5", "-DDURATION=30"]
  - name: gatlingperftool
    image: tarunkumard/gatling:FirstScript
    command: [ "/bin/bash", "-c", "--" ] # run a task inside the container to keep it running
    args: [ "while true; do sleep 10; done;" ] # a simple infinite sleep loop
    volumeMounts:
    - mountPath: /opt/gatling/user-files/simulations # the mount path within the container
      name: docker-sock # must match the hostPath volume name
    ports:
    - containerPort: 80
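With the sleeping container in place the pod stays Running, and you can verify it (container names as in the manifest above):
kubectl get pod gradlecommandfromcommandline          # STATUS should remain Running
kubectl logs gradlecommandfromcommandline -c gradlecommandfromcommandline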