k8s check wait for pod existence - kubernetes

In a K8s Job declared in YAML (templated with Helm), I need the job to run only if the database pod exists and is ready.
The readiness part works fine, since I added the following:
initContainers:
  - name: wait-mysql-ready
    image: "bitnami/kubectl:latest"
    imagePullPolicy: IfNotPresent
    command:
      - kubectl
    args:
      - wait
      - pod/mysql-pod
      - --for=condition=ready
      - --timeout=120s
It works fine, but I need the job to run exactly once (with no duplicate runs, and with a long enough time window until it ends).
As usual, the job doesn't run just once, and its pod name, when running kubectl get pods, is <jobname-hash>.
If it doesn't run just once, only the last attempt will succeed (because at the earlier attempts the pod isn't created yet); the other attempts may fail.
So I added the following lines to the main spec:
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 0
and in the init container section (before the previous one), I added:
initContainers:
  - name: wait-mysql-exist-pod
    image: "bitnami/kubectl:latest"
    imagePullPolicy: IfNotPresent
    command:
      - /bin/sh
    args:
      - -c
      - "while !(kubectl get pod mysql-pod); do echo 'Waiting for mysql pod to exist...'; sleep 5; done"
(I could not find a better syntax for ! in a multiline string; I would be glad to know one.)
I also need another job to wait for the current job.
How can I create a job that runs once, and check in the job that the pod exists before checking the ready state?

You can use this script as an init container to wait for other resources.
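To tie it together, here is a minimal, untested sketch of how the whole Job could look, assuming the database pod is named mysql-pod and runs in the same namespace, and that a service account with permission to get pods exists (the job, container, and service-account names are placeholders). The exist-check uses an until loop, which sidesteps the ! negation entirely:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-once-job                  # placeholder name
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 0
  activeDeadlineSeconds: 600         # upper bound for the whole run, adjust as needed
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: pod-reader # placeholder; needs RBAC rights to get pods
      initContainers:
        - name: wait-mysql-exist-pod
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - until kubectl get pod mysql-pod; do echo "Waiting for mysql pod to exist..."; sleep 5; done
        - name: wait-mysql-ready
          image: bitnami/kubectl:latest
          command:
            - kubectl
          args:
            - wait
            - pod/mysql-pod
            - --for=condition=ready
            - --timeout=120s
      containers:
        - name: main-job
          image: busybox
          command: ["/bin/sh", "-c", "echo run the actual job here"]
With backoffLimit: 0 and restartPolicy: Never the pod is created only once; if the waits time out, the job is marked failed instead of being retried.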

Related

kubernetes lifecycle commands not running

I have been trying to get preStop to run a script before the pod terminates (to prolong the termination until the current job has finished), but the command doesn't seem to be executed. I've temporarily added an echo command, which I would expect to see in kubectl logs for the pod, but I can't see this either.
This is part of the (otherwise working) deployment spec:
containers:
  - name: file-blast-app
    image: my_image:stuff
    imagePullPolicy: Always
    lifecycle:
      preStop:
        exec:
          command: ["echo", "PRE STOP!"]
Does anyone know why this would not be working, and whether I'm correct to expect the logs from the hook in kubectl logs for the pod?
You forgot to specify the shell through which you want this command to be executed.
Try using the following in your YAML.
containers:
  - name: file-blast-app
    image: my_image:stuff
    imagePullPolicy: Always
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo PRE STOP!"]
Also, one thing to note is that a preStop hook only gets executed when a pod is terminated, not when it completes. You can read more on this here.
You can also refer to the K8S official documentation for lifecycle hooks here.
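One extra caveat, as a hedged note: in my experience the output of an exec lifecycle hook is not part of the container's log stream, so even the corrected command above won't show "PRE STOP!" in kubectl logs; if the handler fails, Kubernetes records an event on the pod instead, which you can see with kubectl describe pod. To verify the hook actually ran, have it write somewhere observable, for example a mounted volume (the path below is just an illustration):
lifecycle:
  preStop:
    exec:
      # write a marker file instead of echoing; check the mounted volume afterwards
      command: ["/bin/sh", "-c", "echo PRE STOP! > /prestop-proof/hook-ran.txt"]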

Handling cronjobs in a Pod with multiple containers

I have a requirement where I need to create a CronJob in Kubernetes, but the pod has multiple containers (with a single container it works fine).
Is it possible?
The requirement is something like this:
1. First container: Run the shell script to do a job.
2. Second container: run fluentbit conf to parse the log and send it.
Previously I had a Deployment in place and that was working fine, but since that Deployment was used just for 10-minute jobs, I thought to make it a CronJob.
Any help is really appreciated.
Also, regarding the CronJob, I am not sure if a pod can support multiple containers to do the same.
Thank you,
Sunny
Yes, you can create a CronJob with multiple containers. A CronJob is an abstraction on top of a Pod, so in the pod spec you can have multiple containers just like in a normal pod. As an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
            - name: app
              image: alpine
              command:
                - echo
                - Hello World!
          restartPolicy: OnFailure
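To confirm that both containers actually ran, you can read each container's logs separately once the CronJob has spawned a pod (the pod name below is illustrative):
# list the pods created by the CronJob's jobs
kubectl get pods -n default
# read the logs of each container in one of those pods
kubectl logs hello-27771230-abcde -c hello
kubectl logs hello-27771230-abcde -c app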
I agree with the answer provided by #Arghya Sadhu. It shows how you can run a multi-container Pod with a CronJob. Before the answer, I would like to give more attention to the comment provided by #Chris Stryczynski:
It's not clear whether the containers are run in parallel or sequentially
It is not entirely clear if the workload that you are trying to run:
The requirement is something like this:
First container: Run the shell script to do a job.
Second container: run fluentbit conf to parse the log and send it.
could run in parallel (both running at the same time) or requires a sequential approach (after X completes successfully, run Y).
If the workload can run in parallel, the answer provided by #Arghya Sadhu is correct; however, if one workload depends on another, I'd reckon you should use initContainers instead of a multi-container Pod.
An example of a CronJob that uses an initContainer could be the following:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: ubuntu
              image: ubuntu
              command: [/bin/bash]
              args: ["-c", "cat /data/hello_there.txt"]
              volumeMounts:
                - name: data-dir
                  mountPath: /data
          initContainers:
            - name: echo
              image: busybox
              command: ["/bin/sh"]
              args: ["-c", "echo 'General Kenobi!' > /data/hello_there.txt"]
              volumeMounts:
                - name: data-dir
                  mountPath: "/data"
          volumes:
            - name: data-dir
              emptyDir: {}
This CronJob writes specific text to a file in an initContainer, and then the "main" container displays its contents. It's worth mentioning that the main container will not start if the initContainer does not complete its operations successfully.
$ kubectl logs hello-1234567890-abcde
General Kenobi!
Additional resources:
Linchpiner.github.io: K8S multi container pods
What about a sidecar container for logging as the second container, one which keeps running and never exits? Even if the job runs, the state of the job may still be failed.
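That's a fair point: a Job's pod only counts as succeeded once every container in it has exited, so a logging sidecar that never terminates will keep the Job from ever completing. One workaround, sketched roughly below with placeholder commands (run-the-actual-work and your-log-shipper are not real binaries), is to share an emptyDir between the containers, let the main container drop a marker file when it is done, and have the sidecar exit once it sees it:
containers:
  - name: main
    image: busybox
    # "run-the-actual-work" is a placeholder for the real job script
    command: ["/bin/sh", "-c", "run-the-actual-work; touch /signal/done"]
    volumeMounts:
      - name: signal
        mountPath: /signal
  - name: log-sidecar
    image: busybox
    # "your-log-shipper" is a placeholder; it is started in the background and
    # stopped once the main container has written the marker file
    command: ["/bin/sh", "-c", "your-log-shipper & LOGPID=$!; while [ ! -f /signal/done ]; do sleep 5; done; kill $LOGPID"]
    volumeMounts:
      - name: signal
        mountPath: /signal
volumes:
  - name: signal
    emptyDir: {}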

How to make a k8s pod exit when the main container exits?

I am using a sidecar pattern for a k8s pod, within which there are two containers: the main container and the sidecar container. I'd like the pod status to depend on the main container only (say, if the main container failed/completed, the pod should be in the same status), disregarding the sidecar container.
Is there an elegant way of doing this?
Unfortunately the restartPolicy flag applies to all containers in the pod so the simple solution isn’t really going to work. Are you sure your logic shouldn’t be in an initContainer rather than a sidecar? If it does need to be a sidecar, have it sleep forever at the end of your logic.
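A rough sketch of that "sleep forever" suggestion (your-sidecar-logic is a placeholder, not a real command); idling instead of exiting keeps the sidecar itself from ever pushing the pod into a failed state:
- name: sidecar
  image: busybox
  # "your-sidecar-logic" is a placeholder; after it finishes, the container idles
  # forever instead of exiting
  command: ["/bin/sh", "-c", "your-sidecar-logic; while true; do sleep 3600; done"]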
As per documentation:
Pod is running and has two Containers. Container 1 exits with failure.
Log failure event.
If restartPolicy is:
Always: Restart Container; Pod phase stays Running.
OnFailure: Restart Container; Pod phase stays Running.
Never: Do not restart Container; Pod phase stays Running.
If Container 1 is not running, and Container 2 exits:
Log failure event.
If restartPolicy is:
Always: Restart Container; Pod phase stays Running.
OnFailure: Restart Container; Pod phase stays Running.
Never: Pod phase becomes Failed.
As a workaround (a partial solution for this problem) with restartPolicy: Never, you can apply the result of the liveness probe from the main container to the sidecar container (using an exec, http, or tcp probe).
It's not a good solution when working with microservices.
Example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness1
      image: k8s.gcr.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /test-pd/healthy; sleep 30; rm -rf /test-pd/healthy; sleep 30
      livenessProbe:
        exec:
          command:
            - cat
            - /test-pd/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
    - name: liveness2
      image: k8s.gcr.io/busybox
      args:
        - /bin/sh
        - -c
        - sleep 120
      livenessProbe:
        exec:
          command:
            - cat
            - /test-pd2/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
      volumeMounts:
        - mountPath: /test-pd2
          name: test-volume
  restartPolicy: Never
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        type: Directory
Please let me know if that helped.

PreStop hook in kubernetes never gets executed

I am trying to create a little Pod example with two containers that share data via an emptyDir volume. In the first container I am waiting a couple of seconds before it gets destroyed.
In the postStart I am writing a file to the shared volume with the name "started", in the preStop I am writing a file to the shared volume with the name "finished".
In the second container I am looping for a couple of seconds outputting the content of the shared volume but the "finished" file never gets created. Describing the pod doesn't show an error with the hooks either.
Maybe someone has an idea of what I am doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-example
  labels:
    app: shared-data-example
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: first-container
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "for i in {1..4}; do echo Welcome $i;sleep 1;done"]
      imagePullPolicy: Never
      env:
        - name: TERM
          value: xterm
      volumeMounts:
        - name: shared-data
          mountPath: /myshareddata
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "echo First container finished > /myshareddata/finished"]
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo First container started > /myshareddata/started"]
    - name: second-container
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "for i in {1..20}; do ls /myshareddata;sleep 1;done"]
      imagePullPolicy: Never
      env:
        - name: TERM
          value: xterm
      volumeMounts:
        - name: shared-data
          mountPath: /myshareddata
  restartPolicy: Never
It is happening because the final status of your pod is Completed and the applications inside the containers stopped without any external call to stop them.
Kubernetes runs the preStop hook only when a pod receives an external signal to stop. Hooks were made to implement a graceful custom shutdown for the applications inside a pod when you need to stop it. In your case, your application already stops gracefully by itself, so Kubernetes has no reason to call the hook.
If you want to check how the hook works, you can try creating a Deployment and updating its image to trigger a rolling update, for example. In that case, Kubernetes will stop the old version of the application, and the preStop hook will be called.
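For a hands-on way to see the hook fire with a setup like the one above (a sketch with illustrative names): wrap the pod template in a Deployment, make the containers long-running (e.g. end their args with a long sleep), and then trigger a rollout, which terminates the old pod and runs its preStop hook before SIGTERM:
# bump the image tag to start a rolling update of the (hypothetical) Deployment
kubectl set image deployment/shared-data-example first-container=ubuntu:22.04
# watch the old pod being terminated and replaced
kubectl rollout status deployment/shared-data-example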

How to roll kubernetes updates in intervals

We have a case where we need to make sure that pods in k8s are running the latest version possible. What is the best way to accomplish this?
Our first idea was to kill the pod after some time, knowing that the new ones will come up pulling the latest image. Here is what we found so far, but we still don't know how to do it.
Another idea is having a rolling update executed in intervals, like every 5 hours. Is there a way to do this?
As mentioned by #svenwltr, using activeDeadlineSeconds is an easy option, but it comes with the risk of losing all pods at once. To mitigate that risk, I'd use a Deployment to manage the pods and their rollout, and configure a small second container along with the actual application. The small helper could be configured like this (following the official docs):
apiVersion: v1
kind: Pod
metadata:
  name: app-liveness
spec:
  containers:
    - name: liveness
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep $(( RANDOM % (3600) + 1800 )); rm -rf /tmp/healthy; sleep 600
      image: gcr.io/google_containers/busybox
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
    - name: yourapplication
      imagePullPolicy: Always
      image: nginx:alpine
With this configuration every pod would break randomly within the configured timeframe (here between 30 and 90 minutes), and that would trigger the start of a new pod. imagePullPolicy: Always would then make sure that the image is updated during that cycle.
This of course assumes that your application versions are always available under the same name/tag.
Another alternative is to use a Deployment and let the controller handle rollouts. To be more specific: if you update the image field in the Deployment YAML, it automatically updates every pod. IMO that's the cleanest way, but it has some requirements:
You cannot use the latest tag. The assumption is that a container only needs an update when the image tag changes.
If an update happens, you have to update the image tag somehow. This might be done by a custom controller which checks for new tags and updates the Deployment accordingly, or it could be triggered by a Continuous Delivery system.
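As a sketch of what such a Continuous Delivery step might look like (the deployment name, container name, and tag are placeholders):
# after CI has pushed yourrepo/yourapp:1.2.3, point the Deployment at the new tag
kubectl set image deployment/yourapp yourapp=yourrepo/yourapp:1.2.3
# block until the rollout has finished (or failed)
kubectl rollout status deployment/yourapp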
To use the feature you linked, you just have to specify activeDeadlineSeconds in your pods.
An untested example:
apiVersion: v1
kind: Pod
metadata:
  name: "nginx"
spec:
  activeDeadlineSeconds: 3600
  containers:
    - name: nginx
      image: nginx:alpine
      imagePullPolicy: Always
The downside of this is that you cannot control when the deadline kicks in. This means it might happen that all your pods get killed at the same time and the whole service goes offline (that depends on your application).
I tried using Pagid's solution, but unfortunately my observation and subsequent research indicate that his assertion that a failing container will restart the whole pod is incorrect. It turns out that only the failing container will be restarted, which obviously does not help much when the point is to restart the other containers in the pod at random intervals.
The good news is that I have a solution that seems to work, based on his answer. Basically, instead of writing to /tmp/healthy, you write to a shared volume which each of the containers within the pod has mounted. You also need to add the liveness probe to each of those containers. Here's an example based on the one I am using:
volumes:
  - name: healthcheck
    emptyDir:
      medium: Memory
containers:
  - image: alpine:latest
    volumeMounts:
      - mountPath: /healthcheck
        name: healthcheck
    name: alpine
    livenessProbe:
      exec:
        command:
          - cat
          - /healthcheck/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
  - name: liveness
    args:
      - /bin/sh
      - -c
      - touch /healthcheck/healthy; sleep $(( RANDOM % (3600) + 1800 )); rm -rf /healthcheck/healthy; sleep 600
    image: gcr.io/google_containers/busybox
    volumeMounts:
      - mountPath: /healthcheck
        name: healthcheck
    livenessProbe:
      exec:
        command:
          - cat
          - /healthcheck/healthy
      initialDelaySeconds: 5
      periodSeconds: 5