google container startup script - kubernetes

I have created a /usr/startup.sh script in a Google container which I would like to execute on startup of every pod.
I tried doing it through the command field in the YAML, like below:
command: "sh /usr/start.sh"
command: ["sh", "-c", "/usr/start.sh"]
Please let me know if there is any way to execute a defined script at startup in a Google container/pod.

You may want to look at the postStart lifecycle hook.
An example can be found in the kubernetes repo:
containers:
- name: nginx
  image: resouer/myapp:v6
  lifecycle:
    postStart:
      exec:
        command:
          - "cp"
          - "/app/myapp.war /work"
Here are the API docs:
// Lifecycle describes actions that the management system should take in response to container lifecycle
// events. For the PostStart and PreStop lifecycle handlers, management of the container blocks
// until the action is complete, unless the container process fails, in which case the handler is aborted.
type Lifecycle struct {
    // PostStart is called immediately after a container is created. If the handler fails, the container
    // is terminated and restarted.
    PostStart *Handler `json:"postStart,omitempty"`
    // PreStop is called immediately before a container is terminated. The reason for termination is
    // passed to the handler. Regardless of the outcome of the handler, the container is eventually terminated.
    PreStop *Handler `json:"preStop,omitempty"`
}
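Applied to your case, a minimal sketch might look like this (the image name is a placeholder, and the script path is taken from your command attempts; the script must exist in the image and be executable):
apiVersion: v1
kind: Pod
metadata:
  name: startup-script-example
spec:
  containers:
  - name: myapp
    image: myregistry/myapp:latest        # placeholder image
    lifecycle:
      postStart:
        exec:
          # runs right after the container is created, alongside the container's normal entrypoint
          command: ["sh", "-c", "/usr/start.sh"]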

Startup scripts run on node startup, not for every pod. We don't currently have a "hook" in kubelet to run whenever a pod starts on a node. Can you maybe explain what you're trying to do?

Related

Running a pod/container in Kubernetes that applies maintenance to a DB

I have found several people asking about how to start a container running a DB, then run a different container that runs maintenance/migration on the DB which then exits. Here are all of the solutions I've examined and what I think are the problems with each:
Init Containers - This won't work because these run before the main container is up and they block the starting of the main container until they successfully complete.
Post Start Hook - If the postStart hook could start containers rather than simply exec a command inside the container then this would work. Unfortunately, the container with the DB does not (and should not) contain the rather large maintenance application required to run it this way. This would be a violation of the principle that each component should do one thing and do it well.
Sidecar Pattern - This WOULD work if the restartPolicy were assignable or overridable at the container level rather than the pod level. In my case the maintenance container should terminate successfully before the pod is considered Running (just like would be the case if the postStart hook could run a container) while the DB container should Always restart.
Separate Pod - Running the maintenance as a separate pod can work, but the DB shouldn't be considered up until the maintenance runs. That means managing the Running state has to be done completely independently of Kubernetes. Every other container/pod in the system will have to do a custom check that the maintenance has run rather than a simple check that the DB is up.
Using a Job - Unless I misunderstand how these work, this would be equivalent to the above ("Separate Pod").
OnFailure restart policy with a Sidecar - This means using a restartPolicy of OnFailure for the POD but then hacking the DB container so that it always exits with an error. This is doable but obviously just a hacked workaround. EDIT: This also causes problems with the state of the POD. When the maintenance runs and stays up and both containers are running, the state of the POD is Ready, but once the maintenance container exits, even with a SUCCESS (0 exit code), the state of the POD goes to NotReady 1/2.
Is there an option I've overlooked or something I'm missing about the above solutions? Thanks.
One option would be to use the Sidecar pattern with 2 slight changes to the approach you described:
after the maintenance command is executed, you keep the container running with a while : ; do sleep 86400; done command or something similar.
You set an appropriate startupProbe in place that resolves successfully only when your maintenance command is executed successfully. You could for example create a file /maintenance-done and use a startupProbe like this:
startupProbe:
  exec:
    command:
      - cat
      - /maintenance-done
  initialDelaySeconds: 5
  periodSeconds: 5
With this approach you have the following outcome:
Having the same restartPolicy for both your database and sidecar containers works fine thanks to the sleep hack.
Your Pod only becomes ready when both containers are ready. In the sidecar container's case this happens when the startupProbe succeeds.
Furthermore, there will be no noticeable overhead in your pod: even if the sidecar container keeps running, it will consume close to zero resources since it is only running the sleep command.
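Putting the pieces together, a minimal sketch of such a pod could look like the following (the image names, the maintenance command, and the marker file path are assumptions for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: db-with-maintenance
spec:
  restartPolicy: Always
  containers:
  - name: db
    image: postgres:15                              # placeholder database image
  - name: maintenance
    image: myregistry/db-maintenance:latest         # placeholder maintenance image
    command:
    - /bin/sh
    - -c
    # run the maintenance, drop a marker file on success, then sleep forever
    - ./run-maintenance && touch /maintenance-done; while : ; do sleep 86400; done
    startupProbe:
      exec:
        command:
        - cat
        - /maintenance-done
      initialDelaySeconds: 5
      periodSeconds: 5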

How to terminate another container in the same pod when your main container finish its job

Using OpenShift 3.9, I run a daily CronJob that consists of 2 containers:
A Redis server
A Python script that uses the Redis server
When the python script finishes its execution, the container is terminated normally but the Redis server container stays up.
Is there a way to tell the Redis server container to automatically terminate its execution when the Python script exits? Is there an equivalent to the depends_on of docker compose?
Based on Dawid Kruk's comment, I added this line at the end of my Python script to shut down the server:
os.system('redis-cli shutdown NOSAVE')
It effectively terminates the container.
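For reference, a minimal sketch of the CronJob's pod template could look like this (the image names, schedule, and script path are placeholders; the worker image is assumed to ship both the script and redis-cli):
apiVersion: batch/v1beta1                            # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: daily-redis-task
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: redis
            image: redis:5                           # placeholder Redis image
          - name: worker
            image: myregistry/python-worker:latest   # placeholder; assumed to include redis-cli
            command:
            - /bin/sh
            - -c
            # run the script, then shut the Redis server down so its container exits too
            - python /app/script.py; redis-cli -h localhost shutdown NOSAVE
Since both containers share the pod's network namespace, redis-cli can reach the server on localhost.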

Kubernetes - run job after pod status is ready

I am basically looking for mechanics similar to init containers, with the caveat that I want it to run after the pod is ready (responds to its readinessProbe, for instance). Are there any hooks that can be applied to the readinessProbe, so that it can fire a job after the first successful probe?
thanks in advance
You can use a lifecycle hook on the pod, or rather on the container.
For example:
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo In postStart > /dev/termination-log"]
This is the postStart hook, so in principle it should work. But the postStart hook runs asynchronously: it is triggered as soon as the container is created, possibly even before the container's entrypoint has started.
Update
The postStart hook above runs as soon as the container is created, not when it is Ready.
So if you are looking for the moment it becomes Ready, you have to use a startup probe instead.
A startup probe is like a readiness or liveness probe, but it runs only once: it checks for application readiness, and once the application is ready, the liveness probe takes its place.
Read more about startup probes.
From the startup probe you can invoke the Job or run any kind of shell script; it runs only once, after your application returns 200 on the /healthz endpoint.
startupProbe:
  exec:
    command:
      - /bin/bash
      - -c
      - ./run-after-ready.sh
  failureThreshold: 30
  periodSeconds: 10
The file run-after-ready.sh in the container:
#!/bin/sh
curl -f -s -I "http://localhost/healthz" >/dev/null 2>&1 && echo OK || echo FAIL
.
. # your extra code or logic (wait, sleep, etc.) can go here
.
You can add more checks or conditions to the shell script, against the application's readiness or some API, as needed.
I don't think there is anything in vanilla k8s that can achieve this right now. However, there are 2 options to go about this:
If it is fine to retry the initialization task multiple times until it succeeds, then I would just start the task as a Job at the same time as the pod you want to initialize. This is the easiest option, but it might be a bit slow because of exponential backoff.
If it is critical that the initialization task only runs after the pod is ready, or if you want the task not to waste time failing and backing off a few times, then you should still run that task as a Job, but this time have it watch the pod in question using the k8s API and execute the task as soon as the pod becomes ready (see the sketch below).
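For the second option, a minimal sketch could be a Job whose container uses kubectl to wait for the pod's Ready condition before running the task (the image, label selector, and service account name are assumptions; the service account needs RBAC permission to get/list/watch pods):
apiVersion: batch/v1
kind: Job
metadata:
  name: run-after-ready
spec:
  backoffLimit: 10
  template:
    spec:
      restartPolicy: OnFailure
      serviceAccountName: pod-watcher                # placeholder; needs get/list/watch on pods
      containers:
      - name: wait-and-run
        image: bitnami/kubectl:latest                # placeholder image that ships kubectl
        command:
        - /bin/sh
        - -c
        # block until the target pod reports Ready, then run the initialization task
        - kubectl wait --for=condition=Ready pod -l app=myapp --timeout=600s && echo "run the initialization task here"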

Kubernetes - How can a container be started with 2 processes and bound to both of them?

I need a deployment where each pod has a single container and each container has 2 Java processes running. Since a container starts with a process (P1), if that particular process (P1) is killed, the pod restarts. Is it possible for a container to start with 2 processes, so that even if one of them is killed, the container (or pod in our case, since each pod has only one container) restarts? I could not find any documentation which says whether this can or cannot be done. Also, how can I start the container with 2 processes? If I try something like this (javaProcess is a Java file) in my Docker image, it runs only the first process:
java -jar abc.jar
java javaProcess
or
java javaProcess
java -jar abc.jar
If I start the container with one process(P1) and start the other process(P2) after the container is up, the container would not be bound to P2 and hence if P2 terminates, the container won't restart. But, I need it to restart!
You can do this using supervisord. Your container's main process should be supervisord in the Docker image, and the two Java processes should be managed by supervisord.
supervisord's primary purpose is to create and manage processes based on data in its configuration file. It does this by creating subprocesses. Each subprocess spawned by supervisor is managed for the entirety of its lifetime by supervisord (supervisord is the parent process of each process it creates). When a child dies, supervisor is notified of its death via the SIGCHLD signal, and it performs the appropriate operation.
Following is a sample supervisord config file (supervisord.conf) that starts two Java processes:
[supervisord]
nodaemon=true
[program:java1]
user=root
startsecs = 120
autorestart = true
command=java javaProcess1
[program:java2]
user=root
startsecs = 120
autorestart = true
command=java javaProcess2
In your Dockerfile, you should do something like this:
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
Run supervisor from Kubernetes config
To add to @AnuruddhaLankaLiyanarachchi's answer, you can also run supervisor from your Kubernetes setup by supplying the command and args keys in the YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-example
spec:
  containers:
  - name: nginx-pod-example
    image: library/nginx
    command: ["/usr/bin/supervisord"]
    args: ["-n", "-c", "/etc/supervisor/supervisord.conf"]
You can add an '&' sign to run a process in the background:
java javaProcess &
java -jar abc.jar
In this way you will get two processes running inside your pod. But your container will be bound only to the process that is in the foreground (see the sketch below)!
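For illustration, here is a minimal sketch of that approach in the container spec (the image name is a placeholder); note that only the foreground java -jar process is watched by the kubelet:
containers:
- name: two-procs
  image: myregistry/java-app:latest      # placeholder image
  command:
  - /bin/sh
  - -c
  # the backgrounded process is NOT monitored; the container lives and dies with the foreground one
  - java javaProcess & exec java -jar abc.jar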
I need a deployment where each pod has a single container and each container
has 2 java processes running.
This is a textbook case for running two containers in one pod. The only exceptions I can think of are P1 and P2 sharing a socket or (worse) using shared memory.
Using supervisord will work, but it is an anti-pattern for Kubernetes. As you already noticed, there are several combinations of P1/P2 failing that are best dealt with by Kubernetes directly.
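For comparison, a minimal sketch of the two-container approach (image names are placeholders; with the default restartPolicy: Always, whichever container's process dies is restarted by the kubelet on its own):
apiVersion: v1
kind: Pod
metadata:
  name: two-java-processes
spec:
  containers:
  - name: process-one
    image: myregistry/java-app:latest      # placeholder image
    command: ["java", "-jar", "abc.jar"]
  - name: process-two
    image: myregistry/java-app:latest      # placeholder image
    command: ["java", "javaProcess"]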

Is there a way to cancel a handler after it was notified?

I've got an environment consisting of a stack of dockerized micro-apps, where some are dependent on others, linked to each other, and communicating over HTTP on the Docker interface. My problem was that docker-compose tracked only the docker-compose.yml file and recreated containers only when docker-compose.yml had been changed.
With Ansible I can finally start tracking config files that get mounted as volumes inside the containers, so they can be deployed from templates, which works fantastically.
Before Ansible I used to run:
docker-compose stop <app> && docker-compose rm -f <app> && docker-compose up -d
to refresh a single app when I knew the mounted file had been changed and the volumes needed to be refreshed.
I've defined multiple roles with the docker_service module, one for each app, each with its own handler that, when notified, runs the code above to refresh that particular app.
The problem is, when multiple apps have their mounted files changed, Ansible notifies each handler and each one gets executed, which is not what I need: when the primary container (on which others depend) gets recreated, the others don't need to be, because they have already been recreated, yet their handlers are also executed. So my question is: is there a way to cancel a notified handler? I know about flush_handlers, but that just executes notified handlers, which is not exactly what I need.
You can use conditionals in handlers.
Use a flag variable to indicate that some handlers shouldn't execute.
- name: restart myapp1
  shell: docker ...
  when: not block_apps_restart
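For example, a minimal sketch (the app and variable names are hypothetical): once the primary app has been recreated, set the flag in its role's tasks, and let the dependent handlers skip themselves.
Task in the primary app's role:
- name: skip dependent app restarts
  set_fact:
    block_apps_restart: true
Handler in a dependent app's role:
- name: restart myapp1
  shell: docker-compose stop myapp1 && docker-compose rm -f myapp1 && docker-compose up -d
  when: not (block_apps_restart | default(false))
Handlers run at the end of the play, so the fact set during the tasks is already in place when the when condition is evaluated.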