How to terminate another container in the same pod when your main container finishes its job - kubernetes

Using OpenShift 3.9, I run a daily CronJob that consists of 2 containers:
A Redis server
A Python script that uses the Redis server
When the Python script finishes its execution, its container terminates normally, but the Redis server container stays up.
Is there a way to tell the Redis server container to automatically terminate when the Python script exits? Is there an equivalent to Docker Compose's depends_on?

Based on Dawid Kruk's comment, I added this line at the end of my Python script to shut down the server:
os.system('redis-cli shutdown NOSAVE')
It effectively terminates the container.
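For reference, here is a minimal sketch of what such a CronJob could look like, with the Python container shutting down Redis as its last step. The names, image tags, and schedule are placeholders, and the API version assumes the Kubernetes 1.9 level shipped with OpenShift 3.9:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-report            # hypothetical name
spec:
  schedule: "0 3 * * *"         # hypothetical schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: redis
              image: redis:3.2                           # placeholder image
            - name: worker
              image: my-registry/report-worker:latest    # placeholder image
              # the script talks to Redis on localhost and finishes with
              # os.system('redis-cli shutdown NOSAVE') so the redis container exits too
              command: ["python", "/app/report.py"]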

Related

Running a pod/container in Kubernetes that applies maintenance to a DB

I have found several people asking about how to start a container running a DB, then run a different container that runs maintenance/migration on the DB which then exits. Here are all of the solutions I've examined and what I think are the problems with each:
Init Containers - This won't work because these run before the main container is up and they block the starting of the main container until they successfully complete.
Post Start Hook - If the postStart hook could start containers rather than simply exec a command inside the container then this would work. Unfortunately, the container with the DB does not (and should not) contain the rather large maintenance application required to run it this way. This would be a violation of the principle that each component should do one thing and do it well.
Sidecar Pattern - This WOULD work if the restartPolicy were assignable or overridable at the container level rather than the pod level. In my case the maintenance container should terminate successfully before the pod is considered Running (just like would be the case if the postStart hook could run a container) while the DB container should Always restart.
Separate Pod - Running the maintenance as a separate pod can work, but the DB shouldn't be considered up until the maintenance runs. That means managing the Running state has to be done completely independently of Kubernetes. Every other container/pod in the system will have to do a custom check that the maintenance has run rather than a simple check that the DB is up.
Using a Job - Unless I misunderstand how these work, this would be equivalent to the above ("Separate Pod").
OnFailure restart policy with a Sidecar - This means using a restartPolicy of OnFailure for the POD but then hacking the DB container so that it always exits with an error. This is doable but obviously just a hacked workaround. EDIT: This also causes problems with the state of the POD. When the maintenance runs and stays up and both containers are running, the state of the POD is Ready, but once the maintenance container exits, even with a SUCCESS (0 exit code), the state of the POD goes to NotReady 1/2.
Is there an option I've overlooked or something I'm missing about the above solutions? Thanks.
One option would be to use the Sidecar pattern with 2 slight changes to the approach you described:
After the maintenance command is executed, you keep the container running with a while : ; do sleep 86400; done loop or something similar.
You set an appropriate startupProbe in place that resolves successfully only when your maintenance command is executed successfully. You could for example create a file /maintenance-done and use a startupProbe like this:
startupProbe:
  exec:
    command:
      - cat
      - /maintenance-done
  initialDelaySeconds: 5
  periodSeconds: 5
With this approach you have the following outcome:
Having the same restartPolicy for both your database and sidecar containers works fine thanks to the sleep hack.
Your Pod only becomes ready when both containers are ready. In the sidecar container's case, this happens when the startupProbe succeeds.
Furthermore, there will be no noticeable overhead in your pod: even if the sidecar container keeps running, it will consume close to zero resources since it is only running the sleep command.
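Putting the pieces together, a Pod following this pattern could look roughly like the sketch below. Image names and the run-migrations.sh script are placeholders, and startupProbe assumes a cluster version recent enough to support it:
apiVersion: v1
kind: Pod
metadata:
  name: db-with-maintenance                      # hypothetical name
spec:
  restartPolicy: Always
  containers:
    - name: db
      image: postgres:15                         # placeholder database image
    - name: maintenance
      image: my-registry/db-maintenance:latest   # placeholder maintenance image
      # run the migration, mark completion, then sleep forever so the container
      # keeps running under the shared restartPolicy
      command: ["sh", "-c"]
      args:
        - "./run-migrations.sh && touch /maintenance-done; while :; do sleep 86400; done"
      startupProbe:
        exec:
          command: ["cat", "/maintenance-done"]
        initialDelaySeconds: 5
        periodSeconds: 5
        failureThreshold: 30                     # allow roughly 150s for the maintenance to finish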

Airflow: what do `airflow webserver`, `airflow scheduler` and `airflow worker` exactly do?

I've been working with Airflow for a while now, in a setup that was put together by a colleague. Lately I've run into several errors, which require me to understand in more depth how to fix certain things within Airflow.
I do understand what the 3 processes are, I just don't understand the underlying things that happen when I run them. What exactly happens when I run one of the commands? Can I somewhere see afterwards that they are running? And if I run one of these commands, does this overwrite older webservers/schedulers/workers or add a new one?
Moreover, if I run airflow webserver, for example, the screen shows some of the things that are happening. Can I simply get out of this by pressing CTRL + C? Because when I do this, it says things like Worker exiting and Shutting down: Master. Does this mean I'm shutting everything down? How else should I get out of the webserver screen then?
Each process does what it is built to do while it is running (the webserver provides a UI, the scheduler determines when things need to be run, and the workers actually run the tasks).
I think your confusion is that you may be seeing them as commands that tell some sort of "Airflow service" to do something, but they are each standalone commands that start the processes to do stuff. I.e. starting from nothing, you run airflow scheduler: now you have a scheduler running. Run airflow webserver: now you have a webserver running. When you run airflow webserver, it is starting a python flask app. While that process is running, the webserver is running; if you kill the command, it goes down.
All three have to be running for airflow as a whole to work (assuming you are using an executor that needs workers). You should only ever have one scheduler running, but if you were to run two processes of airflow webserver (ignoring port conflicts), you would then have two separate http servers running using the same metadata database. Workers are a little different, in that you may want multiple worker processes running so you can execute more tasks concurrently. So if you create multiple airflow worker processes, you'll end up with multiple processes taking jobs from the queue, executing them, and updating the task instance with the status of the task.
When you run any of these commands you'll see the stdout and stderr output in console. If you are running them as a daemon or background process, you can check what processes are running on the server.
If you ctrl+c you are sending a signal to kill the process. Ideally, for a production airflow cluster, you should have some supervisor monitoring the processes and ensuring that they are always running. Locally you can either run the commands in the foreground of separate shells, minimize them, and just keep them running when you need them, or run them as background daemons with the -D argument, e.g. airflow webserver -D.

Airflow 1.9 - Tasks stuck in queue

Latest Apache-Airflow install from PyPI (1.9.0)
Set up includes:
Apache-Airflow
Apache-Airflow[celery]
RabbitMQ 3.7.5
Celery 4.1.1
Postgres
I have the installation across 3 hosts.
Host #1
Airflow Webserver
Airflow Scheduler
RabbitMQ Server
Postgres Server
Host #2
Airflow Worker
Host #3
Airflow Worker
I have a simple DAG that executes a BashOperator task and runs every 1 minute. I can see the scheduler "queue" the job; however, it never gets added to a Celery/RabbitMQ queue and picked up by the workers. I have a custom RabbitMQ user, and authentication seems fine. Flower, however, doesn't show any of the queues populating with data. It does see the two worker machines listening on their respective queues.
Things I've checked:
Airflow Pool configuration
Airflow environment variables
Upgrade/Downgrade Celery and RabbitMQ
Postgres permissions
RabbitMQ Permissions
DEBUG level airflow logs
I read the documentation section about jobs not running. My "start_date" variable is a static date that exists before the current date.
OS: Centos 7
I was able to figure it out but I'm not sure why this is the answer.
Changing the "broker_url" variable to use "pyamqp" instead of "amqp" was the fix.

sbt docker:publish - app crashes but container doesn't

I'm building docker images for my Scala applications using the sbt-native-packager plugin. I noticed that when the process inside a container crashes (log shows Exception in thread "main"... and the process is definitely dead), the container is still "alive":
me#my-laptop$ docker exec 5cca ps
PID TTY TIME CMD
1 ? 00:00:08 java
152 ? 00:00:00 ps
The generated Dockerfile is:
FROM java:openjdk-8-jre
WORKDIR /opt/docker
ADD opt /opt
RUN ["chown", "-R", "daemon:daemon", "."]
USER daemon
ENTRYPOINT ["bin/the-app-name"]
CMD []
where bin/the-app-name is a pretty big auto-generated bash script that gathers all the necessary parameters (classpath, main class name, etc.) and runs the app using the java command. So my guess is that something about this setup makes docker consider the container to be "running" as long as the JVM is running, regardless of my code crashing...
Any idea how I can cause my container to exit when the app crashes?
When running naked pods this behavior is expected, because naked pods are not rescheduled in the event of node failure.
When you deploy the pod, do you set the restartPolicy to "Always", "OnFailure" or "Never"?
The current status of the pod might be "Ok" right now, but this does not necessarily mean that the pod was not restarted before.
Can you run kubectl get po and print the output to check if the pod was restarted or not?
Info on naked pods here: https://kubernetes.io/docs/concepts/configuration/overview/#naked-pods-vs-replication-controllers-and-jobs
More info on restart policy: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
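For reference, restartPolicy lives at the pod level of the spec; here is a minimal sketch (names and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: my-scala-app                           # hypothetical name
spec:
  restartPolicy: OnFailure                     # Always | OnFailure | Never
  containers:
    - name: app
      image: my-registry/the-app-name:latest   # placeholder image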
After some experimenting, it looks like there's a thread leak somewhere that prevents the application from exiting. I suspect it may be coming from the akka ActorSystem, but I have not found it yet.
Either way, catching the exception on the main thread and calling System.exit(1) causes the java process to die and the container stops.

Kubernetes - How can a container be started with 2 processes and bound to both of them?

I need a deployment where each pod has a single container and each container has 2 java processes running. Since a container starts with a process (P1), if that particular process (P1) is killed, the pod restarts. Is it possible for the container to start with 2 processes, so that even if one of them is killed, the container (or the pod in our case, since each pod has only one container) restarts? I could not find any documentation that says whether this can or cannot be done. Also, how can I start the container with 2 processes? If I try something like this (javaProcess is a java file) in my docker image, it runs only the first process:
java -jar abc.jar
java javaProcess
or
java javaProcess
java -jar abc.jar
If I start the container with one process (P1) and start the other process (P2) after the container is up, the container would not be bound to P2, and hence if P2 terminates, the container won't restart. But I need it to restart!
You can do this using supervisord. Your main process should be bound to supervisord in the docker image, and the two java processes should be managed using supervisord.
supervisord's primary purpose is to create and manage processes based on data in its configuration file. It does this by creating subprocesses. Each subprocess spawned by supervisor is managed for the entirety of its lifetime by supervisord (supervisord is the parent process of each process it creates). When a child dies, supervisor is notified of its death via the SIGCHLD signal, and it performs the appropriate operation.
The following is a sample supervisord config file which starts two java processes (supervisord.conf):
[supervisord]
nodaemon=true
[program:java1]
user=root
startsecs = 120
autorestart = true
command=java javaProcess1
[program:java2]
user=root
startsecs = 120
autorestart = true
command=java javaProcess2
In your Dockerfile, you should do something like this:
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
Run supervisor from Kubernetes config
To add to @AnuruddhaLankaLiyanarachchi's answer, you can also run supervisor from your Kubernetes setup by supplying the command and args keys in the yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-example
spec:
  containers:
    - name: nginx-pod-example
      image: library/nginx
      command: ["/usr/bin/supervisord"]
      args: ["-n", "-c", "/etc/supervisor/supervisord.conf"]
You can add an '&' sign to run a process in the background.
java javaProcess &
java -jar abc.jar
In this way you will get two processes running inside your pod. But your container will only be bound to the process that runs in the foreground!
I need a deployment where each pod has a single container and each container has 2 java processes running.
This is a textbook case for running two containers in one pod. The only exceptions I can think of are P1 and P2 sharing a socket or (worse) using shared memory.
Using supervisord will work, but it is an anti-pattern for Kubernetes. As you already noticed, there are several combinations of P1/P2 failing that are best dealt with by Kubernetes directly.
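For completeness, here is a minimal sketch of the two-containers-in-one-pod approach (names and images are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: two-java-processes                      # hypothetical name
spec:
  restartPolicy: Always
  containers:
    - name: abc-service
      image: my-registry/abc:latest             # placeholder image containing abc.jar
      command: ["java", "-jar", "abc.jar"]
    - name: java-process
      image: my-registry/java-process:latest    # placeholder image containing javaProcess
      command: ["java", "javaProcess"]
  # with each process in its own container, the kubelet restarts whichever
  # container exits, without relying on supervisord or background jobs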