Gracefully update running Celery pod in Kubernetes

I have a Kubernetes cluster running Django, Celery, RabbitMQ and Celery Beat. I have several periodic tasks spaced out throughout the day (so as to keep server load down). There are only a few hours when no tasks are running, and I want to limit my rolling updates to those times, without having to track it manually. So I'm looking for a solution that will allow me to fire off a script or task of some sort that will monitor the Celery server and trigger a rolling update once there's a window in which no tasks are actively running. There are two possible ways I thought of doing this, but I'm not sure which is best, nor how to implement either one:
1. Run a script (bash or otherwise) that checks up on the Celery server every few minutes and initiates the rolling update if the server is inactive.
2. Increment the Celery app name before each update (in the Beat run command, the Celery run command, and in the celery.py config file), create a new Celery pod, rolling-update the Beat pod, and then delete the old Celery pod 12 hours later (a reasonable time span for all running tasks to finish).
Any thoughts would be greatly appreciated.
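
For reference, the first approach could look roughly like the sketch below. This is only an illustration: the app name, broker URL and deployment name are hypothetical, and it assumes kubectl is configured wherever the script runs.

```python
import subprocess
import time

from celery import Celery

# Hypothetical app name and broker URL -- substitute your own.
app = Celery("myproject", broker="amqp://guest@rabbitmq:5672//")


def no_tasks_running() -> bool:
    """Return True only if every worker reports an empty active-task list."""
    replies = app.control.inspect(timeout=5).active()
    if not replies:  # no workers answered; treat that as "not safe yet"
        return False
    return all(len(tasks) == 0 for tasks in replies.values())


if __name__ == "__main__":
    while True:
        if no_tasks_running():
            # Hypothetical deployment name; "rollout restart" needs kubectl >= 1.15.
            subprocess.run(
                ["kubectl", "rollout", "restart", "deployment/celery-worker"],
                check=True,
            )
            break
        time.sleep(300)  # check again in five minutes
```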

Related

Upgrade container for long-running jobs in Kubernetes

I have long-running workers in Kubernetes - jobs take more than 5 hours. I want to update the container without interrupting the long-running jobs. I want any newly started work off the queue to run with the new version of the release, but I don't want to interrupt the currently running work.
BTW I'm not actually using Jobs, I'm using Deployments with workers that get work off a Redis queue.
What is the best way to do a release without killing the long-running work?
Have a huge timeout for SIGTERM? (See the sketch after this list.)
preStop hooks?
Another container in the pod that checks for the latest version and updates once work is done?
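
For what it's worth, the first option (a long SIGTERM grace period) is usually the simplest fit: set terminationGracePeriodSeconds on the pod to longer than your longest job, and have the worker treat SIGTERM as "finish the current job, then exit". A minimal sketch of such a worker loop, with the Redis calls stubbed out since the real queue logic isn't shown in the question:

```python
import signal
import time

shutdown_requested = False


def handle_sigterm(signum, frame):
    # Kubernetes sends SIGTERM when the pod starts terminating; only set a
    # flag so the job currently in flight can run to completion.
    global shutdown_requested
    shutdown_requested = True


signal.signal(signal.SIGTERM, handle_sigterm)


def fetch_job():
    # Stub: in the real worker this would be e.g. a blocking pop from Redis.
    return {"duration": 5}


def process_job(job):
    # Stub: stands in for the multi-hour unit of work.
    time.sleep(job["duration"])


if __name__ == "__main__":
    while not shutdown_requested:
        job = fetch_job()
        if job:
            process_job(job)
    # The loop only exits between jobs. Kubernetes sends SIGKILL once
    # terminationGracePeriodSeconds expires, so set that field in the
    # Deployment's pod spec to more than the longest job (well over 5 hours here).
```

During a rolling update the old pod then drains its current job while the new pod, running the new image, starts taking fresh work off the queue.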

Airflow: what do `airflow webserver`, `airflow scheduler` and `airflow worker` exactly do?

I've been working with Airflow for a while now; it was set up by a colleague. Lately I've run into several errors, which require me to understand in more depth how to fix certain things within Airflow.
I do understand what the 3 processes are, I just don't understand the underlying things that happen when I run them. What exactly happens when I run one of these commands? Can I see somewhere afterwards that they are running? And if I run one of these commands, does this overwrite older webservers/schedulers/workers or add a new one?
Moreover, if I for example run airflow webserver, the screen shows some of the things that are happening. Can I simply get out of this by pressing CTRL + C? Because when I do this, it says things like Worker exiting and Shutting down: Master. Does this mean I'm shutting everything down? How else should I get out of the webserver screen then?
Each process does what it is built to do while it is running (the webserver provides a UI, the scheduler determines when things need to be run, and the workers actually run the tasks).
I think your confusion is that you may be seeing them as commands that tell some sort of "Airflow service" to do something, but they are each standalone commands that start the processes to do stuff. i.e. starting from nothing, you run airflow scheduler: now you have a scheduler running. Run airflow webserver: now you have a webserver running. When you run airflow webserver, it starts a Python Flask app. While that process is running, the webserver is running; if you kill the process, it goes down.
All three have to be running for Airflow as a whole to work (assuming you are using an executor that needs workers). You should only ever have one scheduler running, but if you were to run two airflow webserver processes (ignoring port conflicts), you would then have two separate HTTP servers running against the same metadata database. Workers are a little different, in that you may want multiple worker processes running so you can execute more tasks concurrently. So if you create multiple airflow worker processes, you'll end up with multiple processes taking jobs from the queue, executing them, and updating the task instance with the status of the task.
When you run any of these commands you'll see the stdout and stderr output in the console. If you are running them as a daemon or background process, you can check what processes are running on the server.
If you press Ctrl+C you are sending a signal to kill the process. Ideally, for a production Airflow cluster, you should have some supervisor monitoring the processes and ensuring that they are always running. Locally, you can either run the commands in the foreground of separate shells, minimize them, and just keep them running when you need them, or run them as background daemons with the -D argument, e.g. airflow webserver -D.
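
To make the "standalone processes" point concrete, here is a hedged sketch (assuming the airflow CLI is on PATH and a Linux host; the command names are Airflow 1.x style as in the question, whereas Airflow 2.x starts the worker with airflow celery worker) that launches each component as a daemon and then lists the resulting OS processes:

```python
import subprocess

# Each of these starts an independent OS process, not a message to some shared
# "Airflow service". The -D flag daemonizes it instead of holding the shell.
for component in ("webserver", "scheduler", "worker"):
    subprocess.run(["airflow", component, "-D"], check=True)

# Because they are ordinary processes, a process listing is how you confirm
# afterwards that they are running; Ctrl+C or kill only affects the single
# process you send it to.
listing = subprocess.run(["pgrep", "-af", "airflow"], capture_output=True, text=True)
print(listing.stdout)
```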

Should I restart celery beat after restarting celery workers?

I have two systemd services: one handles my Celery workers (10 queues for different tasks) and one handles Celery beat.
After deploying new code, I restart the Celery worker service to pick up the new tasks and updated Celery jobs.
Should I restart Celery beat together with the Celery worker service, or does it pick up the new tasks automatically?
It depends on what type of scheduler you're using.
If it's the default PersistentScheduler, then yes, you need to restart the beat daemon so it picks up the new configuration from the beat_schedule setting.
But if you're using something like django-celery-beat which allows managing periodic tasks at runtime then you don't have to restart celery beat.
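
For context, the beat_schedule the answer refers to is plain Python configuration that is read when beat starts, which is why the default PersistentScheduler can't see edits until the beat process is restarted. A sketch with hypothetical app and task names:

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("myproject", broker="amqp://guest@rabbitmq//")  # hypothetical broker

# With the default PersistentScheduler this dict is loaded when `celery beat`
# starts, so any change here requires restarting the beat service. With
# django-celery-beat the schedule lives in the database instead and can be
# edited at runtime.
app.conf.beat_schedule = {
    "nightly-cleanup": {
        "task": "myproject.tasks.cleanup",      # hypothetical task path
        "schedule": crontab(hour=3, minute=0),  # run at 03:00 every day
    },
}
```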

Will celery handle tasks gracefully if supervisorctl stop/start/restart is used?

In a case recently, I had to restart some inexplicably idle workers run by supervisord. We are thinking about adding a periodic restart, say, once or twice a day.
This could easily be done using supervisorctl, but is there any chance tasks will be lost while the restart occurs?
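
By default supervisorctl stop/restart sends SIGTERM, which triggers Celery's warm shutdown: the worker stops accepting new tasks and waits for the running ones to finish, up to supervisord's stopwaitsecs (10 seconds by default), after which it is killed. Raising stopwaitsecs to cover your longest task helps; as an extra safety net against tasks that do get interrupted, a sketch of the relevant Celery settings (Celery 4+ lowercase names, hypothetical app name):

```python
from celery import Celery

app = Celery("myproject", broker="amqp://guest@rabbitmq//")  # hypothetical broker

app.conf.update(
    # Acknowledge a message only after the task completes, so a task killed
    # mid-run is redelivered to another worker instead of being lost.
    task_acks_late=True,
    # Keep prefetching modest so a worker being shut down isn't holding a
    # large batch of unstarted messages.
    worker_prefetch_multiplier=1,
)
```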

Setting up deployments with Capistrano, Sidekiq and Monit

My application uses Sidekiq to handle long-running (several minutes) background tasks. Deployments are done with Capistrano 2 and all processes are monitored with Monit.
I have used capistrano-sidekiq to manage the Sidekiq process during deployments, but it has not worked perfectly. Sometimes during the deployment a new Sidekiq process is started but the old one is not killed. I believe this happens because capistrano-sidekiq is not operating through Monit during the deployment.
The second problem is that, because my background tasks can take several minutes to complete, my deployment should allow two Sidekiq processes to co-exist. The old Sidekiq process should be allowed to complete the tasks it is processing, and a new Sidekiq process should start taking new tasks into processing.
I have been thinking about adding something like this to my deploy script:
When deployment starts:
I tell Monit to unmonitor the sidekiq process
I stop the current sidekiq process and give it 10 minutes to finish its tasks
After the code has been updated:
I start a new sidekiq process and tell Monit to start monitoring it.
I may need to move the sidekiq process pid file into the release directory if the pid file is not removed until the stopped sidekiq process has eventually been killed.
How does this sound? Any caveats spotted?
EDIT:
Found a good thread about this same issue.
http://librelist.com/browser//sidekiq/2014/6/5/rollback-signal-after-usr1/#f6898deccb46801950f40ad22e75471d
Seems reasonable to me. The only possible issue is losing track of the old Sidekiq's PID but you should be able to use ps and grep for "stopping" to find old Sidekiqs.