Celery task clean-up with DB backend

I'm trying to understand how and when tasks are cleaned up in celery. From looking at the task docs I see that:
Old results will be cleaned automatically, based on the
CELERY_TASK_RESULT_EXPIRES setting. By default this is set to expire
after 1 day: if you have a very busy cluster you should lower this
value.
But this quote is from the RabbitMQ Result Backend section, and I do not see any similar text in the Database Backend section. So my question is: is there a backend-agnostic approach I can take for old task clean-up with celery, and if not, is there a DB-backend-specific approach I should take? In case it makes any difference, I'm using django-celery. Thanks.

If you click on the link to the setting doc for CELERY_TASK_RESULT_EXPIRES:
http://docs.celeryproject.org/en/latest/userguide/configuration.html#result-expires
It does say that the database backend supports this, but then you need to run celery beat (there's a built-in periodic task, run once a day, that removes expired results).
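For example, with django-celery the settings might look like this (a minimal sketch; the expiry value is illustrative, and the cleanup only happens if beat is actually running):

# settings.py (django-celery) - illustrative values
from datetime import timedelta
import djcelery

djcelery.setup_loader()

CELERY_RESULT_BACKEND = 'database'
# Expire stored results after 4 hours instead of the default 1 day
CELERY_TASK_RESULT_EXPIRES = timedelta(hours=4)

# The built-in celery.backend_cleanup periodic task only runs if
# celery beat is running, e.g.:
#   python manage.py celery beat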
The backend docs in the task guide should probably mention this as well; maybe there should be a dedicated guide for backends too. If you want to lobby for this, please open an issue at https://github.com/celery/celery/issues


How to (properly) Create Jobs On Demand

What I would like to do
I would like to create a Kubernetes workflow where users can POST jobs whenever they want, at any time, without scheduling anything (CronJobs) or specifying parallelism or completion requirements; in other words, users could create Jobs on demand.
How I would do it right now
The way I'm thinking about accomplishing this is by simply applying the Jobs to the Kubernetes cluster (I also have to make sure a job doesn't have the same name as an existing one, because otherwise Kubernetes will think it's a mistake and won't create it). However, this feels improper because the Jobs will be scattered across the cluster and I would lose control over them (though Kubernetes would supposedly manage them optimally).
Is there a better, more proper way?
I imagine a more proper way of configuring all this is to create some sort of Deployment and Service on top of the Jobs, but is that an existing feature in Kubernetes? Huge companies have probably had this problem in the past, so I wonder: what are the best practices for this Kubernetes Jobs-on-demand use case?
Not a full answer, but you might be interested in this project: https://github.com/ivoscc/kubernetes-task-runner.
It provides an API to launch one-time tasks as Jobs on a Kubernetes cluster, handles input/output files via GCS and periodically cleans up finished Jobs.
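For the "apply Jobs directly, with unique names" approach from the question, a minimal sketch using the official kubernetes Python client could look like this (the image, namespace, and naming scheme are placeholder assumptions):

# pip install kubernetes
import uuid
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside the cluster

def create_job(image, command, namespace='default'):
    # Suffix with a random id so each on-demand Job gets a unique name
    name = 'on-demand-%s' % uuid.uuid4().hex[:8]
    container = client.V1Container(name=name, image=image, command=command)
    pod_spec = client.V1PodSpec(containers=[container], restart_policy='Never')
    job = client.V1Job(
        api_version='batch/v1', kind='Job',
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(template=client.V1PodTemplateSpec(spec=pod_spec),
                              backoff_limit=2))
    return client.BatchV1Api().create_namespaced_job(namespace, job)

create_job('busybox', ['echo', 'hello'])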

Why should I store kubernetes deployment configuration into source control if kubernetes already keeps track of it?

One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in the official best practices and also summed up in this Stack Overflow question. The reason is that this is supposed to speed up rollbacks if necessary.
My question is: why do we need to store this configuration if it is already stored by Kubernetes, and there are ways to easily go back to a previous version of it using, for example, kubectl? An example is a command like:
kubectl rollout history deployment/nginx-deployment
Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized?
The reason I am asking this is that we are building a configuration service on top of Kubernetes. Users will interact with it to configure multiple deployments, and I was wondering if we should keep a history of the Kubernetes configuration and the content of ConfigMaps in a database for possible rollbacks, or if we should just rely on Kubernetes to retrieve the current configuration and roll back to previous versions.
You can use Kubernetes as your store of configuration, as you point out; it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits:
Configuration changes get regular code reviews.
They get versioned, are diffable, etc.
They can be tested, linted, and whatever else you desire.
They can be refactored, share code, and be documented.
And all this happens before actually being pushed to Kubernetes.
That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration is never truly in sync with reality anyway - just because you told Kubernetes you want 3 replicas running doesn't mean there are 3, or that 1 isn't temporarily down right now, and so on.
Configuration expresses intent. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.
The rollback command, then, is just a very fast shortcut to "please do this right now!". It's for when your configuration intent was wrong and you don't have time to fix it. As soon as you roll back, you should chase your configuration and update it there as well. In a sense, this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.
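As an illustration of that "loop forever" idea (just a sketch with a hypothetical repo path - in practice you'd more likely use a CI pipeline or a GitOps operator than a bare loop):

import subprocess
import time

# Keep the cluster converging toward what source control says
while True:
    subprocess.run(['git', '-C', '/srv/k8s-config', 'pull'], check=True)
    subprocess.run(['kubectl', 'apply', '-f', '/srv/k8s-config/manifests/'],
                   check=True)
    time.sleep(60)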
A Kubernetes cluster doesn't store your configuration; it runs it, just as your server runs your application code.

Apache Airflow scheduling with a time bound and triggering

I'm using Airflow with the Celery executor. I'm planning to add user interaction to a task, so that a BranchOperator in a DAG can decide which branch to select. It currently works by running a continuous loop that checks a value in a database, but I don't feel this is a good approach. Is there an alternative?
I also want to wait for this interaction only up to a particular time, and otherwise stop. Is this possible in Airflow? And if so, is there any way to change this time bound dynamically?
Thank you in advance.
You shouldn't be using a BranchOperator for this. If you want to proceed in your DAG based on some value in the database, you should use a Sensor. There are some off-the-shelf sensors in Airflow, and you can also look at those as a starting point for creating your own. Sensors basically poll for a certain criterion and time out after a configurable period of time. From your question, it seems this is exactly what you need.
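A minimal sketch of such a custom sensor (assuming an Airflow 1.x import path; the connection id, table, and query are hypothetical):

from airflow.operators.sensors import BaseSensorOperator
from airflow.hooks.postgres_hook import PostgresHook

class UserChoiceSensor(BaseSensorOperator):
    def poke(self, context):
        # True as soon as the user has recorded a decision for this run
        hook = PostgresHook(postgres_conn_id='my_db')
        row = hook.get_first(
            "SELECT choice FROM decisions WHERE run_id = %s",
            parameters=[context['run_id']])
        return row is not None

# In the DAG: check every minute, give up after 2 hours
# wait = UserChoiceSensor(task_id='wait_for_choice', poke_interval=60,
#                         timeout=7200, dag=dag)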

Bluemix Auto Scaling API

Is there a way for me to programmatically get notified when Bluemix auto scaling has scaled up or down?
I'm reading streaming data from a queue and would like to make sure the number of instances I have stays balanced and the data is partitioned correctly.
At present this kind of notification service is not available; all you can do is query the instance scaling history in the web UI. I think this requirement is interesting and should be considered as something to provide to developers in the future.
This kind of alert isn't available yet, but you can write a simple script monitoring the output of
cf app (appname)
It returns the number of instances running and the state of each one. With the right combination of awk and grep (or a Perl script, for example) you could have your own alerter while waiting for this functionality.
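For example, a small polling script along these lines (the instance-line format of cf app output varies between CLI versions, so the parsing is an assumption):

import subprocess
import time

def running_instances(appname):
    out = subprocess.check_output(['cf', 'app', appname]).decode()
    # Instance lines typically look like "#0   running   ..."
    return sum(1 for line in out.splitlines()
               if line.startswith('#') and 'running' in line)

last = running_instances('myapp')
while True:
    time.sleep(30)
    current = running_instances('myapp')
    if current != last:
        print('scaled from %d to %d instances' % (last, current))
        last = current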

How to handle large amounts of scheduled tasks on a web server?

I'm developing a website (using a LAMP stack) which must handle many user-made scheduling tasks. It works as follows: a user creates an event and sets a date, and other users (as many as 63) may join. A few hours before the set date, the system must email each user subscribed to that event. And that's it.
However, I have never handled scheduling, and the only tools I know (poorly) are cron and at. My plan is to create an at job for each event, which will call a script that gets all subscribers' emails and mails them.
My question is: is my plan/design good? Is it scalable? Are there better options that I should be aware of?
Why a separate cron job for each event? I've done something similar for a newsletter with a cron job running just once per hour: if there are any newsletters to be sent, it handles them. In your case you'd have a script that runs once every hour and gets a list of users for events that fall within the desired time window (see the sketch below).
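A sketch of that hourly batch job, in Python for illustration (a PHP CLI script would follow the same logic; the schema is hypothetical, and a real version should mark events as notified so overlapping windows don't send duplicate mails):

import smtplib
import sqlite3
from email.mime.text import MIMEText

def notify_upcoming_events(db_path, window_hours=3):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT e.name, u.email FROM events e "
        "JOIN subscriptions s ON s.event_id = e.id "
        "JOIN users u ON u.id = s.user_id "
        "WHERE e.starts_at BETWEEN datetime('now') "
        "AND datetime('now', ?)", ('+%d hours' % window_hours,))
    with smtplib.SMTP('localhost') as server:
        for name, email in rows:
            msg = MIMEText('Reminder: %s starts soon.' % name)
            msg['Subject'] = 'Event reminder'
            msg['From'] = 'noreply@example.com'
            msg['To'] = email
            server.send_message(msg)

# crontab entry, once per hour:
# 0 * * * * /usr/bin/python3 /opt/site/notify.py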
It will work. As far as scalability, at the minimum make sure that the script runs in its own process so it doesn't bog down the server unnecessarily.
Create a php-cli script, perhaps?
I'm doing most of my work in Rails nowadays, and there's a wealth of background-processing libraries; one of them is Resque, which uses the Redis server to keep track of the jobs.
I found a PHP clone: https://github.com/chrisboulton/php-resque
It might be overkill for your use case, but give it a shot perhaps.
If you would consider a proper framework that uses an application server (and not a simple web server), Spring has a task-scheduling layer that's simple to use. Scheduling jobs on the server really requires more than what a simple LAMP install can do, but I haven't used PHP in a while, so maybe there's an equivalent.
Here's an article that compares some of your options.