Can I restart a FAILED celery task?

I am using celery with the djkombu queue.
I've set max_retries=3 for my task. Once the 3rd retry fails, it executes the after_return method with status=FAILURE. The method also receives a task_id parameter. With this task_id, can I restart the task manually (I think I would need to set Message.visible to 1)?

You need to re-launch the task with the same args you launched it with before.
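A minimal sketch of that idea, assuming a reasonably modern Celery: after_return already receives the original args and kwargs, so the task can be re-queued as a fresh message rather than by flipping Message.visible in the djkombu table. RelaunchingTask is a hypothetical base class, and real code should guard against an infinite relaunch loop:

from celery import Task, states

class RelaunchingTask(Task):
    abstract = True  # use as a base class: @task(base=RelaunchingTask)
    max_retries = 3

    def after_return(self, status, retval, task_id, args, kwargs, einfo):
        if status == states.FAILURE:
            # Re-queue the task with the same arguments it was
            # originally launched with; this creates a new task_id.
            self.apply_async(args=args, kwargs=kwargs)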

Related

ADF Scheduling when existing Job not yet finished

Having read https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-scheduling-and-execution, it is unclear to me:
if a schedule is set for a job to run every hour,
can we stop the job at hr+1 from running concurrently if the job for hr+0 is still running?
It looks as if concurrency = 1 means this,
but does that invocation simply not start until the current execution is finished?
Or will it be discarded?
When we set concurrency to 1, only one instance is allowed to run at a time. When the scheduled trigger fires again while the pipeline is already running, the next invocation is queued; it will start once the current instance finishes.
So for your question: the following invocation will be queued, not discarded. After the first run finishes, the next run will start.
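For reference, a minimal sketch of where that setting lives in a Data Factory (v2) pipeline definition; the pipeline name and the Wait activity are placeholders, not taken from the question:

{
    "name": "HourlyPipeline",
    "properties": {
        "concurrency": 1,
        "activities": [
            {
                "name": "DoWork",
                "type": "Wait",
                "typeProperties": { "waitTimeInSeconds": 60 }
            }
        ]
    }
}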

Cancelling all notStarted or inProgress tasks if job is cancelled

I have a pipeline in Azure DevOps. It installs several NPM dependencies and afterwards runs npm run <script_name>.
However, if I cancel the job, it still spawns webdrivers, and the counter shows the job is still running.
Is there a way to cancel the tasks which are inProgress/notStarted if I cancel an ongoing job?
Thank you
This needs to be explained on a case-by-case basis.
One case is that the task has not started yet when you cancel the job. In this case, the task will not start.
Another case is that the task is inProgress when you cancel the job. What happens then depends on what the task is running.
If the task runs its own code, it can be canceled.
But if the task invokes another program from the command line, like using a command line task to invoke MSBuild.exe to build the project, there is no way to cancel that process after the command is issued. Even if you cancel the job, the process keeps running in the background until the job is completely closed.
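One mitigation sketch for the leftover webdrivers, assuming a YAML pipeline on a Linux agent; the script names and the pkill pattern are assumptions, not taken from the question:

steps:
- script: npm ci && npm run e2e
  displayName: Run tests
- script: pkill -f chromedriver || true
  displayName: Kill leftover webdrivers
  condition: canceled()  # runs only when the job has been canceled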

Composed task arguments are not passed after job restart

I'm running a composed task with three child tasks.
Composed task definition:
composed-task-runner --graph='task1 && task2 && task3'
Launch command
task launch my-composed-task --properties "app.composed-task-runner.composed-task-arguments=arg1=a.txt arg2=test"
Scenario 1:
when the composed task runs without any error, the arguments are passed to all child tasks.
Scenario 2:
when the second child task fails and the job is restarted, the composed task arguments are passed to the second child task but not to the third child task.
Scenario 3:
when the first and second child tasks succeed, the third child task fails, and the job is restarted, the composed task arguments are passed to the third child task.
Observation:
After a task failure and restart, the composed-task-arguments are passed only to the failed task and not to the tasks after it.
How are the arguments retrieved in the composed task after a job restart? What could be the reason for this behavior?
Versions used:
Spring Cloud Dataflow Local Server 1.7.3, Spring Boot 2.0.4, Spring Cloud Starter Task 2.0.0
The issue that you are experiencing is that SCDF is not storing the properties specified at launch time.
This issue is being tracked here: https://github.com/spring-cloud/spring-cloud-dataflow/issues/2807 and is scheduled to be fixed in SCDF 2.0.0.
[Detail]
So when the job is restarted, these properties are not submitted to the new CTR launch (since they are not currently stored).
Thus, subsequent tasks (those after the failed task succeeds) will not have the properties set for them.
The reason the failed step still receives the value is that the arguments are stored in the batch-step-execution-context for that step.
[Work Around until Issue is resolved]
Instead of restarting the job, launch the CTR task definition again using the same properties (so long as they have not changed), as shown below.
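For example, re-issuing the original launch command from the question (same definition, same properties) instead of restarting the failed job:

task launch my-composed-task --properties "app.composed-task-runner.composed-task-arguments=arg1=a.txt arg2=test"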

how to rerun a failed task in airflow and continue the downstream tasks

I set up a workflow in Airflow and one of the jobs failed. After I fixed the problem, I want to rerun the failed task and continue the workflow. The details are as follows:
I selected the failed task "20_validation" and pressed the 'Run' button in the UI.
But the problem is that when the task '20_validation' finished, the downstream tasks did not continue. What should I do?
Use the Clear button directly below the Run button you drew the box around in your screenshot.
This will clear the failed task's state and that of all the tasks after it (since 'Downstream' is selected to the right), causing them all to be run or rerun.
Since the upstream task failed, the downstream tasks shouldn't have run anyway, so clearing shouldn't cause already-successful tasks to rerun.
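The same clear can also be done from the command line; a sketch using the Airflow 1.x CLI, where my_dag is a placeholder for the actual DAG id:

airflow clear my_dag -t 20_validation -d   # -t: task regex, -d: also clear downstream tasks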

Celery beat scheduling option to immediately launch task upon celery launch

I can schedule an hourly task in my Django app using celery beat in settings.py like so:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'tasks.my_task': {
        'task': 'tasks.my_task',
        'schedule': timedelta(seconds=60*60),
        'args': (),
    },
}
But is there a way to schedule a task so that it queues up and executes immediately, and thereafter follows the configured schedule? E.g., something like executing a selected task instantly at Celery launch. What's the configuration for that?
Add the following at the bottom of tasks.py:
obj = locals()['task_function_name']  # look up the task object by its function name
obj.run()  # execute the task body once, synchronously, when the module is imported
This ensures the specified task is run once when Celery starts (the worker imports tasks.py). Thereafter, it executes according to the schedule.
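A more explicit sketch of the same idea, assuming a reasonably modern Celery and a concrete task named my_task (hypothetical); using delay() queues one run through the broker instead of executing the task body inline during import:

from celery import shared_task

@shared_task
def my_task():
    pass  # the hourly work goes here

# Queued once when the worker imports tasks.py; the CELERYBEAT_SCHEDULE
# entry then takes over for the hourly runs. Note that every process
# importing this module will queue one extra run.
my_task.delay()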