I know about Celery's chain, so I can easily run one task after another completes (without passing arguments):
chain(task1.si(), task2.si())()
But I have a bit more complicated structure:
     task1 -- task2
    /  |  \
 sub1 sub2 sub3
I want to run task2 only after task1 and all of its subtasks have completed.
Right now, task2 runs as soon as task1 itself completes.
Is it possible?
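A minimal sketch of one common way to express this kind of dependency, assuming the subtasks can be launched as a group rather than from inside task1: Celery's chord runs a callback only after every task in its header group has finished (task and app names here follow the diagram and are assumptions):

from celery import chain, chord

# Hypothetical sketch: run sub1/sub2/sub3 as a group after task1,
# and run task2 only once the whole group has completed.
workflow = chain(
    task1.si(),
    chord(
        [sub1.si(), sub2.si(), sub3.si()],  # header: subtasks run in parallel
        task2.si(),                         # body: runs after all subtasks finish
    ),
)
workflow.apply_async()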
I have a periodic Celery task in which I make quite a bit of printing to stdout.
For example:
print(f"Organization {uuid} updated")
All of these print statements look like this in my worker output:
[2019-10-31 10:36:00,466: DEBUG/MainProcess] basic.qos: prefetch_count->102
The counter at the end is incremented for each print. Why is this? What would I have to change to see stdout?
I run the worker as such:
$ celery -A project worker --purge -l DEBUG
Do not call print() in your tasks. Instead, create a logger with logger = get_task_logger(__name__) and use it whenever you need to write something to the log at a given level.
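A minimal sketch of that pattern, assuming an existing Celery app instance named app and reusing the example message from the question:

from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task  # assumes a Celery() instance named `app`
def update_organization(uuid):
    # Logged through Celery's task logger instead of print(),
    # so it shows up in the worker output at the configured level.
    logger.info("Organization %s updated", uuid)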
I have a cap task that calls multiple other long-running cap tasks.
So let's say I have a task named A.
From within this cap task, depending on a condition, I call another cap task, let's say B.
Cap task B sequentially calls 4 more cap tasks: C, D, E, and F.
So B is something like this:
task :B do
  on roles(:all) do
    invoke 'tasks:C'
    invoke 'tasks:D'
    Rake::Task['db:E'].invoke("arg1", "arg2")
    Rake::Task['db:F'].invoke("arg1", "arg2")
  end
end
Each of C, D, E, and F is long-running, and they must run sequentially in the same order as specified.
Basically, tasks C to F zip and upload the database and assets, which can take a long time, so they must not hold up the cap deployment process and should run independently in the background.
So I need a way to call task B from task A so that it runs asynchronously while the rest of the cap tasks in the deployment keep running.
I'd suggest making task B an actual Rake task, and then having Capistrano call and immediately background it, e.g. https://stackoverflow.com/a/5829142/3042016
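A rough sketch of that idea, assuming B is exposed as a plain Rake task (tasks:B) and that backgrounding it with nohup is acceptable; the application path is a placeholder, and backgrounding through SSHKit can need extra care with output redirection, so treat this as a starting point rather than a drop-in solution:

task :A do
  on roles(:all) do
    # Fire and forget: nohup plus output redirection lets the rake task keep
    # running after the deploy's SSH session moves on.
    execute "cd /path/to/app && nohup bundle exec rake tasks:B > /tmp/tasks_b.log 2>&1 &"
  end
end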
Basically I have scheduled a task in Windows Task Scheduler.
The task runs a PowerShell script: the Program/script field points to the PowerShell executable, and the arguments field contains
-command "& C:\Users\xxxx\Desktop\1.PS1"
I was checking whether the task is reported as failed when the PowerShell script fails.
The PowerShell script does fail, but the last run status still says
"The operation completed successfully"
How do I configure my task so that it reports a failure when the PowerShell script does not run successfully?
Edit:
I have 3 tasks (all 3 are PowerShell scripts that take parameters).
Basically, I have created custom triggers for task 2 and task 3:
if task 1 succeeds then task 2 is triggered, and if task 2 succeeds then task 3 is triggered.
While testing this scenario, even when the PowerShell script used in task 2 returns an error (an error I put in the code intentionally to test this), the last run status says the operation completed successfully and task 3 still gets triggered.
Instead of the current setup, why not have 3 Scheduled Tasks, where the first runs the first script and, if that script deems itself successful, starts the 2nd Scheduled Task itself?
For example, using Start-ScheduledTask (https://technet.microsoft.com/en-us/library/jj649818(v=wps.630).aspx).
This way, each of your scripts could check themselves for issues, and if none are found, can call the next task. This has the additional bonus of letting you have full control over which scheduled task to run, and when.
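A minimal sketch of that approach, assuming the next task is registered in Task Scheduler under the hypothetical name "Task2" and that Do-Task1Work stands in for whatever the first script actually does:

try {
    # ... the real work of task 1 goes here (Do-Task1Work is a placeholder) ...
    Do-Task1Work
}
catch {
    Write-Error "Task 1 failed: $_"
    exit 1    # stop here; Task2 is never started
}

# Reached only if the work above succeeded
Start-ScheduledTask -TaskName "Task2"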
Based on the comments in this thread and my own testing, it sounds like it's not possible to get the scheduled task to log itself as failed by the failure of the script because the purpose of the task is to kick off the program, which it does successfully.
However, it is possible to return an exit code to the scheduled task. The exit code will be logged in the event history (Information Level). As stated in this thread, the return code must be referenced in the parameters with which the scheduled task calls the script:
-ExecutionPolicy Bypass -NoProfile -Command "& {C:\ProgramData\ORGNAME\scripts\SetDNS.ps1; exit $LastExitCode}"
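For that to be meaningful, the script itself has to end with a non-zero exit code when something goes wrong. A minimal sketch (the body and the checked path are placeholders):

try {
    # ... actual script body ...
    if (-not (Test-Path "C:\path\to\expected\output")) {   # placeholder sanity check
        throw "Expected output is missing"
    }
    exit 0    # success: logged as return code 0 in the task's event history
}
catch {
    Write-Error $_
    exit 1    # failure: surfaces as return code 1 via `exit $LastExitCode`
}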
I'm running two Perl scripts in parallel on Jenkins, plus one more script that should only run if the first two succeed. If script 1 fails, script 2 still runs, and the overall exit status ends up successful.
I want the job to stop with a failure status if either of the parallel scripts fails.
Currently my setup looks like
perl_script_1 &
perl_script_2 &
wait
perl_script_3
If script 1 or 2 fails in the middle, the job should be terminated with a Failure status without executing job 3.
Note: I'm using tcsh shell in Jenkins.
I have a similar setup where I run several java processes (tests) in parallel and wait for them to finish. If any fail, I fail the rest of my script.
Each test process writes its result to a file to be tested once done.
Note - the code examples below are written in bash, but it should be similar in tcsh.
To do this, I get the process id for every execution:
test1 &
test1_pid=$!
# test1 will write pass or fail to file test1_result
test2 &
test2_pid=$!
...
Now, I poll whether each process has finished using the kill -0 PID command (it succeeds only while the process is still alive).
For example, for test1:
# Check test1
kill -0 $test1_pid
# Check if the process is done or not
if [ $? -ne 0 ]
then
    echo "process test1 finished"
    # check results
    grep fail test1_result
    if [ $? -eq 0 ]
    then
        echo "test1 failed"
        mark_whole_build_failed
    fi
fi
Do the same for the other tests (you can loop over all running processes and check them periodically).
Later, make the rest of the execution conditional on whether mark_whole_build_failed was triggered.
I hope this helps.
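If polling and result files are more than you need, a simpler variant (again in bash, not tcsh) is to capture each script's exit status directly with wait <pid> and abort before the third script. A sketch, assuming the script names from the question:

#!/bin/bash
perl_script_1 & pid1=$!
perl_script_2 & pid2=$!

# wait <pid> returns that process's exit status
wait "$pid1"; status1=$?
wait "$pid2"; status2=$?

if [ "$status1" -ne 0 ] || [ "$status2" -ne 0 ]; then
    echo "A parallel script failed; aborting." >&2
    exit 1    # non-zero exit marks the Jenkins build as failed
fi

perl_script_3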
I want to schedule a job to run every 5 minutes, but the question is:
Is there a way (e.g., via the crontab itself) to prevent the job from running when the previous run has not yet completed?
You can write a shell wrapper script that starts the job only when it is not already running, and configure that wrapper in the crontab to run every 5 minutes. This ensures the job executes only when no other instance of it is already running. This is how I have done it for my cron jobs.
Note: use ps -ef | grep in your wrapper script to check whether the process is already running.
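A minimal sketch of such a wrapper, assuming a hypothetical job script named my_long_job.sh and a wrapper scheduled in the crontab as */5 * * * * /path/to/run_job.sh:

#!/bin/bash
# Skip this run if a previous instance of the job is still running.
# The [m] trick keeps grep from matching its own command line.
if ps -ef | grep "[m]y_long_job.sh" > /dev/null; then
    echo "Previous run still in progress; skipping." >&2
    exit 0
fi

/path/to/my_long_job.sh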