Ask LSF to allocate immediately - scheduler

How can I ask LSF to allocate/execute my job immediately, without any wait?
I have a few time-bound jobs and I want them to be executed immediately or killed immediately. Is there any way to do this?

You may try the brun command to force a job to run immediately. To kill a job immediately, try bkill.
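For example, a rough sketch (the queue name, host name, and job ID below are placeholders, and brun normally requires LSF administrator privileges):

# Submit the job; bsub prints the job ID, e.g. "Job <1234> is submitted to queue <normal>."
bsub -q normal ./my_job.sh

# Force the pending job 1234 to be dispatched right away on host hostA (placeholders)
brun -m "hostA" 1234

# Kill the job immediately if it must not run any longer
bkill 1234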

Related

ADF Scheduling when existing Job not yet finished

Having read https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-scheduling-and-execution, it is unclear to me: if a schedule is set for a job to run every hour, can we stop the concurrent execution of the next job at hour+1 when the job for hour+0 is still running?
It looks as if concurrency = 1 means this, but does that next invocation simply not start until the current execution is finished, or will it be discarded?
When we set concurrency to 1, only one instance is allowed to run at a time. When the scheduled trigger fires again and tries to run the pipeline while it is already running, the next invocation is queued; it starts after the current instance finishes.
So for your question: the following invocation will be queued, not discarded, and the next run will start after the first run finishes.
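As a rough sketch of where that setting lives (this assumes an ADF v2-style pipeline JSON definition; the pipeline name is a placeholder and the activities are omitted):

{
    "name": "HourlyPipeline",
    "properties": {
        "concurrency": 1,
        "activities": []
    }
}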

Abort a Datastage job at a specified time

I have a scheduled parallel Datastage (11.7) job.
This job has a Hive Connector with a Before and After Statement.
The Before statement runs OK, but the After statement remains in a running state for several hours (in the Hue log I see this job finished in one hour) and I have to abort it manually in DataStage Director.
Is there a way to "program an abort"?
For example, I want to schedule the interruption of the running job every morning at 6.
I hope I was clear :)
Even though you can kill the job - as per other responses - using dsjob to stop it, this may have no effect, because the After statement has been issued synchronously; the job is waiting for it to finish, and is (probably) not processing kill signals and the like in the meantime. You would be better advised to work out why the After command is taking so long, and to address that.
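That said, if you still want a scheduled stop as a stop-gap, a minimal sketch is a cron entry that calls dsjob -stop at 06:00 (the project name, job name, and dsjob path are placeholders for your installation, and you may need to source dsenv first):

# crontab entry: stop the running job every morning at 06:00
# (MyProject and MyParallelJob are placeholders)
0 6 * * * /opt/IBM/InformationServer/Server/DSEngine/bin/dsjob -stop MyProject MyParallelJob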

How to gracefully stop job execution when it’s killed on the Rundeck Web UI?

This is the same question as How to gracefully stop job execution when one step fails in Rundeck? except that I want to gracefully stop a job when it’s killed from the Web UI.
I have a job similar to the other question's author: it dumps some SQL tables, manipulates the dumps a bit, uploads them somewhere, and cleans up.
The issue comes when the job is terminated or killed before the cleanup step: it fills the disk with files that never get deleted. Rundeck supports error handlers, but the docs say they are for errors, not for manual kills from the UI.
I'm able to clean up on exit (success or error) by using the following in my script:
#!/bin/bash -ex
finish() {
  echo "remove dir $OUTPUT..."
  rm -Rf "$OUTPUT"
}
trap finish EXIT
# ... rest of the script ...
However, this step doesn't execute when the job is killed from the UI. I dug into Rundeck's source code and it appears to simply use Thread#interrupt, which I probably can't catch.
Is there any solution to this issue? Right now I must ssh into the node and clean things up myself whenever I kill the job.

systemd `systemctl stop` aggressively kills subprocesses

I have a daemon-like process that starts two subprocesses (and one of the subprocesses starts ~10 others). When I systemctl stop my process, the child subprocesses appear to be 'aggressively' killed by systemd, which doesn't give my process a chance to clean up.
How do I get systemctl stop to skip the aggressive kill and allow my process to orchestrate an orderly clean-up?
I tried timeoutSec=30 to no avail.
KillMode= defaults to control-group. That means every process of your service is killed with SIGTERM.
You have two options:
1. Handle SIGTERM in each of your processes and shut down within TimeoutStopSec (which defaults to 90 seconds).
2. If you really want to delegate the shutdown of your sub-processes to your main process, set KillMode=mixed. SIGTERM will then be sent to the main process only; again, shut down within TimeoutStopSec. If you do not shut down within TimeoutStopSec, systemd will send SIGKILL to all your processes.
Note: I suggest using KillMode=mixed in option 2 instead of KillMode=process, as the latter would send the final SIGKILL only to your main process, which means your sub-processes would not be killed if they have locked up.
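A minimal sketch of option 2 (the description and ExecStart path are placeholders for your own unit):

[Unit]
Description=My daemon (placeholder)

[Service]
# Placeholder path to the daemon that spawns the subprocesses
ExecStart=/usr/local/bin/mydaemon
# Send SIGTERM to the main process only, so it can stop its children itself
KillMode=mixed
# Allow up to 30 seconds for the orderly shutdown before systemd escalates to SIGKILL
TimeoutStopSec=30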
A late (possible) answer, but as I googled for weeks with a similar issue and found nothing, I figured I'd add my solution.
My error was that I ran the systemd unit as root and switched (using sudo) to "the correct" user in the start script (inherited from a SysVinit script).
That starts the processes in the user.slice, which is killed mercilessly on shutdown. When I changed the unit file to run as the correct user (User=myuser) and removed sudo from the start script, the processes start in the system.slice and get handled properly on shutdown.
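In unit-file terms, the change amounts to something like this sketch (the user name and script path are placeholders):

[Service]
# Run as the target account directly instead of wrapping the start script in sudo
User=myuser
ExecStart=/opt/myapp/start.sh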

Can I restart a FAILED celery task?

I am using celery with the djkombu queue.
I've set max_retries=3 for my task. Once the third retry fails, it executes the after_return method with status=FAILURE. The method also receives a task_id parameter. With this task_id, can I restart the task manually (I think I will need to set Message.visible to 1)?
You need to re-launch the task with the same args you launched it with before.
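For example, a minimal sketch using the celery command-line client (this assumes a reasonably recent celery CLI; the app module, task name, and arguments are placeholders, and you could equally call the task's apply_async from Python with the original args):

# Re-submit the task with the same arguments it originally ran with
# (myproj, tasks.process_item and the args are placeholders)
celery -A myproj call tasks.process_item --args='[42]'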