I'm training several neural networks on a server at my university. Because resources are limited for all the students, there is a job scheduling system called Slurm that queues all students' runs; in addition, we are only allowed to run our commands with a 24-hour time limit. Once this processing time is exceeded, our running process is killed to free the resources for the others.
Specifically, I'm training GANs and I need more than 24 hours of training time.
Right now, I'm saving checkpoints of my model so I can restart from the same training point after the process is closed. But I have to execute the same command again after 24 hours.
For this reason, I would like to schedule this execution to run automatically every 24 hours.
Currently I'm using tmux to execute the command so that I can close the terminal.
Any suggestions on how to automate this kind of execution?
Thank you in advance!
You can set up your job to automatically resubmit itself when it's close to the time limit.
Note that Slurm's time granularity is 1 minute, so don't set the signal timer to anything less than 60 seconds.
#!/bin/bash
# Tell Slurm to send signal USR1 to the batch script 300 seconds before the time limit
#SBATCH --signal=B:USR1@300
#SBATCH -t 24:00:00

resubmit() {
    echo "It's time to resubmit"   # <----- run whatever is necessary here; ideally resubmit the job using the checkpoint files
    sbatch ...
}
trap resubmit USR1                 # register the signal handler

# Run the training command in the background; otherwise bash will not process
# the signal until this command finishes.
YOUR_TRAINING_COMMAND &

# Wait until all background processes are finished. If the signal is received,
# wait returns, the handler runs, and the script finishes.
wait
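For the GAN training use case, the full pattern might look something like the sketch below; train_gan.py, its --resume flag, and the checkpoint path are placeholders for whatever your own training command and checkpointing scheme use, and the handler simply re-queues the same batch script:
#!/bin/bash
#SBATCH --signal=B:USR1@300
#SBATCH -t 24:00:00

resubmit() {
    echo "Approaching the time limit, resubmitting"
    sbatch train_gan.sbatch    # this script's own file name (placeholder)
}
trap resubmit USR1

# Hypothetical training command: it reloads the newest checkpoint on each run
python train_gan.py --resume checkpoints/latest.pt &
wait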
I'm running the dotMemory command-line tool against an IoT Windows Forms application that requires many hours of tests on a custom appliance.
My purpose is to get memory snapshots on a time basis, while the application is running on the appliance. For example, if the test is designed to run for 24h, I want to get a 10 seconds memory snapshot each hour.
I found 2 ways of doing it:
Run dotMemory.exe and get a standalone snapshot on a time basis, by using schtasks to schedule each execution;
Run dotMemory using the attach and trigger arguments and get all the snapshots on a single file.
The first scenario is already working for me but, as is easy to see, the second one is much better for further analysis after collecting the data.
I'm able to start it by using a command just like:
C:\dotMemory\dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=10s --trigger-max-snapshots=24 --trigger-delay=3600s --save-to-dir=c:\dotMemory\Snapshots
Here comes my problem:
How can I make the command/process stop after it reaches the max-snapshot value without any human intervention?
Reference: https://www.jetbrains.com/help/dotmemory/Working_with_dotMemory_Command-Line_Profiler.html
If you start your app under profiling instead of attaching to the already running process, stopping the profiling session will kill the app under profiling. You can stop the profiling session by passing the ##dotMemory["disconnect"] command to the dotMemory console's stdin (e.g., a script can do that after some time).
See dotmemory help service-messages for details
##dotMemory["disconnect"] Disconnect profiler.
If you started profiling with 'start*' commands, the profiled process will be killed.
If you started profiling with 'attach' command, the profiler will detach from the process.
P.S.
Some notes about your command line: with this command line dotMemory will take a snapshot every 10 seconds, but will only start doing so after one hour. There is no such thing as a "10 seconds memory snapshot"; a memory snapshot is a momentary snapshot of an object graph in memory. The right command line for your task would be:
C:\dotMemory\dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=1h --trigger-max-snapshots=24 --save-to-dir=c:\dotMemory\Snapshots
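A minimal sketch of such a "disconnect after some time" script, assuming a POSIX-style shell (e.g. Git Bash) is available on the appliance: it keeps dotMemory's stdin open, waits a bit longer than the 24-hour snapshot window, and then sends the disconnect service message.
# $processId is assumed to hold the target process id, as in the question
{
    sleep $((24 * 3600 + 300))         # wait for the 24 hourly snapshots, plus a small margin
    echo '##dotMemory["disconnect"]'   # ask the profiler to detach from the process
} | C:/dotMemory/dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=1h --trigger-max-snapshots=24 --save-to-dir='c:\dotMemory\Snapshots'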
I have a scheduled parallel Datastage (11.7) job.
This job has a Hive Connector with a Before and After Statement.
The Before statement runs OK, but the After statement remains in a running state for several hours (in the Hue log I can see that the job finished in one hour) and I have to manually abort it in DataStage Director.
Is there a way to "program an abort"?
For example, I want to schedule the interruption of the running job every morning at 6.
I hope I was clear :)
Even though you can kill the job - as per the other responses - by using dsjob to stop it, this may have no effect, because the After statement has been issued synchronously; the job is waiting for it to finish and is (probably) not processing kill signals and the like in the meantime. You would be better advised to work out why the After command is taking so long and to address that.
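For reference, a stop issued from the command line would look something like this (project and job names are placeholders), with the caveat above that it may not take effect while the job is blocked on the synchronous After statement:
dsjob -stop MyProject MyHiveJob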
I have a job that uses the Kafka Connector stage in order to read a Kafka queue and then load into the database. That job runs in Continuous mode, so it never concludes, since it keeps monitoring the Kafka queue in real time.
For unexpected reasons (say, server issues, job issues, etc.) that job may terminate with a failure. In general, that happens after around 300 running hours. So, in order to keep the job alive, I have to manually check the job status and then do a Reset and Run to get it running again.
The problem is that several hours can pass between the job termination and my manual Reset and Run, which is critical. So I'm looking for a way to eliminate the manual interaction and reduce that gap by automating the job invocation.
I tried to use Control-M to run the job daily, but with no success: the first day Control-M called the job, it ran fine. But the next day, when Control-M attempted to instantiate the job again, it failed (since the job was already running). Besides, DataStage will never report back to Control-M that the job concluded successfully, since the job's continuous nature won't allow that.
That said, I would like to hear any ideas that could shed some light on this.
The first thing that came to mind is to create an intermediate Sequence and schedule it in Control-M. This new Sequence would then call the continuous job asynchronously using a command-line stage.
For the case where just this one job terminates unexpectedly and you want it restarted as soon as possible, have you considered calling the job from a sequence? The sequence could be set up to loop, running this job.
Thus the sequence starts the job and waits for it to finish. When the job finishes, the sequence loops and starts the job again. You could add conditions on job exit (for example, if the job aborted, then based on that end status you could reset the job before re-running it).
This would not handle the condition where the DataStage engine itself is shut down (such as for maintenance, or possibly an error), in which case all jobs end, including your new sequence. The same also applies to a server reboot or other situations where someone may have inadvertently stopped your sequence. For those cases (such as a DataStage engine stop), your team would need to have a process in place for jobs/sequences that need to be started following a DataStage or system outage.
For the outage scenario, you could create a monitor script (regardless of whether you run the job solo or from a sequence) that sleeps/loops on 5-10 minute intervals, checks the status of your job using the dsjob command, and, if it is not running, starts that job/sequence (also via the dsjob command), as sketched below. You can decide whether that script starts at DataStage startup, at machine startup, or is run from Control-M or another scheduler.
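A minimal sketch of such a monitor script, assuming placeholder project/job names and that the "Job Status" line printed by dsjob -jobinfo can be parsed as shown:
#!/bin/bash
PROJECT=MyProject            # placeholder project name
JOB=KafkaContinuousJob       # placeholder job (or sequence) name
while true; do
    status=$(dsjob -jobinfo "$PROJECT" "$JOB" | grep "Job Status")
    if ! echo "$status" | grep -q "RUNNING"; then
        dsjob -run -mode RESET -wait "$PROJECT" "$JOB"   # clear an aborted state first
        dsjob -run "$PROJECT" "$JOB"                     # start the job again
    fi
    sleep 600                # check every 10 minutes
done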
I have a periodic task that uses a crontab to run every day at 1:01 AM using
run_every = crontab(hour=1, minute=1)
Once I get my server up and running, is that enough to trigger the task to run once a day? Or do I also need to use a database scheduler?
Yes, it should be enough, as Celery beat has its own state file that is enough to run everything as you require.
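For reference, a minimal sketch of how the beat scheduler is typically started alongside a worker (the app module name proj is a placeholder):
celery -A proj worker --loglevel=info &
celery -A proj beat --loglevel=info    # beat keeps its state in celerybeat-schedule by default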
I have a shell script that queues multiple tasks for execution on an HPC cluster. The same job submission script works for either torque or grid engine with some minor conditional logic. This is a pipeline where the output of earlier tasks are fed to later tasks for further processing. I'm using qsub to define job dependencies, so later tasks wait for earlier tasks to complete before starting execution. So far so good.
Sometimes, a task fails. When a failure happens, I don't want any of the dependent tasks to attempt processing the output of the failed task. However, the dependent tasks have already been queued for execution long before the failure occurred. What is a good way to prevent the unwanted processing?
You can use the afterok dependency argument. For example, the qsub command may look like:
qsub -W depend=afterok:<jobid> submit.pbs
Torque will only start the next job if the jobid exits without errors. See documentation on the Adaptive Computing page.
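In a submission script, the chain might look something like this (script names are placeholders); qsub prints the id of the newly queued job, which the next submission then depends on:
jobid=$(qsub step1.pbs)                      # capture the job id printed by qsub
qsub -W depend=afterok:"$jobid" step2.pbs    # held until step1 completes successfully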
Here is what I eventually implemented. The key to making this work is returning error code 100 on error. Sun Grid Engine stops execution of subsequent jobs upon seeing error code 100. Torque stops execution of subsequent jobs upon seeing any non-zero error code.
qsub starts a sequence of bash scripts. Each of those bash scripts has this code:
handleTrappedErrors()
{
    errorCode=$?                    # exit status of the command that failed
    bashCommand="$BASH_COMMAND"     # the command that triggered the ERR trap
    scriptName=$(basename "$0")     # name of the failing script
    lineNumber=${BASH_LINENO[0]}    # line number at which the failure occurred
    # log an error message to a log file here -- not shown
    exit 100
}
trap handleTrappedErrors ERR        # the trap must reference the function defined above
Torque (as Derek mentioned):
qsub -W depend=afterok:<jobid> ...
Sun Grid Engine:
qsub -hold_jid <jobid> ...