Executing in background, but limit number of executions - perl

I have a program that performs some operations on a specified log file, flushing to the disk multiple times for each execution. I'm calling this program from a perl script, which lets you specify a directory, and I run this program on all files within the directory. This can take a long time because of all the flushes.
I'd like to execute the program and run it in the background, but I don't want the pipeline to be thousands of executions long. This is a snippet:
my $command = "program $data >> $log";
myExecute("$command");
myExecute basically runs the command using system(), along with some other logging/printing functions. What I want to do is:
my $command = "program $data & >> $log";
This will obviously create a large pipeline. Is there any way to limit how many background executions are present at a time (preferably using &)? (I'd like to try 2-4).

#!/bin/bash
#
# lets call this script "multi_script.sh"
#
#wait until fewer than 4 instances are running,
#polling at 5-second intervals
while [ "$(pgrep -c program)" -ge 4 ]; do sleep 5; done
/path/to/program "$1" &
Now call it like this:
my $command = "multi_script.sh $data >> $log";
Your perl script will wait if the bash script waits.
Positives:
If a process crashes, it will be replaced (its data goes, of course, unprocessed).
Drawbacks:
It is important for your perl script to wait a moment between starting instances (a sleep of a second, say) because of the latency between invoking the script and passing the while-loop test. If you spawn them too quickly (system spamming) you will end up with far more processes than you bargained for.
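The wait-then-launch idea in multi_script.sh can be exercised end-to-end in plain bash. In this toy run a short sleep stands in for program, and `jobs -pr` replaces the system-wide pgrep count so the sketch does not depend on a real process name:

```shell
#!/bin/bash
# Launch 10 background jobs but never allow more than 4 at once,
# mirroring the pgrep-polling throttle of multi_script.sh.
max=4
: > finished.log
for i in $(seq 1 10); do
    # jobs -pr lists this shell's running background jobs --
    # a shell-local stand-in for "pgrep -c program"
    while [ "$(jobs -pr | wc -l)" -ge "$max" ]; do
        sleep 0.1
    done
    ( sleep 0.3; echo "job $i done" >> finished.log ) &
done
wait    # drain the remaining jobs before exiting
```

The same race noted above applies here too: the count is only checked before each launch, so a burst of launches from several scripts at once could still overshoot.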

If you are able to change
my $command = "program $data & >> $log";
into
my $command = "cat $data >> /path/to/datafile";
(or, even better, append $data to /path/to/datafile directly from perl), and make the last line of your script:
system("/path/to/quadslotscript.sh");
then I have the script quadslotscript.sh here:
4 execution slots are started and stay until the end
all slots get input from the same datafile
when a slot is ready with processing it will read a new entry to process
until the datafile/queue is empty
no process-table lookup during execution, only when all the work is done.
the code:
#!/bin/bash
#use the datafile as a queue where all processes get their input
exec 3< "/path/to/datafile"
#4 separate processes reading entries from fd 3
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
#only exit when 100% sure that all processes ended
while pgrep program &>/dev/null; do wait; done
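If GNU xargs is available, the same queue-of-slots idea can be sketched more compactly: -P caps the number of parallel workers and -n 1 hands each worker one queue entry. In this runnable sketch `echo` stands in for /path/to/program, and a temporary file stands in for the datafile:

```shell
#!/bin/bash
# Four worker slots fed from one queue file, managed by xargs itself:
# no pgrep polling and no explicit wait loop needed.
datafile=$(mktemp)
printf '%s\n' alpha beta gamma delta epsilon > "$datafile"

# -a: read the queue file; -P 4: at most 4 parallel workers;
# -n 1: one queue entry per invocation ("echo processed" is the stand-in)
xargs -a "$datafile" -P 4 -n 1 echo processed > results.log

rm -f "$datafile"
```

xargs exits only after all of its children have, so the script blocks on that line much like the pgrep/wait loop above.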

Related

Using batch file to open multiple instances of MATLAB files that would just stress cpu 100%

So essentially this is a question about how to use your multi-core processor more efficiently.
I have an optimization script (written in MATLAB) that calls 20 instances of MATLAB to evaluate functions. The results are saved as .mat files, and the optimization script then takes these results and does some further work. The way I call the 20 MATLAB instances is to first use the MATLAB built-in function "system" to call a batch file, which then opens 20 instances of MATLAB to evaluate the functions.
The code I'm using in batch file is:
( start matlab -nosplash -nodesktop -minimize -r "Worker01;exit"
ping -n 5 127.0.0.1 >nul
start matlab -nosplash -nodesktop -minimize -r "Worker02;exit"
ping -n 5 127.0.0.1 >nul
...... % repeat the pattern
start matlab -nosplash -nodesktop -minimize -r "Worker19;exit"
ping -n 5 127.0.0.1 >nul
start matlab -nosplash -nodesktop -minimize -r "Worker20;exit"
ping -n 5 127.0.0.1 >nul ) | set /P "="
All the "start" commands are wrapped in parentheses followed by
| set /P "="
because I want my optimization script to move on only after all 20 evaluations are done. I learnt this technique from another question of mine, but I don't really understand what it actually does; if you can explain it as well, I would really appreciate it.
Anyway, this is a way to achieve parallel computing under MATLAB 2007, which has no built-in parallel computing feature. However, I found that running 20 instances at the same time is not efficient: after opening about 12 instances, my CPU (a Xeon server CPU with 14 cores) usage reaches 100%. My theory is that opening more instances than the CPU can handle makes the processing less efficient. So I think the best strategy would be:
start the first 12 instances;
start next one(s) on the list once any of current running instance finishes. (Even though workers are opened at roughly the same time and do the same job, they still tend to finish at different times.)
This would make sure that computing power is fully utilized (cpu usage always 100%) all the time without overstressing the cpu.
How could I achieve this strategy in a batch file? If that is hard to do in batch, could PowerShell do it?
Please show the actual code and explain it. I'm not a programmer, so I don't know much about coding.
Thanks.
I'm thinking this in powershell...
<#
keep a queue of all jobs to be executed
keep a list of running jobs
number of running jobs cannot exceed the throttle value
#>
$throttle = 12
$queue = New-Object System.Collections.Queue
$running = New-Object System.Collections.Generic.List[System.Diagnostics.Process]
# generate x number of queue commands
# runs from 1 to x
1..20 | ForEach-Object {
    # the variable $_ contains the current number
    $program = "matlab"
    $args = "-nosplash -nodesktop -minimize -r `"Worker$_;exit`""
    # $args will be
    #   -nosplash -nodesktop -minimize -r "Worker1;exit"
    #   -nosplash -nodesktop -minimize -r "Worker2;exit"
    #   etc.
    # save it
    $queue.Enqueue(@($program, $args))
}
# begin executing jobs
while($queue.Count) {
    # remove jobs that are done
    $running.Where({ $_.HasExited }) |
        ForEach-Object { [void]$running.Remove($_) }
    if($running.Count -ge $throttle) {
        # busy, so wait
        Start-Sleep -Milliseconds 50
    }
    else {
        # ready for a new job
        $cmd = $queue.Dequeue()
        [void]$running.Add([System.Diagnostics.Process]::Start($cmd[0], $cmd[1]))
    }
}
# wait for the rest to be done
while($running.Where({ !$_.HasExited }).Count) {
    Start-Sleep -Milliseconds 50
}

Insert text into a file (.bat) as well as increase a number sequentially

For instance, I am creating reboot scripts for about 400 servers to reboot every night. I already have the task scheduler portion done with a script.
What I need is how to insert "shutdown /r /m \\servername-001 /f" into a file called "servername-001_Reboot.bat",
and, in the same script, change the 001 to 002 for the next batch file, and so on and so forth.
Unless someone has a more efficient way of doing an automated reboot schedule.
Have a variable $ServerNumber = 1 and inside your for-loop just keep incrementing it.
The filename for the batchfile can be made like so:
$filename = "servername-$( "{0:D3}" -f $ServerNumber )_Reboot.bat"
(Read here to learn more about formatting numbers in powershell)
Any variable, including a string (let's call it $str), can be piped to a file:
$str | Out-File "path/to/file/$filename"
use the -Append flag if you want to append data to the file if it already exists (as opposed to overwriting it).

Script for monitoring CPU

I need to create some kind of script/runnable file for monitoring my Freeswitch PBX on Windows Server 2012 that:
checks the number of calls every ~5 seconds and writes it into a file,
checks the % of CPU usage at that point and writes that as well (into a second column).
For the first part, I figured out how to check for actual number of calls flowing through:
fs_cli.exe -x "show calls count" > testlog.txt
but I have to do this manually and it always overwrites the previous one. I need the script to do this automatically every 5s until I stop the script.
fs_cli.exe -x "show calls count" >> testlog.txt
(notice the additional >) will append text to the file instead of overwriting the file
You can write a loop using this kind of code in PS:
#never-ending loop, condition is always true; stop it with Ctrl+C
while($true) {
    # run a command (or several), e.g.:
    fs_cli.exe -x "show calls count" >> testlog.txt
    (Get-WmiObject Win32_Processor).LoadPercentage >> c:\cpu_usage.txt
    #sleep for 5 seconds
    Start-Sleep -Seconds 5
}

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start in order to recognize a 'FAILED' in the output and trigger some sort of response from the system?
I've worked a bit with Perl, trying to parse log files, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cronjob that runs every minute. Some people told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs a grep -q on 'FAILED' and if it picks anything up, it sounds the alarm (tbd what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
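Both variants hinge on md5sum -c printing "FAILED" (and exiting non-zero) for a mismatched file. A self-contained check of that behaviour, using a throwaway directory and file names invented for the demo:

```shell
#!/bin/bash
# Confirm the behaviour the grep relies on: after tampering with a
# checksummed file, md5sum -c reports FAILED and exits non-zero.
workdir=$(mktemp -d)
cd "$workdir"

echo "honeypot contents" > bait.txt
md5sum bait.txt > crypto.chk        # record the known-good hash

echo "tampered" >> bait.txt         # simulate an intruder editing the file

md5sum -c crypto.chk > report.txt 2>/dev/null || status=$?
grep -q FAILED report.txt && echo "LIGHT THE SIGNAL FIRES"
```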

SLURM display the stdout and stderr of an unfinished job

I used to use a server with LSF but now I just transitioned to one with SLURM.
What is the equivalent command of bpeek (for LSF) in SLURM?
bpeek
bpeek Displays the stdout and stderr output of an unfinished job
I couldn't find the documentation anywhere. If you have some good references for SLURM, please let me know as well. Thanks!
You might also want to have a look at the sattach command.
I just learned that in SLURM there is no need for a bpeek equivalent to check the current standard output and standard error: they are written at run time to the files specified for stdout and stderr.
Here's a workaround that I use. It mimics the bpeek functionality from LSF
Create a file bpeek.sh:
#!/bin/bash
# first argument: the slurm job id
jobid=$1
# ask scontrol for the job details and extract the path after StdOut=
stdout=$(scontrol show job "$jobid" | grep StdOut= | sed 's/.*StdOut=//')
# show the last 10 rows of the file if no second argument is given
nrows=${2:-10}
tail -f -n "$nrows" "$stdout"
Then you can use it:
sh bpeek.sh JOBID NROWS(optional)
Or add an alias to your ~/.bashrc file (arguments typed after the alias are appended automatically, so no $1/$2 placeholders are needed):
alias bpeek="sh ~/bpeek.sh"
and then use it:
bpeek JOBID NROWS(optional)
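The grep/sed extraction at the heart of bpeek.sh can be sanity-checked against a canned scontrol-style line (the job id and path below are made up; `.*` is added to the sed pattern so leading whitespace is stripped as well):

```shell
#!/bin/bash
# Feed the extraction pipeline a fake "scontrol show job" output line
# and confirm it isolates the StdOut path.
sample="   StdOut=/home/user/slurm-1234.out"
stdout=$(echo "$sample" | grep StdOut= | sed 's/.*StdOut=//')
echo "$stdout"
```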