Stress testing a command-line application - perl

I have a command-line Perl script that I want to stress test. Basically, I want to run multiple instances of the same script in parallel so that I can figure out at what point our machine becomes unresponsive.
Currently I am doing something like this:
$ prog > output1.txt 2>err1.txt & \
prog > output2.txt 2>err2.txt &
.
.
.
.
and then I am checking ps to see which instances finished and which didn't. Is there any open-source application available that can automate this process? Preferably with a web interface?

You can use xargs to run commands in parallel:
seq 1 100 | xargs -n 1 -P 0 -I{} sh -c 'prog > output{}.txt 2>err{}.txt'
This will run all 100 instances in parallel (-P 0 places no limit on the number of concurrent processes).
For a better testing framework (including parallel testing via 'spawn') take a look at Expect.

Why not use crontab or Scheduled Tasks to run the script automatically?
You could then easily write something to parse the output automatically.
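For example (just a sketch; the paths and the every-minute schedule are placeholders), a crontab entry that launches the script repeatedly and logs each run separately could look like this (note that % must be escaped inside crontab):
* * * * * /path/to/prog > /tmp/out_$(date +\%s).txt 2> /tmp/err_$(date +\%s).txt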

With GNU Parallel this will run one prog per CPU core:
seq 1 1000 | parallel prog \> output{}.txt 2\>err{}.txt
If you want to run 10 progs per CPU core, do:
seq 1 1000 | parallel -j1000% prog \> output{}.txt 2\>err{}.txt
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
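If you also want a record of which runs finished and with what exit code (instead of polling ps), GNU Parallel can write a job log; a minimal sketch:
seq 1 1000 | parallel --joblog run.log prog \> output{}.txt 2\>err{}.txt
Each line of run.log records the runtime and exit value of one invocation.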

Related

qsub -t job "has died"

I am trying to submit an array job to our cluster using qsub. The script is like:
#!/bin/bash
#PBS -l nodes=1:ppn=1 # Number of nodes and processor
#..... (Other options)
#PBS -t 0-50 # Array job, task IDs 0-50
cd $PBS_O_WORKDIR
./programname << EOF
some parameters
EOF
This script runs without a problem when I remove the -t option. But every time I add -t, I get the following output:
---------------------------------------------
Check nodes and clean them of stray processes
---------------------------------------------
Checking node XXXXXXXXXX
-> User XXXX running job XXXXX.XXX:state=X:ncpus=X
-> Job XXX.XXX has died
Done clearing all the allocated nodes
------------------------------------------------------
Concluding PBS prologue script - XX-XX-XXXX XX:XX:XX
------------------------------------------------------
-------------- Job will be requeued --------------
At that point it died and was requeued, with no error message. I did not find any similar issue online. Has anyone experienced this before? Thank you!
(I wrote another "manual" array qsub script which works, but I would still like to get -t working, since it keeps everything in the command options and is much cleaner.)
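For comparison, the kind of "manual" array submission mentioned above can be approximated with a small loop (a sketch; the script name is a placeholder, and the index is passed through qsub -v instead of the PBS_ARRAYID that -t would provide):
for i in $(seq 0 50); do
    qsub -v TASKID=$i jobscript.pbs
done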

How to make handbrake use the cpu with less intensity?

I've recently begun using HandBrake to process some videos I downloaded, to make them smaller. I built a small Python GUI program to automate the processing, making use of the CLI version. What I am doing is generating the command according to the video and executing it with os.system. Something like this:
import os
def process(args):
    # some algorithm to generate cmd using args
    cmd = "handbrakecli -i raw_video.mp4 -o video.mp4 -O -e x264"  # example command
    os.system(cmd)
    os.remove("raw_video.mp4")
The code works perfectly, but the problem is the overuse of my CPU. Usually, this takes 100% of CPU usage for a considerable amount of time. I use the program CoreTemp to keep track of my processor temperature and it usually hits 78 °C.
I tried using BES (Battle Encoder Shirase) by saving the cmd command into a batch file called exec.bat and doing os.system("BES_1.7.7\BES.exe -J -m exec.exe 20"), but this simply does nothing.
Speed isn't important at all. Even if it takes longer, I just want to use less of my CPU, something around 50% would be great. Any idea on how I could do so?
In HandBrake you can pass advanced parameters so that it only uses a certain number of CPU threads.
You can use threads; see the HandBrake CLI documentation.
When using threads you can specify any number of CPU threads to use. The default is auto.
The -x parameter corresponds to the Advanced settings box in the HandBrake GUI; that is where threads goes.
The below tells Handbrake to only use one CPU thread for the Advanced setting:
-x threads=1
You can also use the veryslow value for the --encoder-preset setting to help with the CPU load.
--encoder-preset=veryslow
I actually prefer using the --encoder-preset=veryslow preset since I see an overall better quality in the encode.
And both together:
--encoder-preset=veryslow -x threads=1
So formatted with your cmd variable:
cmd = "handbrakecli -i raw_video.mp4 -o video.mp4 -O -e x264 --encoder-preset=veryslow -x threads=1" #example command
See if that helps.
One easy way in Linux is to use taskset. You can use the terminal or make a custom shortcut/command.
For example, my CPU has 8 threads but I only want to use 6 for Handbrake.
Just start the program with taskset -c 2,3,4,5,6,7 handbrake; this way threads 0 and 1 stay free for other tasks/processes and the program runs on threads 2,3,4,5,6,7.
In Windows you can change the Target of the shortcut or use on cmd:
C:\Windows\System32\cmd.exe /C start "" /affinity FC "C:\Program Files\HandBrake\HandBrake.exe"
As far as I understand, the mask is read four bits at a time from the highest thread down: the first hex digit covers threads 7-4 (1111) and the second covers threads 3-0 (1100), so FC leaves threads 1 and 0 free. In my case I have an 8-thread CPU.
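To double-check a mask, the hex value can be built from the thread numbers you want to allow; a small bash sketch (the thread list is just an example):
threads="2 3 4 5 6 7"            # logical CPUs the program may use
mask=0
for t in $threads; do
    mask=$(( mask | (1 << t) )) # set bit t of the affinity mask
done
printf '%X\n' "$mask"            # prints FC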

Parallel execution of robot tests in Sauce Labs

I am using an Eclipse+Maven based Robot Framework setup with the Java implementation of SeleniumLibrary.
I can execute tests in Sauce Labs, but they run on only one VM. Has anyone achieved parallel execution of Robot tests in Sauce Labs, say on multiple VMs? Or can anyone guide me on how to achieve this? Thanks in advance.
This is what I am using to run on multiple concurrent VMs on Sauce Labs. I have a one-click batch file that uses start pybot to invoke parallel execution. Example:
ECHO starting parallel run on saucelabs.com
cd c:\base\dir\script
ECHO Win7/Chrome40:
start pybot -v REMOTE_URL:http://user:key@ondemand.saucelabs.com:80/wd/hub -T -d results/Win7Chrome40 -v DESIRED_CAPABILITIES:"name:Win7 + Chrome40, platform:Windows 7, browserName:chrome, version:40" tests/test.robot
ECHO Win8/IE11
start pybot -v REMOTE_URL:http://user:key@ondemand.saucelabs.com:80/wd/hub -T -d results/Win8IE11 -v DESIRED_CAPABILITIES:"name:Win8 + IE11, platform:Windows 8.1, browserName:internet explorer, version:11" tests/test.robot
-T tells pybot not to overwrite result logs but to create a timestamped log for each run
-d specifies where the results will go
Works like a charm!
Pabot is a parallel executor for Robot Framework tests. With Pabot you can split one execution into multiple and save test execution time.
https://github.com/mkorpela/pabot
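For example (a minimal sketch; the test directory name is a placeholder), installing Pabot and running it with a few parallel processes looks roughly like:
pip install robotframework-pabot
pabot --processes 4 tests/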

is there a way to kill a spark job if it running over x minutes

I am using a bash script to run the same Spark (Scala) job over multiple data sets. Some of these data sets take an extremely long time, and I want to skip them so that I can finish as many data sets as possible in the limited time.
Is there a way, in the Scala function, to terminate the job if it runs for more than x minutes?
For your information, I am using bash as:
for filename in dataFolder/*; do spark-2.3.0-bin-hadoop2.7/bin/spark-submit --class myclass myclass.jar ${filename}; done
Before running the for loop, try something like this:
(sleep 3600 && yarn application -kill <applicationId>) &
to start the timeout in the background.
If the application ID is dynamic, use a bash snippet to get the current YARN application ID, like APPID=$(yarn application -list | grep some id | awk '{print $1}') or something similar.
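Another option, if killing the spark-submit process is enough (i.e. the job runs in client mode; in cluster mode you would still need yarn application -kill), is GNU coreutils' timeout. A sketch reusing the loop from the question:
for filename in dataFolder/*; do
    # Give up on a data set after 60 minutes and move on to the next one.
    timeout 60m spark-2.3.0-bin-hadoop2.7/bin/spark-submit --class myclass myclass.jar "${filename}"
done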

Submission of Scala code to a cluster

Is it possible to run Akka code on Oracle Grid Engine using multiple nodes?
So if I use the actor model, which is a "message-passing model", is it possible to use Scala and the Akka framework to run my code on a distributed-memory system like a cluster or a grid?
If so, is there something similar to mpirun in MPI/C to run my program on different nodes? Can you give a submission example using Oracle Grid Engine?
How do I know, inside Scala, which node I am on and how many nodes the job has been submitted to?
Is it possible to communicate with other nodes through the actor model?
mpirun (or mpiexec on some systems) can run any kind of executable, even ones that don't use MPI. I currently use it to launch Java and Scala code on clusters. It may be tricky to pass arguments to the executable when calling mpirun, so you can use an intermediate script.
We use Torque/Maui scripts which are not compatible with GridEngine, but here is a script my colleague is currently using:
#!/bin/bash
#PBS -l walltime=24:00:00
#PBS -l nodes=10:ppn=1
#PBS -l pmem=45gb
#PBS -q spc
# Find the list of nodes in the cluster
id=$PBS_JOBID
nodes_fn="${id}.nodes"
# Config file
config_fn="human_stability_article.conf"
# Java command to call
java_cmd="java -Xmx10g -cp akka/:EvoProteo-assembly-0.0.2.jar ch.unige.distrib.BuildTree ${nodes_fn} ${config_fn} ${id}"
# Create a small script to pass properly the parameters
aktor_fn="./${id}_aktor.sh"
echo -e "${java_cmd}" >> $aktor_fn
# Copy the machine file to the proper location
rm -f $nodes_fn
cp $PBS_NODEFILE $nodes_fn
# Launch the script on 10 nodes
mpirun -np 10 sh $aktor_fn > "${id}_human_stability_out.txt"