Parallel execution of Robot Framework tests in Sauce Labs - Eclipse

I am using an Eclipse+Maven based Robot Framework setup with the Java implementation of SeleniumLibrary.
I can execute tests in Sauce Labs, but they run on only one VM. Has anyone achieved parallel execution of Robot Framework tests in Sauce Labs, say across multiple VMs? Can anyone guide me on how to achieve this? Thanks in advance.

This is what I am using to run on multiple concurrent VMs on Sauce Labs. I have a one-click batch file that uses start pybot to invoke parallel execution. Example:
ECHO starting parallel run on saucelabs.com
cd c:\base\dir\script
ECHO Win7/Chrome40:
start pybot -v REMOTE_URL:http://user:key@ondemand.saucelabs.com:80/wd/hub -T -d results/Win7Chrome40 -v DESIRED_CAPABILITIES:"name:Win7 + Chrome40, platform:Windows 7, browserName:chrome, version:40" tests/test.robot
ECHO Win8/IE11
start pybot -v REMOTE_URL:http://user:key@ondemand.saucelabs.com:80/wd/hub -T -d results/Win8IE11 -v DESIRED_CAPABILITIES:"name:Win8 + IE11, platform:Windows 8.1, browserName:internet explorer, version:11" tests/test.robot
-T tells pybot not to overwrite result logs but to create a timestamped log for each run.
-d specifies where the results will go.
Works like a charm!

Pabot is a parallel executor for Robot Framework tests. With Pabot you can split one execution into multiple ones and save test execution time.
https://github.com/mkorpela/pabot
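A minimal invocation sketch (assuming Pabot is installed, e.g. with pip install robotframework-pabot, and that tests/ holds your suites):
pabot --processes 2 --outputdir results tests/
Pabot splits the suites between the worker processes and merges the logs afterwards, so combining it with the REMOTE_URL and DESIRED_CAPABILITIES variables shown above gives several concurrent Sauce Labs sessions.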

Related

Is there a way to run pytests using xdist by file(s)?

I am trying to run 2 test files using xdist with 2 gateways (-n=2). Each test file contains tests which are user-permission specific. While running the tests with pytest and pytest-xdist, I noticed some of the tests fail randomly. This happens because some of the tests from file1 get executed by a different gw: if [gw0] is running most of the tests from file0, sometimes [gw0] also executes some tests from file1, which causes the failure.
I am trying to find out if there is a way I can force/ask xdist to execute a specific file, or perhaps a way to assign a file to a gw.
pytest test_*.py -n=2 -s -v
also tried:
pytest test_*.py -n=2 -s -v --dist=loadfile
Assuming your parallel test runs are set up correctly (the workers properly receive the PYTEST_XDIST_WORKER and PYTEST_XDIST_WORKER_COUNT environment variables), you only need to run:
pytest test_*.py --tx '2*popen' --dist=loadfile
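As a sanity check (the file names below are just placeholders), verbose output with loadfile scheduling should show each worker staying inside a single file:
pytest test_file0.py test_file1.py -n 2 --dist loadfile -v
With --dist loadfile all tests from the same file go to the same worker, so the [gw0]/[gw1] prefixes in the output should never mix tests from different files.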

Run Play application in production mode using the dist task

I am using the 'dist' task to generate a distribution of my Play application. But if I unzip the generated artifact, in the bin/ directory I find the bash script generated by the 'dist' task, and the last line of that script is: run "$@"
I saw in the official Play Framework documentation that the 'run' command should not be used in production mode, and that the recommended way is to generate a distribution with the 'dist' task.
Why does 'dist' generate a bash script which uses the 'run' command if it is not recommended in production mode?
I am asking because when I deploy my application in production, the first request is slow... it looks like development-mode behavior. But I am using the 'dist' command.
I would appreciate any help.
Thank you.
You are mixing two different things.
The run command stated in the Play documentation is an sbt command that starts your app in dev mode. To use that command you have to go through activator or sbt (e.g. ./activator run).
The run you see in that script is a bash function (defined a little further up) that starts your app in production mode. A little snippet from that function:
# Actually runs the script.
run() {
  # TODO - check for sane environment
  # process the combined args, then reset "$@" to the residuals
  # (...)
  execRunner "$java_cmd" \
    ${java_opts[@]} \
    "${java_args[@]}" \
    -cp "$(fix_classpath "$app_classpath")" \
    "${mainclass[@]}" \
    "${app_commands[@]}" \
    "${residual_args[@]}"
  # (...)
}
So, if you use this script to run your app, it will start in production mode.
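In other words, starting the unpacked distribution looks roughly like this (my-app and the version are placeholders; with recent Play versions the zip is written under target/universal/):
unzip target/universal/my-app-1.0.zip
my-app-1.0/bin/my-app -Dhttp.port=9000
The start script sets up the classpath and calls the run function shown above, so the application runs in production mode; any remaining first-request slowness is then usually JVM warm-up rather than dev-mode recompilation.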

Scala Process Terminal Output

I'm using the Scala (2.10.3) Process object to run commands. The docs show me how to run a command and then capture the standard output, but I'm running s3cmd and want to see the upload progress. How can I capture the output as if the command were running in a terminal?
Solution:
"s3cmd sync --recursive --delete-removed --progress local/ s3://remote" ! ProcessLogger(line => log.info(line))
Line at a time:
http://www.scala-lang.org/api/2.10.3/#scala.sys.process.ProcessLogger
For how s3cmd actually writes its progress output (it is not plain line-oriented stdout), see:
https://github.com/s3tools/s3cmd/blob/master/S3/Progress.py
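A slightly fuller sketch of the same approach, with the needed import; the [out]/[err] labelling is arbitrary, so replace println with your own logger:
import scala.sys.process._

// Stream both stdout and stderr line by line while the command runs.
val cmd = "s3cmd sync --recursive --delete-removed --progress local/ s3://remote"
val logger = ProcessLogger(
  out => println(s"[out] $out"),  // regular output lines
  err => println(s"[err] $err")   // progress and diagnostics often arrive here
)
val exitCode = cmd ! logger
println(s"s3cmd finished with exit code $exitCode")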

Submission of Scala code to a cluster

Is it possible to run some Akka code on Oracle Grid Engine using multiple nodes?
So if I use the actor model, which is a message-passing model, is it possible to use Scala and the Akka framework to run my code on a distributed-memory system like a cluster or a grid?
If so, is there something similar to mpirun (as used with MPI in C) to run my program on different nodes? Can you give a submission example using Oracle Grid Engine?
How do I know, inside Scala, which node I am on and to how many nodes the job has been submitted?
Is it possible to communicate with other nodes through the actor model?
mpirun (or mpiexec on some systems) can run any kind of executable (even ones that don't use MPI). I currently use it to launch Java and Scala code on clusters. It may be tricky to pass arguments to the executable when calling mpirun, so you can use an intermediate script.
We use Torque/Maui scripts which are not compatible with GridEngine, but here is a script my colleague is currently using:
#!/bin/bash
#PBS -l walltime=24:00:00
#PBS -l nodes=10:ppn=1
#PBS -l pmem=45gb
#PBS -q spc
# Find the list of nodes in the cluster
id=$PBS_JOBID
nodes_fn="${id}.nodes"
# Config file
config_fn="human_stability_article.conf"
# Java command to call
java_cmd="java -Xmx10g -cp akka/:EvoProteo-assembly-0.0.2.jar ch.unige.distrib.BuildTree ${nodes_fn} ${config_fn} ${id}"
# Create a small script to pass properly the parameters
aktor_fn="./${id}_aktor.sh"
echo -e "${java_cmd}" >> $aktor_fn
# Copy the machine file to the proper location
rm -f $nodes_fn
cp $PBS_NODEFILE $nodes_fn
# Launch the script on 10 nodes
mpirun -np 10 sh $aktor_fn > "${id}_human_stability_out.txt"
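To answer the "which node am I on" part: a hypothetical sketch of what the launched program can do, assuming (as in the script above) the node file is passed as the first argument and mpirun starts one JVM per node. WhereAmI and the hostname-matching logic are illustrative only:
import java.net.InetAddress
import scala.io.Source

object WhereAmI {
  def main(args: Array[String]): Unit = {
    // Node list written by the submission script, e.g. "<jobid>.nodes"
    val nodes = Source.fromFile(args(0)).getLines().toVector.distinct
    val host  = InetAddress.getLocalHost.getHostName
    // Crude hostname match against the allocated node list
    val rank  = nodes.indexWhere(n => host.startsWith(n) || n.startsWith(host))
    println(s"running on node $rank of ${nodes.size} ($host)")
  }
}
Each JVM then knows every peer's hostname, which is the information you need to configure Akka remoting so that actors on different nodes can message each other.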

Stress testing a command-line application

I have a command-line Perl script that I want to stress test. Basically what I want to do is run multiple instances of the same script in parallel so that I can figure out at what point our machine becomes unresponsive.
Currently I am doing something like this:
$ prog > output1.txt 2>err1.txt & \
prog > output2.txt 2>err2.txt &
...
and then I am checking ps to see which instances finished and which didn't. Is there any open-source application available that can automate this process? Preferably with a web interface?
You can use xargs to run commands in parallel:
seq 1 100 | xargs -n 1 -P 0 -I{} sh -c 'prog > output{}.txt 2>err{}.txt'
This will run the 100 instances in parallel (-P 0 places no limit on the number of simultaneous processes).
For a better testing framework (including parallel testing via 'spawn') take a look at Expect.
Why not use crontab or Scheduled Tasks to run the script automatically?
You could also write something small to parse the output automatically, as sketched below.
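For example, a small wrapper like this (prog and the instance count are placeholders) already automates the launch-and-check loop:
#!/bin/sh
# Start N copies in the background, wait for all of them, then flag failed runs.
N=50
for i in $(seq 1 $N); do
    prog > "output$i.txt" 2> "err$i.txt" &
done
wait
# Any run that wrote something to stderr deserves a closer look
grep -l . err*.txt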
With GNU Parallel this will run one prog per CPU core:
seq 1 1000 | parallel prog \> output{}.txt 2\>err{}.txt
If you want to run 10 progs per CPU core do:
seq 1 1000 | parallel -j1000% prog \> output{}.txt 2\>err{}.txt
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ