Protractor - Run an instance for each spec file

So, I'm new to the testing world, haha.
I have some spec files, and right now I'm running about 10 spec files across 4 instances.
I'd like to know: is it a good idea to create one instance to run each file?
I know that with 10 files, doing that is OK.
But what if I have 30 files?
Setting up 30 instances, one for each?
Is that a good idea?
Thanks, guys!

By "instance" I assume you are talking about running your tests in parallel. Running tests in parallel is meant to reduce the time it takes to execute your test suite. How many tests you can reliably run in parallel depends on your setup. At some point, if you have too many tests running in parallel, tests will begin to time out. If 30 instances for 30 spec files run reliably on your machine, then that is what you should do. But it defeats the purpose if tests start timing out because too much is going on at once.
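For reference, the knob for this in Protractor is sharding in the config file. A minimal protractor.conf.js sketch (the spec path and the cap of 4 are assumptions; tune them to your machine):

exports.config = {
  specs: ['specs/*.spec.js'],   // placeholder spec pattern
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,       // give each spec file its own browser instance
    maxInstances: 4             // but never run more than 4 at once
  }
};

With shardTestFiles on, each file gets its own instance without ever exceeding maxInstances, so you don't have to choose between "4 instances" and "30 instances" up front.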

Related

Does ReactTestingLibrary take up a lot of CPU/RAM, causing tests to timeout & fail?

I've had some issues with tests timing out randomly, usually on CircleCI but sometimes locally. Following Kent C. Dodds's suggestion to write fewer, longer tests, I now have tests with multiple clicks and multiple network requests (mocking fetch too). These tests seem to time out. CircleCI recently added a Resources tab to the pipeline with some interesting metrics: when the tests time out, the 4GB of RAM clearly sits at 100% for an extended time and the test fails. On a passing test, the RAM stays mostly below 100%.
Failed test (4GB): [screenshot: RAM pinned at 100% for an extended period]
Passed test (4GB): [screenshot: RAM mostly below 100%]
Update: resource_class bumped to 8GB
I ran a single experiment updating my CircleCI config so that resource_class is set to large (8GB). The test passed, with better CPU usage too.
So, does React Testing Library take a lot of horsepower?
Is our default 4GB RAM Docker image OK?
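For reference, the experiment above is a one-line change in .circleci/config.yml. A hedged sketch, with a placeholder image and test command:

version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:16.20   # placeholder image
    resource_class: large        # 4 vCPUs / 8GB RAM; the default medium is 2 vCPUs / 4GB
    steps:
      - checkout
      - run: yarn test --ci      # placeholder test command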

Pytest live logging with parallel execution - possible?

I have a test suite that I run with
python3 -mpytest --log-cli-level=DEBUG ...
on the build server. The live logs are useful to troubleshoot if the tests get stuck or are slow for some reason (the tests use external resources).
To speed things up, it is possible to run them with e.g.
python3 -mpytest -n 4 --log-cli-level=DEBUG ...
to have four parallel test runners. Speedup is almost linear in the number of processes, which is great, but unfortunately the parent process swallows all live logs. I get the captured logs in case of a test failure, but I need the live logs as well to understand what is going on in real time. I understand that the output from all four parallel runs will be intermixed, and that is fine. The purpose is for the committer to just check the build server output and know roughly what is going on.
I am currently using pytest-xdist, but use none of the more advanced features from it (just the multiprocessing).
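pytest-xdist has no built-in live-log passthrough, but here is one workaround sketch: each worker exposes its id via the PYTEST_XDIST_WORKER environment variable, so conftest.py can route every worker's live log to its own file, which you can then tail on the build server (the file name and format are assumptions):

# conftest.py
import logging
import os

def pytest_configure(config):
    worker_id = os.environ.get("PYTEST_XDIST_WORKER")  # e.g. "gw0"; unset in the controller
    if worker_id is not None:
        logging.basicConfig(
            filename=f"live_log_{worker_id}.log",      # one file per worker
            level=logging.DEBUG,
            format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        )

Then something like tail -f live_log_gw*.log gives the committer the intermixed real-time view.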

Rundeck - any command execution fails when running on 5.8k nodes

I'm running a Rundeck server to delegate a simple script to 5.8k other Linux servers.
The very simple script is below:
#!/bin/bash
A=$(hostname)
echo "$A"
When I run the same job with a smaller number of targets (4089 nodes),
the commands work fine.
I tried looking at my service.log page and it's not adding anything new.
Any ideas on how to run on all 5.8k nodes? And where should I look for errors?
Rundeck itself doesn't impose a limit on nodes; what you can handle depends on how many executions you want to run and how much RAM, CPU, and disk space you have.
Maybe you need to increase the Java heap size:
https://rundeck.org/docs/administration/maintenance/tuning-rundeck.html#java-heap-size
And how to adapt this to your SSH plugin:
https://rundeck.org/docs/administration/maintenance/tuning-rundeck.html#built-in-ssh-plugins
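A hedged example of the heap change, assuming a deb/rpm install; the exact file and variable names vary by version, so check the tuning page above:

# /etc/default/rundeckd (Debian/Ubuntu) or /etc/sysconfig/rundeckd (RHEL):
RDECK_JVM_SETTINGS="-Xms2g -Xmx8g"

# restart so the new heap takes effect
sudo systemctl restart rundeckd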

Running tests in parallel on multiple machines using py.test

I know that UI tests can be run in parallel on multiple machines using Selenium Grid. What about API tests?
I looked at the pytest-xdist plugin; it can run tests in parallel on the local machine using py.test -n NUM, which sends tests to multiple CPUs and runs them in parallel. This may not be as effective or fast if the number of tests we'd like to run in parallel is much larger than the number of CPUs on the machine. For example: the machine has 4 CPUs and we would like to run 50 tests in parallel.
And it seems that, to run the tests on a remote machine, we need to do something like:
py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
I am wondering if there is a way to distribute the tests to multiple remote machines and run them in parallel. For example: if I have 1000 tests and 50 remote machines, I would like each remote machine to run one or more tests at the same time so that the suite finishes faster; ideally, all 1000 tests would complete in roughly the time it takes to run 20.
Thanks.
It looks like you want the load distribution mode, followed by multiple invocations of the --tx argument:
py.test --dist=load --tx socket=192.168.1.110:8888 --tx socket=192.168.1.111:8888 --tx socket=192.168.1.112:8888 --rsyncdir mypkg mypkg
I'm sure you've looked at the CPU usage of the Python processes when running the tests. If you are doing what I expect you are doing (running an integration test suite against a single instance of a network service with high response times), your test suite isn't CPU-bound but I/O-bound. For this type of workload, CPU usage may appear high, but it actually includes the time the test runner spent waiting for a response from the system under test.
The biggest problem I've encountered when parallelizing that type of test suite is that the order in which tests complete sometimes matters, and when run in parallel, tests finish in a different order than they do in series, just due to variation in response times, causing intermittent and difficult-to-troubleshoot failures.
If that doesn't happen with multiple cores on a single machine, that's a good sign your plan will work. That said, there is operational overhead in keeping any pool of hosts around - patching, configuration, provisioning, and networking, not to mention other unexpected issues - so I suggest you try something different.
I think you should consider refactoring your test code to use asynchronous I/O instead of setting up the test grid. Done correctly, many tests can be in flight on a single core at the same time. Your sysadmin (which may be you!) will thank you.
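A minimal sketch of that async approach, assuming pytest-asyncio and aiohttp are installed and http://localhost:8080/health is a placeholder for your API under test:

import asyncio
import aiohttp
import pytest

@pytest.mark.asyncio
async def test_health_endpoint_under_concurrency():
    async with aiohttp.ClientSession() as session:
        async def check():
            async with session.get("http://localhost:8080/health") as resp:
                assert resp.status == 200
        # 50 requests in flight on one core: the waiting on the slow service overlaps
        await asyncio.gather(*(check() for _ in range(50)))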

Running parallel jobs in Jenkins

I'm using Jenkins for my builds, and I wrote some test scripts that I need to run after the compilation of the build.
I want to save some time, so I have to run the test scripts parallel. How can I do that?
EDIT: OK, I now understand that I need a separate job for each test (for 4 tests I need 4 jobs, right?).
So I did that, and from the parent job I triggered these jobs (using the "build other projects" step).
But I didn't manage to aggregate the results (using "aggregate downstream test results"): the parent job exits before the downstream jobs have finished.
What should I do?
Thanks.
You can use the Multijob plugin. It lets you run multiple jobs in parallel while the parent job waits for the sub-jobs to complete, and the parent job's status can be determined from the sub-jobs' statuses.
https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin
Jenkins doesn't really allow a single job to run things in parallel. You can, however, split your build into different jobs to achieve this. It would look like this:
A job compiles the source.
Jobs that run the tests are triggered by the completion of the compilation and start running; they copy the compilation results from the previous job into their workspaces.
This is a bit kludgy though. The better alternative is to parallelise within the scripts that run the tests, i.e. you run a single script and it runs the tests in parallel (a sketch follows below). If that's not possible, you'll have to split into different jobs.
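A minimal sketch of that single-script approach; the script names are placeholders for your own test scripts:

#!/bin/bash
# run the suites as background processes, then fail the build if any of them failed
fail=0
pids=()
./test_suite_1.sh & pids+=($!)
./test_suite_2.sh & pids+=($!)
./test_suite_3.sh & pids+=($!)
for pid in "${pids[@]}"; do
    wait "$pid" || fail=1
done
exit $fail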
Have you looked at the Jenkins Join plugin? I have not used it, but I believe it does what you are attempting to accomplish.
- Mike
Actually you can, but it needs some coding behind the scenes.
In my case, I have parallel test execution on Jenkins:
1) Create a small parameterized job that performs a test run with a small suite.
2) Configure this job to run on a list of slaves (where you have the proper environment).
3) Allow concurrent builds for this job.
And now the hard part:
4) Create a small Java program that computes the list of parameters for each job run.
5) Iterate through the list and launch a new Jenkins job on a new thread. Put a Thread.sleep(5000) between runs in order to avoid communication errors.
6) Join the threads.
At the end of each job, I send the results to a shared location in order to do some reporting at the end of all tests.
For starting a Jenkins job with parameters, use the CLI (a sketch of steps 4-6 follows below).
I intend to make my code as generic as possible and publish it if anyone else needs it.
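A hedged sketch of those steps; the Jenkins URL, job name, and SUITE parameter are placeholders:

import java.util.ArrayList;
import java.util.List;

public class ParallelJenkinsRuns {
    public static void main(String[] args) throws InterruptedException {
        // step 4: computing the parameter list is stubbed out here
        List<String> suites = List.of("suiteA", "suiteB", "suiteC");
        List<Thread> threads = new ArrayList<>();
        for (String suite : suites) {
            Thread t = new Thread(() -> {
                try {
                    // jenkins-cli.jar is served by your instance at <jenkins>/jnlpJars/jenkins-cli.jar;
                    // the first -s is the server URL, the second makes "build" wait for completion
                    new ProcessBuilder("java", "-jar", "jenkins-cli.jar",
                            "-s", "http://jenkins.example.com/",
                            "build", "test-runner", "-p", "SUITE=" + suite, "-s")
                            .inheritIO().start().waitFor();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            threads.add(t);
            t.start();          // step 5: one thread per job...
            Thread.sleep(5000); // ...with a pause between launches
        }
        for (Thread t : threads) {
            t.join();           // step 6
        }
    }
}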
You can use the Build Flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) with code like this:
parallel (
// job 1, 2 and 3 will be scheduled in parallel.
{ build("job1") },
{ build("job2") },
{ build("job3") }
)
You can use either of the following:
Multijob plugin
Build Flow plugin