I know that UI tests can be run in parallel on multiple machines using selenium grid. How about API tests?
I looked at the pytest-xdist plugin: it can run tests in parallel on the local machine using py.test -n NUM, which sends tests to multiple CPUs and runs them in parallel. This may not be effective or fast enough if the number of tests we want to run in parallel is much larger than the number of CPUs on the machine, for example if the machine has 4 CPUs and we would like to run 50 tests in parallel.
And it seems that to run the tests on a remote machine we need to do something like:
py.test -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg
I am wondering if there is a way to distribute the tests to multiple remote machines and run them in parallel. For example: if I have 1000 tests and 50 remote machines, I would like each remote machine to run one or more tests at the same time so that the tests complete faster. That means all 1000 tests would complete in the time it takes to run 20 tests or fewer.
Thanks.
It looks like you want the load distribution mode, followed by multiple invocations of the --tx argument:
py.test --dist=load --tx socket=192.168.1.110:8888 --tx socket=192.168.1.111:8888 --tx socket=192.168.1.112:8888 --rsyncdir mypkg mypkg
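(Note: each socket gateway assumes execnet's socketserver script is already running and listening on that port on the remote machine. The exact way to start it depends on your execnet version, so check the pytest-xdist docs, but it is something like python socketserver.py :8888 run on each host.)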
I'm sure you've looked at the CPU usage of the Python processes when running the tests. If you are doing what I expect you are doing (running an integration test suite against a single instance of a network service with high response times), your test suite isn't CPU bound but is actually I/O bound. For this type of workload, CPU usage may appear high, but it actually includes the time the test runner spent waiting for a response from the system under test.
The biggest problem I've encountered when parallelizing that type of test suite is that the order in which tests complete sometimes matters, and when run in parallel, tests finish in a different order than they do in series, just due to variation in response times, causing intermittent and difficult-to-troubleshoot test failures.
If that doesn't happen with multiple cores on a single machine, that's a good sign your plan will work. That said, keeping any pool of hosts around carries operational overhead: patching, configuration, provisioning, and networking, not to mention other unexpected issues. So I suggest you try something different.
I think you should consider refactoring your test code to use asynchronous I/O instead of setting up the test grid. Done correctly, multiple tests can run on one core at the same time, because each test yields the CPU while it waits on the network. Your sysadmin (which may be you!) will thank you.
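For example, here is a minimal sketch of what an I/O-bound test looks like with async I/O. The pytest-asyncio and aiohttp libraries and the http://sut.example URL are all assumptions on my part, not something from your setup; any async HTTP client would do:

import asyncio

import aiohttp
import pytest

@pytest.mark.asyncio  # requires the pytest-asyncio plugin (an assumption)
async def test_fifty_requests_on_one_core():
    # One worker, one core: while each request waits on the network,
    # the event loop services all the others.
    async with aiohttp.ClientSession() as session:
        async def check(i):
            # hypothetical endpoint on the system under test
            async with session.get(f"http://sut.example/item/{i}") as resp:
                assert resp.status == 200
        await asyncio.gather(*(check(i) for i in range(50)))

Fifty requests overlap on a single core here, which is exactly the effect the 50-machine grid was meant to buy.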
Related
I've had some issues with tests timing out randomly, usually on CircleCI, but sometimes locally. Based on Kent Dodds' suggestion to write fewer, longer tests, I now have tests with multiple clicks and multiple network requests (mocking fetch too). These tests seem to time out. CircleCI recently added a Resources tab to the pipeline with some interesting metrics: when the tests time out, the 4GB of RAM clearly sits at 100% for an extended time and the test fails. On a passing run, the RAM stays mostly below 100%.
Failed test (4GB): [screenshot of the Resources tab, RAM pegged at 100%]
Passed test (4GB): [screenshot of the Resources tab, RAM below 100%]
Updated resource_class to 8GB
I tried a single experiment: updating my CircleCI config so that resource_class is set to large (8GB). The test passed, and CPU usage looked even better.
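For reference, the change is a one-line resource_class setting on the job. This fragment is a sketch; the job name, image, and test command are placeholders:

# .circleci/config.yml (fragment)
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:lts   # placeholder image
    resource_class: large      # 4 vCPUs / 8GB RAM instead of the medium default (2 vCPUs / 4GB)
    steps:
      - checkout
      - run: yarn test         # placeholder test command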
So, does React Testing Library take up a lot of horsepower?
Is our default 4GB RAM docker image ok?
I have a test suite that I run with
python3 -mpytest --log-cli-level=DEBUG ...
on the build server. The live logs are useful to troubleshoot if the tests get stuck or are slow for some reason (the tests use external resources).
To speed things up, it is possible to run them with e.g.
python3 -mpytest -n 4 --log-cli-level=DEBUG ...
to have four parallel test runners. Speedup is almost linear with the number of processes, which is great, but unfortunately the parent process swallows all the live logs. I get the captured logs in case of a test failure, but I need the live logs as well to understand what is going on in real time. I understand that the output from all four parallel runs will be intermixed, and that is fine: the purpose is for the committer to just check the build server output and know roughly what is going on.
I am currently using pytest-xdist, but use none of its more advanced features (just the multiprocessing).
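One workaround sketch: pytest-xdist sets the PYTEST_XDIST_WORKER environment variable in each worker process, so each worker can write its live log to its own file, which the build server can tail. The file naming below is hypothetical:

# conftest.py
import logging
import os

def pytest_configure(config):
    worker = os.environ.get("PYTEST_XDIST_WORKER")  # e.g. "gw0"; unset in the controller process
    if worker:
        # One live log file per worker, written as the tests run.
        handler = logging.FileHandler(f"live_{worker}.log")
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
        root = logging.getLogger()
        root.addHandler(handler)
        root.setLevel(logging.DEBUG)

Then tail -f live_gw*.log alongside the build output gives the intermixed real-time view.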
I'm running a Rundeck server to delegate a simple script to 5.8k other Linux servers.
The very simple script is below:
#!/bin/bash
A=$(hostname)
echo "$A"
When I run the same job with a smaller number of targets (4089 nodes), the commands work fine.
I tried looking at my service.log page and it isn't recording anything new.
Any ideas on how to be able to run on all the 5.8k nodes? And where should i look for errors?
Rundeck does not have a hard limit on the number of nodes; what it can handle certainly depends on how many executions you want to run and on how much RAM, how many processors, and how much disk space you have.
Maybe you need to increase the Java heap size:
https://rundeck.org/docs/administration/maintenance/tuning-rundeck.html#java-heap-size
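For example, something along these lines; the file location and variable name vary by version and install method, so check the page above for your setup:

# /etc/sysconfig/rundeckd or /etc/default/rundeckd (location is an assumption)
export RDECK_JVM_SETTINGS="-Xms2g -Xmx6g"   # raise the JVM heap for thousands of node executions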
And how to adapt this to your SSH plugin:
https://rundeck.org/docs/administration/maintenance/tuning-rundeck.html#built-in-ssh-plugins
So, I'm new to the testing world, haha.
I have some spec files, and I'm running 4 instances for about 10 spec files.
I would like to know whether it is a good idea to create one instance to run each file.
I know that if I have 10 files, doing that is fine.
But what if I have 30 files?
Set up 30 instances, one for each?
Is that a good idea?
Thanks, guys!
By "instance" I assume you are talking about running your tests in parallel. Running tests in parallel is meant to speed up the time it takes to execute your test suite. How many tests you can reliably run in parallel depends on your setup. At some point, if you have too many tests running in parallel, tests will begin to time out. If 30 instances for 30 tests run reliably on your machine, then that is what you should do, but it defeats the purpose if tests are timing out from too much going on at once.
I would like to run multiple instances of a randomized algorithm. For performance reasons, I'd like to distribute the tasks across several machines.
Typically, I run my program as follows:
./main < input.txt > output.txt
and it takes about 30 minutes to return a solution.
I would like to run as many instances of this as possible, and ideally not change the code of the program. My questions are:
1 - What online services offer computing resources that would suit my need?
2 - Practically, how should I launch remotely all the processes, get notified of the termination, and then aggregate the results (basically, pick up the best solution). Is there a simple framework that I could use or should I look into ssh-based scripting?
1 - What online services offer computing resources that would suit my need?
Amazon EC2.
2 - Practically, how should I launch remotely all the processes, get notified of the termination, and then aggregate the results (basically, pick up the best solution). Is there a simple framework that I could use or should I look into ssh-based scripting?
Amazon EC2 has an API for launching virtual machines. Once they're launched, you can indeed use ssh to control jobs, and I would recommend this solution. I expect that other software for distributed job management exists, but it isn't likely to be any simpler to configure than ssh.
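As a rough sketch of the ssh approach (hosts.txt, passwordless ssh, and ./main plus input.txt already being on each host are all assumptions):

#!/bin/bash
# Fan out one run per host and collect the outputs locally.
while read -r host; do
    # -n keeps ssh from consuming the rest of hosts.txt on stdin;
    # the redirect inside the quotes happens on the remote side.
    ssh -n "$host" './main < input.txt' > "output.$host.txt" &
done < hosts.txt
wait   # returns once every remote run has terminated

# Aggregation: pick the best solution. How to compare outputs depends on
# your program; here we hypothetically assume the first line of each output
# is a numeric score, higher is better.
for f in output.*.txt; do
    printf '%s %s\n' "$(head -n 1 "$f")" "$f"
done | sort -nr | head -n 1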