How to have Playwright workers execute separate logical paths in an NUnit test?

I have a Playwright test which I'm running via the following command -
dotnet test -- NUnit.NumberOfTestWorkers=2
From what I can gather, this will execute the same test in parallel with 2 workers. I'm curious if there's any way to have each worker go down a separate logical path, perhaps depending upon a worker id or something similar? E.g.:
if (workerId == 1)
    // do something
else if (workerId == 2)
    // do something else
What would be the best way to do this?
As to why I want this, I have a Blazor Server app which is a chat room, and I want to test the text updating from separate users (which would be represented by different worker ids, for example). I'd also like the solution to be scalable, e.g. being able to run 5000 or so workers to test how the chat room handles a large load.

You appear to have misunderstood what the NumberOfTestWorkers setting does. It simply tells NUnit how many separate test workers to set up. It does not have any impact on how NUnit allocates tests among its workers when running in parallel. And it does not cause an individual test to run more than once.
In general, the kind of load testing you are trying to do isn't directly supported by NUnit. You would have to build something of your own, possibly using NUnit, or try a framework intended for that kind of testing.
A long time ago, there was something called pnunit, but I don't believe it is kept up to date any longer. See https://docs.plasticscm.com/technical-articles/pnunit-parallel-nunit
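That said, if you only need a small, fixed number of simulated users each taking a different logical path (as opposed to true load testing), NUnit's parameterized fixtures can do that without any notion of worker ids. A minimal sketch, assuming the Microsoft.Playwright.NUnit base classes; the URL and selectors are hypothetical placeholders:

using System.Threading.Tasks;
using Microsoft.Playwright.NUnit;
using NUnit.Framework;

// One fixture instance per simulated user; the instances may run in parallel.
[TestFixture("alice")]
[TestFixture("bob")]
[Parallelizable(ParallelScope.Self)]
public class ChatRoomTests : PageTest
{
    private readonly string _user;

    public ChatRoomTests(string user) => _user = user;

    [Test]
    public async Task UserCanPostToChatRoom()
    {
        await Page.GotoAsync("https://localhost:5001/chat"); // hypothetical URL

        if (_user == "alice")
        {
            // first user's logical path
            await Page.FillAsync("#message", "hello from alice"); // hypothetical selector
        }
        else
        {
            // second user's logical path
            await Page.FillAsync("#message", "hello from bob");
        }

        await Page.ClickAsync("#send"); // hypothetical selector
    }
}

This scales to a handful of users, but for the 5000-worker scenario a dedicated load-testing tool remains the better fit, for the reasons above.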

Related

NUnit: Start automation tests in parallel in a defined order (Selenium)

I'm writing automation tests for our website using NUnit and Selenium.
We have 2 different users (adminInt and hsuInt) and 3 features which need to be tested (in the example below TestA, TestB, TestC).
In my example below, there are 6 automation tests in total in the test explorer, as each feature is tested with both users.
Everything works. Each test gets its own webDriver and all the tests are independent.
Now I want to run the tests in parallel.
I have already tried everything I could find online, including the different Parallelizable parameters, but I can't get it right.
I would like to start 2 tests at a time.
For example:
First test:
TestA adminInt
TestA hsuInt
After both tests above are done, it should start:
Second test:
TestB adminInt
TestB hsuInt
If your goal is to save time and your tests are truly independent, then trying to control the order of execution isn't what you want. Essentially, it's NUnit's job to figure out how to run in parallel. Your job is merely to tell NUnit whether the tests you wrote are capable of running in parallel.
To tell NUnit which tests may be run in parallel, use the [Non]ParallelizableAttribute. If you place attributes on the fixtures, their meanings are as follows...
[NonParallelizable] means that the fixture is not capable of running in parallel with any other fixtures. That's the default if you don't specify any attribute.
[Parallelizable] means that the fixture is capable of running in parallel with other fixtures in your test.
[Parallelizable(ParallelScope.All)] means that both the fixture and all the individual tests under that fixture are capable of running in parallel.
[Parallelizable(ParallelScope.Children)] means that the fixture is not capable of running in parallel with other fixtures, but the test methods under it may run in parallel with one another.
I stressed capable above because that's what you should focus on. Don't use the attribute with the expectation that NUnit will run some tests together with other specific tests because there is no NUnit feature to do that. Keep your focus on what you can control... writing independent, parallelizable tests and telling NUnit which ones they are.
If you find that NUnit starts too many test threads at one time, you can place a LevelOfParallelism attribute in your AssemblyInfo.cs. However, keep in mind that NUnit's default depends on the number of cores available, so what works for you on your development machine may not give the best performance in your CI builds. My advice is to let NUnit use its defaults for most things until you see a reason to override them.
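To make the attribute placement concrete, here is a minimal sketch using the question's two users (the class and test names are hypothetical):

using NUnit.Framework;

// Optional assembly-level cap on the number of parallel test threads.
[assembly: LevelOfParallelism(2)]

// One fixture instance per user; each is capable of running in parallel
// with other fixtures.
[TestFixture("adminInt")]
[TestFixture("hsuInt")]
[Parallelizable(ParallelScope.Self)]
public class FeatureTests
{
    private readonly string _user;

    public FeatureTests(string user) => _user = user;

    [Test]
    public void TestA()
    {
        // drive the website as _user with your own webDriver here
    }
}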
Thanks for the answers.
I found a solution for how to start just 2 tests in parallel.
With this assembly-level attribute in place, only 2 tests will start in parallel:
[assembly: LevelOfParallelism(2)]
You only need to declare this attribute once. So if you have different classes which have their own tests, add it in one place and it will run all tests in parallel, as long as you also have the fixture attributes:
[TestFixture("userA")]
[TestFixture("userB")]
to start the tests for 2 different users, for example.

Are Dart/Flutter tests executed synchronously or asynchronously

Could someone please explain how Flutter/Dart tests are executed by the test runner?
Are the tests executed synchronously or asynchronously?
Does the testing framework execute every single test synchronously, meaning that only a single test and test suite is executed at any one time?
Or does the testing framework only execute a single test at a time within a test suite, but is able to execute multiple test suites at the same time?
Or does the testing framework run all tests and test suites completely independently of each other, at the same time, completely asynchronously?
This is important because it has a direct impact on the way we are or aren't able to structure our tests, especially when it comes to the setup and teardown of tests, and the way we assert that functionality is working correctly.
Thanks!
In general, dart test will execute many tests in parallel (the parallelism level varies based on CPU core count), but you can disable this with a command line flag.
You should not write tests with any inter-dependence (i.e. one test should not rely on some global state set up by another test). For example, you may find that because your laptop has a different CPU configuration to your CI server, your tests might pass locally but fail in CI due to different ordering.
If you have some setup logic that is very expensive and needs to be reused between multiple tests, you can use setUpAll() to run some code once before all the tests in a test group; however, this is still discouraged. Personally, I prefer to just join the tests into one long test, to keep all tests self-contained.
This has some advantages. For example, you can use --total-shards and --shard-index to parallelize tests in CI (by creating multiple jobs to each run a different subset of the test suite).
You can also randomize the order of your tests with --test-randomize-ordering-seed, to make sure you aren't accidentally setting up such dependencies between tests that might invalidate their results (i.e. perhaps test2 only passes if it happens after test1; randomizing the ordering will catch this).
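For concreteness, a CI job for one shard might invoke the runner like this (the shard count and the use of a random seed are illustrative):

dart test --total-shards 3 --shard-index 0 --test-randomize-ordering-seed random

(--concurrency=1 is the flag that disables the in-process parallelism mentioned at the start of this answer.)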
TLDR
Many tests run in parallel. Try to keep tests self-contained, with no dependence on the order of tests. Extract setup logic into functions and pass it into setUp. If you really really need the performance, you can try setUpAll but avoid it if possible.

Running Parallel Jobs and getting the aggregated results

I had a quick question about the workflow plugin. I'm trying to see if the plugin will be able to satisfy my use case:
We have a Jenkins job that will build our app.
We want to spin off a suite of test jobs that will perform various tests on the newly built app (unit, integration, etc). These will need to run in parallel, and we want to run them on more than one Jenkins node for performance reasons.
We'll take the aggregated output from all our test processes in step 2 and decide whether we should deploy (everything passed) or not.
I was curious as to whether I'd be able to accomplish this within the plugin and, if so, whether you had any tips/pointers to get started.
Thanks!
You can certainly run nodes inside parallel branches. If one branch fails, the parallel step as a whole fails. If you want the build to succeed, but behave differently depending on test results, you can capture them directly as Groovy variables in various ways.
If you are using JUnitArchiver, currently it does not provide a simple means of exposing the test results directly to the Pipeline script (JENKINS-26276), though if you just want to tell whether there are some failures or none, you can inspect currentBuild.result.
If you have JUnit-format test results and wish to automatically split them amongst various nodes (especially helpful in case you have a large pool of machines and it would be unmaintainable to manually divide your tests), see this demo of the Parallel Test Executor plugin’s splitTests step.

Selenium tests failing randomly when run in parallel

When I run my selenium tests (django StaticLiveServerTestCase tests using selenium webdriver), I get random failures when running my tests in parallel using pytest-xdist.
Sometimes my full test suite will pass, and other times it will not.
There are two tests in my test suite that seem to fail most often. All of my tests load data from a fixture, but these two that fail create new objects to test specific edge cases. After they create the objects, I have my logged-in client make a GET request to the URL of the page under test.
Failure modes:
1) The objects that were created during my two tests sometimes will not show up, and I'll get a NoSuchElementException.
2) The object will show up, but the values will be incorrect (they will render as n/a instead of as a number that I assign at object creation).
I'm new to parallelizing my test builds. My debugging has been fairly rudimentary so far. Any help would be appreciated, whether that be through debugging techniques or otherwise!
Seems like this has something to do with the database transaction I use to manipulate the state of the data before accessing the app with the webdriver: it doesn't finish before the webdriver connection is made, so the webdriver reads the old state of the database.
I just need to figure out how to make sure the old transaction is finished.

Getting the current Experiment instance at runtime

I'm running JUnit 4 with AnyLogic. In one of my tests, I need access to the Experiment running the test. Is there any clean way to access it at runtime? E.g., is there a static method along the lines of Experiment.getRunningExperiment()?
There isn't a static method that I know of (and, if there were, it might be complicated by multi-run experiments, which permit parallel execution; then again, perhaps not, since there's still a single Experiment, though there would be thread-safety issues).
However, you can use getEngine().getExperiment() from within a model. You probably need to explain more about your usage context. If you're using AnyLogic Pro and exporting the model to run standalone, then you should have access to the experiment instance anyway (as in the help "Running the model from outside without UI").
Are you trying to run JUnit tests from within an Experiment? If so, what's your general design? Obviously JUnit doesn't sit as well in that scenario since it 'expects' to be instantiating and running the thing to be tested. For my automated tests (where I can't export it standalone because I don't use AnyLogic Pro), I judged that it was easier to avoid JUnit (it's just a framework after all) and implement the tests 'directly' (by having my model components write outputs and, at the end of the run in the Experiment, having the Experiment compare the outputs to pre-prepared expected ones and flag if the test was passed or failed). With AnyLogic Pro, you could still export standalone and use JUnit to run the 'already-a-test' Experiments (with the JUnit test checking the Experiment for a testPassed Boolean being set at the end or whatever).
The fact that you want to get the running experiment suggests that you're doing this whilst runs are potentially executing. If so, could you explain a bit about your requirements?