How can pytest dynamically pick tests for a serial run and a parallel run using xdist?

In my pytest directory structure, I have to pick some test cases to run in parallel, such that the network device one test uses does not conflict with the device another test is using.
For example, all tests with the same marker, @pytest.mark.device1, should run in sequence.
Tests with different markers (@pytest.mark.device1, @pytest.mark.device2, @pytest.mark.device3) should run in parallel.
This is to avoid the same network device being used by different tests at the same time and causing failures. Tests using different devices may therefore run in parallel.
Hence the requirement is:
Collect all cases with the same device marker.
Create a parallel run set from cases with different device markers.
Run that set in parallel.
Create the next set that can run in parallel with different devices.
Run it in parallel again, and so on (see the sketch below).
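To make that requirement concrete, here is a minimal plain-Python sketch of the wave-building idea (hypothetical test names, no pytest involved): bucket the tests by device, then form waves that each contain at most one test per device, so every wave is safe to run in parallel.

from itertools import zip_longest

def build_waves(tests_by_device):
    # tests_by_device maps a device name to its list of tests.
    # Each wave takes at most one test per device, so no two tests
    # in the same wave touch the same device.
    waves = []
    for wave in zip_longest(*tests_by_device.values()):
        waves.append([test for test in wave if test is not None])
    return waves

build_waves({"device1": ["t1a", "t1b", "t1c"], "device2": ["t2a", "t2b"]})
# -> [['t1a', 't2a'], ['t1b', 't2b'], ['t1c']]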
I suspect this should be possible using hooks.
Can anyone suggest a solution?
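One possible direction (a sketch, not a verified solution): recent versions of pytest-xdist ship a loadgroup distribution mode, in which all tests carrying the same xdist_group mark are sent to the same worker, so they run serially relative to each other, while different groups are spread across workers and run in parallel. A pytest_collection_modifyitems hook in conftest.py can translate the device markers into xdist_group marks; the marker names device1/device2/device3 here are assumptions taken from the question.

# conftest.py
import pytest

DEVICE_MARKERS = ("device1", "device2", "device3")  # assumed marker names

def pytest_collection_modifyitems(config, items):
    # Translate each test's device marker into an xdist_group mark so
    # that --dist loadgroup keeps same-device tests on a single worker.
    for item in items:
        for device in DEVICE_MARKERS:
            if item.get_closest_marker(device) is not None:
                item.add_marker(pytest.mark.xdist_group(device))
                break

Run the suite with something like pytest -n 3 --dist loadgroup: tests sharing a device then execute in sequence on one worker, while tests for different devices execute in parallel, which matches the requirement without hand-building the run sets. (The device markers should also be registered, e.g. in pytest.ini, to avoid unknown-marker warnings.)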

Related

How to run tests in series in flutter

Is it possible to force one or more tests in a test suite to run separately and serially in dart/flutter?
Note: I'm not looking to run a single test via CLI filters. I want to be able to run my full test suite as part of my CI flow.
Example
The test runner ava is able to do this in JavaScript via the serial modifier:
test.serial('passes serially', t => {
  t.pass();
});
https://github.com/avajs/ava/blob/main/docs/01-writing-tests.md#running-tests-serially
Context
I'm using a third-party library to communicate and authenticate with my backend server. This library uses a singleton to make accessing the current user "easier". However, it makes parallel integration testing impossible, since the testing process can only mimic a single user at a time, which in turn makes tests interfere with each other.
If you run "flutter test" in your CI all your tests in the project will be run in serial. No need to do something extra for that.

Can Nx create more fine-grained tasks than one per package?

We're in the process of adopting Nx Cloud to speed up CI in our monorepo. It seems that when we run nx affected for a given target, it creates at most one task per 'affected' package. That is, we don't get tasks that build/test/lint/e2e only part of a package.
For some apps, E2E tests run very long. We'd like to use Nx DTE to efficiently load balance the tests across many Nx Cloud agents, even when only one app is 'affected'.
Is there an example anywhere showing how to do this?

At which point during the first boot are tasks in pkg_postinst_ontarget performed?

I'm designing a device which needs to perform a number of setup activities at first boot, and I'm trying to figure out the best way to do it. One of the tools at my disposal seems to be the fantastically incompletely documented pkg_postinst_ontarget.
One of the activities I need to perform depends on an SD card being successfully mounted. Would pkg_postinst_ontarget get executed after all fstab mounting activities have completed?
The Yocto build places the post-installation scripts in /etc/ipk-postinsts if you are using ipk packages. Those scripts are then typically run by systemd on the target: the run-postinsts.service unit runs /usr/sbin/run-postinsts, which executes and deletes all the scripts stored in /etc/ipk-postinsts. Hence the scripts run once at the first startup and disappear after they have been executed.
As for the mount question: run-postinsts.service is an ordinary systemd service, so with default dependencies it is ordered after basic.target and therefore after local-fs.target, which pulls in the mount units generated from fstab. A normally configured SD card mount should thus be complete before the scripts run; if you need a hard guarantee, you can add an explicit ordering such as RequiresMountsFor= to the unit via a drop-in.

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is really just MTM 2012) to try to get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite against a build that is attached to that test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend itself to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument, let's say there are 5 tablets that run a similar array of OSes. Tablets 1, 2, 3, and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only one instance is running, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite and configured each test case to have only the correct tablet/OS configuration. I queued a build, waited for it to finish, and then attached that build to the main test suite. I then clicked on run a test for Tablet 1, but I didn't see the build attached in the test runner. I've looked around a little to see why, or how to do it, and haven't found anything. So the question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is only the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
This column is not visible by default; you will have to right-click on the column header row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyze Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition that builds your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: "choose "Run with Options" when you're beginning a test run" and select the latest build.
Now the test assemblies containing the tests used to automate the Test Cases will be deployed automatically to the test environment, and the tests will be executed.
This is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests is stored along with the test results on TFS, you will later know what version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate the deployment of the application you want to test.

Tools or plugins for parallelizing NUnit tests in TeamCity

I'm looking for a tool which will parallelize NUnit tests in TeamCity. I want to parallelize my tests (which take nearly 12 hours) across several PCs, not just one. Which plugin or tool can I use for this?
If you have several unit test projects, you could create one build configuration per agent that will be running the unit tests. Each build configuration would run a subset of the unit tests. You could then tie the builds together through dependent builds. Here is an example of an open source project using this approach (you can visually picture how many agents can be running build steps simultaneously).