Can Nx create finer-grained tasks than one per package? - nrwl-nx

We're in the process of adopting Nx Cloud to speed up CI in our monorepo. It seems that when we run nx affected for a given target, it will create at most one task per 'affected' package. That is, we won't get tasks to build/test/lint/e2e just part of a package.
For some apps, E2E tests run very long. We'd like to use Nx DTE to efficiently load balance the tests across many Nx Cloud agents, even when only one app is 'affected'.
Is there an example anywhere showing how to do this?

Related

How to run tests in series in flutter

Is it possible to force one or more tests in a test suite to run separately and serially in dart/flutter?
Note: I'm not looking to run a single test via CLI filters. I want to be able to run my full test suite as part of my CI flow.
Example
The test runner ava can do this in JavaScript via the serial modifier:
test.serial('passes serially', t => {
    t.pass();
});
https://github.com/avajs/ava/blob/main/docs/01-writing-tests.md#running-tests-serially
Context
I'm using a third-party library to communicate and authenticate with my backend server. This library makes use of a singleton to make accessing the current user "easier". However, it makes integration testing in parallel impossible since the testing process can only mimic a single user at a time, which in turn makes tests interfere with each other.
If you run "flutter test" in your CI, all the tests in the project are run serially. You don't need to do anything extra for that.

How can pytest dynamically pick tests for serial and parallel runs using xdist?

In my pytest directory structure, I need to pick certain cases to run in parallel such that the resource one device uses does not conflict with another device's.
For example, all tests with the same marker (@pytest.mark.device1) should run in sequence,
while tests carrying different markers (@pytest.mark.device1, @pytest.mark.device2, ...) should run in parallel.
This is to avoid the same network device being used by different tests and causing failures; tests using different devices may safely run in parallel.
The requirement is:
collect all cases with the same device marker;
create a parallel run set from cases with different device markers;
run them in parallel;
create the next set that can run in parallel across different devices;
run them in parallel again, and so on.
I have a feeling this should be possible using hooks.
Can anyone suggest a solution?
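One way to sketch this is with pytest-xdist's load-group scheduling (assuming pytest-xdist >= 2.5, which provides --dist loadgroup and the xdist_group mark; the marker names mirror the ones above, and device3 is a placeholder). A conftest.py hook maps each device marker onto an xdist_group, so all tests in one group land on the same worker and run sequentially relative to each other, while different groups run in parallel on different workers:

# conftest.py -- minimal sketch, assuming pytest-xdist >= 2.5
import pytest

DEVICE_MARKERS = ("device1", "device2", "device3")  # device3 is a placeholder

def pytest_configure(config):
    # Register the custom markers so pytest does not warn about them.
    for name in DEVICE_MARKERS:
        config.addinivalue_line("markers", f"{name}: tests that use {name}")

def pytest_collection_modifyitems(config, items):
    # Map each device marker onto an xdist_group of the same name. With
    # `pytest -n 3 --dist loadgroup`, all tests in one group go to the same
    # worker (so they run sequentially relative to each other), while
    # different groups are distributed across workers and run in parallel.
    for item in items:
        for name in DEVICE_MARKERS:
            if item.get_closest_marker(name):
                item.add_marker(pytest.mark.xdist_group(name=name))
                break

You would then run, for example, pytest -n 3 --dist loadgroup to get one worker per device group.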

Store results of unit test run into variables

I have a TeamCity build configuration that builds a C# project, runs some unit tests, and then does some extra things. My question is: Can I get information about my unit test run stored into build configuration variables (i.e. how many tests were run, how many were successful, how many failed, how many were skipped) so that I can then check these variables in a PowerShell script in later build steps and perform different actions depending on how many tests have passed?
AFAIK the best way is to request this information directly from the TeamCity server using its REST API (note that the build locator can be a little tricky to find if the build is still running).
Alternatively, you can parse your NUnit test result file (or files, if you run more than one NUnit test runner step in your build) on the build agent machine.
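As a rough sketch of the REST approach (Python here; the same calls translate directly to PowerShell's Invoke-RestMethod). The environment variable names are placeholders, and the exact fields of the testOccurrences summary may vary between TeamCity versions, so verify against your server's REST API documentation:

import os
import requests

server = os.environ["TEAMCITY_SERVER"]   # e.g. https://tc.example.com (placeholder)
token = os.environ["TEAMCITY_TOKEN"]     # personal access token (placeholder)
build_id = os.environ["BUILD_ID"]        # pass %teamcity.build.id% into this step

resp = requests.get(
    f"{server}/app/rest/builds/id:{build_id}",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
)
resp.raise_for_status()
tests = resp.json().get("testOccurrences", {})

# TeamCity service messages let a step write build parameters that later
# steps can read; env.* parameters surface as environment variables.
for key in ("count", "passed", "failed", "ignored"):
    value = tests.get(key, 0)
    print(f"##teamcity[setParameter name='env.TESTS_{key.upper()}' value='{value}']")

A later PowerShell step can then branch on, say, $env:TESTS_FAILED.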

Scala/Java library not installing on execution of Databricks Notebook

At work I have a Scala Databricks notebook that imports many libraries, both from Maven and from some JAR files. The issue is that when I schedule jobs on this notebook, it sometimes fails (seemingly at random, roughly 1 run in 10) because it starts executing cells before all the libraries are installed. The job then fails and I have to relaunch it manually. This behavior is far from production-ready; I can't rely on a job that fails intermittently.
I tried putting a Thread.sleep() of a minute or so before my imports, but it doesn't change anything. For Python there is dbutils.library.installPyPI("library-name"), but there is no equivalent for Scala in the dbutils documentation.
Has anyone had the same issue, and if so, how did you solve it?
Thank you!
Simply put: for scheduled production jobs, use a New Job Cluster and avoid an All-Purpose Cluster.
New Job Clusters are dedicated clusters, created and started when you run a task and terminated immediately after the task completes. In production, Databricks recommends using new clusters so that each task runs in a fully isolated environment.
In the UI, when setting up your notebook job, select a New Job Cluster and then add all the dependent libraries to the job.
The pricing is different for New Job Clusters; I would say it ends up cheaper.
Note: Use Databricks pools to reduce cluster start and auto-scaling times (if it's an issue to begin with).
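For illustration, a minimal sketch of creating such a job through the Databricks Jobs API 2.1 (the notebook path, Maven coordinates, JAR path, node type, and Spark version below are placeholders; the same settings can be entered in the jobs UI instead):

import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

job = {
    "name": "nightly-notebook-job",
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Repos/team/my-notebook"},
            # New (job) cluster: created for this run, terminated afterwards.
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
            # Declaring libraries on the task makes Databricks install them
            # on the cluster before the notebook starts executing.
            "libraries": [
                {"maven": {"coordinates": "org.example:mylib:1.2.3"}},
                {"jar": "dbfs:/FileStore/jars/helper.jar"},
            ],
        }
    ],
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job,
)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])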

Tools or plugins for parallelizing NUnit tests in TeamCity

I'm looking for a tool that will parallelize NUnit tests in TeamCity. I want to distribute my tests (which take nearly 12 hours) across several machines, not just one. Which plugin or tool can I use for this?
If you have several unit test projects, you could create one build configuration per agent that will run the unit tests; each build step would run a subset of the unit tests. You could then tie the builds together through dependent builds. Here is an example of an open-source project using this approach (you can visually picture how many agents can be running build steps simultaneously).