How do I set up a multi-stage test pipeline in sbt?

Specifically, for a Scalatra project, but the question probably applies to most.
For example, I typically want to run:
unit tests
code quality checks (coverage, duplication, complexity, jsLint!)
integration tests (not too many!)
acceptance tests (usually a "pre-checkin" subset)
regression tests (basically the same as acceptance tests, but a bigger set)
performance tests
I want to run different subsets of these by context - i.e. after a simple code change I might just run the first three; before checking in I might want to run a bigger set, and the Continuous Integration server might have a "fast" and a "slow" build that have even bigger sets.
The basic sbt docs seem to assume a single "test" target - is there a recommended way to implement multiple test phases like this?

You may want to look at this blog post about integration testing with SBT and Hudson:
http://henkelmann.eu/2010/11/14/sbt_hudson_with_test_integration
Then, to add your own actions you can use this page:
http://code.google.com/p/simple-build-tool/wiki/CustomActions
Basically, though, you will probably want to add a new action for each of your testing steps, so that each step runs exactly the subset of tests you want.
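For a rough idea of what this looks like in a more recent sbt (1.x), where the rough equivalent of separate actions is separate test configurations, here is a minimal build.sbt sketch. The configuration names, the src/&lt;config&gt;/scala layout, and the precheckin alias are illustrative assumptions, not anything the linked pages prescribe, and the exact shell syntax for invoking a scoped test task varies a little between sbt versions.

```scala
// build.sbt -- sketch only, assuming sbt 1.x rather than the
// simple-build-tool version the links above describe.

// One configuration per test phase, each extending Test so it inherits
// the test classpath and library dependencies.
lazy val Integration = config("integration") extend Test // integration tests
lazy val Acceptance  = config("acceptance") extend Test   // pre-checkin subset
lazy val Perf        = config("perf") extend Test         // performance tests

lazy val root = (project in file("."))
  .configs(Integration, Acceptance, Perf)
  .settings(
    // Give each configuration its own test task and its own source
    // directory (src/integration/scala, src/acceptance/scala, src/perf/scala).
    inConfig(Integration)(Defaults.testSettings),
    inConfig(Acceptance)(Defaults.testSettings),
    inConfig(Perf)(Defaults.testSettings)
  )

// Bundle the subsets you run in a given context, e.g. before checking in:
// `sbt precheckin` runs the plain unit tests plus two of the extra phases.
addCommandAlias("precheckin", "; test; integration:test; acceptance:test")
```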

Related

Are Dart/Flutter tests executed synchronously or asynchronously

Could someone please explain how Flutter/Dart tests are executed by the test runner?
Are the tests executed synchronously or asynchronously?
Does the testing framework execute every single test synchronously, meaning that only a single test and test suite is executed at any single time?
Or does the testing framework only execute a single test at a time within a test suite, but is able to execute multiple test suites at the same time?
Or does it run all tests and test suites completely independently of each other, at the same time, completely asynchronously?
This is important because it has a direct impact on how we can structure our tests, especially when it comes to test setup and teardown, and how we assert that functionality is working correctly.
Thanks!
In general, dart test will execute many tests in parallel (the parallelism level varies based on CPU core count), but you can disable this with a command line flag.
You should not write tests with any inter-dependence (i.e. one test should not rely on some global state set up by another test). For example, you may find that because your laptop has a different CPU configuration to your CI server, your tests might pass locally but fail in CI due to different ordering.
If you have some setup logic that is very expensive and needs to be reused between multiple tests, you can use setUpAll() to run some code once, before all the tests in a test group; however, this is still discouraged. Personally, I prefer to just join the tests into one long test, to keep all tests self-contained.
Keeping tests independent has some advantages. For example, you can use --total-shards and --shard-index to parallelize tests in CI (by creating multiple jobs that each run a different subset of the test suite).
You can also randomize the order of your tests with --test-randomize-ordering-seed, to make sure you aren't accidentally creating dependencies between tests that might invalidate their results (e.g. perhaps test2 only passes if it runs after test1; randomizing the ordering will catch this).
TLDR
Many tests run in parallel. Try to keep tests self-contained, with no dependence on the order of tests. Extract setup logic into functions and pass them to setUp(). If you really, really need the performance, you can try setUpAll(), but avoid it if possible.
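For concreteness, here is roughly what those invocations look like on the command line. This is a sketch based on package:test's flags as I understand them; in particular, --concurrency=1 is my assumption of the flag the answer means for disabling parallelism.

```sh
# Default: many test suites run in parallel (level based on CPU cores).
dart test

# Force serial execution, one test suite at a time (assumed flag).
dart test --concurrency=1

# Split the suite across two CI jobs, each running a different subset.
dart test --total-shards=2 --shard-index=0   # job 1
dart test --total-shards=2 --shard-index=1   # job 2

# Shuffle test order to surface accidental inter-test dependencies.
dart test --test-randomize-ordering-seed=random
```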

Run NUnit Test After All Others Are Complete

I have a situation (detailed below) in which I want to run one NUnit test after all the other tests have completed. I know that I can use the order attribute to start my tests in a certain order but in this case:
I want to attribute (or otherwise change) only one test out of several hundred.
I want this test to run last, not first.
I want this test to run after all other tests have completed, not after they've started.
I have experimented with OneTimeTearDown, but ideally this would run as a regular, named test and appear that way in the test results.
(Why)
I have several hundred named, hand-crafted tests that run against different folders of json test files. Non-programmers add files to these folders from time to time. The purpose of this final test is to introspect those folders and compare the contents on disk with the files for which a test has already been executed (these are recorded by each test). If this reveals untested files, that in itself constitutes a test failure.
It's an interesting question. Basically you want a meta-test... one that tests how well you are testing. For that reason, it's logical for it to actually be a test.
Unfortunately, NUnit only supports this sort of "run after everything" in a OneTimeTearDown. Now, you can perform assertions in a OneTimeTearDown, but any failures are treated as errors, i.e. unexpected exceptions. For some purposes this may be workable, but it doesn't show up in the results quite the same way as a failure.
If I were doing this, I think I'd make it a separate analytical step in my script, after the tests had been run.

Running Parallel Jobs and getting the aggregated results

I had a quick question about the workflow plugin. I'm trying to see if the plugin will be able to satisfy my use case:
We have a jenkins job that will build our app
We want to spin off a suite of test jobs that will perform various tests on the newly built app (unit, integration, etc). These will need to be run in parallel and we want to run them on more than one jenkins node for performance reasons
We'll take the aggregated output from all our test processes in step 2 and use it to decide whether we should deploy (everything passed) or not
I was curious as to whether I'd be able to accomplish this within the plugin and, if so, whether you have any tips/pointers on where to start.
Thanks!
You can certainly run nodes inside parallel branches. If one branch fails, the parallel step as a whole fails. If you want the build to succeed, but behave differently depending on test results, you can capture them directly as Groovy variables in various ways.
If you are using JUnitArchiver, currently it does not provide a simple means of exposing the test results directly to the Pipeline script (JENKINS-26276), though if you just want to tell whether there are some failures or none, you can inspect currentBuild.result.
If you have JUnit-format test results and wish to automatically split them amongst various nodes (especially helpful in case you have a large pool of machines and it would be unmaintainable to manually divide your tests), see this demo of the Parallel Test Executor plugin’s splitTests step.

Getting the current Experiment instance at runtime

I'm running JUnit 4 with AnyLogic. In one of my tests, I need access to the Experiment running the test. Is there any clean way to access it at runtime? E.g., is there a static method along the lines of Experiment.getRunningExperiment()?
There isn't a static method that I know of. (If there were, it might be complicated by multi-run experiments, which permit parallel execution; then again, perhaps not, since there's still a single Experiment, though there would be thread-safety issues.)
However, you can use getEngine().getExperiment() from within a model. You probably need to explain more about your usage context. If you're using AnyLogic Pro and exporting the model to run standalone, then you should have access to the experiment instance anyway (as in the help "Running the model from outside without UI").
Are you trying to run JUnit tests from within an Experiment? If so, what's your general design? Obviously JUnit doesn't sit as well in that scenario, since it 'expects' to be instantiating and running the thing under test. For my automated tests (where I can't export the model standalone because I don't use AnyLogic Pro), I judged it easier to avoid JUnit (it's just a framework, after all) and implement the tests 'directly': my model components write outputs and, at the end of the run, the Experiment compares those outputs to pre-prepared expected ones and flags whether the test passed or failed. With AnyLogic Pro, you could still export standalone and use JUnit to run the 'already-a-test' Experiments (with the JUnit test checking the Experiment for a testPassed Boolean being set at the end, or whatever).
The fact that you want to get the running experiment suggests that you may be doing this whilst runs are executing. If so, could you explain a bit about your requirements?

Dynamic test cases

We are using NUnit to run our integration tests. One of the tests should always do the same thing, but take different input parameters. Unfortunately, we cannot use the [TestCase] attribute, because our test cases are stored in external storage. We have dynamic test cases which can be added, removed, or disabled (not removed) by our QA engineers. The QA people do not have the ability to add [TestCase] attributes to our C# code. All they can do is add them to the storage.
My goal is to read the test cases from the storage into memory, run the test with all enabled test cases, and report if a test case fails. I cannot use a "foreach" statement because if test case #1 fails, the rest of the test cases will not be run at all. We already have a build server (CruiseControl.net) where the generated NUnit reports are shown, so I would like to continue using NUnit.
Could you point to a way how can I achieve my goal?
Thank you.
You can use [TestCaseSource("PropertyName")] which specifies a property (or method etc) to load data from.
For example, I have a test case in Noda Time which uses all the BCL time zones - and that could change over time, of course (and is different on Mono), without me changing the code at all.
Just make your property/member load the test data into a collection, and you're away.
(I happen to have always used properties, but it sounds like it should work fine with methods too.)