"Inverted" Timeout functionality in NUnit tests - nunit

I'm looking for a way to write a unit test using NUnit so that if the current test takes more than X milliseconds to complete, it is terminated and reported as skipped / successful, but not failed. This would basically be the opposite of the NUnit Timeout attribute.
A bit of context to the problem: I have some unit tests that invoke remote servers and check their responses. If there is a network issue I don't want the test to fail; I only want the test to fail if it was able to get a response and that response was incorrect. At the same time, if a response doesn't arrive, I'd like to move on and skip the test.
(I realize that this approach might result in some errors not being reported; however, in my situation I'm looking to have no false negatives reported, e.g. a test that succeeds once and fails another time depending on network connectivity, which is not what I intend to test.)

Wouldn't it be better to mock the invocation of the remote servers?
Using mocking, you will be able to isolate your unit from its dependencies and precisely test the unit itself: how it deals with responses and so on, without worrying about the servers at all.
Indeed, you can use mocking to purposely create incorrect responses to see how your unit deals with them, check for any expected exceptions and so on.
Have a look at Moq.
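Just to illustrate the isolation idea (the question is C#/NUnit, where Moq fills this role), here is a rough sketch of the same pattern using Python's unittest.mock; the class and method names are made up:

# Rough sketch of the isolation idea using Python's unittest.mock.
# PriceChecker / get_price are made-up names standing in for the real unit
# and its remote dependency; Moq plays the same role on the .NET side.
from unittest.mock import Mock

class PriceChecker:
    def __init__(self, client):
        self.client = client                      # the remote dependency is injected

    def is_price_sane(self, item_id):
        response = self.client.get_price(item_id)
        return 0 < response < 10_000              # the business rule we actually want to test

def test_price_checker_with_mocked_server():
    client = Mock()
    client.get_price.return_value = 42            # no network involved at all
    assert PriceChecker(client).is_price_sane("abc")

def test_price_checker_rejects_incorrect_response():
    client = Mock()
    client.get_price.return_value = -1            # purposely incorrect response
    assert not PriceChecker(client).is_price_sane("abc")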

Related

Are Dart/Flutter tests executed synchronously or asynchronously

Could someone please explain how Flutter/Dart tests are executed by the test runner?
Are the tests executed synchronously or asynchronously?
Does the testing framework execute every single test synchronously, meaning that only a single test and test suite is executed at any single time?
Or does the testing framework only execute a single test at a time within a test suite, but is able to execute multiple test suites at the same time?
Or does the testing framework run all tests and test suites completely independently of each other, at the same time, completely asynchronously?
This is important because it has a direct impact on the way we are or aren't able to structure our tests, especially when it comes to the setup and teardown of tests, and the way we assert that functionality is working correctly.
Thanks!
In general, dart test will execute many tests in parallel (the parallelism level varies based on CPU core count), but you can disable this with a command line flag.
You should not write tests with any inter-dependence (i.e. one test should not rely on some global state set up by another test). For example, you may find that because your laptop has a different CPU configuration to your CI server, your tests might pass locally but fail in CI due to different ordering.
If you have some setup logic that is very expensive and needs to be reused between multiple tests, you can use setUpAll() to run some code once, before all the tests in a group; however, this is still discouraged. Personally, I prefer to just join the tests into one long test, to keep all tests self-contained.
This has some advantages. For example, you can use --total-shards and --shard-index to parallelize tests in CI (by creating multiple jobs that each run a different subset of the test suite).
You can also randomize the order of your tests with --test-randomize-ordering-seed, to make sure you aren't accidentally setting up dependencies between tests that might invalidate their results (i.e. perhaps test2 only passes if it happens to run after test1; randomizing the ordering will catch this).
TLDR
Many tests run in parallel. Try to keep tests self-contained, with no dependence on the order of the tests. Extract setup logic into functions and pass them to setUp(). If you really, really need the performance, you can try setUpAll(), but avoid it if possible.

End to end tests in PyTest

I use PyTest to write my unit tests and I absolutely love it. Recently a need for end-to-end/acceptance tests has come up, and since I have had bad experiences with acceptance-test frameworks, I decided to do some research into whether it's possible to write end-to-end tests in PyTest.
I won't get into too much detail about the system under test, but what the application does is receive 3 messages from a customer (in JSON format), sprinkle some business-layer magic on top of them, and then output 7 messages back to the customer. Here are a couple of questions about structure and test design:
The setup part should create the 3 JSON messages and send them to the system under test. I'm not sure fixtures are the proper way to handle this, but to me a fixture is a way to return an object with a state, so I assume my setup is the same thing, just on a bigger scope. Let's say I have a fixture named setup (module scope) that does the multiple actions needed for the test to work (creating the 3 JSON messages and dispatching them). My instinct tells me I shouldn't have more than one setup fixture per test file/test class; however, I'm not sure how many tests I should have.
I can make it more 'unit'-like and have 7 tests, each consuming one message and verifying that the message data is correct. Or, since 3 messages produce 7 output messages and there is a direct connection between the setup and the results, I could use a single test that verifies all 7 messages. This makes the test method more complex, because asserting separate values from the returned JSON one by one is probably a bad idea: if the first assertion fails, I will not know whether the 6 remaining messages are OK or not (it's of course easier to know what went wrong when you see the whole picture). So for a single test to work properly, I would have to write a method that compares all 7 messages with the expected results and then raises a single assert with information about which of the 7 messages failed and why. So while verifying 7 messages feels right in a testing context, it's more complex and does not follow 'test a single thing and keep it simple'.
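For the single-test option, a rough sketch of what I mean by comparing all 7 messages at once and raising one assert that reports every mismatch (the data shapes are made up):

# Sketch of the "compare everything, fail once with details" idea.
# expected / actual are lists of dicts parsed from the 7 output JSON messages.
def assert_messages_match(expected, actual):
    failures = []
    for i, (exp, act) in enumerate(zip(expected, actual)):
        if exp != act:
            failures.append(f"message {i}: expected {exp!r}, got {act!r}")
    if len(actual) != len(expected):
        failures.append(f"expected {len(expected)} messages, got {len(actual)}")
    assert not failures, "mismatched messages:\n" + "\n".join(failures)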
The setup creates an entity called random_test_entity (along with many other entities), and the test needs that information for its assertions. So the setup fixture can either return a dict with all the values I will need later in the test, or I can create another fixture that returns a dict with values that both the setup fixture and the test consume. The problem here is that I need to share data and state between my fixtures and the test, and because I have no smart way of doing that, my fixture ends up returning data that is not strictly connected to the setup, which feels strange. A fixture returning a bag of values feels strange to me, but so does splitting the setup fixture into multiple fixtures just so I can share data.
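For illustration only, this is roughly the first of the two shapes I keep going back and forth between: a module-scoped setup fixture that dispatches the messages and returns the values the tests will need (make_random_entity, build_messages, send_messages and collect_responses are hypothetical placeholders for my real code):

import pytest

@pytest.fixture(scope="module")
def setup():
    # Build and dispatch the 3 input messages, remembering anything
    # the tests will need later for their assertions.
    random_test_entity = make_random_entity()        # hypothetical helper
    messages = build_messages(random_test_entity)    # hypothetical helper
    send_messages(messages)                          # hypothetical helper
    return {"entity": random_test_entity, "messages": messages}

def test_responses(setup):
    responses = collect_responses()                  # hypothetical helper
    assert setup["entity"].id in {r["entity_id"] for r in responses}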
I'm using the pytest git repository as my bible on how to write unit tests; I learned a lot about test design from it. Is there any source I can use to learn how to properly write end-to-end tests?
Thanks everyone!

Dependencies between tests in Scala

I want to test a complex workflow, both by unit-testing its components, and running an integration test for the whole thing, without running sub-components twice unnecessarily.
For example, routine c processes the results of a and b. I would like to have the following test suites:
Unit test 1: Running a and validating the results
Unit test 2: Running b and validating the results
Nightly integration test: Running a and b, validating their results, then running c and validating its results, without re-running a and b, but re-using the outputs
Running each component takes some time, so the obvious solution of "just run everything every time" is not practical.
The code is in Scala, but I don't care which test framework I use - ScalaTest, specs2, even TestNG, all are fine, though I would prefer a Scala-ish solution. Thanks!
"Reusing the ouputs" means your unit tests would have produced some artifact, which the nightly integration test will consume. This is probably a bad practice that will end up hurting you more often than not. Unless those tests truly take a large amount of time, i'd repeat them, or find better ways to verify their output (DRYer tests, don't test the same everywhere)

selenium tests failing randomly when run in parallel - pytest-django

When I run my selenium tests (django StaticLiveServerTestCase tests using selenium webdriver), I get random failures when running my tests in parallel using pytest-xdist.
Sometimes my full test suite will pass, and other times it will not.
There are two tests in my test suite that seem to fail most often. All of my tests load data from a fixture, but these two that fail create new objects to test specific edge cases. After they create the objects, I have my logged-in client make a GET request to the URL for the page under test.
Failure modes:
1) The objects that were created during my two tests sometimes will not show up, and I'll get a NoSuchElementException.
2) The object will show up, but the values will be incorrect (they will render as n/a instead of as a number that I assign at object creation).
I'm new to parallelizing my test builds. My debugging has been fairly rudimentary so far. Any help would be appreciated, whether that be through debugging techniques or otherwise!
It seems like this has something to do with the database transaction I use to manipulate the state of the data before accessing the app with the webdriver: it does not finish before the webdriver connection is made, so the webdriver reads the old state of the database.
I just need to figure out how to make sure the old connection has finished first.
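Not a verified fix, just the kind of thing I'm planning to try: create the edge-case objects in an explicit atomic block, confirm they really are committed, and only then make the webdriver request (MyModel, the URL and self.selenium are placeholders for my real code):

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.db import connection, transaction

class EdgeCaseTests(StaticLiveServerTestCase):
    # self.selenium is assumed to be a webdriver created in setUpClass,
    # as in the Django docs; MyModel and the URL are placeholders.

    def test_edge_case(self):
        with transaction.atomic():
            obj = MyModel.objects.create(value=42)          # placeholder model
        connection.close()                                  # next query gets a fresh connection
        assert MyModel.objects.filter(pk=obj.pk).exists()   # the row really is committed
        self.selenium.get(self.live_server_url + f"/items/{obj.pk}/")
        assert "42" in self.selenium.page_source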

CC.Net Build Server -- how to find the NUnit test that does not complete at all?

I have a test suite of ~ 1500 tests and they generally run and finish within 'reasonable time'.
Recently, however, I've changed parts of the code to use threads -- and now my builds fail from time to time by simply timing out. I imagine that a thread refuses to die and the build waits until reaching the maximum build time.
My problem is how to detect which test is causing the problem?
Can I activate some logging that shows me that a test has started/finished? It can of course be done by inserting code in every single test method, or just in the fixtures, but that is A LOT of work that I'd rather avoid.
I'd suggest upgrading to NUnit 2.5 and decorating your tests with the Timeout attribute, specifying the maximum per-test run time in milliseconds. For example, you can put this in your AssemblyInfo.cs:
[assembly: NUnit.Framework.Timeout(100)]
NUnit will now launch each test in a separate thread and cancel it if it exceeds its time slot. However, this might be costly, so it's probably better to identify long-running tests and then remove the assembly-level attribute in favor of test-fixture-level time slots. You can also override this on individual tests, assigning them more time to run.
This way you move the timeout/hang detection from CruiseControl.Net to NUnit and get information in the report about the tests that did not complete properly. Unfortunately, there's no way for CC.Net to get this information when it has to kill the process because of a timeout.