End-to-end tests in PyTest - pytest

I use PyTest to write my unit tests and I absolutely love it. Recently a need for end-to-end/acceptance tests has come up, and since I've had bad experiences with acceptance-test frameworks, I decided to do some research into whether it's possible to write end-to-end tests in PyTest.
I won't go into too much detail about the system under test, but what the application does is receive 3 messages from a customer (in JSON format), sprinkle some business-layer magic on top of them, and then output 7 messages back to the customer. Here are a couple of questions about structure and test design:
The setup part should create the 3 JSON messages and send them to the system under test. I'm not sure fixtures are the proper way to handle this, but to me a fixture is a way to return an object with a state, so I'd assume my setup is the same thing, just on a bigger scope. Let's say I have a fixture named setup (module scope) that performs the multiple actions needed for the test to work (creating the 3 JSON messages and dispatching them). My instinct tells me I shouldn't have more than one setup fixture per test file/test class; however, I'm not sure how many tests I should have.

I could make it more 'unit'-like and have 7 tests, each consuming one message and verifying that its data is correct. Or, since 3 messages produce 7 messages and there is a direct connection between the setup and the results, I could use a single test that verifies all 7 messages. That makes the test method more complex, because asserting separate values from the returned JSON one by one is probably a bad idea: if the first message fails, I won't know whether the 6 remaining messages are OK or not (and it's of course easier to know what went wrong when you can see the whole picture). So for a single test to work properly I'd have to write a method that compares all 7 messages with the expected results and then raises a single assert with information about which of the 7 messages failed and why. So while verifying 7 messages feels right in a testing context, it's more complex and doesn't follow 'test a single thing and keep it simple'.
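Concretely, the single-test option would look something like the sketch below; collect_output_messages() and EXPECTED_MESSAGES are hypothetical stand-ins for however the 7 output messages are actually read back and for the data I expect:

def assert_messages_equal(actual_messages, expected_messages):
    # Compare every message and fail once, listing all mismatches.
    errors = []
    for index, (actual, expected) in enumerate(zip(actual_messages, expected_messages)):
        if actual != expected:
            errors.append(f"message {index}: expected {expected!r}, got {actual!r}")
    assert not errors, "mismatched output messages:\n" + "\n".join(errors)

def test_all_output_messages(setup):
    actual = collect_output_messages()                 # hypothetical helper for the system under test
    assert_messages_equal(actual, EXPECTED_MESSAGES)   # EXPECTED_MESSAGES is assumed test data

Plugins such as pytest-check take the same 'soft assert' approach, if maintaining a helper like this by hand feels like too much.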
The setup creates an entity called random_test_entity (along with many other entities), and the test needs that information for its assertions. So the setup fixture can either return a dict with all the values I'll need later in the test, or I can create another fixture that returns a dict of values that both the setup fixture and the test consume. The problem here is that I need to share data and state between my fixtures and the test, and because I have no smart way of doing that, my fixture ends up returning data that isn't strictly connected to the setup, which feels strange. A fixture returning a bag of values feels strange to me, but so does splitting the setup fixture into multiple fixtures just so I can share data.
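For example, the second option would look roughly like this; build_messages() and dispatch() are placeholders for the real message creation and sending. Since fixtures can depend on other fixtures, the shared values are generated once per module and both the setup and the test see the same dict:

import uuid
import pytest

@pytest.fixture(scope="module")
def test_context():
    # Values generated once and shared by the setup fixture and the tests.
    return {"random_test_entity": f"entity-{uuid.uuid4()}"}

@pytest.fixture(scope="module")
def setup(test_context):
    messages = build_messages(test_context["random_test_entity"])  # hypothetical helper
    dispatch(messages)                                              # hypothetical helper

def test_entity_is_reported(setup, test_context):
    # The test sees the same dict the setup used, so it can assert on random_test_entity.
    entity = test_context["random_test_entity"]
    assert entity  # replace with real assertions against the output messages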
I'm using the pytest git repository as my bible on how to write unit tests, and I've learned a lot about test design from it. Is there any source I can use to learn how to properly write end-to-end tests?
Thanks everyone!

Related

How to have Playwright workers execute separate logical paths in an NUnit test?

I have a Playwright test which I'm running via the following command -
dotnet test -- NUnit.NumberOfTestWorkers=2
From what I can gather, this will execute the same test in parallel with 2 workers. I'm curious if there's any way to have each worker go down a separate logical path, perhaps depending upon a worker id or something similar? Eg:
if (workerId == 1)
    // do something
else if (workerId == 2)
    // do something else
What would be the best way to do this?
As to why I want this: I have a Blazor Server app which is a chat room, and I want to test the text updating from separate users (which would be represented by different worker ids, for example). I'd also like the solution to be scalable, e.g. I could enter 5000 or so workers to test how the chat room handles a large load.
You appear to have misunderstood what the NumberOfTestWorkers setting does. It simply tells NUnit how many separate test workers to set up. It does not have any impact on how NUnit allocates tests among its workers when running in parallel, and it does not cause an individual test to run more than once.
In general, the kind of load testing you are trying to do isn't directly supported by NUnit. You would have to build something of your own, possibly using NUnit or try a framework intended for that kind of testing.
A long time ago, there was something called pnunit, but I don't believe it is kept up to date any longer. See https://docs.plasticscm.com/technical-articles/pnunit-parallel-nunit

Protractor-Cucumber order of tests execution

I'm writing automated tests for my application using the Protractor-Cucumber framework.
I have several feature files with multiple scenarios each and I want to manage the order of their execution using Cucumber tags.
Obviously, all the scenarios fall into the "FullRegression" category, but I also want to mark some of them with the "SmokeTest" tag to be run daily.
The problem is that those "Smoke" scenarios are scattered all over the features and they need to be executed in particular order to work properly.
For example, I want to run scenarios 2 and 3 from Feature2, then run scenario 1 and 2 from Feature1 and then run scenario 5 from Feature3.
Is it possible to do that using Cucumber tags? I've tried it, but it didn't work as I expected. The only other idea I have is to create a special "SmokeTest.feature" file, but then I would need to repeat a lot of scenarios inside it.
Appreciate any help.
Cucumber is designed so that all scenarios are independent of each other and cannot be connected together. Each scenario starts from scratch, clearing out the session, emptying the database, etc. This is standard practice for all the major test frameworks (idempotence).
So there is no easy way to do what you want, and more importantly doing what you want has no meaning.
Now some people take great pains to work around this (particularly some Cucumber users), so maybe that's what has happened with your test suite, but again, this is a really bad thing to do.
If you want to combine scenarios you should write new scenarios that use the steps of the scenarios you want to combine e.g.
Scenario: Foo
  When I foo

Scenario: Bar
  When I bar

# This is the one you would write
Scenario: Foo bar
  When I foo
  And I bar

selenium tests failing randomly when run in parallel - pytest-django

When I run my selenium tests (django StaticLiveServerTestCase tests using selenium webdriver), I get random failures when running my tests in parallel using pytest-xdist.
Sometimes my full test suite will pass, and other times it will not.
There are two tests in my test suite that seem to fail most often. All of my tests load data from a fixture, but these two that fail also create new objects to test specific edge cases. After they create the objects, I have my logged-in client make a GET request to the URL of the page under test.
Failure modes:
1) The objects that were created during my two tests sometimes will not show up, and I'll get a NoSuchElementException.
2) The object will show up, but the values will be incorrect (they will render as n/a instead of as a number that I assign at object creation).
I'm new to parallelizing my test builds. My debugging has been fairly rudimentary so far. Any help would be appreciated, whether that be through debugging techniques or otherwise!
It seems like this has something to do with the database transaction I use to manipulate the state of the data before accessing the app with the webdriver: it isn't finishing before the webdriver connection is made, so the webdriver reads the old state of the database.
I just need to figure out how to make sure that transaction has finished first.
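If the objects are being created inside a test-managed transaction, one option in pytest-django is to opt into transactional database access so they are really committed before the browser reads them. A rough sketch in pytest-django style, assuming its live_server fixture and a selenium driver fixture; the model and URL are placeholders, and a class-based StaticLiveServerTestCase setup would need the equivalent adjustment:

import pytest

@pytest.mark.django_db(transaction=True)  # real commits, visible to the live-server thread
def test_edge_case(live_server, selenium):
    obj = EdgeCaseModel.objects.create(value=42)          # placeholder model
    selenium.get(f"{live_server.url}/page-under-test/")   # placeholder URL
    # ... assert the rendered page shows obj ...

An explicit WebDriverWait around the element lookups can also help distinguish "the element never appeared" from "the element appeared late", but it won't fix stale data on its own.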

Dynamic test cases

We are using NUnit to run our integration tests. One of the tests should always do the same thing, but take different input parameters. Unfortunately, we cannot use the [TestCase] attribute, because our test cases are stored in external storage. We have dynamic test cases which can be added, removed, or disabled (not removed) by our QA engineers. The QA people do not have the ability to add [TestCase] attributes to our C# code; all they can do is add cases to the storage.
My goal is to read the test cases from the storage into memory, run the test with all enabled test cases, and report whether a test case failed. I cannot use a "foreach" statement inside a single test, because if test case #1 fails, the rest of the test cases will not run at all. We already have a build server (CruiseControl.NET) where the generated NUnit reports are shown, so I would like to continue using NUnit.
Could you point me to a way to achieve my goal?
Thank you.
You can use [TestCaseSource("PropertyName")] which specifies a property (or method etc) to load data from.
For example, I have a test case in Noda Time which uses all the BCL time zones - and that could change over time, of course (and is different on Mono), without me changing the code at all.
Just make your property/member load the test data into a collection, and you're away.
(I happen to have always used properties, but it sounds like it should work fine with methods too.)

"Inverted" Timeout functionality in NUnit tests

I'm looking for a way to write a unit test using NUnit so that if the current test takes more than X milliseconds to complete, it terminates and is reported as skipped/successful, but not failed. This would basically be the opposite of the NUnit Timeout attribute.
A bit of context for the problem: I have some unit tests that invoke remote servers and check their responses. If there is a network issue I don't want the test to fail; I only want the test to fail if it was able to get a response and that response was incorrect. At the same time, if a response doesn't arrive, I'd like to move on and skip the test.
(I realize that this approach might result in some errors not being reported; however, in my situation I'm looking for no false negatives to be reported, e.g. not having a test that succeeds once and fails another time based on network connectivity, which I don't intend to test.)
Wouldn't it be better to mock the invocation of the remote servers?
Using mocking, you will be able to isolate your unit from its dependencies and precisely test the unit itself, how it deals with responses and so on, without worrying about the servers at all.
Indeed, you can use mocking to purposely create incorrect responses to see how your unit deals with them, check for any expected exceptions and so on.
Have a look at Moq.