Parallel and sequential execution of test cases in a single test suite in a ReadyAPI project - ready-api

I have a scenario in which I have 3 test cases (Warm Up, Store Entities, Fetch Entities) in 1 test suite (Server warm up request). First I have to run the Warm Up request, and then run Store Entities and Fetch Entities in parallel.
Please help and let me know how I can do it.
Please see the attached image for reference.
Thank you in advance for your help.

In your test suite:
Disable the Warm Up test case.
Set the test suite test run mode to parallel.
Create a Test Suite setup script that contains:
// Run the Warm Up test case synchronously before the parallel test cases start
testSuite.getTestCaseByName("Warm Up").run((com.eviware.soapui.support.types.StringToObjectMap) context, false)
There is some additional detail on my blog.

Related

How to have Playwright workers execute separate logical paths in an NUnit test?

I have a Playwright test which I'm running via the following command:
dotnet test -- NUnit.NumberOfTestWorkers=2
From what I can gather, this will execute the same test in parallel with 2 workers. I'm curious if there's any way to have each worker go down a separate logical path, perhaps depending on a worker id or something similar, e.g.:
if (workerId == 1)
    // do something
else if (workerId == 2)
    // do something else
What would be the best way to do this?
As to why I want this: I have a Blazor Server app which is a chat room, and I want to test the text updating from separate users (which would be represented by different worker ids, for example). I'd also like the solution to be scalable, e.g. I could run 5000 or so workers to test how the chat room handles a large load.
You appear to have misunderstood what the NumberOfTestWorkers setting does. It simply tells NUnit how many separate test workers to set up. It does not have any impact on how NUnit allocates tests among its workers when running in parallel, and it does not cause an individual test to run more than once.
In general, the kind of load testing you are trying to do isn't directly supported by NUnit. You would have to build something of your own, possibly using NUnit, or try a framework intended for that kind of testing.
A long time ago, there was something called pnunit, but I don't believe it is kept up to date any longer. See https://docs.plasticscm.com/technical-articles/pnunit-parallel-nunit

Azure DevOps - Planning of manual testing in Test Plans - how to define planned execution date for each Test case

At our project we use Azure DevOps Test Plans for manual testing. We do not use pipelines. We have one test plan per iteration (approximately one month of testing per test plan, although e.g. SIT or UAT will take longer). I would like to see when each test case (or test suite) is going to be tested, but there is no attribute for this.
I would also like to have reporting based on that (how many test cases should have been run by today and how many were actually run).
Can anyone help with how to approach that?
Thanks
As a simple way, you can use iterations to plan target periods for your test activities. If you want to have a custom attribute, you can edit your process template (see "Customize a project using an inherited process"):
Select Add new field and, as an example, assign the existing field to the Test Case work item type.
You can then add the field to both the test suite and the test case.

Understanding JobLauncherTestUtils

I am currently getting to understand JobLauncherTestUtils. I have read about it from multiple resources, such as the following:
https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/test/JobLauncherTestUtils.html
https://livebook.manning.com/concept/spring/joblaunchertestutils
I wanted to understand: when we call jobLauncherTestUtils.launchJob(), what does it mean by end-to-end testing of the job? Does it actually launch the job? If so, then what's the point of testing the job without mocks? If not, then how does it actually test a job?
I wanted to understand when we call jobLauncherTestUtils.launchJob(), what does it mean by end-to-end testing of the job.
End-to-End testing means testing the job as a black box based on the specification of its input and output. For example, let's assume your batch job is expected to read data from a database table and write it to a flat file.
An end-to-end test would:
Populate a test database with some sample records
Run your job
Assert that the output file contains the expected records
Without individually testing the inner steps of this job, you are testing its functionality from end (input) to end (output).
JobLauncherTestUtils is a utility class that allows you to run an entire job like this. It also allows you to test a single step from a job in isolation if you want.
Does it actually launch the job?
Yes, the job will be run as if it were run outside a test. JobLauncherTestUtils is just a utility class that uses a regular JobLauncher behind the scenes. You can run your job in unit tests without this utility class.
If so, then what's the point of testing the job without mocks?
The point of testing a job without mocks is to ensure the job works as expected with the real resources it depends on or interacts with. You can always mock a database or a message broker in your tests, but the mocking code could be buggy and might not reflect the real behaviour of a database or a message broker.

End to end tests in PyTest

I use pytest to write my unit tests and I absolutely love it. Recently a need for end-to-end/acceptance tests has shown up, and since I had a bad experience with acceptance test frameworks, I decided to do some research into whether it's possible to write end-to-end tests in pytest.
I won't get into too much detail about the system under test, but what the application does is receive 3 messages from a customer (in JSON format), sprinkle some business-layer magic on top of them, and then output 7 messages back to the customer. Here are a couple of questions about structure and test design:
The setup part should create the 3 JSON messages and send them to the system under test. I'm not sure fixtures are the proper way to handle this, but a fixture for me is a way to return an object with a state, so I would assume my setup is the same thing, just on a bigger scope. Let's say I have a fixture named setup (module scope) that does the multiple actions needed for the test to work (creating the 3 JSON messages and dispatching them). My instinct tells me I shouldn't have more than one setup fixture per test file/test class; however, I'm not sure how many tests I should have. I can make it more 'unit'-like and have 7 tests, each consuming one message and verifying that its data is correct. Or, since 3 messages output 7 messages and there is a direct connection between the setup and the results, I could use a single test that verifies all 7 messages. That makes my test method more complex, because asserting separate values from the returned JSON is probably a bad idea: if the first message fails, I will not be able to know whether the 6 remaining messages are OK (it's of course easier to know what went wrong when you see the whole picture). So for a single test to work properly, I would have to write a method that compares all 7 messages with the expected results and then raises a single assert with information about which of the 7 messages failed and why (see the sketch below). So while verifying 7 messages feels right in a testing context, it's more complex and does not follow 'test a single thing and keep it simple'.
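To make the trade-off concrete, here is a minimal sketch of the single-test option. The send_messages/collect_output helpers and the EXPECTED data are placeholders, not real APIs: the test compares all 7 messages but collects every mismatch before raising a single assert, so one bad message does not hide the state of the others.

import json

import pytest

# Hypothetical helpers for the system under test; the real names will differ.
from chat_client import send_messages, collect_output

# Placeholder expectations for the 7 output messages.
EXPECTED = [{"type": f"reply-{i}"} for i in range(7)]


@pytest.fixture(scope="module")
def outputs():
    # Module-scoped setup: build and dispatch the 3 input messages once,
    # then gather the 7 output messages for every test in the module.
    inputs = [json.dumps({"seq": i}) for i in range(3)]
    send_messages(inputs)
    return collect_output(expected_count=7)


def test_all_output_messages(outputs):
    # Compare every message and report all mismatches in a single assert,
    # so a failure in message 1 still shows the state of messages 2-7.
    mismatches = [
        f"message {i}: expected {expected}, got {actual}"
        for i, (expected, actual) in enumerate(zip(EXPECTED, outputs))
        if expected != actual
    ]
    assert not mismatches, "\n".join(mismatches)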
The setup creates an entity called random_test_entity (along with many other entities). The test needs that information for asserting. So the setup fixture can either return a dict with all the values I will need later on in my test, or I can create another fixture that returns a dict with values that both the setup fixture and the test consume. The problem here is that I need to share data and state between my fixtures and the test, and because I have no smart way of doing it, my fixture returns data that is not strictly connected to the setup, which feels strange. A fixture returning a list of values feels strange to me, but so does splitting the setup fixture into multiple fixtures just so I can share data (see the second sketch below).
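A rough sketch of the data-sharing option, using the same kind of hypothetical helpers as above: a separate module-scoped fixture holds the shared values, and both the setup fixture and the test consume it, so nothing has to be passed through global state.

import pytest

# Hypothetical helpers for the system under test, as in the previous sketch.
from chat_client import build_messages, send_messages, collect_output


@pytest.fixture(scope="module")
def shared_data():
    # Values that both the setup fixture and the tests need to see,
    # e.g. the randomly generated entity from the question.
    return {"random_test_entity": "entity-1234"}


@pytest.fixture(scope="module")
def setup(shared_data):
    # The setup consumes the shared values when building the 3 messages,
    # so the tests can later assert against exactly the same values.
    inputs = build_messages(entity=shared_data["random_test_entity"])
    send_messages(inputs)
    return collect_output(expected_count=7)


def test_entity_is_echoed_back(setup, shared_data):
    # Both the outputs (from the setup fixture) and the shared values
    # are available here without any extra plumbing.
    assert any(shared_data["random_test_entity"] in str(msg) for msg in setup)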
I'm using the pytest git repository as my bible on how to write unit tests. I learned a lot about test design from it. Is there any source I can use to learn how to properly write end-to-end tests?
Thanks everyone!

selenium tests failing randomly when run in parallel - pytest-django

When I run my selenium tests (django StaticLiveServerTestCase tests using selenium webdriver), I get random failures when running my tests in parallel using pytest-xdist.
Sometimes my full test suite will pass, and other times it will not.
There are two tests in my test suite that seem to fail most often. All of my tests load data from a fixture, but these two that fail create new objects to test specific edge cases. After they create the objects, I have my logged-in client make a GET request to the URL for the page under test.
Failure modes:
1) The objects that were created during my two tests sometimes will not show up, and I'll get a NoSuchElementException.
2) The object will show up, but the values will be incorrect (they will render as n/a instead of as a number that I assign at object creation).
I'm new to parallelizing my test builds. My debugging has been fairly rudimentary so far. Any help would be appreciated, whether that be through debugging techniques or otherwise!
It seems like this has something to do with the database transaction I use to manipulate the state of the data before accessing the app with the webdriver: it does not finish before the webdriver connection is made, so the webdriver reads the old state of the database.
I just need to figure out how to make sure the old connection is finished.
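A possible workaround sketch while debugging the root cause (it does not fix the transaction ordering itself, and the URL, element id, and argument names below are placeholders): instead of reading the page immediately, poll for the created objects with an explicit wait, so the assertion tolerates the data becoming visible slightly late.

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def assert_object_rendered(driver, live_server_url):
    # Placeholder URL and element id for the page under test.
    driver.get(f"{live_server_url}/objects/")
    # Poll for up to 10 seconds instead of failing on the first read,
    # which tolerates the setup data becoming visible slightly late.
    row = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "object-row-1"))
    )
    # Also wait until the real value is rendered rather than the n/a placeholder.
    WebDriverWait(driver, 10).until(lambda d: row.text != "n/a")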