Protractor-Cucumber order of test execution

I'm writing automatic tests for my application using Protractor-Cucumber framework.
I have several feature files with multiple scenarios each and I want to manage the order of their execution using Cucumber tags.
Obviously, all the scenarios fall into the "FullRegression" category, but I also want to mark some of them with the "SmokeTest" tag to be run daily.
The problem is that those "Smoke" scenarios are scattered all over the features, and they need to be executed in a particular order to work properly.
For example, I want to run scenarios 2 and 3 from Feature2, then scenarios 1 and 2 from Feature1, and then scenario 5 from Feature3.
Is it possible to do that using Cucumber tags? I've tried, but it didn't work as I expected. The only other idea I have is to create a special "SmokeTest.feature" file, but then I would need to repeat a lot of scenarios inside it.
Appreciate any help.

Cucumber is designed so that all scenarios are independent of each other and cannot be chained together. Each scenario starts from scratch: clearing the session, emptying the database, and so on. This is standard practice in all the major test frameworks (each test is idempotent and self-contained).
So there is no easy way to do what you want, and more importantly, ordering scenarios has no meaning in this model.
Some people take great pains to work around this (particularly some Cucumber users), so maybe that's what has happened in your test suite, but again, it is a really bad thing to do.
If you want to combine scenarios you should write new scenarios that use the steps of the scenarios you want to combine e.g.
Scenario: Foo
  When I foo

Scenario: Bar
  When I bar

# This is the one you would write
Scenario: Foo bar
  When I foo
  And I bar
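
Tags are still the right tool for choosing which scenarios make up the smoke suite, just not for ordering them. As a sketch, with protractor-cucumber-framework you could tag your combined scenarios @SmokeTest and point the runner at that tag; the spec and step paths below are placeholders:

// protractor.conf.js (sketch; adjust paths to your project)
exports.config = {
  framework: 'custom',
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  specs: ['features/**/*.feature'],
  cucumberOpts: {
    require: ['steps/**/*.js'],
    tags: '@SmokeTest', // run only the scenarios tagged for the daily smoke run
  },
};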

Related

How to have playwright workers execute separate logical paths in an NUnit test?

I have a Playwright test which I'm running via the following command:
dotnet test -- NUnit.NumberOfTestWorkers=2
From what I can gather, this will execute the same test in parallel with 2 workers. I'm curious whether there's any way to have each worker go down a separate logical path, perhaps depending on a worker id or something similar, e.g.:
if (workerId == 1)
{
    // do something
}
else if (workerId == 2)
{
    // do something else
}
What would be the best way to do this?
As to why I want this: I have a Blazor Server app which is a chat room, and I want to test the text updating from separate users (which would be represented by different worker ids, for example). I'd also like the solution to be scalable, e.g. I could specify 5000 or so workers to test how the chat room holds up at scale.
You appear to have misunderstood what the NumberOfTestWorkers setting does. It simply tells NUnit how many separate test workers to set up. It does not have any impact on how NUnit allocates tests among its workers when running in parallel, and it does not cause an individual test to run more than once.
In general, the kind of load testing you are trying to do isn't directly supported by NUnit. You would have to build something of your own, possibly on top of NUnit (see the sketch below), or try a framework intended for that kind of testing.
A long time ago, there was something called pnunit, but I don't believe it is kept up to date any longer. See https://docs.plasticscm.com/technical-articles/pnunit-parallel-nunit
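If you only need a handful of simultaneous users, one workable pattern is to drive several browser contexts from a single NUnit test rather than leaning on the worker count. A minimal sketch, assuming a chat page with a message box and send button; the URL and selectors are placeholders, and at anything like 5000 users you would want a dedicated load-testing tool instead:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Playwright;
using NUnit.Framework;

[TestFixture]
public class ChatRoomTests
{
    private const string ChatUrl = "https://localhost:5001/chat"; // placeholder

    [TestCase(2)]
    [TestCase(10)]
    public async Task SimulatedUsersCanChat(int userCount)
    {
        using var playwright = await Playwright.CreateAsync();
        await using var browser = await playwright.Chromium.LaunchAsync();

        // One isolated browser context per simulated user; each task is the
        // "separate logical path" a worker would otherwise have taken.
        var users = Enumerable.Range(1, userCount).Select(async userId =>
        {
            var context = await browser.NewContextAsync();
            var page = await context.NewPageAsync();
            await page.GotoAsync(ChatUrl);
            await page.FillAsync("#message-input", $"hello from user {userId}"); // placeholder selector
            await page.ClickAsync("#send-button");                               // placeholder selector
        });

        await Task.WhenAll(users);
        // Assert on broadcast behaviour here, e.g. that every page saw all messages.
    }
}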

End-to-end tests in pytest

I use pytest to write my unit tests and I absolutely love it. Recently a need for end-to-end/acceptance tests has come up, and since I have had bad experiences with acceptance-test frameworks, I did some research into whether it's possible to write end-to-end tests in pytest.
I won't go into too much detail about the system under test, but what the application does is receive 3 messages from a customer (in JSON format), sprinkle some business-layer magic on top, and then output 7 messages back to the customer. Here are a couple of questions about structure and test design:
The setup part should create the 3 JSON messages and send them to the system under test. I'm not sure fixtures are the proper way to handle this, but to me a fixture is a way to return an object with a state, so I would assume my setup is the same thing on a bigger scope. Let's say I have a module-scoped fixture named setup that performs the actions the test needs (creating the 3 JSON messages and dispatching them). My instinct tells me I shouldn't have more than one setup fixture per test file/test class; what I'm not sure about is how many tests I should have.
I could make it more 'unit'-like and have 7 tests, each consuming one message and verifying that its data is correct. Or, since 3 input messages produce 7 output messages and there is a direct connection between the setup and the results, I could use a single test that verifies all 7 messages. This makes the test method more complex, because asserting the returned values one by one is probably a bad idea: if the first message fails, I won't know whether the 6 remaining messages are OK or not (and it's of course easier to know what went wrong when you can see the whole picture). So for a single test to work properly, I would have to write a method that compares all 7 messages with the expected results and then raises a single assert reporting which of the 7 messages failed and why. While verifying all 7 messages feels right in this testing context, it is more complex and does not follow 'test a single thing and keep it simple'.
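
For what it's worth, the compare-everything-then-assert-once approach can stay quite small. A sketch, where get_output_messages and EXPECTED are placeholders for your own plumbing:

def test_all_output_messages(setup):
    actual = get_output_messages()  # the 7 messages the system produced (placeholder)
    failures = []
    for i, (got, want) in enumerate(zip(actual, EXPECTED), start=1):
        if got != want:
            failures.append(f"message {i}: expected {want!r}, got {got!r}")
    # A single assert, but the failure output covers every mismatched message.
    assert not failures, "\n".join(failures)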
The setup creates an entity called random_test_entity (along with many other entities), and the test needs that information for its assertions. The setup fixture could return a dict with all the values I will need later in the test, or I could create another fixture that returns a dict which both the setup fixture and the test consume. The problem is that I need to share data and state between my fixtures and the test, and because I have no smart way of doing that, my fixture returns data that is not strictly connected to the setup, which feels strange. A fixture returning a list of values feels strange to me, but so does splitting the setup fixture into multiple fixtures just to share data.
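
One common way out is exactly your second option: a small context fixture that both the setup fixture and the tests depend on. A sketch, with placeholder factory functions:

import pytest

@pytest.fixture(scope="module")
def test_context():
    # Shared state: everything that both the setup and the assertions need.
    return {"random_test_entity": make_random_entity()}  # placeholder factory

@pytest.fixture(scope="module")
def setup(test_context):
    messages = build_input_messages(test_context)  # placeholder
    dispatch_to_system(messages)                   # placeholder
    return test_context                            # tests receive the same dict

def test_entity_is_echoed_back(setup):
    entity = setup["random_test_entity"]
    ...  # assert against the system's output using `entity`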
I'm using the pytest git repository as my bible on how to write unit tests, and I have learned a lot about test design from it. Is there any source I can use to learn how to properly write end-to-end tests?
Thanks everyone!

How do I structure my project folder in Eclipse for a Cucumber project with sprint-wise delivery

I am trying to create an automation framework using Cucumber, replicating a real-world scenario (sprint-wise delivery).
How do I structure my folders/source folders/packages in Eclipse? Below is the structure I am about to follow, but I am not quite convinced it is right.
I am trying to structure it in such a way that when I run the command
mvn test -Dcucumber.options="src\test\resources\sprint1\features"
it should run all the features under sprint1, and similarly for sprint2 and so on.
Any suggestions or inputs would be helpful.
P.S.: Since I am new to Cucumber, a detailed explanation of the folder structure for sprint-wise delivery would be much appreciated.
Thanks :)
I would not consider the file structure you are thinking of.
The reason is that after a while, it doesn't matter when a feature was added to the system. So organizing features based on time is a bad idea.
If you still need to be able to run the features for a specific sprint, consider using tags instead. That would allow you only to run the features connected to the sprint you are interested in.
I would not do that either, because after a while it doesn't matter in which sprint a piece of functionality was added. It should still pass all executions, even if it is 27 sprints old.
If this organization is bad, how should you do it instead?
This is a question where a lot of people have a lot of opinions and the debate can get very heated.
My take is that it is important to make sure the code is easy to use. By that I mean easy to navigate and understand for a new developer. If you like, think of it as usability, as in any other product.
Given this, I would organize the features after functional areas in different packages. A package for each area, one for viewing products, one for ordering products, one for paying etc.
I would also try to take a step further and organize the source code in a similar way.
But I would never organize using a temporal approach as you are thinking of.
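Concretely, an organization by functional area might look something like this (the area names are just examples):

src/test/resources/features/
    viewing_products/
        browse_catalog.feature
        product_search.feature
    ordering_products/
        add_to_cart.feature
        checkout.feature
    paying/
        card_payment.feature
src/test/java/
    ... step definitions organized into the same areas ...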
You should not organize your tests per sprint, because a particular sprint ends at a particular time. If you want to run some feature files together on a temporary basis (until the sprint is over), you can add tags at the top of the feature files.
For example:
You have following 2 feature files:
src/test/resources/sprint1/file1.feature
src/test/resources/sprint1/file2.feature
Just add "#sprint1" on top of each feature as shown below:
# 1. file1.feature
@sprint1
Feature: sprint1 : features : file1
  Scenario: Some scenario desc..
    Given ....
    When ....
    Then ....

# 2. file2.feature
@sprint1
Feature: sprint1 : features : file2
  Scenario: Some scenario desc..
    Given ....
    When ....
    Then ....
Now to run both files you need to execute the following command:
cucumber --tags @sprint1
This will run all the feature files that contain the @sprint1 tag. After the sprint is over, you can delete the tag from the feature files.
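
Since you are running through Maven rather than the cucumber command-line tool, the equivalent invocation would be along these lines (the exact property depends on your Cucumber-JVM version; newer releases use cucumber.filter.tags instead of cucumber.options):

mvn test -Dcucumber.options="--tags @sprint1"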

Getting the current Experiment instance at runtime

I'm running JUnit 4 with AnyLogic. In one of my tests, I need access to the Experiment running the test. Is there any clean way to access it at runtime? E.g., is there a static method along the lines of Experiment.getRunningExperiment()?
There isn't a static method that I know of (and, if there were, it might be complicated by multi-run experiments, which permit parallel execution; although perhaps not, since there is still a single Experiment, there would be thread-safety issues to consider).
However, you can use getEngine().getExperiment() from within a model. You probably need to explain more about your usage context. If you're using AnyLogic Pro and exporting the model to run standalone, then you should have access to the experiment instance anyway (as in the help "Running the model from outside without UI").
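As a sketch, from code inside the model itself (for example, a function on Main) the lookup is a one-liner; traceln is just for illustration, and the exact return type varies across AnyLogic versions, so none is declared here:

// inside model code, e.g. a function body on Main
traceln("Running experiment: " + getEngine().getExperiment());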
Are you trying to run JUnit tests from within an Experiment? If so, what's your general design? Obviously JUnit doesn't sit as well in that scenario, since it 'expects' to be instantiating and running the thing under test. For my automated tests (where I can't export the model standalone because I don't use AnyLogic Pro), I judged it easier to avoid JUnit (it's just a framework, after all) and implement the tests 'directly': my model components write outputs and, at the end of the run, the Experiment compares those outputs to pre-prepared expected ones and flags whether the test passed or failed. With AnyLogic Pro, you could still export standalone and use JUnit to run the 'already-a-test' Experiments (with the JUnit test checking the Experiment for a testPassed Boolean being set at the end, or whatever).
The fact that you want to get the running experiment suggests that you may be doing this while runs are executing. If so, could you explain a bit about your requirements?

Dynamic test cases

We are using NUnit to run our integration tests. One of the tests should always do the same thing but take different input parameters. Unfortunately, we cannot use the [TestCase] attribute, because our test cases are stored in external storage. We have dynamic test cases which can be added, removed, or disabled (not removed) by our QA engineers. The QA people do not have the ability to add [TestCase] attributes to our C# code; all they can do is add cases to the storage.
My goal is to read the test cases from the storage into memory, run the test with all enabled test cases, and report any test case that fails. I cannot use a "foreach" statement inside a single test, because if test case #1 fails, the rest of the test cases will not run at all. We already have a build server (CruiseControl.NET) where the generated NUnit reports are shown, so I would like to continue using NUnit.
Could you point to a way how can I achieve my goal?
Thank you.
You can use [TestCaseSource("PropertyName")], which specifies a property (or method etc.) to load data from.
For example, I have a test case in Noda Time which uses all the BCL time zones - and that could change over time, of course (and is different on Mono), without me changing the code at all.
Just make your property/member load the test data into a collection, and you're away.
(I happen to have always used properties, but it sounds like it should work fine with methods too.)
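
A sketch of how that might look for the storage-backed cases described above; LoadCasesFromStorage and SystemUnderTest are placeholders for your own code, and nameof assumes a reasonably recent C# compiler:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StorageDrivenTests
{
    // NUnit enumerates this property at discovery time, so every enabled
    // case from the storage becomes its own test with its own pass/fail
    // entry in the generated report.
    public static IEnumerable<TestCaseData> Cases
    {
        get
        {
            foreach (var c in LoadCasesFromStorage())       // placeholder
            {
                yield return new TestCaseData(c.Input)
                    .Returns(c.ExpectedOutput)
                    .SetName("StoredCase_" + c.Id);         // readable name in the report
            }
        }
    }

    [Test, TestCaseSource(nameof(Cases))]
    public string RunStoredCase(string input)
    {
        return SystemUnderTest.Process(input);              // placeholder
    }
}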