@BeforeScenario / @AfterScenario for a specific Scenario in a Test Story by using Given - jbehave

I am a newbie to the JBehave and Hive frameworks.
While exploring Q&A repositories, I happened to see the following phrase in an accepted answer to a question about
writing a JBehave story
That's what I've seen: "...and the data object should be set up/cleared with a @BeforeScenario/@AfterScenario method."
At present I am in the process of writing test stories; I have not yet gotten further into the steps.
From the JBehave product website, I took the following sample test story. I have a question concerning the phrase I quoted above from the Stack Overflow Q&A.
A story is a collection of scenarios
Narrative:
In order to communicate effectively to the business some functionality
As a development team
I want to use Behaviour-Driven Development
Lifecycle:
Before:
Given a step that is executed before each scenario
After:
Outcome: ANY
Given a step that is executed after each scenario regardless of outcome
Outcome: SUCCESS
Given a step that is executed after each successful scenario
Outcome: FAILURE
Given a step that is executed after each failed scenario
Scenario: A scenario is a collection of executable steps of different type
Given step represents a precondition to an event
When step represents the occurrence of the event
Then step represents the outcome of the event
Scenario: Another scenario exploring different combination of events
Given a [precondition]
When a negative event occurs
Then the outcome should [be-captured]
Examples:
|precondition|be-captured|
|abc|be captured |
|xyz|not be captured|
I can see pretty much the same behaviour as @BeforeScenario/@AfterScenario here.
I do have a question here: can I write Lifecycle Given steps that run before and after one specific Scenario: in a test story?
And is the output of one Scenario: visible to the consecutive Scenario:'s in the test story?

There are a few differences between @BeforeScenario/@AfterScenario annotations and Lifecycle: Before/After steps:
A Java method annotated with @BeforeScenario or @AfterScenario is called for all executed scenarios in all stories, while a Lifecycle Before or After step is executed only for the scenarios of that one concrete story.
An @AfterScenario method is always executed, regardless of the result of the scenario. Lifecycle After steps can be invoked always (using the Outcome: ANY clause), only on failure (using the Outcome: FAILURE clause) or only on success (using the Outcome: SUCCESS clause).
You cannot pass any parameters from a scenario (story) to @BeforeScenario and @AfterScenario Java methods, while Lifecycle steps can have parameters, like any other ordinary step, for example:
Lifecycle:
Before:
Given a step that is executed before each scenario with some parameter = 2
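For illustration, here is a minimal sketch of what both mechanisms look like on the Java side; the class name and step wording are made up for the example, only the annotations are JBehave's:

import org.jbehave.core.annotations.AfterScenario;
import org.jbehave.core.annotations.AfterScenario.Outcome;
import org.jbehave.core.annotations.BeforeScenario;
import org.jbehave.core.annotations.Given;

public class LifecycleSteps {

    // Called before every scenario in every story that uses this steps class
    @BeforeScenario
    public void setUpDataObject() {
        // create/reset the data object here
    }

    // Called after every scenario, regardless of outcome
    @AfterScenario
    public void tearDownDataObject() {
        // clear the data object here
    }

    // Called only after failed scenarios
    @AfterScenario(uponOutcome = Outcome.FAILURE)
    public void captureDiagnostics() {
        // e.g. dump state for the failed scenario
    }

    // Backs the parameterised Lifecycle step above; the value 2
    // is passed in from the story text
    @Given("a step that is executed before each scenario with some parameter = $value")
    public void givenAStepWithParameter(int value) {
        // use the parameter supplied by the story
    }
}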

JBehave is a BDD (Behaviour-Driven Development) framework; BDD evolved out of Test-Driven Development (TDD), and the executable bindings behind the story text are called steps.
Answering the question: in a test story, starting a new Scenario: in the middle clears the state, since it is a new scenario. Given clause values apply only to the scenario they are written in; they are not implied or carried forward into consecutive scenarios. For a new scenario, only the Lifecycle prerequisites that have been set up are applied, before and after it respectively.
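As a sketch of why values do not leak between scenarios: a typical steps class keeps its working state in a field and resets it per scenario (the class, steps and state below are made up for illustration):

import org.jbehave.core.annotations.BeforeScenario;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class AccountSteps {

    private int balance; // per-scenario working state

    // Reset before every scenario, so a Given from one scenario
    // never leaks into the next one
    @BeforeScenario
    public void resetState() {
        balance = 0;
    }

    @Given("an account with balance $amount")
    public void givenAccountWithBalance(int amount) {
        balance = amount;
    }

    @When("$amount is deposited")
    public void whenAmountIsDeposited(int amount) {
        balance += amount;
    }

    @Then("the balance is $expected")
    public void thenBalanceIs(int expected) {
        if (balance != expected) {
            throw new AssertionError("expected " + expected + " but was " + balance);
        }
    }
}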

Related

Test methods are not executed when associating the same test method with multiple test cases in VSTest running xUnit tests

We have an ASP.NET application and UI tests written with xUnit. Test plans are in VSTS, and in some cases the same xUnit test method is associated with multiple test cases.
An Azure build pipeline executes these tests using VSTest. The problem is that when multiple test cases are associated with a single method, only one of them seems to be executed. E.g. the test cases in the screenshot below are associated with the same method, and only one is executed.
We tried both the 'Test assemblies' and 'Test plan' options in VSTest, but the results are the same.
As per the link below, it is not possible in xUnit to run the same test method multiple times in the same test session.
https://developercommunity.visualstudio.com/content/problem/995269/executing-multiple-test-cases-from-testplan-which.html?childToView=995554#comment-995554
Some solutions I can think of are:
Creating dummy test methods for all test cases and maintaining a one-to-one test-method-to-test-case mapping, where one method has the testing logic while the other methods just assert true.
Creating multiple test methods, where only one method contains the implementation and the other methods just call the test method which contains the implementation, as sketched below.
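The second workaround is essentially a set of thin wrappers delegating to one real test. A minimal sketch of the pattern, shown here in Java/JUnit syntax with made-up names (the shape carries over directly to C#/xUnit):

import org.junit.jupiter.api.Test;

public class CheckoutTests {

    // The only method that contains the real test logic
    @Test
    public void checkoutHappyPath() {
        // ... arrange / act / assert ...
    }

    // Thin wrappers, one per associated test case, so that each
    // test case maps to its own uniquely named method
    @Test
    public void checkoutHappyPath_testCaseA() {
        checkoutHappyPath();
    }

    @Test
    public void checkoutHappyPath_testCaseB() {
        checkoutHappyPath();
    }
}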
Please suggest if there is any better solution to the problem.
Thanks in advance!
As we know:
By default, each test class is a unique test collection. Tests within
the same test class will not run in parallel against each other.
So, the response we got from the Azure DevOps Developer Community forum and xUnit is that xUnit does not allow running one test multiple times in the same session.
I personally think that the two workarounds you proposed are correct; you can use either of the two methods to solve this problem. You are already on the right track.
Hope this helps.

Azure DevOps regression test case management

We have a very large application with nearly 2K test cases for regression. Our process is multiple sprints of work towards a single release, so we use a dedicated regression test plan.
My question is how to manage regression runs. Right now, we clone the master regression suite or the prior regression suite. This allows us to preserve the previous regression results, but this method creates new unique test cases, which don't keep their associated bugs.
If we reset all the tests in the current suite, I know the previous runs can be seen at the test case level. However, I can't figure out how to call up historical aggregate results for a previous run.
How should DevOps be used for managing repeat test runs?
To repeat a test, we can insert parameters into the test steps:
Create a parameter by typing a name preceded by "@" in the actions and expected results of test steps
For example, a step action might read: Enter @username and @password, then choose Sign in; the values come from the parameter table attached to the test case.
You can check the document Repeat a test with different data for some more details.
For the historical aggregate results, there is a user voice request about it on the Developer Community.
Hope this helps.

building audit trail functionality

The following is a use case in a workflow system.
A work order enters the system. The work order has a target which goes through different workflow states before the work order completes.
Say a work order for a target vehicle comes into the system. The workflow for this work order involves 2 tasks, say:
a) wash vehicle
b) inspect vehicle
Say the "wash vehicle" workflow task changes a vehicle attribute from "not washed" to "washed", and the "inspect vehicle" workflow task changes a vehicle attribute from "not inspected" to "inspection done".
If the user pulls work order data, they will always see the latest vehicle data (in this example, assuming both workflow tasks are completed, the user will see the values "washed" and "inspection done"). However, when the user pulls ONLY the "wash vehicle" task's data, they will see just "washed": though the second task was done, workflow task 1 sees only what it modified. Getting data for workflow task 2 will show both "washed" and "inspection done".
This involves milestoning (an audit trail) of data. One approach is shown in the image below: when a workflow task modifies data, it updates the version number and modified_ts, and keeps that version number in its own data row (via a JOIN table as depicted below). Basically this is nothing but maintaining a reference to a history record for the workflow task's data, so that when pulling workflow task data we know which history record to pull back. (Please ignore parent_id and the other notes/noise in the picture below; they are not relevant to this question.)
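As a rough sketch of that version-reference approach (the table and column names here are hypothetical, and transaction demarcation is elided), the write path when a task completes might look like this in plain JDBC against PostgreSQL:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class VehicleAuditDao {

    // Applies a workflow task's change, snapshots the new version into
    // a history table, and links the task to that version via a JOIN table.
    public void applyTaskChange(Connection con, long vehicleId, long taskId,
                                String newStatus) throws SQLException {
        long newVersion;
        // 1. Update the live row, bump the version, record the timestamp
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE vehicle SET wash_status = ?, version = version + 1, " +
                "modified_ts = now() WHERE id = ? RETURNING version")) {
            ps.setString(1, newStatus);
            ps.setLong(2, vehicleId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                newVersion = rs.getLong(1);
            }
        }
        // 2. Snapshot the new state into the history table
        //    (assumes vehicle_history mirrors vehicle's columns)
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO vehicle_history SELECT * FROM vehicle WHERE id = ?")) {
            ps.setLong(1, vehicleId);
            ps.executeUpdate();
        }
        // 3. Record which version this task produced (the JOIN table)
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO task_vehicle_version (task_id, vehicle_id, version) " +
                "VALUES (?, ?, ?)")) {
            ps.setLong(1, taskId);
            ps.setLong(2, vehicleId);
            ps.setLong(3, newVersion);
            ps.executeUpdate();
        }
    }
}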
I am thinking event sourcing could be another alternative design. However, I don't want to apply event sourcing (or any other similar solution) as a wholesale solution, but only for this particular use case (affecting only the 3 or so tables where the audit trail matters). I am trying to evaluate whether CQRS/event sourcing is a right fit as a partial solution (again, limited only to the 3-4 tables which need to preserve history/audit trail data), or whether ES/CQRS would be overkill. Any other thoughts?
P.S. Though this isn't specific to Scala, Scala is the platform we are using, hence the tag, to see if there are language-specific solutions that can help. Tagging Akka to find out whether ES/CQRS via Akka Persistence is an option or not. PostgreSQL is the DB, and DB triggers are not a solution I am looking for.

Is there any test framework for Drools mutation testing

I'm looking for a test framework which will perform mutation testing for code written in Drools, i.e. it should check whether there are tests that fail when one of the rules is removed from the KnowledgeBase. This is needed for confidence that every rule is covered by tests.
Try droolsassert. You can specify a 'snapshot' of the rules you expect to be triggered for a predefined scenario.
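As a rough sketch of what that looks like, based on the droolsassert README (the annotation and method names below should be double-checked against the library's own documentation and examples):

import static org.junit.Assert.assertEquals;

import java.util.concurrent.atomic.AtomicInteger;

import org.droolsassert.DroolsAssert;
import org.droolsassert.DroolsSession;
import org.droolsassert.TestRules;
import org.junit.Rule;
import org.junit.Test;

@DroolsSession("classpath:/org/droolsassert/rules.drl")
public class MyRulesTest {

    @Rule
    public DroolsAssert drools = new DroolsAssert();

    // The test fails if rules other than the listed ones fire, so removing
    // or renaming a rule in the .drl makes the corresponding test fail
    @Test
    @TestRules(expected = "atomic int rule")
    public void testAtomicIntRule() {
        drools.insertAndFire(new AtomicInteger());
        assertEquals(1, drools.getObject(AtomicInteger.class).get());
    }
}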

How to make sequential HTTP GET calls from Locust

In a Locust load test environment, tasks are defined and are called randomly.
But if I want a task to be performed just after a specific task, how do I do it?
For example: after every call to URL 'X', I want URL 'Y' to be called, based on the response of 'X'.
In my experience, I have found that it's better to model Locust tasks as completely independent of each other, with each of them covering a user scenario or behaviour (e.g. a customer logs in, searches for a book and adds it to the cart). This is mostly because that's a closer simulation of real user behaviour.
Have you tried just having the multiple requests in the same task, with if/else branching based on your responses? This slide from Carl Byström's talk follows said approach.
You just have to make the GETs or POSTs sequentially. When you define your task, do something like this:
from locust import TaskSet, task

class UserBehavior(TaskSet):
    @task(10)
    def my_task(self):
        # the two requests run sequentially within this one task
        self.client.get('/X')
        self.client.get('/Y')
There's also an option to create a custom task set inherited from the TaskSequence class.
You should then add seq_task decorators to all the task set's methods to run its tasks sequentially.
https://docs.locust.io/en/latest/writing-a-locustfile.html#tasksequence-class