Need Help on Page Object Model Implementation - Protractor

I am in the process of implementing the Page Object Model and have one question about it, please see below:
I have created page files containing the locators and methods for each page, and I have a spec file in which I do the assertions by calling these methods. My question: for one page I have over 100 test cases, so should I put the assertions for all of them in a single file, or create 100 assertion files, one per test?
Please let me know the best way to manage this.
Regards,
Manan

I think it makes the most sense to group tests into files by functionality. It's hard to run only some tests from a file, so split out any groups of tests you think you might want to run independently. Are some of them suitable for a quick smoke test suite? Maybe those should be in a separate file.
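For example, Protractor's configuration supports named suites, which make it easy to run just a subset of spec files. A minimal sketch (the suite names and paths here are assumptions):

// conf.js
exports.config = {
  specs: ['spec/**/*-spec.js'],
  suites: {
    // a small, fast subset to run on every commit
    smoke: ['spec/smoke/*-spec.js'],
    // the complete regression set
    full: ['spec/**/*-spec.js'],
  },
};

You can then run a single suite with: protractor conf.js --suite smoke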

You shouldn't need to create a new file for every assertion or test case. I am confused by your question because, in my understanding, the assertion is part of the test case, and test and assertion are part of the same function (the assertion being the end goal of the test).
Regarding the Page Object Model: the important part of the pattern is keeping page/DOM detail separate from test flow (i.e. tests should possess no knowledge of the DOM, but instead rely on page objects to act on the actual pages).
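A minimal sketch of that separation in Protractor (the page, locators, and flow here are hypothetical):

// login-page.js - the page object owns all locators and page actions
const LoginPage = function () {
  this.username = element(by.id('username'));
  this.password = element(by.id('password'));
  this.submit = element(by.css('button[type="submit"]'));

  this.login = function (user, pass) {
    this.username.sendKeys(user);
    this.password.sendKeys(pass);
    return this.submit.click();
  };
};
module.exports = new LoginPage();

// login-spec.js - the spec knows nothing about the DOM
const loginPage = require('./login-page');

describe('login', function () {
  it('shows the dashboard after a valid login', function () {
    browser.get('/login');
    loginPage.login('alice', 'secret');
    expect(browser.getCurrentUrl()).toContain('/dashboard');
  });
});

If a locator changes, only login-page.js needs to be touched; every spec that uses it keeps working.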


React Testing Library - existence / assertion best practice

When testing for the existence of an element, is it recommended to always assert as in:
expect(await screen.findByTestId('spinner')).toBeVisible();
Or is it sufficient (recommended) to just wait for the element:
await screen.findByTestId('spinner');
Note: the spinner is added using React Hooks, which is why I am awaiting it.
I thought a previous version of RTL recommended not specifically asserting when not required, but I can't find any references to that now.
There are certain Testing Library queries, such as the getBy and findBy variants, that have an implicit "expect": they throw an error if the element is not found. So, as you say, you can either wrap the query in an expect or just run it on its own; if the test passes, it means the query didn't throw.
You can see the explicit assertion as a signal that you are checking something on that specific line. That way your tests have more clearly differentiated sections (Arrange, Act, Assert, or "render, do things, expect things") that look roughly the same in every test, no matter what they are expecting.
Testing Library tends to recommend best practices for choosing queries, since each query exists for a reason and some give you stronger checks than others (for example, queryByRole with a name option tests more than a simple queryByTestId does). This particular question, however, is a matter of style in your code; the most one can say is to go with the "always write an expect" option if you want all your tests to be more uniform.
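A small sketch showing both styles side by side (App and the test id are hypothetical, and toBeVisible comes from @testing-library/jest-dom):

import '@testing-library/jest-dom';
import { render, screen } from '@testing-library/react';
import App from './App'; // hypothetical component that shows a spinner while loading

test('shows a spinner while loading', async () => {
  render(<App />);

  // Style 1: bare query. findByTestId rejects (failing the test)
  // if the spinner never appears, so this alone is a valid check.
  await screen.findByTestId('spinner');

  // Style 2: explicit assertion. Same guarantee, but the Assert
  // step is visible at a glance.
  expect(await screen.findByTestId('spinner')).toBeVisible();
});

Both versions fail in the same situations; the second just makes the intent explicit.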

#karate How to pass parameters to a feature file in a Gatling simulation class?

Let's consider a scenario: we have to run a performance test for a "create account" API which takes an auth token as a header/path parameter and input data such as the user account name. To run the performance test for POST http://baseUrl/auth_param/create/input_data we have two feature files:
1. One feature file (e.g. generateAuth.feature) which generates the auth token.
2. A second feature file (createAccount.feature) which takes the auth token and the input data as parameters.
Here is my simulation class,
class <MyClass> extends Simulation {

  before {
    println("Simulation is about to start!")
  }

  val generateAuthTest = scenario("generateAuth").exec(karateFeature("classpath:path/generateAuth.feature"))
  val createAccountTest = scenario("test").exec(karateFeature("classpath:path/createAccount.feature"))

  setUp(
    createAccountTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minutes)

  after {
    println("Simulation is finished!")
  }
}
Here, can I read the auth token from generateAuth.feature, which is the input for createAccount.feature, so that I can pass it as a parameter?
Please suggest how to pass parameters to createAccount.feature when calling it via the karateFeature method.
Let me put a requirement here. Say we have feature files for CRUD operations on a particular piece of data. For a functional scenario, I create a new feature file and just use the CRUD files to test a SINGLE flow.
Now, for performance test cases on the individual operations, I see two ways:
1. Create four new performance-test feature files (one for each CRUD method), call the CRUD feature files from the respective test feature files, and finally call the test feature files from the respective Gatling simulation classes. (In this case I end up creating more test feature files as well as more simulation classes for performance, which I want to avoid.)
2. Call the CRUD files directly from the respective Gatling simulation classes and pass the required parameters to them. (In this case we only need to create four simulation classes and run them based on the operation: create, read, delete, and so on.)
I just want to know whether the second way is achievable in Karate, and if so, how.
Summary: I think it is achievable using a third (extra) feature file per individual use case, but I do not want to make an extra feature file for each case; I want to avoid the maintenance work and reuse the existing feature files from the functional tests for the performance tests.
Just use the normal Karate concepts such as karate-config.js
You can easily switch environments by setting the karate.env system property.
For example:
mvn test -DargLine="-Dkarate.env=e2e"
EDIT: After you edited your question, it is clear you have a SINGLE flow you want to test, so please use a SINGLE feature. I suggest you move the generateAuth call into the Background of that feature. Also refer to the docs on callSingle() for advanced options.
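For example, a minimal karate-config.js sketch (the feature path and the authToken variable name are assumptions about your project):

function fn() {
  var config = { baseUrl: 'http://baseUrl' };
  // callSingle() runs the auth feature exactly once per test run and
  // caches the result, so every feature (and every injected Gatling
  // user) reuses the same token instead of re-generating it
  var auth = karate.callSingle('classpath:path/generateAuth.feature', config);
  config.authToken = auth.authToken;
  return config;
}

Every feature, including createAccount.feature, can then read authToken as a normal variable.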
If you are expecting two feature files to magically share data, that is not possible, and it is not needed if you structure your tests correctly.
If you really, really need this, create a Java singleton and access it from each feature. I totally don't recommend this, though.
EDIT: From Karate 0.9.0 onwards, you can call a single scenario within a feature if it has a tag:
classpath:animals/cats/create.feature#sometagname

Protractor - how to reuse the same spec file for different tests

In my Protractor conf.js file, I'd like to re-use the same spec files multiple times; however, it seems that this is not possible.
Some background:
We are reading test cases from a JSON file, launching reports, then testing grid results and various DOM elements.
All reports have the same format. The primary differences lie in the report titles, data columns, actual data results, etc.
So in my conf.js file, ideally I'd like to re-use the same spec files multiple times - but my understanding is that I cannot do this.
For example, my spec array:
specs: [
  'spec/report1-spec.js',
  'spec/report-grid-details-spec.js',
  'spec/report2-spec.js',
  'spec/report-grid-details-spec.js',
  'spec/report3-spec.js',
  'spec/report-grid-details-spec.js',
]
I've read this post (http://ramt.in/how-to-run-identical-jasmine-specs-multiple-times-with-protractor/) where you can move your spec files into a node module, but 1) I don't want to move all spec files there, and 2) it doesn't work anyway when I move even one spec file into a module-export file.
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
Just wondering if anyone out there has found a solution to this need to re-use spec files multiple times in one conf.js configuration.
Thank you,
Bob
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
This would probably be the easiest way to approach the problem. That said, I like the idea of putting specs into modules - it is a plus for reusability overall.
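A sketch of the module approach under Jasmine (file and function names are assumptions):

// report-grid-details.js - the shared checks, exported as a function
module.exports = function gridDetailsSpecs(reportName) {
  describe(reportName + ': grid details', function () {
    it('renders the expected columns', function () {
      // ...grid assertions via your page objects
    });
  });
};

// report1-spec.js - each report spec pulls the shared checks in
const gridDetailsSpecs = require('./report-grid-details');

describe('report 1', function () {
  it('shows the correct title', function () {
    // ...report-specific assertions
  });
});
gridDetailsSpecs('report 1');

Because each call to gridDetailsSpecs() registers a distinct describe block, Jasmine sees three different suites rather than the same spec file three times.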
The thing is, Jasmine does not allow executing the same spec more than once in a single test run, and, from what I understand, there is no easy way to change that behavior.
One possible workaround is to completely restart Protractor and hence recreate the Jasmine testing environment, so that the next run of report-grid-details-spec.js happens in a fresh Jasmine environment. This is the approach the protractor-flake project uses to retry failing tests: it restarts Protractor through the command line, passing the failing specs as a comma-separated list to the specs argument.
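For reference, Protractor accepts that kind of override on the command line (the paths here are illustrative):

protractor conf.js --specs=spec/report1-spec.js,spec/report-grid-details-spec.js

so a small wrapper script that re-invokes Protractor once per report would be another way to run the shared spec several times in sequence.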

How to add JBehave BeforeStory AfterStory in a story file

In JBehave, the BeforeStory and AfterStory annotations are applied at the Steps-class level. So if there are multiple Steps classes with BeforeStory-annotated methods, all of those methods are executed before each and every story (which is not what I need).
In JUnit we can add or omit BeforeClass or Before per test class as needed. What I need is a way to add BeforeStory or AfterStory at the story level, just like a JUnit test class.
Is there a way to add BeforeStory as a Lifecycle entry in a story file, or is there an alternative solution?
The following example shows only scenario-level Before and After steps being added to the Lifecycle:
http://jbehave.org/reference/stable/story-syntax.html
Lifecycle:
Before:
Given a step that is executed before each scenario
After:
Outcome: ANY
Given a step that is executed after each scenario regardless of outcome
Outcome: SUCCESS
Given a step that is executed after each successful scenario
Outcome: FAILURE
Given a step that is executed after each failed scenario
Thanks.
In my opinion JBehave does not have such a facility as of now. But try the following: implement a Given step that initializes your users and execute it at the start of the story, and at the end of the story's scenarios define another Given to release the resources. I know this approach breaks the BDD paradigm of whole, business-defined scenarios, but at least it is a workaround.
To ensure they execute first and last, you could use (priority=1) for initUsers() and (priority=n) for cleanup().
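A sketch of that workaround in a story file (the step wording is hypothetical):

Scenario: set up test data
Given the test users are initialized

... the actual business scenarios ...

Scenario: clean up
Given the test resources are released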
As you have quoted from the documentation, the Lifecycle Before and After steps are executed before and after each scenario, not each step as was stated. There is no facility to define a 'before story' step in a story file.
The alternative to defining it in the story file is to annotate a method with @BeforeStory, as described in: http://jbehave.org/reference/stable/annotations.html
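For completeness, a minimal sketch of that annotation approach in a Steps class (the class and method names are assumptions):

import org.jbehave.core.annotations.AfterStory;
import org.jbehave.core.annotations.BeforeStory;

public class StoryLifecycleSteps {

    @BeforeStory
    public void beforeEachStory() {
        // runs before every story that uses this Steps class
    }

    @AfterStory
    public void afterEachStory() {
        // runs after every story
    }
}

As the question notes, though, these run for every story the Steps class is configured for, not just one specific story.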

Windows Workflow - IfElse branch

I am trying to use Windows Workflow and have a model that looks similar to the image in the link below:
After each of the Send Activities (GetSomthing, GetSomthingElse, GetSomeMoreStuff) the same custom activity is being called (LogSomthingBadHappened).
While it might not look so bad in this picture, in my real model the custom activity is a SequenceActivity with quite a few nodes, and when it's repeated three times it starts to make the workflow look very ugly.
I would like to do something like this:
Can the IfElse branches be merged like this?
Should I be using a State Machine workflow instead (I haven't figured those out yet)?
Use a FaultHandler on the workflow and throw a specific exception type that the handler will catch. Not the most graceful, but I think it should work.
In sequential workflows, all steps must appear in a specific order, and the execution path is regulated exclusively by control structures (IF, WHILE). Altering the execution path in the way you describe would be like using a GOTO statement in imperative code, which we know leads to unnecessary complexity.
If the activities contained in the SequenceActivity that you need to execute at different stages of your workflow are exactly the same, you could embed them in a custom activity. That way they are easier to manage, since they are contained in a single logical unit. In imperative code, this would be like refactoring a portion of duplicated code out into a method, which is then invoked from multiple places.
Another alternative that might work is to put your LogSomthingBadHappened activity into a custom workflow and include that several times. A few things to watch out for: sub-workflows are executed asynchronously, and if the LogSomthingBadHappened activity needs state information from the main workflow, copying it to the sub-workflow might be hard.
I have not tried this, so it might not even work.
I think the answer by gbanfill points in the right direction.
To generalize, I define the problem as:
Is there a way to define a group of activities that will be executed in several places of a workflow?
Further requirements are:
The group of activities should be defined in XAML only, i.e. no code.
The type of input to this group will, of course, be fixed, but the actual values should depend on the call (like calling a function).
Maybe the way to do it is to define sub-workflows and build a custom activity that instantiates the sub-workflow and waits for it to complete before continuing. This custom activity would need at least two parameters: the sub-workflow id and the input parameters.