How to make a systematic report in Protractor

I am working with Protractor to test an AngularJS application. I am able to fetch a report, but I want to include more points and details about the test execution, for example the module name, test case name, severity, priority, and where the test failed if it fails. Where should I add all these points so that I can fetch a complete, detailed report? I have attached the report I currently get.
Please help me find a solution, as I am new to Protractor. Thanks a lot in advance.

In Protractor e2e testing, spec reporting is done by the Jasmine framework, not by Protractor itself. You can either leverage one of the popular reporters listed below or create your own custom reporter:
https://www.npmjs.com/package/jasmine-spec-reporter
https://www.npmjs.com/package/protractor-html-reporter
https://www.npmjs.com/package/protractor-beautiful-reporter
http://jasmine.github.io/2.1/custom_reporter.html
Or you can try the Allure plugin here: http://allure.qatools.ru/
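As a starting point for the custom-reporter route, here is a minimal sketch of a Jasmine reporter that collects extra details per spec. The `[TC-id|severity]` description convention and the `DetailedReporter` name are illustrative assumptions, not part of any Protractor or Jasmine API:

```javascript
// Sketch of a custom Jasmine reporter (names and the description
// convention below are assumptions for illustration).
// Metadata such as test-case ID and severity can be encoded in the
// spec description, e.g. it('[TC-101|High] user can log in', ...).
class DetailedReporter {
  constructor() {
    this.results = [];
  }

  // Jasmine calls specDone() after each spec with its result object.
  specDone(result) {
    const m = /^\[([^|\]]+)\|([^\]]+)\]\s*(.*)$/.exec(result.description);
    this.results.push({
      testCase: m ? m[1] : null,             // e.g. "TC-101"
      severity: m ? m[2] : null,             // e.g. "High"
      name: m ? m[3] : result.description,   // human-readable spec name
      status: result.status,                 // "passed" / "failed" / ...
      // Failure messages tell you where the test failed
      failures: (result.failedExpectations || []).map(f => f.message),
    });
  }
}
```

You would register it in `protractor.conf.js`, e.g. in `onPrepare` via `jasmine.getEnv().addReporter(new DetailedReporter())`, and then write `this.results` out to whatever report format you need.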

I also advise using the Allure report. It is easy to set up and has good documentation. I just want to mention that Allure 2 is now ready; take a look at GitHub and its JS integration.

Related

How can I generate HTML reports based on cucumber in Cypress not the JSON reports

I am implementing Cucumber in Cypress for tests, but I don't understand how to generate HTML reports for the end result, describing the passed and failed tests as well as scenarios.
Any help would be appreciated.
First, you should read the official Cypress docs,
and I think you have two options:
mochawesome
the Allure 2 report (more details here)
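If you go with the mochawesome option, a minimal configuration sketch looks like the following (assuming Cypress 10+ and `mochawesome` installed via npm; the `reportDir` path is just an example — for older Cypress versions the same `reporter`/`reporterOptions` keys go in `cypress.json` instead):

```javascript
// cypress.config.js — sketch, assuming mochawesome is installed
// (npm i -D mochawesome). Cypress accepts any Mocha reporter here.
module.exports = {
  reporter: 'mochawesome',
  reporterOptions: {
    reportDir: 'cypress/reports', // example output directory
    overwrite: false,             // keep one report per spec run
    html: true,                   // generate the HTML report
    json: true,                   // also keep the raw JSON
  },
};
```

After `npx cypress run`, the HTML and JSON reports land in the configured `reportDir`.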

How to get All test cycle from ZAPI rest call

I am trying to use this API to manage my automation test cases in Jira-Zephyr.
I am trying to get all the test cycles from my project, so as per the ZAPI technical doc I used
http://localhost:8080/jira/rest/zapi/latest/cycle?projectId=10002&versionId=10100
but it is not working; it says it is a dead link.
I googled how to search all the issues in a project and found the following, which works fine for me:
http://localhost:8080/rest/api/2/search?jql=project=10002
So, following that pattern, I changed the first URL accordingly, but it is also not working.
How can I find all the test cycles present in the project?
This works for me with a local install of Jira plus the ZAPI and Zephyr add-ons:
http://localhost:8080/rest/zapi/latest/cycle?projectId={{projectId}}&versionId={{versionId}}
Try like this:
def clientTestID = new RESTClient("https://XXXXXXXXXX.net/rest/zapi/latest/zql/executeSearch?maxRecords=9999&zqlQuery=cycleId=XXXXX")
This will work if the request headers are properly defined.
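For completeness, here is a hedged Node.js sketch of calling the cycle endpoint with Basic auth headers. The host, credentials, and IDs are placeholders for your own Jira instance, and the global `fetch` assumes Node 18+:

```javascript
// Build the ZAPI cycle URL separately so it can be checked
// without a live Jira instance.
function cycleUrl(base, projectId, versionId) {
  return `${base}/rest/zapi/latest/cycle?projectId=${projectId}&versionId=${versionId}`;
}

// Call the endpoint with Basic auth; `auth` is a base64-encoded
// "user:password" string. All values here are placeholders.
async function listCycles(base, auth, projectId, versionId) {
  const res = await fetch(cycleUrl(base, projectId, versionId), {
    headers: {
      Authorization: `Basic ${auth}`,
      'Content-Type': 'application/json',
    },
  });
  if (!res.ok) throw new Error(`ZAPI request failed: ${res.status}`);
  return res.json();
}
```

Note that the working URL has no `/jira` prefix, matching the answer above rather than the doc's example.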

Using allure with JBehave and Lettuce

We have two fairly large automation projects going, both using BDD. One is in Lettuce for a desktop app; the other is for a website using JBehave (we are just getting started with the web project).
We tried Thucydides for reporting on our JBehave project and started implementing tests with it. However, we ran into Allure, and it looks a lot better and lets us use the standard JBehave framework without relying on someone else's code that has its own unknown-to-us issues. Luckily, we found Allure early enough.
Two questions:
1) We spent two days trying to make Allure work with JBehave, but the only example on GitHub isn't working well (all scenarios are reported together, without a breakdown by individual stories or scenarios). Also, JBehave doesn't have an @AfterStep annotation, and it's a requirement for us to save screenshots after each step, successful or not. Thucydides, for all its faults, took care of that. Does Allure have something similar? If not, is there at least a working example of how to make it report stories and scenarios correctly when run from JBehave?
2) I haven't tried yet, but it doesn't look like there is an adapter for Lettuce (Python). Can someone recommend a way to produce Allure reports from Lettuce?
Thanks a lot!!
Allure doesn't support JBehave or Lettuce yet, but you can implement such adapters yourself.
First, you need to read the following section of the documentation: https://github.com/allure-framework/allure-core/wiki#development. Then, if you are ready to contribute, follow these instructions:
JBehave
We already have a Java adaptor, so all you need to do is add the allure-java-adaptor-api module as a dependency and implement a JBehave listener.
Lettuce
The same applies here. You can use the allure-python bindings; all you need to do is implement Lettuce handlers. The Python team is going to move the bindings (aka allure-python-adaptor-api) to a separate module; you can encourage this by commenting on https://github.com/allure-framework/allure-python/issues/63
If you have any questions or suggestions, you can also use our Gitter chat room (https://gitter.im/allure-framework/allure-core) or our mailing list (allure@yandex-team.ru).
Hope it helps.
To integrate JBehave with Allure, you can create your own implementation of org.jbehave.core.reporters.StoryReporter. From the methods in this interface you can fire the Allure events that correspond to the JBehave abstractions. In our implementation we fire TestSuite*Events from e.g. StoryReporter#beforeStory() and TestCase*Events from e.g. StoryReporter#afterScenario().
The caveat is that for some JBehave events you have to fire multiple Allure events. For example, for a step that was not performed we fire the following Allure events:
public class AllureStoryReporter implements StoryReporter {
    ...
    @Override
    public void notPerformed(final String step) {
        getLifecycle().fire(new StepStartedEvent(step).withTitle(step));
        getLifecycle().fire(new StepCanceledEvent());
        getLifecycle().fire(new StepFinishedEvent());
    }
}
Of course, the created reporter needs to be registered with JBehave so it is used during reporting.
This results in comprehensive Allure reports.
At least for JBehave, support has been added since then.

JBehave how to fail all stories

I don't know why, but JBehave does not take failures in given stories into consideration. If there is a failure in a GivenStory, it will not perform the rest of the steps of that story, but it will execute the rest of the given stories. Here is an example:
GivenStories: stories/web/pmv/Story1.story,
stories/web/pmv/Story2.story,
stories/web/pmv/Story3.story,
stories/web/pmv/Story4.story,
stories/web/pmv/Story5.story
When the user do something
Then something happens
For instance, if Story2.story fails, I was expecting that the rest of the given stories and the last two steps would not be executed. But they are.
Does anyone know why that is?
How can I fail all stories if a single step or story fails?
I noticed as well that the report statistics reflect only the last given story and the following steps. Is this correct? Why?
I have the following configuration:
configuredEmbedder().embedderControls()
.doGenerateViewAfterStories(true)
.doIgnoreFailureInStories(false)
.doIgnoreFailureInView(false)
.useThreads(2)
.useStoryTimeoutInSecs(60);
MostUsefulConfiguration:
.useStoryControls(
new StoryControls()
.doDryRun(false)
.doSkipScenariosAfterFailure(true)
.doResetStateBeforeScenario(false))
When I added doResetStateBeforeScenario(false) to the config, the steps following the failure, even the ones inside the GivenStories, were not performed. And yet the statistics show no error, because the failure was not in the last given story or in the steps of the main story. In the end, the Maven build had no errors, yet there were failures in the tests.
Any thoughts?
OK. After some searching around, I found that this issue was fixed in JBehave 3.8.
JIRA link: http://jira.codehaus.org/browse/JBEHAVE-841
I updated to the latest JBehave version and this works fine.

adding service to addthis

I want to add my custom sharing service to an addthis widget.
According to the AddThis documentation, the service must meet the specification of the OExchange format.
I've created everything and hosted it on my server, but when I try to test it with the OExchange test harness, it fails.
There are two files required by OExchange:
.well-known/host-meta
Target XRD File
I don't know much about these two files. I generated them according to the OExchange spec, but I'm still unable to pass the OExchange test.
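For reference, here is a minimal `.well-known/host-meta` sketch along the lines of the OExchange 0.8 spec. `example.com` and the file paths are placeholders; verify the `rel` URI and paths against the spec for your own setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <!-- Host this file at http://example.com/.well-known/host-meta -->
  <Subject>example.com</Subject>
  <!-- Points OExchange discovery at the service's target XRD file -->
  <Link rel="http://oexchange.org/spec/0.8/rel/resident-target"
        type="application/xrd+xml"
        href="http://example.com/oexchange.xrd"/>
</XRD>
```

The target XRD file it points to then describes the sharing endpoint itself.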
If anyone has been able to implement this, could you please steer me in the right direction?
Thanks
Well, I found a solution for the .xrd file and have tested it successfully with the OExchange test harness, but I am still not able to pass the test for the .well-known/host-meta file.