Protractor/cucumberjs rerunning failed tests/cucumber features/specs

Given that automated UI tests sometimes fail due to flakiness, the ability to rerun only the failed tests becomes incredibly useful in a framework like protractor.
Unfortunately, as of 09/13/2016, there's no way to rerun failed tests with protractor.
A. How do you guys rerun your failed tests? Ideally, I'd like suggestions/ideas from people using the javascript implementation of cucumber, cucumberJs.
There's protractor-flake, developed by Nick Tomlin, to address this problem, but that module doesn't always work when dealing with multiCapabilities, where you're trying to run your tests in parallel (see the sketch below).
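For context, here's a minimal sketch of the kind of parallel setup that trips protractor-flake up; the framework wiring is the usual custom-framework approach for cucumber, but the paths and counts are illustrative placeholders, not from a real project:

```js
// conf.js - illustrative Protractor config (paths and counts are placeholders)
exports.config = {
  framework: 'custom',
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  specs: ['features/**/*.feature'],
  multiCapabilities: [{
    browserName: 'chrome',
    shardTestFiles: true, // split the feature files across browser instances
    maxInstances: 2       // run up to two Chrome instances in parallel
  }]
};
```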
This question: How to rerun the failed scenarios using Cucumber? almost answered it; the problem is: how do I use that command (cucumber -f rerun --out rerun.txt) to rerun my tests AND run protractor in parallel? That command might only work when you're not parallelizing your protractor tests.
B. How would you use that cucumber command to run your tests in parallel?
Please answer questions A and B above. Thanks again!
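For reference, the rerun flow from that linked question is a two-pass run (Ruby cucumber syntax, as quoted above; newer cucumber-js releases ship a similar rerun formatter, so treat the exact flags as something to verify for your version):

```sh
# First pass: write the failing scenarios to rerun.txt
cucumber -f rerun --out rerun.txt
# Second pass: run only the scenarios recorded in rerun.txt
cucumber @rerun.txt
```

The open problem in question B is wiring this into a sharded protractor run, since each shard would presumably need its own rerun file.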

So far I have found the following tool, protractor-flake, that will rerun failed protractor tests:
***GitHub***: https://github.com/NickTomlin/protractor-flake
***NPM***: https://www.npmjs.com/package/protractor-flake
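A typical invocation, going by the project README (verify the flags against the version you install), looks like this:

```sh
# Retry failed specs up to 3 times; arguments after -- go to protractor itself
protractor-flake --max-attempts=3 -- path/to/protractor.conf.js
```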

Related

How to skip dependent feature file when parent feature file fails in protractor cucumber

I have 5 test scenarios in my 5 different feature files.
TC-1
TC-2
TC-3
TC-4
TC-5
TC-3 and TC-4 are dependent test cases: when test scenario TC-3 fails, TC-4 should automatically be skipped and TC-5 should still execute. How can we achieve this in cucumber? Any suggestions?
Thanks in advance.
I never worked with cucumber closely, but I found out it's not possible in jasmine or pytest, so I assume this is how most test frameworks work.
The problem is that both of these frameworks build the queue of tests to execute before the browser starts, and you can't modify it based on runtime results.
See this answer for jasmine and check whether you can apply the same approach to cucumber: Nested it in Protractor/Jasmine
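That said, if you want to approximate it in cucumber, one workaround is to record the parent scenario's outcome in a hook and skip tagged dependents. This is only a sketch: the tag names are made up, and the hook payload shape and the `return 'skipped'` behaviour vary across cucumber-js versions, so verify against yours:

```js
// hooks.js - illustrative sketch; @tc3 and @dependsOnTc3 are hypothetical tags
const { Before, After } = require('cucumber');

let tc3Failed = false;

// Remember whether the parent scenario (tagged @tc3) failed;
// the hook payload shape differs between cucumber-js versions
After({ tags: '@tc3' }, function (testCase) {
  if (testCase.result.status === 'failed') {
    tc3Failed = true;
  }
});

// Skip dependents once the parent has failed; untagged scenarios
// such as TC-5 are unaffected and still run
Before({ tags: '@dependsOnTc3' }, function () {
  if (tc3Failed) {
    return 'skipped';
  }
});
```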

Can I use Cucumber with Selenium Grid to run the scripts on different node at the same time?

I have searched for the same but without success.
Is there any other tool which can be utilised effectively to run the scripts on multiple nodes?
Possible duplicate of: How to execute cucumber test cases in parallel using Grid?
For me, TestNG is a good way to parallelize your tests, and here is a question and answer about it:
How to run the cucumber test parallelly_Junit/TestNg
Cucumber 4 provides native support for running scenarios in parallel; you no longer need the cucumber-jvm-parallel-plugin.
You have to set the number of threads in the runner (e.g. --threads 2) and just pass the Selenium hub URL to your Selenium driver during initialization, as sketched below.
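Roughly, that setup looks like the following; the CLI entry point matches Cucumber 4.x as far as I know, and the classpath and package names are placeholders to adapt:

```sh
# Run scenarios on 4 worker threads via the Cucumber 4 CLI runner
java -cp "target/classes:target/test-classes:libs/*" \
  cucumber.api.cli.Main \
  --threads 4 \
  --glue com.example.steps \
  classpath:features
```

Each thread's step definitions would then create a RemoteWebDriver pointed at the Grid hub URL (e.g. http://hub-host:4444/wd/hub), letting the Grid spread the parallel sessions across its nodes.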

Running NCover from code

Is it possible to run NCover automatically from code instead of running NCover manually or via command line?
Here is the scenario: I have written a few tests; I execute all of them, and after the tests are completed, NCover should run automatically for that particular test project and store the coverage report as XML in a given location.
Is this possible to do? Kindly help.
Running NCover from the command line was the only option with NC3. When we released NC4, the default workflow changed: you create a project, the NCover service watches for a process to start that meets the match rules defined in the project, and it then collects coverage on it.
This doc may be of some help: http://www.ncover.com/support/docs/desktop/user-guide/coverage_scenarios/how_do_i_collect_data_from_nunit
If you have more questions, please reach out to us at support@ncover.com.

OpenCover without running unit tests

Is it possible to run opencover without running unit tests?
I have the TestResults.xml from NUnit and want to pass this to OpenCover without running the unit tests again.
Is this possible?
Q1. Is it possible to run opencover without running unit tests?
OpenCover can run against most .NET applications that can be launched from the command line. With a little effort you can get it to run against a service like IIS.
Q2. I have the TestResults.xml from NUnit and want to pass this to OpenCover without running the unit tests again. Is this possible?
No, it will not be able to do what you want as the information in the TestResults.xml is about tests (pass/fail) and is not enough to determine what code was actually executed by those tests.
Just run your tests with OpenCover using nunit-console.exe as the target; instructions exist in the documentation provided to help you.
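For example, something along these lines (the paths and test assembly name are placeholders; check the flags against your OpenCover version):

```
:: Launch nunit-console under OpenCover so coverage is gathered while the tests run
OpenCover.Console.exe -register:user -target:"nunit-console.exe" -targetargs:"Your.Tests.dll /noshadow" -output:coverage.xml
```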
I do not know OpenCover, but judging from dotCover, the coverage tool needs to run alongside the unit tests as they step through your code line by line. Code coverage is then determined by what percentage of your code has been visited.

Continue running NUnit after failures

I am running nunit-console from a CI configured in TeamCity to run tests from various assemblies. Once one of the TestFixtures has a failing test, test execution stops.
Currently I am able to see the first tests that failed, but I am unaware whether there are more TestFixtures that might fail down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw xml output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty xslt that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool, and see if it runs all of the tests. If they run on your box but not the server, you may have a configuration problem on the build box.
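For the local check, an NUnit 2.x-style invocation would be something like this (the assembly path is a placeholder), after which you can inspect the raw XML for the full list of results:

```
:: Run the assembly's tests and write the raw XML results for inspection
nunit-console.exe path\to\Your.Tests.dll /xml:TestResult.xml
```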
Yet another possibility is that the failure is a critical one (failure to load an assembly perhaps) which is causing NUnit itself to error out.