Way to obtain the list of test methods from the excluded group - annotations

My setup is like below:
I run my TestNG tests with the Maven Surefire Plugin's excludedGroups parameter set to failing, so test methods that are known to be failing are excluded from the test suite.
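In the POM, that setup looks roughly like this (a minimal sketch; plugin version omitted):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludedGroups>failing</excludedGroups>
  </configuration>
</plugin>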
I want to obtain the list of those excluded test methods, but I did not find a straightforward solution. Does anyone know how to do it, whether using the capabilities of Java, annotations, TestNG, or Maven?

A couple of ways: implement ISuiteListener and its onStart method, and use suite.getExcludedMethods() to get the list of all methods calculated as excluded. You can also implement ITestListener and use context.getExcludedMethods() or context.getExcludedGroups() to get the same lists.
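A minimal sketch of the ISuiteListener approach (the listener class name is mine; register it via the <listeners> element of your suite XML, or via Surefire's TestNG listener property):

import org.testng.ISuite;
import org.testng.ISuiteListener;
import org.testng.ITestNGMethod;

// Logs every method that TestNG excluded from the run,
// e.g. because it belongs to the "failing" group.
public class ExcludedMethodsLogger implements ISuiteListener {

    @Override
    public void onStart(ISuite suite) {
        for (ITestNGMethod method : suite.getExcludedMethods()) {
            System.out.println("Excluded: "
                    + method.getRealClass().getName() + "." + method.getMethodName());
        }
    }

    @Override
    public void onFinish(ISuite suite) {
        // nothing to do here
    }
}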

Related

Is there a way to find included SpecFlow scopes at the BeforeTestRun level?

I'm working with multiple features and scenarios and am looking for a way to find out which scopes are included in a test run at the time the run starts, if that's possible.
There's a largish subset (category) of our tests that requires a setup taking 5-10 seconds. Currently we're using a BeforeFeature hook to optimize this setup as much as we can, but we have several features (though not all) under the same scope. We'd like to run this setup only when that category of tests is included in the test run.
In pseudocode it would essentially be:
[BeforeTestRun]
if the test run includes scenarios/features with tag "AdvancedSetup"
    AdvancedSetup();
In SpecFlow this information is not available, but perhaps your test runner has it.
FYI: Tags are translated to TestCategories.
NUnit allows use of a higher-level setup that applies to a namespace. You access this by creating a SetUpFixture. If SpecFlow gives you a way to map features into specific namespaces, you could use this.

Running a suite of pytest tests on multiple objects

As a small part of a much larger set of tests, I have a suite of test functions I want to run on each of a list of objects. Basically, I have a set of plugins and a set of "plugin tests".
Naively, I can just make a list of test functions that take a plugin argument, and a list of plugins, and have a test where I call all of the former on all of the latter. But ideally, each test/plugin combo would appear as an individual test in the results.
Is there already a nicer/standardized way of doing something like this in pytest?
Check out pytest's documentation on parametrization (https://pytest.org/latest/parametrize.html).
It's a mechanism for running the same test a number of times with different parameters, and it sounds like just what you want. It generates tests that run individually, and they have nice output and reporting.

How to get eclipse debugger to skip $$EnhancerByCGLIB$$ methods?

I want to skip over methods generated by CGLIB. I have added $$EnhancerByCGLIB$$ to my Eclipse step filters, but Eclipse still does not skip them. How do I configure the step filters so that Eclipse skips any CGLIB-enhanced methods?
Adding the CGLIB entries and checking "Step through filters" seems to have fixed the problem for me; it turned out I had to filter both $$EnhancerByCGLIB$$ and $$FastClassByCGLIB$$.
While I have no experience with CGLIB, I'd suggest adding a class filter, not a package filter, in Eclipse.

JBehave Sentence "API" Generator available

I'm trying to provide my QA team with a list of the available sentences in JBehave, based on methods annotated with @Given, @When, @Then, and @Alias. For example:
Then $userName is logged in.
Then user should be taken to the "$pageTitle"
I recently wrote a simple script to do this. Before I put more work into it, I wanted to be sure there wasn't something better out there.
For one, there is the Eclipse integration for JBehave, which offers code completion and thus provides all steps directly from the code ( http://jbehave.org/eclipse-integration.html ). Note that it doesn't go through dependent .jars, though; only what it can find in the source tree.
I.e., enter "Given", hit Ctrl+Space, and get all the available Given steps.
There has also been some work on parsing the run results with a "Story Navigator" ( http://paulhammant.com/blog/introducing-story-navigator.html ), which offers a listing of the steps. I'm not sure whether it can list unused steps, though; furthermore, it seems more like a proof of concept to me (I wasn't able to make proper use of it).
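If you do end up growing your own script, its core can be a plain reflection scan over the step annotations. A rough sketch, where MySteps is a placeholder for one of your real step classes:

import java.lang.reflect.Method;
import org.jbehave.core.annotations.Alias;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class SentenceLister {

    // Prints every Given/When/Then sentence (and alias) declared on a steps class.
    static void printSentences(Class<?> stepsClass) {
        for (Method method : stepsClass.getMethods()) {
            Given given = method.getAnnotation(Given.class);
            if (given != null) System.out.println("Given " + given.value());
            When when = method.getAnnotation(When.class);
            if (when != null) System.out.println("When " + when.value());
            Then then = method.getAnnotation(Then.class);
            if (then != null) System.out.println("Then " + then.value());
            Alias alias = method.getAnnotation(Alias.class);
            if (alias != null) System.out.println("Alias: " + alias.value());
        }
    }

    public static void main(String[] args) {
        printSentences(MySteps.class); // MySteps: hypothetical step class
    }
}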

NCover with many class libraries

So I have my project and it is set up like this:
MyProject
MyProject.Module1
MyProject.Module1.Tests
MyProject.Module2
MyProject.Module2.Tests
What I want is the code coverage number for the entire project.
I am using NCover; what is the best way to do this? For example, would I have to rearrange the project and put everything into MyProject.Tests?
It depends on how you're testing. Most test frameworks will let you run tests for multiple assemblies as separate arguments. If you can't run them all together, you can always use NCover's merge feature. Check out http://docs.ncover.com/ref/2-0/ncoverexplorer-console/merging-coverage-data/.