Comparing two dotCover coverage reports to find intersection?

I've got a bunch of C# code that's covered by both unit tests and system tests. I'd like to find those parts of the code that are covered by both, by only the unit tests and by only the system tests.
I can generate coverage reports for the two sets (unit tests vs. system tests) using JetBrains dotCover.
How do I compare these two coverage reports?
I've got NDepend, if that helps.

Roger, with NDepend you can indeed import several dotCover coverage XML files (with the right DotCover XML for NDepend setting).
I'd like to find those parts of the code that are covered by both
Use the Merge Option AND. This will tell you which methods are covered by both test sets.
If you need to zoom in to line-by-line detail of what is covered by both test sets, then unless dotCover has tooling for that, you'll need to merge the two coverage files programmatically yourself (it shouldn't be that hard).
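If you do write that merge yourself, here is a rough C# sketch of the idea at method granularity; it assumes the dotCover detailed XML report layout (nested Assembly/Namespace/Type/Method elements carrying a CoveredStatements attribute), so verify the element and attribute names against your own report files before relying on it. The same approach extends to line level if your report format records per-statement data.

// Sketch: intersect method-level coverage from two dotCover detailed XML reports.
// The element/attribute names below are assumptions; check them against your reports.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class CoverageIntersection
{
    static HashSet<string> CoveredMethods(string reportPath)
    {
        var doc = XDocument.Load(reportPath);
        return new HashSet<string>(
            doc.Descendants("Method")
               .Where(m => (int?)m.Attribute("CoveredStatements") > 0)
               .Select(m => string.Join(".",
                   m.AncestorsAndSelf()
                    .Reverse()
                    .Select(e => (string)e.Attribute("Name"))
                    .Where(n => n != null))));
    }

    static void Main(string[] args)
    {
        var unit = CoveredMethods(args[0]);    // e.g. the unit-test coverage XML
        var system = CoveredMethods(args[1]);  // e.g. the system-test coverage XML

        Console.WriteLine("Covered by both:      " + unit.Intersect(system).Count());
        Console.WriteLine("Only by unit tests:   " + unit.Except(system).Count());
        Console.WriteLine("Only by system tests: " + system.Except(unit).Count());
    }
}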

Related

Is there a way to find included Specflow scopes at a BeforeTestRun level?

I'm working with multiple features and scenarios and am looking for a way to find out which scopes are included in a test run at the time the test run starts, if that's possible.
There's a large-ish subset (category) of our tests that requires a setup taking 5-10 seconds. Currently we're using a BeforeFeature hook to optimize this setup as much as we can, but we have several features (though not all) under the same scope. We'd like to run this setup only when that category of tests is included in the test run.
In pseudocode it would essentially be:
[BeforeTestRun]
if the test run includes scenarios/features with the tag "AdvancedSetup"
    AdvancedSetup();
In SpecFlow this information is not available.
But perhaps your test runner has this information available.
FYI: Tags are translated to TestCategories.
NUnit allows a higher-level setup that applies to an entire namespace; you get it by creating a SetUpFixture. If SpecFlow gives you a way to map features into specific namespaces, you could use this.
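A minimal sketch of that, using NUnit 3 attribute names (older NUnit versions use [SetUp] inside a SetUpFixture instead); the namespace and the AdvancedSetup call are placeholders for however your generated test code is organized:

using NUnit.Framework;

// Runs once before any test in this namespace (and nested namespaces).
namespace MyTests.AdvancedSetupFeatures
{
    [SetUpFixture]
    public class AdvancedSetupFixture
    {
        [OneTimeSetUp]
        public void RunBeforeAnyTestsInNamespace()
        {
            AdvancedSetup();   // the expensive 5-10 second setup from the question
        }

        static void AdvancedSetup() { /* ... */ }
    }
}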

How do Atom's 'spec' files work?

I'm making a package for Atom, and Travis CI keeps telling me my build failed.
Update: I created a blank spec file and now my builds are passing.
You can see my package here: https://travis-ci.org/frayment/language-jazz
The console is telling me:
sh: line 105: ./spec: No such file or directory
Missing spec folder! Please consider adding a test suite in
I went looking around at Atom packages on GitHub for 'spec' files and they seem to be CoffeeScript based, but I can't understand what on earth they contain. There isn't much documentation on the subject, so:
What is a 'spec' file, and what do I put in it?
Help is very appreciated.
The ./spec directory should contain one or more Jasmine specifications for the Atom package you are developing. For example, this spec is taken from the Atom documentation:
describe "when a test is written", ->
it "has some expectations that should pass", ->
expect("apples").toEqual("apples")
expect("oranges").not.toEqual("apples")
One of the biggest challenges with open source software is maintaining quality when a large number of individual contributors are providing code. One solution to this is a high level of test coverage:
Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing.
In Atom's case, all of the specifications are added to the ./spec folder and must end with -spec.coffee. So, for example, if you were creating a package named awesome and your code sat in ./lib/awesome.coffee, then your spec would be ./spec/awesome-spec.coffee. Your spec should exercise the key areas of your code to give you confidence when accepting pull requests into your master branch.
I have a couple of packages on Atom.io, and both of these have tests included with them; you are welcome to use them as concrete examples of how Jasmine 1.3 tests can be written to support the functionality of your packages. Equally, the majority of packages on Atom.io have a set of tests that you can draw on to build your own test suite.
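As a rough, hypothetical illustration of a package-level spec for the awesome example above (in ./spec/awesome-spec.coffee), using the async helpers Atom's Jasmine environment provides:

# Hypothetical spec for a package named "awesome".
describe "the awesome package", ->
  beforeEach ->
    # waitsForPromise comes from Atom's spec environment.
    waitsForPromise ->
      atom.packages.activatePackage('awesome')

  it "activates without errors", ->
    expect(atom.packages.isPackageActive('awesome')).toBe true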

Running a suite of pytest tests on multiple objects

As a small part of a much larger set of tests, I have a suite of test functions I want to run on each of a list of objects. Basically, I have a set of plugins, and a set of "plugin tests".
Naively, I can just make a list of test functions that take a plugin argument, and a list of plugins, and have a test where I call all of the former on all of the latter. But ideally, each test/plugin combo would appear as an individual test in the results.
Is there already a nicer/standardized way of doing something like this in pytest?
Check out pytest's documentation on parametrization (https://pytest.org/latest/parametrize.html).
It's a mechanism for running the same test a number of times with different parameters; it sounds like just what you want. It generates tests that run individually, and they have nice output and reporting.
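A minimal sketch of what that looks like (the plugin classes here are stand-ins for your real plugin objects):

import pytest

class FooPlugin:
    name = "foo"

class BarPlugin:
    name = "bar"

# Stand-ins for your real list of plugin instances.
PLUGINS = [FooPlugin(), BarPlugin()]

# Each test/plugin combination is reported as its own test,
# e.g. test_plugin_has_name[FooPlugin].
@pytest.mark.parametrize("plugin", PLUGINS, ids=lambda p: type(p).__name__)
def test_plugin_has_name(plugin):
    assert plugin.name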

Create NUnit tests analogous to a MbUnit TestSuiteFixture

Is it possible to achieve the same (or close to the same) functionality as an MbUnit TestSuiteFixture using NUnit? I need to generate sets of dynamic test cases, and MbUnit's TestSuiteFixture is the only thing I have found so far that comes close to meeting this need.

NCover with many class libraries

So I have my project and it is set up like this:
MyProject
MyProject.Module1
MyProject.Module1.Tests
MyProject.Module2
MyProject.Module2.Tests
What I want is the code coverage number for the entire project.
I am using NCover... what is the best way to do this? For example, would I have to rearrange the project and put everything into MyProject.Tests?
It depends on how you're testing. Most test frameworks will let you run tests for multiple assemblies as separate arguments. If you can't run them all together, you can always use NCover's merge feature. Check out http://docs.ncover.com/ref/2-0/ncoverexplorer-console/merging-coverage-data/.
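For example, with NUnit you can hand every test assembly to a single nunit-console run and profile that one process with NCover (the exact NCover switches depend on your version, so check the docs linked above); the NUnit side of a hypothetical invocation would be:

nunit-console.exe MyProject.Module1.Tests.dll MyProject.Module2.Tests.dll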