After poring over the NUnit 3 documentation on the Test Selection Language and trying a few combinations, I still cannot figure out how to run all the tests within a specific namespace.
My most obvious attempt was:
nunit3-console.exe --where "test == 'MyNamespace.Subnamespace'" Tests.dll
Unfortunately, this reports zero matching tests, although with the --explore option I can see many tests within that namespace.
Do I need to use regular expressions/wildcards to accomplish this? The NUnit docs hint otherwise, but given that this doesn't work, maybe I do.
It seems the following works:
nunit3-console.exe --where "test =~ 'MyNamespace.Subnamespace'" Tests.dll
Note that the squiggle =~ is the regular-expression match operator.
This is a bit of a surprise, because the only example in the documentation that mentions namespaces uses the == syntax, which, going by my original experiments, matches nothing.
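Since =~ takes a regular expression, you can also anchor the pattern if a plain substring match turns out to be too broad. A rough sketch of what I mean (dots left unescaped for simplicity, so each one matches any character):
nunit3-console.exe --where "test =~ '^MyNamespace.Subnamespace'" Tests.dll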
Is there a single-argument alternative to doubling the verbose option for pytest runs? The doubled form looks a little deceptive when it shows up in shell script source, like a merge error or a benign redundant typo.
I'm not sure how the verbosity options evolved, but the doubled form is showing up in our repo because of tips like the following, which pytest gives on failures:
...Full output truncated (19 lines hidden), use '-vv' to show
It would be nice if there were something like --verbose2.
It turns out there is such an option, listed under verbosity:
--verbosity=VERBOSE
and tracing through the codebase suggests that 0, 1, and 2 are valid values, though there doesn't appear to be any documentation on that point.
Looking at the command-line argument definition, I can also see the pattern they're using: the count action, which sheds new light on the repeated arguments (I didn't even realize that was a pattern):
group._addoption(
"-v",
"--verbose",
action="count",
default=0,
dest="verbose",
help="increase verbosity.",
)
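For anyone else who hadn't met it, below is a minimal standalone sketch of argparse's count action (my own toy example, not pytest code), showing how repeated flags accumulate into an integer:

import argparse

parser = argparse.ArgumentParser()
# Each occurrence of -v/--verbose adds 1 to the stored integer.
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="increase verbosity")

print(parser.parse_args(["-vv"]).verbose)                      # 2
print(parser.parse_args(["--verbose", "--verbose"]).verbose)   # 2
print(parser.parse_args([]).verbose)                           # 0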
So it would seem that
"-vv" === "--verbose --verbose" === "--verbosity=2"
The NUnit console runner only prints to the console when a test itself causes something to be printed.
Is there a way to make it print every test name, similar to what mocha does?
You're looking for the --labels option.
There are various values for the level of output: on, off, or all. With v3.6, which will be released soon, there will also be before and after.
To get mocha-like output, on is probably suitable:
--labels=on
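So, assuming an assembly named Tests.dll as in the earlier examples, a full invocation might look like:
nunit3-console.exe --labels=on Tests.dll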
I am trying to figure out the expression syntax for py.test selection using the '-k' option.
I have seen the examples, but I am unclear about what the syntax options are when using the -k flag.
I have tried scanning the py.test source code, but so far no luck.
Can anyone give me pointers on what the syntax is for py.test test selection (-k)?
Mmm... it's not well documented, mainly because it's a bit confused and not that well defined. You can use 'and', 'or' and 'not' to match strings in a test name and/or its markers. At heart, it's an eval.
For the moment (until the syntax is hopefully improved) my advice is to:
Use --collectonly to confirm that your -k selects what you want before executing tests
Add markers to tests as needed to further distinguish them.
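For example (the test and marker names here are made up), you could preview what a -k expression selects with:
py.test --collectonly -k "login and not slow"
and then drop --collectonly once the selection matches what you expect.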
I am playing around with Devel::Cover to see how well our test suite is actually exercising our codebase. When I run all of our tests under -MDevel::Cover, nothing fails or crashes, but the subroutine coverage table in the HTML output lists BEGIN entries for every one of our modules.
The number of BEGINs listed seems to match the number of use Module::X statements in the source file, but they really clutter the HTML output. Is there any way to disable this? I don't see any mention of it in the tutorial or the GitHub issue tracker.
The reason for this is that "use" is "exactly equivalent to"
BEGIN { require Module; Module->import( LIST ); }
(See perldoc -f use.)
And then "BEGIN" is basically the same as "sub BEGIN" - you can put the "sub" there if you want to. See perldoc perlmod.
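In other words, these two spellings declare the same thing (a tiny illustration of my own, not from Devel::Cover's docs):
BEGIN     { print "compile time\n" }    # the usual spelling
sub BEGIN { print "compile time\n" }    # same block, with the optional sub keyword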
So what you really do have is a subroutine, and that is what Devel::Cover is reporting.
Like many parts of Devel::Cover, the details of perl's implementation, or at least the semantics, are leaking through. There is no way to stop this, though I would be amenable to changes in this area.
I am running test files against a Perl module I have written, and I would like some tests to run only after others have completed successfully (while still keeping them in separate files).
I was looking at Test::Builder, but I don't think it caters for cross-file testing.
Just to explain why I want to do this: I have a test file for every subroutine in my module. Some of these subroutines are passed large hashes, generated by other subroutines, that are difficult to replicate for testing purposes.
So instead of spending a few hours trying to hard-code a testable hash, I would like the tests to receive one from the code just as they would in real use, but only after the subroutine that generates the hash has itself been tested.
I hope that makes sense! I could write a script to run the tests in a certain order, but I thought there might already be a feature in a Perl testing module that I haven't seen. When it comes to using the module, I would ideally like to be able to run the tests without having to fiddle about with the 'make test' step.
The Perl test harness seems to run test files in alphabetical order. This is why so many CPAN distributions have test files that start with numbers - so that the author can control the order of test execution. That will break if, for example, someone runs those tests with prove -s.
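For reference, the shuffle option is spelled like this:
prove -s t/    # -s / --shuffle runs the .t files in random order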
In general a test file should be seen as a completely separate unit of testing. You should be able to run all of your tests in any order without any of them affecting any of the others. If two tests rely on each other then they should be in the same file.
You haven't explained why you're so keen for these tests to be in separate files. Perhaps that's the assumption that you should be questioning.