How to make NUnit Console Runner print test names - nunit

The NUnit console runner only prints to the console when a test causes something to be printed.
Is there a way to make it print every test name? Something similar to mocha, shown below.

You're looking for the --labels option.
There are various values for the level of output - on, off or all - and with v3.6, which will be released soon, there will also be before and after.
To match your picture, on is probably suitable.
--labels=on
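For example, using Tests.dll as a placeholder for your test assembly, the full invocation would be:
nunit3-console.exe --labels=on Tests.dll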

Related

Script is printing "select" and "unselect" in the editor console:

In Maya, when running a Python script, the Script Editor reports back every time the script selects or unselects something. It is messy and I would rather have that happen under the hood. Is there a way for the Script Editor not to report those commands, and can I have my script turn that option off in the editor before running?
Yes, I am the author of the script. I believe it to be a Maya issue. It will report certain actions and you can't turn it off.
That sounds a little odd... do you happen to have "Echo All Commands" turned on in the Script Editor?
The only other thing I can think of is that the script may be explicitly running print statements - do you have access to the source code so you can comment out any print lines?
Edit:
If you wrote it, maybe you could tweak the code to no longer require selection. There are really very few Maya commands that actually require selection - they work with selection, but also allow you to explicitly provide node names... most commands also return node names that you can capture into a variable.
If you are no longer selecting/deselecting things in your script, it shouldn't clutter your script editor output with those prints anymore.
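As a rough sketch of that approach (the node names and attribute below are purely illustrative, and this assumes the standard maya.cmds module):

import maya.cmds as cmds

# Capture the node name returned by the creation command
# instead of relying on the current selection.
cube = cmds.polyCube(name='demoCube')[0]

# Operate on the node by name - no cmds.select() call is needed,
# so no selection changes get echoed to the Script Editor.
cmds.move(0, 5, 0, cube)
cmds.setAttr(cube + '.rotateY', 45)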

Run all tests in namespace using Nunit3-console.exe

After poring over the NUnit 3 documentation on the Test Selection Language and trying a few combinations, I still cannot figure out how to run all the tests within a specific namespace.
The most obvious attempt being:
nunit3-console.exe --where "test == 'MyNamespace.Subnamespace'" Tests.dll
Unfortunately, this reports zero matching tests, although using the --explore options I can see many tests within that namespace.
Do I need to use regular expression/wildcards to accomplish this? The NUnit docs hint otherwise, but given this doesn't work maybe I do.
It seems the following works:
nunit3-console.exe --where "test =~ 'MyNamespace.Subnamespace'" Tests.dll
Note that the squiggle =~ is a regular-expression match operator.
This is a bit of a surprise, because the only example mentioning namespaces in the documentation uses the == syntax, which, in my original experiments, did not match anything.
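Because =~ is a regular-expression match, the pattern above will also match any test whose full name merely contains that text somewhere; if that ever matters, anchoring the pattern and escaping the dots should narrow it to the intended namespace prefix - an untested sketch:
nunit3-console.exe --where "test =~ '^MyNamespace\.Subnamespace\.'" Tests.dll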

Perl: list the lines being run

I am debugging a big Perl program. To minimize the work, I would like to debug only the lines that are actually run.
Is there a tool I can run my program under that will give me a version of the program containing only the lines actually executed (e.g. with the rest commented out)?
You can use the Perl Debugger (http://perldoc.perl.org/perldebug.html).
Decent beginner's write up: http://www.thegeekstuff.com/2010/05/perl-debugger/
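For completeness, the debugger is started with the -d switch, e.g. (yourprog args being a placeholder for your script and its arguments):
perl -d yourprog args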
It sounds like you might want a coverage tool, like Devel::Cover, which indicates the lines which are being executed during a run of the code.
For example, where you currently execute your code as
perl yourprog args
instead you run
perl -MDevel::Cover yourprog args
and then run
cover
This will generate an HTML report showing the code which is actually executed.

Erratic Behavior in console for logging versus print messages

I am seeing seemingly random ordering of print output versus Python logging output in the pydev console.
So for example here is my code:
import logging
logging.warning('Watch out!') # will print a message to the console
print "foo"
logging.info('I told you so') # will not print anything
logging.warning("Event:")
logging.warning("told ya")
Note the print line with "foo". What is bizarre is that each time I run this, the order of the output with respect to foo changes! That is, foo appears one time at the top, another time in the middle, another time at the end, and so forth.
This is unfortunate in a less toy context, when I want to know the sequence in which these output events occurred in the code (e.g. when tracing/logging etc.). Any suggestions as to what might be going on? Latest pydev, Python 2.7.6.
Thanks!
P.S. As a (probable) side effect, this is making an Eclipse tool out there, "Grep Console", behave oddly. But the problem I describe above is independent of that.
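One thing worth checking (this is an assumption on my part - I don't know how the pydev console merges streams) is whether the mismatch simply comes from print writing to stdout while logging defaults to stderr. Pointing both at the same stream, and flushing it, makes the comparison fairer:

import logging
import sys

# Send logging output to stdout so it shares a stream with print.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

logging.warning('Watch out!')
print "foo"
sys.stdout.flush()  # push the print out immediately
logging.info('I told you so')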

In Perl can I create a test in one file that only runs after a test in another file has run?

I am running test files on a Perl module I have written, and I would like some tests to run only after others have run successfully (while still keeping them in separate files).
I was looking at Test::Builder, but I don't think it caters for cross-file testing.
Just to explain why I want to do this; I have a test file for every subroutine in my module. Some of these subroutines are passed large hashes from other subroutines which are difficult to replicate for testing purposes.
So instead of spending a few hours trying to hard-code a testable hash, I would like the test to be passed one from the code, just as it would be in normal use - but only after the subroutine that generates that hash has itself been tested.
I hope that makes sense! I could write a script to just run the tests in a certain order, but thought that there may well already be a feature in a Perl testing module that I haven't seen. When it comes to using the module, I ideally want to be able to run the tests without having to fiddle about with the 'make test' bit.
The Perl test harness seems to run test files in alphabetical order. This is why so many CPAN distributions have test files that start with numbers - so that the author can control the order of test execution. That will break if, for example, someone runs those tests with prove -s.
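For illustration (these file names are hypothetical), a layout like
t/00_load.t
t/01_build_hash.t
t/02_use_hash.t
will normally run in that numeric order under make test, but prove -s deliberately shuffles it.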
In general a test file should be seen as a completely separate unit of testing. You should be able to run all of your tests in any order without any of them affecting any of the others. If two tests rely on each other then they should be in the same file.
You haven't explained why you're so keen for these tests to be in separate files. Perhaps that's the assumption that you should be questioning.