NUnit console - redirecting error output with /err seems to have no effect

I've got an NUnit project with some tests and they all run as expected. Now I want to automate running these tests.
I'm trying to use some of the redirect options so I can separate the test output, but whatever combination I use, all I seem to get is the standard TestResult.xml. I can use /out:AnotherOut.txt OK, but I'm really interested in capturing the error output using /err:TestErrors.txt.
Command line is:
(NunitConsole App) /nologo /framework:net-4.0 MyTestProject.nunit /include=Integration /err=TestErrors.txt
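My understanding is that /err should capture whatever the tests themselves write to standard error, so a stripped-down (hypothetical) test like the one below is the sort of thing I'd expect to produce content in TestErrors.txt:
using System;
using NUnit.Framework;

[TestFixture]
public class ErrorOutputSmokeTest
{
    [Test, Category("Integration")]
    public void WritesToStandardError()
    {
        // Anything written to Console.Error during the run is what /err is meant to capture.
        Console.Error.WriteLine("This line should end up in TestErrors.txt");
    }
}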

Related

Using the --glob_ignores flag of SVN_Load_Dirs.PL

I am attempting to use the SVN_Load_Dirs.PL script (https://svn.apache.org/repos/asf/subversion/trunk/contrib/client-side/svn_load_dirs/) to merge a platform drop.
However, I can't get the --glob_ignores flag to behave as I'd expect, and I know so little Perl that I can't dig into the script to understand why. The format I am using is:
--glob_ignores=*.jazzignore
Where I want to ignore all .jazzignore files (although I am fine with anything with "jazzignore" in either the extension or name being ignored). I've looked for examples but can't find any actual usage of this flag anywhere. What I am looking for is a way to ignore all .jazzignore files and a few entire directories (like jazz5, for example).
I assumed the flag would then be --glob_ignores="*.jazzignore *.jazz5" but that doesn't appear to be working.
It turns out that the script was failing silently because I was running it from a Windows cmd prompt (and therefore never getting to the flag option). Apparently there are some issues running it this way, and it was designed to be run from a Linux command line. After switching, it works perfectly.

Running a specific browser NUnit test using the command line

I am trying to run only the Firefox tests using the command line for NUnit, but I'm not sure how to. The code is as follows.
[TestFixture(typeof(FirefoxDriver))]
[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
public class TestCases<TWebDriver> where TWebDriver : IWebDriver, new()
{}
This is no problem with the NUnit GUI, since each fixture is listed separately; however, our requirements force us to run it from the command line.
I'm looking at something like nunit-console.exe /fixture:"value" /xml:... etc., or any other implementation.
Thanks!
The best way I've found to do this is to specify a category along with the driver, such as: [TestFixture(typeof(string), "Chrome")]
Then you can specify the category in the console arguments with nunit-console.exe foo.dll /include:Chrome
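A rough sketch of how that can hang together, assuming your NUnit version supports the Category property on TestFixture (I believe newer 2.6.x builds do); the page URL and test body here are made up for illustration:
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;

[TestFixture(typeof(FirefoxDriver), Category = "Firefox")]
[TestFixture(typeof(ChromeDriver), Category = "Chrome")]
public class TestCases<TWebDriver> where TWebDriver : IWebDriver, new()
{
    private IWebDriver driver;

    [SetUp]
    public void CreateDriver()
    {
        // Each fixture instantiation creates its own driver type.
        driver = new TWebDriver();
    }

    [TearDown]
    public void QuitDriver()
    {
        driver.Quit();
    }

    [Test]
    public void HomePageHasTitle()
    {
        driver.Navigate().GoToUrl("http://example.com/");
        Assert.That(driver.Title, Is.Not.Empty);
    }
}
Running nunit-console.exe YourTests.dll /include:Firefox would then run only the Firefox fixture.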

Perl debugger on test modules

I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears, so the ideal scheme would be to add the -d option to the top (shebang) line of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edited copy of the .t program that:
Uses the module directly (a plain use statement), not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
Plus a few other edits that cause gluteal pains.
The problem with doing that is that any changes I make in the edited test program still have to be transferred back to the real test program used in "make test". Error prone at best.
I am already using "make test TEST_VERBOSE=1" so that my stdio output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you run your test directly under the debugger:
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.

Where should TeamCity service messages be written?

I'm a beginner with TeamCity, so forgive my dumb question.
For some reason the coverage reporting for my solution is not working. So, to run the tests I run nunit-console in a command line step and then use the XML output file in a build feature of type [XML report processing]. Test results appear in the TeamCity GUI, but no coverage statistics.
It seems there is a way to configure the test reporting manually (https://confluence.jetbrains.com/display/TCD8/Manually+Configuring+Reporting+Coverage), but I don't know where to put these service messages:
teamcity[dotNetCoverage <key>='<value>' <key2>='<value2>' ...]
Just write them to standard output. It is captured by TeamCity, and any service messages in it will be processed.
Pay attention, however, to the syntax: a service message should begin with ##.
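For example, a trivial C# console step could emit them like this (the key/value pair is a placeholder; the real dotNetCoverage attribute names are listed on the JetBrains page above):
using System;

class EmitServiceMessages
{
    static void Main()
    {
        // Any stdout line starting with ##teamcity is picked up and parsed by the agent.
        Console.WriteLine("##teamcity[dotNetCoverage someKey='someValue']");
    }
}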
As Oleg already stated, you can dump them to standard output:
Console.WriteLine(...) from C#
echo from command prompt or PowerShell,
...
Here is an example: http://log.ld.si/2014/10/20/build-log-in-teamcity-using-psake
There is a psake helper module, https://github.com/psake/psake-contrib/wiki/teamcity.psm1, and the source is available at https://github.com/psake/psake-contrib/blob/master/teamcity.psm1 (you can freely use this from PowerShell as well).
It already implements a lot of the service messages.

Output Selenium test results as HTML after running a Perl script

I am currently looking for a way to output the test results nicely after running a Selenium Perl script.
The htmlSuite command of the Selenium server outputs a nicely formatted HTML result page, but I don't know how to do that from a Perl script.
The problem is that I have it set up so that Selenium runs 24/7 on a virtual machine workstation (Windows 7) that anyone can run tests on. Therefore I can't use htmlSuite to run the tests, because the server shuts down after the test is finished.
Is there a command-line argument or Perl method to make the Selenium server output results in HTML or another nice format, rather than just printing them on the command line?
Or is there a better way to do this?
If your script outputs TAP (which is what Test::More produces), then you can use the Test::Harness family of modules to parse that TAP and use it to generate an HTML report.
How nice is nice? Under Hudson/Jenkins this gives graphs and a tabular report of tests run:
prove --timer --formatter=TAP::Formatter::JUnit large_test.t >junit.xml