NUnit: how to get tests to fail (instead of being ignored) if TestFixtureSetUp fails

We use NUnit for unit tests and TeamCity as our CI server. After each commit all tests are executed, and if any tests fail, e-mail notifications are sent.
All went well, but today I noticed that many tests were ignored. I also saw a message describing the reason:
TestFixtureSetUp failed in MyApplicationTests
I was confused that these tests were ignored rather than failed. My concern is that developers will think all is going well when the tests were actually not run (ignored).
Question: how do I configure NUnit to fail tests (instead of ignoring them) if TestFixtureSetUp fails?
Maybe we could configure TeamCity to send e-mail notifications when tests are ignored, but that is not what I want: we have some tests deliberately marked with the Ignore attribute, so the notification would be sent every time and become useless.
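For reference, here is the shape of fixture where this behavior shows up; a minimal sketch, assuming NUnit 2.x (where the one-time setup attribute is named TestFixtureSetUp) and made-up test content:

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class MyApplicationTests
    {
        [TestFixtureSetUp] // runs once before all tests in the fixture
        public void FixtureSetUp()
        {
            // If this throws, NUnit reports every test in the fixture as
            // not run ("TestFixtureSetUp failed in MyApplicationTests")
            // instead of marking them failed.
            throw new InvalidOperationException("could not initialize test data");
        }

        [Test]
        public void SomeTest()
        {
            Assert.IsTrue(true);
        }
    }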

TeamCity cannot filter this event and report it differently to you, and there seems to be no way of programmatically failing all the tests in a fixture from the TestFixtureSetUp callback.
So, in my opinion, you have no choice but to monitor the Ignores in your build results closely. There seems to be no automatic way of distinguishing them from the tests you are deliberately ignoring.
As a side note, in my career, whenever I or my colleagues marked tests with the Ignore attribute it was never temporary; it was always permanent. We should use the Ignore flag very carefully, in my opinion.

Related

Required variables at queue time

When running our Release build (which ultimately labels and versions a changeset), I want the variables to be supplied at queue time, for example 1.0.23.
Is there any way to set these variables as required in order to execute the build?
This new "vNext" build platform is incredibly difficult to Google for.
The best I have come up with so far is to add a task, as the first step in the first phase of the build, that checks that the required variables are set. If any are not, it fails the build.
I use PowerShell for this:
# fail fast if the required queue-time variable is missing
if ([string]::IsNullOrWhiteSpace($env:Major)) { throw "Major not set" }
This is not ideal, as the build still has to wait to be scheduled on an agent, sync sources, etc., before the validation code runs and fails the build. But it's still better than building everything only to have, say, packaging (step 14 of 15) fail because the version wasn't set.
I've opened a feature request on the VSTS UserVoice page asking for "required queue variables".

NUnit tests being aborted randomly (Involves ServiceStack & RavenDB)

We have a project where we use ServiceStack and RavenDB. Testing is done using NUnit.
When running the tests individually everything works fine.
When running more than one test, a few will do their thing (pass/fail), but very often one of the tests will be aborted and all subsequent tests will not run.
Which test aborts is seemingly random. The more tests that are being run the higher the chance that one will be aborted.
Judging from the test log, though, the aborted test does seem to run through all of its actions.
Unfortunately I'm not able to give more info besides the following files which show the way our tests are set up.
IntegrationBaseTest.cs (Base test class)
GlobalSetupFixture.cs
AccountServiceTests.cs (Example file with tests)
test log (Log of aborted test, in this case DeleteAccount_DeletesAccount)
Result view of running all tests in AccountServiceTests.cs (which test gets aborted is completely random).
Does anyone have any idea what I could try to fix this? :)
It turned out that with logging disabled, the tests ran normally without aborting.
I'm not sure what caused them to abort, but I suspect the JetBrains task runner was running out of memory because of all the log output.
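For anyone hitting the same symptom, one way to disable logging for the whole run is from the global setup fixture; a sketch, assuming ServiceStack's LogManager is the logging entry point (NUnit 2.x attribute names):

    using NUnit.Framework;
    using ServiceStack.Logging;

    [SetUpFixture]
    public class GlobalSetupFixture
    {
        [SetUp] // in NUnit 2.x, a SetUpFixture's SetUp runs once before any tests
        public void RunBeforeAnyTests()
        {
            // Route all ServiceStack logging to a no-op logger so the test
            // runner isn't flooded (and possibly exhausted) by log output.
            LogManager.LogFactory = new NullLogFactory();
        }
    }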

Continue running NUnit after failures

I am running nunit-console from a CI configured in TeamCity to run tests from various assemblies. Once one of the TestFixtures has a failing test, test execution stops.
Currently I can see the first tests that failed, but I don't know whether more test fixtures would fail further down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw xml output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty xslt that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool, and see if it runs all of the tests. If they run on your box but not the server, you may have a configuration problem on the build box.
Yet another possibility is that the failure is a critical one (failure to load an assembly perhaps) which is causing NUnit itself to error out.
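If you want to double-check the raw results yourself, here is a small sketch that counts and lists failures from an NUnit 2.x TestResult.xml (attribute names assumed per that schema; the file path is a placeholder). It also gives the kind of summary asked for above, without the exception details:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class ResultSummary
    {
        static void Main()
        {
            var doc = XDocument.Load("TestResult.xml");

            // A failed test was executed but did not succeed.
            var failed = doc.Descendants("test-case")
                .Where(tc => (string)tc.Attribute("executed") == "True"
                          && (string)tc.Attribute("success") == "False")
                .ToList();

            Console.WriteLine("Failed tests: " + failed.Count);
            foreach (var tc in failed)
                Console.WriteLine("  " + (string)tc.Attribute("name"));
        }
    }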

Tests fail sporadically using CruiseControl.NET with NUnit: error 800704a6

My partner and I have a suite of tests running nightly on a build server for our project. We use CruiseControl.NET to run the server, and the tests are written using WatiN and NUnit. We have CruiseControl.NET running as a service with access to interact with the desktop on a local system account. Every few times that we run a build, certain tests will fail with error messages such as the following:
Test: cfarmweb.tests.Views.GeneralRegressionTest.DuplicateUsernameTest
Type: Failure
Message: SetUp : System.Runtime.InteropServices.COMException : Creating an instance of the COM component with CLSID {0002DF01-0000-0000-C000-000000000046} from the IClassFactory failed due to the following error: 800704a6. TearDown : System.NullReferenceException : Object reference not set to an instance of an object.
at WatiN.Core.IE.CreateNewIEAndGoToUri(Uri uri, IDialogHandler logonDialogHandler, Boolean createInNewProcess)
at WatiN.Core.IE..ctor(String url)
at cfarmweb.tests.Navigator.SiteNavigator..ctor(String browserName, Boolean visible) in c:\ccworkdir\CFarm\builddir\cfarmweb.tests\Navigator\SiteNavigator.cs:line 35
at cfarmweb.tests.Views.GeneralRegressionTest.MakeNavigator() in c:\ccworkdir\CFarm\builddir\cfarmweb.tests\Views\GeneralRegressionTest.cs:line 34
--TearDown
at WatiN.Core.Browser.OnGetNativeDocument()
at WatiN.Core.DomContainer.get_NativeDocument()
at WatiN.Core.Document.ContainsText(String text)
at cfarmweb.tests.Navigator.SiteNavigator.HasText(String target) in c:\ccworkdir\CFarm\builddir\cfarmweb.tests\Navigator\SiteNavigator.cs:line 213
at cfarmweb.tests.Navigator.SiteNavigator.SignOut() in c:\ccworkdir\CFarm\builddir\cfarmweb.tests\Navigator\SiteNavigator.cs:line 110
at cfarmweb.tests.Views.GeneralRegressionTest.DisposeNavigator() in c:\ccworkdir\CFarm\builddir\cfarmweb.tests\Views\GeneralRegressionTest.cs:line 123
The success of the builds does not seem to be dependent on changes to the code itself, as we have had builds break or be fixed after changes to parts of the program that are unrelated to the tests.
We are both new to the field of software testing (and development in general), but nothing we've found online about this error seems to pertain to our situation. We've seen suggested causes ranging from a pending system reboot to compatibility issues with Internet Explorer 8 to JavaScript errors, but nothing we've tried has fixed the issue. One of the most difficult parts is that it's not consistently reproducible. How can we fix this problem?
Ben,
I had the exact same issue, surprisingly enough... I think I have the solution. It appears to be a threading issue. The [RequiresSTA] attribute at the top of a test fixture is meant to make each test single-threaded by implicitly applying [STAThread] to each method. However, I am inclined to believe that these attributes are not applied to the [SetUp] or [TearDown] methods, creating threading issues. I have hopefully resolved the issue by placing the [STAThread] attribute explicitly on each method (including the [SetUp] and [TearDown] methods). I will let you know of any further changes, but it's worth a shot.
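In code, the workaround described above looks roughly like this; a sketch with names taken from the stack trace (whether [STAThread] on [SetUp]/[TearDown] is honored depends on the NUnit version):

    using NUnit.Framework;
    using WatiN.Core;

    [TestFixture]
    [RequiresSTA] // asks NUnit to run the fixture's tests on an STA thread
    public class GeneralRegressionTest
    {
        private IE _browser;

        [SetUp]
        [STAThread] // applied explicitly, per the workaround above
        public void MakeNavigator()
        {
            _browser = new IE("http://localhost/");
        }

        [TearDown]
        [STAThread]
        public void DisposeNavigator()
        {
            if (_browser != null) _browser.Dispose();
        }

        [Test]
        [STAThread]
        public void DuplicateUsernameTest()
        {
            Assert.IsTrue(_browser.ContainsText("Register"));
        }
    }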
There are some similar issues related to Watin and IE8.
Running Watin on TeamCity
failed due to the following error: 800704a6 while trying to read data from a text file in teamcity
https://serverfault.com/questions/179156/ie8-script-error-800704a6
From what I understand: make sure your Windows is fully updated, restart once more just to be sure, and check whether the problem persists.
If it does, try running CCNet not in service mode.
If there is still no luck, try adjusting the Internet Options security settings to determine whether they affect the problem.
HTH

Fail build via REST API with TeamCity, from another build config, after build is already finished

I have some integration tests that get kicked off by TeamCity on a successful build. I have had success using the TeamCity REST API in order to tag the build as passed or failed, but would actually like to mark the build status as passed or failed (in the same way builds are failed due to compilation or unit test failures).
The documentation for the REST API is pretty sparse. Is it just not possible to do this through the REST API, or is it merely undocumented?
Clarification:
Current process is as follows:
"App" TC Build configuration actually builds the application and runs the unit tests.
"Test" TC Build configuration depends on "App" configuration completing successfully. If "App" builds successfully (no compile or unit test failures), "Test" configuration kicks off, which pulls down the build artifacts and runs the live integration tests on the application. Prior to these tests being run, "App" configuration has a status of passing, since it compiled successfully and there were no unit test failures.
What I am trying to do is to change "App" config status to failed, if the "Test" configuration failed. Currently I am merely tagging "App" as passed or failed, but the actual build status is always passing. Essentially I am trying to get the change log or history to show the red X icon for a failed build, rather than the green check mark.
"App" and "Test" are 2 separate TeamCity build configurations. Since they are separate, Build Script Interaction, as suggested by #sharma, will not do the trick, since Build Script Interaction can be used to fail/update the currently running build configuration, whereas I am trying to update/fail a separate and already completed build configuration.
Why do we have 2 separate configs and not just run the tests from the main build? Speed of course! The integration tests take up to 10 minutes to run, and we don't want to slow down the compile cycle just because the integration tests are running.
Actually, you can change the build status even after the build has finished, with the following undocumented request (you need the buildId of the build you want to change):
curl -v --request POST "http://your-teamcity-url/ajax.html" -u login:password --data "comment=Your reason to fail build" --data "status=FAILURE" --data "changeBuildStatus=buildId"
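If you'd rather issue the same undocumented request from code than shell out to curl, here is a C# sketch (the URL, credentials, and build id are placeholders, and since the endpoint is unsupported it may change between TeamCity versions):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class FailFinishedBuild
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Basic auth, same as curl's -u login:password
                var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("login:password"));
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", token);

                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "comment", "Your reason to fail build" },
                    { "status", "FAILURE" },
                    { "changeBuildStatus", "12345" } // buildId of the finished build
                });

                var response = await client.PostAsync("http://your-teamcity-url/ajax.html", form);
                Console.WriteLine(response.StatusCode);
            }
        }
    }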
You should be able to do it through build script interaction.
UPDATE: Look here; it should have "reporting messages for build logs". If the following message is printed to the console in whichever application build you are running, the TeamCity build will fail and show it as an error. If you change the status to FAILURE it will still fail. There is more information at the link I provided. An example message you may want to print out:
"##teamcity[message text='Exception text' errorDetails='stack trace' status='ERROR']"
So the answer to my original question (is it possible to use the REST API to mark a build as failed from another build configuration?) is that it is not possible.
Per TeamCity support: there is no way to change a build's status after it has finished. This is not a limitation of the REST API; the feature is simply not implemented in TeamCity.
Here is a related feature request in our tracker: http://youtrack.jetbrains.net/issue/TW-2529
(I upvoted @sharma's answer and comments, as they were definitely informative, but ultimately not a solution to my problem.)