
NUnit tests being aborted randomly (Involves ServiceStack & RavenDB)
We have a project where we use ServiceStack and RavenDB. Testing is done using NUnit.
When running the tests individually everything works fine.
When running more than one test, a few will do their thing (pass/fail), but very often one of the tests will be aborted and all subsequent tests will not run.
Which test aborts is seemingly random; the more tests being run, the higher the chance that one will be aborted.
Judging from the test log, the aborted test does seem to run through all of its actions.
Unfortunately I'm not able to give more info besides the following files which show the way our tests are set up.
IntegrationBaseTest.cs (Base test class)
GlobalSetupFixture.cs
AccountServiceTests.cs (Example file with tests)
test log (Log of aborted test, in this case DeleteAccount_DeletesAccount)
Result view of running all tests in AccountServiceTests.cs (which test gets aborted is completely random).
Does anyone have any idea of what I could try to fix this? :)

It turned out that when logging was disabled, the tests ran normally without aborting.
I'm not sure exactly what caused them to abort, but I suspect the JetBrains task runner was running out of memory because of the volume of log output.
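If runaway log output is indeed overwhelming the test runner, one mitigation is to silence logging during test runs. A minimal sketch, assuming log4net is the logging framework in use (adjust for whatever your project actually uses):

```xml
<!-- App.config of the test project: turn all log4net output off -->
<log4net>
  <root>
    <level value="OFF" />
  </root>
</log4net>
```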

Related

Protractor/cucumberjs rerunning failed tests/cucumber features/specs

Given that automated UI tests sometimes fail due to flakiness, an ability to rerun only the failed tests becomes incredibly useful in a framework like protractor.
Unfortunately, as of 09/13/2016, there's no way to rerun failed tests with protractor.
A. How do you guys rerun your failed tests? Ideally, I'd like suggestions/ideas from people using CucumberJS, the JavaScript implementation of Cucumber.
There's protractor-flake, developed by Nick Tomlin, to address this problem, but that module doesn't always work with multiCapabilities, where you're trying to run your tests in parallel.
This question, How to rerun the failed scenarios using Cucumber?, almost answers it; the problem is: how do I use that command (cucumber -f rerun --out rerun.txt) to rerun my tests AND run protractor in parallel? That command might only work when you're not parallelizing your protractor tests.
B. How would you use that cucumber command to run your tests in parallel?
Please answer questions A and B above. Thanks again!
So far I have found the following tool, protractor-flake, that will rerun failed protractor tests:
***GitHub***: https://github.com/NickTomlin/protractor-flake
***NPM***: https://www.npmjs.com/package/protractor-flake
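The core idea behind both the rerun formatter and protractor-flake can be sketched as a retry loop around the test command. This is a generic sketch, not protractor-flake itself; `run_suite` is a deterministic stand-in for your real `protractor`/`cucumber-js` invocation (it fails once, then passes, so the rerun path is exercised):

```shell
# Generic "rerun failed tests" loop, in the spirit of protractor-flake.
max_attempts=3
flag="./flaky_passed.marker"
rm -f "$flag"

run_suite() {
  # Stand-in for e.g. `protractor conf.js` or `cucumber-js -f rerun --out rerun.txt`.
  # This stub fails on the first call and passes on the second.
  if [ -f "$flag" ]; then
    return 0
  fi
  touch "$flag"
  return 1
}

attempt=1
status=1
while [ "$attempt" -le "$max_attempts" ]; do
  if run_suite; then
    status=0
    echo "suite passed on attempt $attempt"
    break
  fi
  echo "attempt $attempt failed, rerunning failed specs"
  attempt=$((attempt + 1))
done
rm -f "$flag"
```

protractor-flake wraps essentially this loop around protractor and re-feeds only the specs that failed; the caveat from the question stands, in that the parallel/multiCapabilities case is where it gets unreliable.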

Why is compilation a must in DataStage before a run?

Is it necessary to compile a job every time we try to run it? I have not modified anything in the job, but every time I try to run it, it asks me to compile. Why is this necessary, and what happens during compilation?
Compiling generates the binaries and scripts that actually run the job, so compiling before the first run is required. Once compiled, you can run the job again and again without recompiling, as long as it ran successfully. After a job abort or failure, a recompile is required.
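For what it's worth, repeated runs of an already-compiled job are typically kicked off without recompiling, for example from the dsjob command-line client (a sketch; the project and job names are placeholders):

```shell
# Run an already-compiled DataStage job and wait for its final status.
dsjob -run -jobstatus MyProject MyJob
```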
This is a huge weakness of DataStage. Recompiling unchanged code is an extreme nuisance at best; the product owners of DataStage should eliminate this long-standing problem.

NUnit: how to get tests failed (instead of ignored) when TestFixtureSetUp fails

We use NUnit for unit tests and TeamCity as our CI server. After each commit, all tests are executed. If some tests fail, e-mail notifications are sent.
All went well until today, when I noticed that many tests were ignored. I also saw a message describing the reason:
TestFixtureSetUp failed in MyApplicationTests
I was confused as to why these tests were ignored rather than failed. My concern is that developers will think all is well when the tests were actually not run (ignored).
Question: how do I configure NUnit to fail tests (instead of ignoring them) when TestFixtureSetUp fails?
Maybe we could configure TeamCity to send e-mail notifications when tests are ignored, but that is not what I want: we have some tests deliberately marked with the Ignore attribute, so a notification would be sent every time and become useless.
TeamCity cannot filter this event and report it differently to you, and there seems to be no way of programmatically failing all the tests in a fixture from the TestFixtureSetUp callback.
So, in my opinion, you have no choice but to closely monitor the Ignores in your build results. There seems to be no automatic way of distinguishing them from the tests you are actually ignoring.
As a side note: in my career, whenever I or my colleagues marked tests with the Ignore attribute, it was never temporary; always permanent. We should use the Ignore flag very carefully, in my opinion.
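The monitoring can at least be automated: NUnit's TestResult.xml records a not-run count, so a CI step can compare it against the number of deliberately [Ignore]d tests and flag any surplus. A rough sketch; the inline XML is a made-up sample standing in for your real result file, and attribute names differ between NUnit versions:

```shell
# Flag the build when NUnit reports more not-run tests than we deliberately ignore.
cat > TestResult.xml <<'EOF'
<test-results name="MyApplicationTests" total="10" errors="0" failures="0" not-run="4" />
EOF

notrun=$(sed -n 's/.*not-run="\([0-9]*\)".*/\1/p' TestResult.xml)
expected_ignores=1   # tests intentionally marked with [Ignore]

if [ "$notrun" -gt "$expected_ignores" ]; then
  echo "UNEXPECTED IGNORES: $notrun not-run tests, expected at most $expected_ignores"
else
  echo "ignore count OK"
fi
```

A TestFixtureSetUp failure inflates the not-run count past the expected baseline, so the step distinguishes it from your intentional Ignores without any manual inspection.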

Continue running NUnit after failures

I am running nunit-console from a CI configured in TeamCity to run tests from various assemblies. Once one of the TestFixtures has a failing test, test execution stops.
Currently I am able to see the first tests that failed, but I am unaware whether there are more test fixtures that might fail further down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw xml output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty xslt that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool, and see if it runs all of the tests. If they run on your box but not the server, you may have a configuration problem on the build box.
Yet another possibility is that the failure is a critical one (failure to load an assembly perhaps) which is causing NUnit itself to error out.
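If the raw XML does turn out to contain all the results, the summary asked for above (failing test names without the exception details) can be pulled straight from it. A sketch against NUnit 2-style output; the inline file is a made-up sample standing in for the real TestResult.xml:

```shell
# List only the names of failed test cases from an NUnit result file.
cat > TestResult.xml <<'EOF'
<test-results total="3" failures="2" not-run="0">
  <test-case name="Tests.AccountService.Foo" executed="True" success="False" />
  <test-case name="Tests.AccountService.Bar" executed="True" success="True" />
  <test-case name="Tests.OrderService.Baz" executed="True" success="False" />
</test-results>
EOF

failed=$(grep 'success="False"' TestResult.xml | sed 's/.*name="\([^"]*\)".*/\1/')
echo "$failed"
```

This prints one failing test name per line, which is enough for a quick pass/fail overview without wading through stack traces.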

Xcode doesn't finish test build while at "Run Script" phase

When trying to build the unit tests created using the default Xcode Unit Test bundle target, it looks like it gets stuck on the "Run custom shell script 'Run Script'" phase.
I also notice a high cpu usage on process "otest" to the point where the fans kick in within seconds.
The only useful message I see when expanding the line is
/Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF)
Couldn't open shared capabilities memory GSCapabilities (No such file or directory)
The only option I have at that time is to stop the build.
I have to say I was running unit tests perfectly fine up to this point, but I can't say for sure what I did to cause this.
That's on Xcode 3.2.4.
After updating to 3.2.5, the run script now fails with an error:
Test rig '/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.2.sdk/Developer/usr/bin/otest' exited abnormally with code 138 (it may have crashed).
I guess the problems are related?
I did find some answers on SO about how exception handling now works differently when using NSInvocation (which otest seems to use), but no real solution to this.
I had this happen to me. I made it go away by scrapping my old testing target profile, creating a new one, and pointing all my tests to it. I was too frustrated to compare the profiles line by line to figure out what had changed.
This looks like an infinite loop to me. Try adding some NSLog statements and/or debugging your tests with gdb (by adding otest as a custom executable).
This happened to me after updating to Xcode 9 while using a script to update the localizable strings file; a minor bug caused the script to never finish. After updating BartyCrouch, everything worked normally.
https://github.com/Flinesoft/BartyCrouch/issues/66