I use NUnit and TeamCity to run my tests.
Some of my test classes (not all) perform actions in the test class constructor. I call these actions "pre-actions" for the validations. So in one test class I have, for example, 5 validations (tests) and a set of pre-actions.
I noticed that if a suite of tests fails at the pre-action stage, TeamCity doesn't display these tests in its report at all (not under any status).
In the build log I see an error like:
SetUp Error : {test_name} + error code.
What I expect from TeamCity is to report these tests at least as Ignored.
For comparison, when running the same tests in Visual Studio, the same failure condition results in a failure for the whole test suite, with the same failure error reported for every test.
So what I want is simply to know whether some of my tests were not run at all, because if TeamCity doesn't include them in the test results, I don't even know about the problem!
Configs: TeamCity 10.0, NUnit 3.0.
Command line params: --result=TestResult.xml --workers=4 --teamcity
Update: the test execution results in the log look like:
[13:03:48][Step 1/1] Test Run Summary
[13:03:48][Step 1/1] Overall result: Failed
[13:03:48][Step 1/1] Tests run: 82, Passed: 0, Errors: 82, Failures: 0, Inconclusive: 0
[13:03:48][Step 1/1] Not run: 0, Invalid: 0, Ignored: 0, Explicit: 0, Skipped: 0
[13:03:48][Step 1/1] Start time: 2016-09-08 09:56:33Z
[13:03:48][Step 1/1] End time: 2016-09-08 10:03:48Z
[13:03:48][Step 1/1] Duration: 434,948 seconds
So NUnit marks such tests not even as failed but as "errors". I still want them in the test results.
Your tests are errors because you are throwing an exception in the constructor. Since the test fixture can't be constructed, the test is not really being run as far as NUnit is concerned. The fact that it's an NUnit assertion failure causing the exception is irrelevant in the context of constructing the object.
We have always advised people to keep their constructors very simple because NUnit makes no guarantees about when and how often your object will be constructed. Using assertions in the constructor is an extreme violation of that principle and, in fact, I've never seen anyone do it before.
The OneTimeSetUp attribute is there if you want something to happen when your tests are run, as opposed to when the fixture is constructed. NUnit does make guarantees about when that method will be executed. :-)
None of this tells me for sure why TC is not recognizing the error, but I'm guessing it's because once the constructor fails, the tests are never actually run. NUnit itself compensates for that by reporting the tests as errors, but TC would not necessarily do the same.
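To illustrate the suggestion (a minimal sketch, not the asker's actual code; PreconditionHolds() is a hypothetical stand-in for the real pre-action check):

using NUnit.Framework;

[TestFixture]
public class ValidationTests
{
    // Pre-action validations moved out of the constructor. NUnit guarantees
    // this runs once before any test in the fixture, and a failure here is
    // reported against the fixture's tests rather than as a construction error.
    [OneTimeSetUp]
    public void RunPreActions()
    {
        Assert.That(PreconditionHolds(), Is.True, "pre-action validation failed");
    }

    [Test]
    public void Validation1()
    {
        // ... one of the 5 validations ...
    }

    // Hypothetical stand-in for the real pre-action check.
    private static bool PreconditionHolds() => true;
}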
I'm having a silly problem. With NUnit3TestAdapter version 3 under .NET 5, I could see tests as they passed, with execution times detailed (the "Passed Test1" line in the following transcript), as long as verbosity was set to at least normal:
$ dotnet test -v normal
[...]
NUnit Adapter 3.17.0.0: Test execution complete
Passed Test1 [21 ms]
Passed Test2 [< 1 ms]
Test Run Successful.
Total tests: 2
Passed: 2
I recently upgraded to .NET 6 and NUnit adapter 4.2.0, and now I'm unable to display the detailed output, even with the higher (detailed) verbosity:
$ dotnet test -v detailed
[...]
Test run for /tmp/nunit-repro/bin/Debug/net6.0/nunit-repro.dll (.NETCoreApp,Version=v6.0)
Microsoft (R) Test Execution Command Line Tool Version 17.0.0
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Passed! - Failed: 0, Passed: 2, Skipped: 0, Total: 2, Duration: 24 ms - /tmp/nunit-repro/bin/Debug/net6.0/nunit-repro.dll (net6.0)
I've been looking around for some time now and cannot find a relevant configuration option. Am I missing something?
With integration test suites made up of hundreds of tests that take several minutes to run, it's quite frustrating to have no visual progress whatsoever, not knowing whether things are running or hanging.
Someone found the solution for me: add -l "console;verbosity=detailed":
dotnet test -l "console;verbosity=detailed"
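For reference, -l is the short form of --logger, and with recent versions of the test platform the console logger's verbosity is controlled through this logger argument rather than the top-level -v/--verbosity switch, so the equivalent long form is:

dotnet test --logger "console;verbosity=detailed"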
I have 29 Simulink/MATLAB tests that use many different reference models. Before running a 20-second simulation, a test has to load all its reference models and create a lot of simulation artifacts in a work folder. Many reference models are shared between tests.
When running one test at a time, I have no issues: all simulation artifacts are created and used to run the various simulations, and everything passes.
When running them all via parallel processing, I have an issue: some simulation artifacts are not built or are missing, so the simulation fails before it even runs. But surprisingly, not all 29 of them fail; the number is actually random. Last time it was 17, another time 22, and once the run even finished with 0 failures.
Another note: I only have this issue when running on a self-hosted machine on Azure Pipelines for CI purposes.
I would like to fix this issue and reproduce the stable pass/fail results of the one-at-a-time run, but in a parallel run. How would I do that?
Error:
2020-11-03T03:16:27.1083996Z Making simulation target "Foo_src_sfun", ...
2020-11-03T03:16:27.1084227Z
2020-11-03T03:16:27.1084361Z
2020-11-03T03:16:27.1084502Z
2020-11-03T03:16:27.1084789Z Microsoft (R) Program Maintenance Utility Version 14.00.24210.0
2020-11-03T03:16:27.1085188Z Copyright (C) Microsoft Corporation. All rights reserved.
2020-11-03T03:16:27.1085441Z
2020-11-03T03:16:27.1085815Z NMAKE : fatal error U1052: file 'Foo_src_sfun.mak' not found
2020-11-03T03:16:27.1086175Z Stop.
2020-11-03T03:16:27.1089399Z ================================================================================
2020-11-03T03:16:27.1089936Z Error occurred in TestSim/testSim(File=test_FooTest1_slx) and it did not run to completion.
2020-11-03T03:16:27.1090308Z
2020-11-03T03:16:27.1090497Z ---------
2020-11-03T03:16:27.1090720Z Error ID:
2020-11-03T03:16:27.1090946Z ---------
2020-11-03T03:16:27.1091254Z 'Slvnv:simcoverage:SimulationFailed'
2020-11-03T03:16:27.1091481Z
2020-11-03T03:16:27.1091669Z --------------
2020-11-03T03:16:27.1091919Z Error Details:
2020-11-03T03:16:27.1092186Z --------------
2020-11-03T03:16:27.1092419Z Error using cvsim
2020-11-03T03:16:27.1092659Z Simulation failed
2020-11-03T03:16:27.1092864Z
2020-11-03T03:16:27.1093112Z Error in testRunner (line 145)
2020-11-03T03:16:27.1093477Z [cvdo, simOutRes] = cvsim(testObj,paramStruct) ;
2020-11-03T03:16:27.1093765Z
2020-11-03T03:16:27.1094034Z Error in TestSim/testSim (line 30)
2020-11-03T03:16:27.1094373Z [cvdo, simOutRes, ErrLog] = testRunner(File,20);
2020-11-03T03:16:27.1094638Z
2020-11-03T03:16:27.1094830Z Caused by:
2020-11-03T03:16:27.1095168Z Error using autobuild_kernel>autobuild_local (line 219)
2020-11-03T03:16:27.1095612Z Unable to create mex function 'Foo_src_sfun.mexw64'
2020-11-03T03:16:27.1096006Z required for simulation.
2020-11-03T03:16:27.1096427Z ================================================================================
Update:
I found that I also get another kind of error, which leads to pretty much the same result.
2020-11-03T03:18:36.1668328Z Making simulation target "Foo2_src_sfun", ...
2020-11-03T03:18:36.1668601Z
2020-11-03T03:18:36.1668735Z
2020-11-03T03:18:36.1669087Z 'Foo2_src_sfun.bat' is not recognized as an internal or external command,
2020-11-03T03:18:36.1669483Z operable program or batch file.
2020-11-03T03:18:36.1669685Z
2020-11-03T03:18:36.1669892Z >>Removing MiL paths...
2020-11-03T03:18:36.1670104Z >>Done
I made a runSingleTest() that I run before my parallel run. Before the parallel run starts, it creates all the required model-reference mexw64 files in the **/work/sim_artifact folder.
Hence, when the tests run in parallel, they don't need to create any new files; they either use what's already there or update the existing files.
I have had no issues since that change, just a longer run time because of that repetitive test.
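As a sketch of how that can look in the pipeline (assuming MATLAB R2019a or later for the -batch flag; runSingleTest and runParallelTests are hypothetical function names, not from the original post), the warm-up runs serially before the parallel suite:

matlab -batch "runSingleTest"
matlab -batch "runParallelTests"

With the mexw64 artifacts already present in the work folder, the parallel workers only read or update existing files instead of racing to build the same targets.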
My Bamboo build's NUnit Runner task is creating the following result file:
${bamboo.build.working.directory}\bld-output\${bamboo.ACM.AssemblyInformationalVersion}\ACCCMApplication\TestResult.xml
But my NUnit Parser Bamboo task (the next task to run) is failing, logging:
build 06-Nov-2018 06:48:46 Tests run: 1, Errors: 0, Failures: 0, Inconclusive: 0, Time: 2.6861236 seconds
build 06-Nov-2018 06:48:46 Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0
build 06-Nov-2018 06:48:46
simple 06-Nov-2018 06:48:46 Parsing test results under D:\build-dir\ACM-NUNITINT-JOB1...
simple 06-Nov-2018 06:48:46 Failing task since test cases were expected but none were found.
I have tried the following options in the NUnit Parser task for NUnit Test Results File/Directory, with no success. What is the correct way to format the path to this XML?
${bamboo.build.working.directory}\bld-output\${bamboo.ACM.AssemblyInformationalVersion}\ACCCMApplication\TestResult.xml
**/bld-output/${bamboo.ACM.AssemblyInformationalVersion}/ACCCMApplication/TestResult.xml
**/test-reports/*.xml
${bamboo.build.working.directory}/bld-output/${bamboo.ACM.AssemblyInformationalVersion}/ACCCMApplication/TestResult.xml (with outside build checked)
In the end I made it as simple as possible.
The NUnit Runner configuration's Result Filename was left at the root directory (i.e. TestResult.xml),
and the NUnit Parser configuration's NUnit Test Results File was set to **/TestResult.xml.
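For what it's worth (my understanding, not stated in the original answer): Bamboo resolves these Ant-style patterns against the build working directory, so **/TestResult.xml matches the file at any depth, including a path like bld-output\<version>\ACCCMApplication\TestResult.xml.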
I'm having a problem where jasmine-node silently fails if unhandled exceptions happen in a test.
If I run a single file, everything is OK and I get the expected jasmine output:
./node_modules/jasmine-node/bin/jasmine-node spec/unit/accessControlSpec.js
Finished in 0.011 seconds
4 tests, 6 assertions, 0 failures, 0 skipped
But, if I run all specs in a folder, it fails silently.
./node_modules/jasmine-node/bin/jasmine-node spec/unit
Tried --verbose and --captureExceptions but no luck.
In this specific case, some code inside a test was calling a method that didn't exist.
So, it turns out the problem was that I wasn't calling the correct command, because I hadn't installed jasmine-node globally.
The correct way is:
node ./node_modules/jasmine-node/lib/jasmine-node/cli.js ./spec/unit
This is further described here: Command Line Usage
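An alternative (my suggestion, not part of the answer above) is to wire the local binary into an npm script, since npm run prepends node_modules/.bin to PATH:

In package.json:
"scripts": { "test": "jasmine-node spec/unit" }

Then simply run: npm test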
We have a Scala project and we use SBT as its build tool.
Our CI tool is TeamCity, and we build the project using the command line custom script option with the following command:
call %system.SBT_HOME%\bin\sbt clean package
The build process works fine when the build succeeds. However, when compilation fails, TeamCity thinks the script exited with exit code 0 rather than the expected 1, which causes the TeamCity build to succeed even though the compilation failed.
When we run the same commands in a local cmd window, we see that the errorlevel is 1.
The relevant part of the build log:
[11:33:44][Step 1/3] [error] trait ConfigurationDomain JsonSupport extends CommonFormats {
[11:33:44][Step 1/3] [error] ^
[11:33:44][Step 1/3] [error] one error found
[11:33:45][Step 1/3] [error] (compile:compile) Compilation failed
[11:33:45][Step 1/3] [error] Total time: 12 s, completed Jan 9, 2014 11:33:45 AM
[11:33:45][Step 1/3] Process exited with code 0
How can we make TeamCity recognize the failure of the build?
Try explicitly exiting with the sbt exit code:
call %system.SBT_HOME%\bin\sbt clean package
echo the exit code is %errorlevel%
rem Propagate sbt's exit code as the script's exit code
exit /b %ERRORLEVEL%
If you can't get the process to return a non-zero exit code, you could use a build failure condition based on specific text in the build log. See this page for the documentation, but in essence you can make the build fail if it finds the text "error found" in the build log.
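Another option (a sketch on my part, not from the answer above): check ERRORLEVEL in the script yourself and emit a TeamCity service message, which marks the build as failed even if the final exit code is 0:

call %system.SBT_HOME%\bin\sbt clean package
if %ERRORLEVEL% neq 0 (
    rem Tell TeamCity about the failure via a service message
    echo ##teamcity[buildProblem description='sbt compilation failed']
    exit /b 1
)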