SpecFlow: raise build error when ambiguous steps are found (NUnit)

We are using SpecFlow 3 with NUnit and run our automation tests in CI/CD pipelines.
When someone checks in code that results in an ambiguous step definition, we are not able to catch it at design time. Is there any way to catch these during the build and fail the build in the pipeline?

Sadly, this is not possible. The evaluation of the bindings is done completely at runtime, not at compile time.
You can only configure how missing or pending steps are reported: as an error or as an inconclusive warning.
The property is called missingOrPendingStepsOutcome - https://docs.specflow.org/projects/specflow/en/latest/Configuration/Configuration.html#runtime
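For reference, in SpecFlow 3 this setting lives in the specflow.json file next to the test project. A minimal sketch (note this controls how missing or pending steps are reported, not ambiguous ones, which already fail at runtime):

```json
{
  "runtime": {
    "missingOrPendingStepsOutcome": "Error"
  }
}
```

With "Error", missing or pending steps fail the test run, which in turn fails the pipeline build.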

Related

Eclipse Cucumber intelligence failed to show undefined step definition warning

Eclipse Cucumber failed to show the warning for undefined step definitions
What do you expect to happen? This is normal behavior. The Cucumber library doesn't run in the background to warn you that a test step in a feature file doesn't have the required step definition. Think about what you are asking. If the plugin did that during development, how often will it warn you? You can also have dozens of steps not fully implemented. Instead, the plugin depends on the developer to do his or her job and not create test steps that are not fully implemented. After all, you should not be pushing feature files that have not been validated with at least one successful run. So... what's the harm in giving you the warning when it is trying to execute the step?

TeamCity gives error while running NUnit runner: "Has no TestFixture"

I am trying to run some unit tests in a TeamCity build configuration. I am using NUnit.ConsoleRunner 3.6.1. However, it says "Has no TestFixture". But when I tried to run it in a command prompt using the same runner, it completed successfully. What are the possible reasons?
Really, more info is needed to judge what your problem is. However, most errors of that type indicate that the NUnit framework has not been deployed with the test assembly that uses it.
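A quick way to check for that deployment problem is to verify that nunit.framework.dll sits next to the test assembly in the directory handed to the console runner. A minimal sketch (the directory path is an assumption; adjust it to your build output layout):

```python
import os

# Directory passed to nunit3-console -- illustrative path, adjust to
# wherever your build copies the test assembly.
test_dir = os.environ.get("TEST_DIR", os.path.join("bin", "Release"))

framework = os.path.join(test_dir, "nunit.framework.dll")
if os.path.isfile(framework):
    print("nunit.framework.dll found - runner should discover fixtures")
else:
    print("nunit.framework.dll missing - runner may report 'Has no TestFixture'")
```

Running a check like this as an early build step makes the misconfiguration visible before the runner produces its confusing message.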

NDepend VSTS Build task runs but does not produce any result

I have configured the NDepend VSTS task as indicated, but the dashboard stays empty and redirects me to the configuration doc.
I also checked the "Stop the Build when at least one Quality Gate fails" option, and even though NDepend detects 2 failed quality gates, the build is still reported as successful.
Here is the NDepend task logs:
##[section]Starting: NDependTask
-------------------------------------------------------------
Task : NDepend Task
Description : NDepend Task
Version : 1.7.0
Author : NDEPEND
Help : Replace with markdown to show in help
-------------------------------------------------------------
Preparing task execution handler.
Executing the powershell script: d:\a_tasks\NDependTask_94137ea2-81f0-411a-9527-b1400d722332\1.7.0\ndepend.ps1
System.Management.Automation.ParameterBindingValidationException
Cannot validate argument on parameter 'Url'. The argument is null or empty.
Provide an argument that is not null or empty, and then try the command again.
No previous build analyzed by ndepend is found to compare with.
##[warning]The ndproj file is not defined, the default one will be used
D:\a_tasks\NDependTask_94137ea2-81f0-411a-9527-b1400d722332\1.7.0\Integration\VSTS\VSTSAnalyzer.exe /outputDirectory "d:\a\1\a" /sourceDirectory "d:\a\1\s" /excludePattern ".test." /identifier "default" /hub "https://laedit2.visualstudio.com/IASI/_apps/hub/ndepend.ndependextension.NDepend.Hub" /coverageDir "d:\a\1\TestResults;d:\a\1\s;d:\a\1\s" /stopBuild /errorCode 1
Run Analysis!
2 quality gates fail.
- 'Critical Rules Violated' value 2 rules greater than fail threshold 0 rules
- 'Debt Rating per Namespace' value 1 namespaces greater than fail threshold 0 namespaces
##[error]Unexpected exit code 1 returned from tool VSTSAnalyzer.exe
##[section]Finishing: NDependTask
Do I need to configure something else?
The Visual Studio integration of NDepend works perfectly with the same ndproj on my computer.
EDIT:
I use the trial version of the task.
The problem is reproducible with the following steps:
new console application (.net 4.5.2)
NDepend menu in Visual Studio / Attach new NDepend project to solution
publish the project to VSTS and create this build definition, based on the one Visual Studio proposed:
And the NDepend Build Task:
VSTS Build result despite the Unexpected exit code 1 returned from tool VSTSAnalyzer.exe:
Here are the project with ndproj and the build logs.
I have noticed the following exception:
System.Management.Automation.ParameterBindingValidationException
Cannot validate argument on parameter 'Url'. The argument is null or empty.
Provide an argument that is not null or empty, and then try the command again.
But I cannot find the 'Url' parameter anywhere in the NDepend task definition, so I don't know if it is related.
Following an email exchange with the VSTS team from NDepend, it appears that there were issues in the NDepend VSTS Build Task, but they have since been fixed.
That said, it is worth noting that if the "Stop the Build when at least one Quality Gate fails" option is checked and some of your project's quality gates fail, the NDepend result won't be stored.
So if your project has never had a successful build, the NDepend dashboard will redirect you to the "How-To" section.

NUnit: how to get tests failed (instead of ignored) when TestFixtureSetUp fails

We use NUnit for unit tests and TeamCity as our CI server. After each commit all tests are executed. If some tests fail, e-mail notifications are sent.
Everything went well, but today I noticed that many tests were ignored. I also saw a message describing the reason:
TestFixtureSetUp failed in MyApplicationTests
I was confused why these tests were ignored rather than failed. My concern is that developers will think all is well when the tests were actually not run (ignored).
Question: how do I configure NUnit to fail tests (instead of ignoring them) when TestFixtureSetUp fails?
Maybe we could configure TeamCity to send e-mail notifications when tests are ignored, but that is not what I want, because we have some tests marked with the Ignore attribute. A notification would then be sent every time and become useless.
TeamCity cannot filter this event and report it differently to you. There seems to be no way of programmatically failing all the tests in a fixture from the TestFixtureSetUp callback.
So, in my opinion, you have no choice but to closely monitor the Ignores in your build results. There seems to be no automatic way of distinguishing them from the tests you are deliberately ignoring.
As a side note, whenever I or my colleagues marked tests with the Ignore attribute, it was never temporary; it was always permanent. We should use the Ignore flag very carefully, in my opinion.
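Monitoring those Ignores can be partly automated by scanning the NUnit result XML and treating ignores whose reason mentions TestFixtureSetUp as failures. A hedged sketch; the XML shape below follows the NUnit 2-style TestResult.xml, so check it against your runner's actual output:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment in the shape of an NUnit 2 TestResult.xml.
SAMPLE = """
<test-results>
  <test-case name="MyApplicationTests.TestA" executed="False" result="Ignored">
    <reason><message>TestFixtureSetUp failed in MyApplicationTests</message></reason>
  </test-case>
  <test-case name="MyApplicationTests.TestB" executed="False" result="Ignored">
    <reason><message>Flaky on CI, see BUG-123</message></reason>
  </test-case>
</test-results>
"""

root = ET.fromstring(SAMPLE)
# Ignored tests whose reason points at a fixture setup failure, as
# opposed to tests deliberately marked with the Ignore attribute.
suspicious = [
    tc.get("name")
    for tc in root.iter("test-case")
    if tc.get("result") == "Ignored"
    and "TestFixtureSetUp failed" in (tc.findtext("reason/message") or "")
]
print(suspicious)
```

A CI step could then exit non-zero when the list is non-empty, turning these silent ignores into a visible build failure.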

Continue running NUnit after failures

I am running nunit-console from a CI configured in TeamCity to run tests from various assemblies. Once one of the test fixtures has a failing test, the test execution stops.
Currently I am able to see the first tests that failed, but I don't know whether more test fixtures might fail further down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw xml output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty xslt that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool, and see if it runs all of the tests. If they run on your box but not the server, you may have a configuration problem on the build box.
Yet another possibility is that the failure is a critical one (failure to load an assembly perhaps) which is causing NUnit itself to error out.
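If the raw XML output does contain all the results, a short post-processing script can produce the requested summary of failing tests without the exception details. A sketch assuming an NUnit 2-style result file (verify the element and attribute names against your runner's version):

```python
import xml.etree.ElementTree as ET

# Minimal NUnit 2-style result fragment (illustrative names and shape).
SAMPLE = """
<test-results>
  <test-suite name="Orders.Tests" result="Failure">
    <results>
      <test-case name="Orders.Tests.Totals" result="Failure">
        <failure>
          <message>expected 3 but was 4</message>
          <stack-trace>at Orders.Tests.Totals()...</stack-trace>
        </failure>
      </test-case>
      <test-case name="Orders.Tests.Discounts" result="Success" />
    </results>
  </test-suite>
</test-results>
"""

root = ET.fromstring(SAMPLE)
# Collect only the names of failing test cases, skipping the message
# and stack-trace details entirely.
failed = [tc.get("name") for tc in root.iter("test-case")
          if tc.get("result") == "Failure"]
for name in failed:
    print("FAILED:", name)
```

Run against the real TestResult.xml, this gives the compact failure summary even when the build server's own report truncates or mangles the results.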