The NUnit command line has the arg --workers to set the LevelOfParallelism.
We are running the tests programmatically via the NUnit Engine (https://docs.nunit.org/articles/nunit-engine/Getting-Started.html),
and I could not find a way to set the worker count in the Test Engine or Test Runner.
Maybe someone knows how to do this?
I've Googled and debugged the test runner but could not find anything.
UPDATE:
package = new TestPackage(arguments.Value.testDllPath);
package.AddSetting(FrameworkPackageSettings.NumberOfTestWorkers, 8);
Short answer...
Add a setting named "NumberOfTestWorkers" to the TestPackage you use to get a runner from the engine, with the value set to the number of workers you want the framework to use.
Details...
This is not actually part of the engine but part of the NUnit framework. The engine is generally framework-agnostic; that is, it doesn't assume you are running tests with the NUnit framework. However, it passes through any package settings it doesn't understand, and it's up to the test framework in use to interpret them.
That's why you can't find the details in the engine documentation, although it could probably be improved by adding a paragraph like the above somewhere. :-)
Bear in mind that writing your own test runner using the engine directly is a somewhat advanced activity. I think the engine docs give you a lot of info about how to get started, but you will also need to examine the source code of existing open-source runners. For example, looking at the console runner itself, you could have seen this code (reformatted to fit):
if (options.NumberOfTestWorkersSpecified)
    package.AddSetting(
        FrameworkPackageSettings.NumberOfTestWorkers,
        options.NumberOfTestWorkers);
The class FrameworkPackageSettings is part of the runner and lists the settings used by the NUnit 3 framework that the runner exposes.
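For illustration, here is a minimal sketch of a runner built directly on the engine that applies this setting. The assembly path and worker count are placeholders, and the string "NumberOfTestWorkers" is the value behind FrameworkPackageSettings.NumberOfTestWorkers:

using System.Xml;
using NUnit.Engine;

public static class Program
{
    public static void Main()
    {
        ITestEngine engine = TestEngineActivator.CreateInstance();

        // Placeholder path to the test assembly.
        var package = new TestPackage("MyTests.dll");

        // Interpreted by the NUnit framework, not by the engine.
        package.AddSetting("NumberOfTestWorkers", 8);

        using (ITestRunner runner = engine.GetRunner(package))
        {
            // Run everything; the result is the usual NUnit result XML.
            XmlNode result = runner.Run(null, TestFilter.Empty);
        }
    }
}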
Good luck!
I am working with the following:
SpecFlow - 2.2.1
NUnit - 3.9.0
I was hoping it's possible to allow 2 scenarios within the same feature to run in parallel, in order to speed up the test suite.
The aim is to get a few tests (2-3) running in parallel within the same feature, rather than running sequentially.
Does anyone know how this is possible?
I have added this to my assembly:
[assembly: Parallelizable(ParallelScope.Children)]
but I am now seeing the following errors:
An item with the same key has already been added.
Object reference not set to an instance of an object.
Scenario-level parallelism is not supported by SpecFlow yet.
Their docs state that you can ONLY use:
[assembly: Parallelizable(ParallelScope.Fixtures)]
There is an open ticket that describes all the issues why this is not supported yet and ways to solve it - https://github.com/SpecFlowOSS/SpecFlow/issues/1535
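For reference, a minimal sketch of the fixture-level setup that SpecFlow does support (the worker count is just an illustrative choice):

// In any .cs file in the test assembly, e.g. AssemblyInfo.cs
using NUnit.Framework;

// Whole features (fixtures) run in parallel; scenarios within
// one feature still run sequentially.
[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(4)] // optional cap on worker threads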
Please note that even though NUnit added support for the instance-per-test-case feature in version 3.13, it doesn't solve the problem (I tried it with LifeCycle.InstancePerTestCase and still got some errors).
There is an open issue in the SpecFlow repo for this problem: https://github.com/SpecFlowOSS/SpecFlow/issues/894
I'm currently moving one of our projects to DNX (.NET Core now) and I was forced to update to NUnit 3. Because of other considerations, we compile the test project as a console app with its own entry point, basically self-hosting the NUnit runner.
I now need to report the results to TeamCity via the XML reporter, which doesn't seem to parse NUnit 3 TestResults.xml files.
Any advice on how to work around this?
The NUnit 3 console has the option to produce results formatted in the NUnit 2 style.
Use the option:
--result=[filename];format=nunit2
Docs: https://github.com/nunit/nunit/wiki/Console-Command-Line
To add to the answer above:
NUnitLite inherits the --result CLI parameter, which seems to do the trick.
Another option, which I went for in the end, is using the --teamcity CLI parameter:
dotnetbuild --project:<path to project directory> -- --teamcity
which will integrate with TeamCity's service messages. This also gives real-time updates.
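For context, a self-hosted NUnitLite entry point that forwards these options looks roughly like this (a sketch; AutoRun parses the NUnitLite options from the argument list):

using NUnitLite;

public static class Program
{
    public static int Main(string[] args)
    {
        // AutoRun handles NUnitLite options such as
        // --result=TestResults.xml;format=nunit2 and --teamcity.
        return new AutoRun().Execute(args);
    }
}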
I tried to find this answer but hardly found it anywhere. I am doing API testing; in the process I need to call a REST API from my local machine. The local machine contains a Maven project and a framework to call the respective REST APIs.
I need to check the code coverage of the remote REST API and produce a report based on that coverage. Please help: how do I do that?
Note: I found this link useful, but it does not elaborate clearly on what to do:
http://eclemma.org/jacoco/trunk/doc/agent.html
You will probably do a bit of file copying around, depending on the way you run the tests.
JaCoCo runs as a Java agent, so you usually add the -javaagent parameter, as mentioned in the docs you linked, to the start script of your application server:
-javaagent:[yourpath/]jacocoagent.jar=[option1]=[value1],[option2]=[value2]
so it would look like this, for example (the agent path and the destfile value are placeholders):
java -javaagent:/yourpath/jacocoagent.jar=destfile=jacoco.exec -jar myjar.jar
Using Tomcat you can add the -javaagent part to the JAVA_OPTS or CATALINA_OPTS environment variables. It should be similar for other servers.
This will create the jacoco*.exec files. You need to copy those back to your build or CI server to show the results (for example, if you use Sonar, you need those files before running the Sonar reporter). It's important to include just the packages you're interested in.
You can also create one exec file per test flavour (jacoco.exec for unit tests, jacoco-it.exec for integration tests, jacoco-at.exec for application tests).
And I would not mix coverage with performance testing - just to mention that too.
There are some examples on Stack Overflow for JBoss.
I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM2012) to try and get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite against a build which is attached to that test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend itself to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument let's say there are 5 tablets that run a similar array of OSes: Tablet 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to handle that without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite and configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish, then attached that build to the main test suite. When I clicked on run a test for Tablet 1, I didn't see the build attached in the test runner. I've looked around a little bit to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is only the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build column is not visible by default; you will have to right-click on the column header row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyze Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
1. Automate your Test Cases. See "How to: Associate an Automated Test with a Test Case" for details.
2. Create a Build Definition building your application under test AND the assemblies containing your tests. I strongly recommend building the application you want to test and the test assemblies in the same Build Definition (you will see why a little bit later).
3. Run this build definition and deploy the latest version of the application to the environment where you want to run the tests. This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment. It's your job to deploy the right version of the application you are going to test.
4. Now you can run tests from MTM. You can do it the way described by @AndrewClear in the comment to this answer: choose "Run with Options" when you're beginning a test run and select the latest build. The test assemblies containing the tests used to automate your Test Cases will then be deployed automatically to the test environment and the tests will be executed.
That is the point where you should see why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests is stored along with the test results on TFS, you will later know what version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation (this is the way I'm currently running automated tests):
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate the deployment of the application you want to test.
I've written a console application that has a number of unit tests, and I want to include it in my NAnt build script so that it will be run on our TeamCity CI server.
Unfortunately I'm not quite sure how to do that. The NAnt script has examples of current projects that have been added, but they all supply the assemblies that need to be tested, i.e. MyProject.dll. My console app doesn't have anything like that, since it compiles into MyProject.exe.
There must be a way to automate these tests, since I'm able to run the unit tests from within Visual Studio without issue.
Does anyone know if and how this is possible?
The answer to this question is that you add the name of the executable in the same place you add the list of DLL assemblies; the set of unit tests is compiled into the executable instead of into a separate DLL file.
Gishu is the one who should take credit for this answer, since he answered me via a comment. However, I want to mark this question as answered, so I'm writing up the answer so others can benefit from the solution.
Gishu, if you ever come back to this question, please feel free to write up your comment as an answer and I'll change the accepted answer to yours.
What test framework do you use for those tests? You've mentioned Visual Studio, so I may guess it is MSTest. TeamCity added support for MSTest starting from version 4.0 for the sln2008 build runner.
Could you please have a look at the full list of supported .NET unit test frameworks at
http://www.jetbrains.net/confluence/display/TCD4/.NET+Testing+Frameworks+Support
Anyway, have a look at the custom unit test integration manual pages at
http://www.jetbrains.net/confluence/display/TCD4/Build+Script+Interaction+with+TeamCity
I've just noticed the xUnit tag. xUnit supports TeamCity. Please refer to
http://www.codeplex.com/xunit/WorkItem/View.aspx?WorkItemId=4278
for more details.