I am trying to run MSTest performance tests from the command line, and I would like to change the Test Duration and Test Iterations values. I cannot find any documentation on how to update these values, and the only way I have come up with is editing the XML directly, which is not ideal.
Any help would be greatly appreciated.
I am a bit new to the testing world in Flutter.
What I want to achieve is to determine my test coverage in Flutter, but I don't know any way to do that.
Any help is appreciated. Thanks.
Running the tests with
flutter test --coverage
should generate a file
coverage/lcov.info
which holds the information you need.
You can now extract information from that file in various ways, as described here.
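For example, assuming you have the lcov tooling installed (it provides the genhtml command), you can turn that file into a browsable HTML report, which also prints the overall coverage percentage:
genhtml coverage/lcov.info -o coverage/html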
An easy way to enforce a test coverage threshold is the dlcov package: https://pub.dev/packages/dlcov
usage example:
dlcov --lcov-gen="flutter test --coverage" --coverage=100
--lcov-gen="flutter test --coverage" generates the lcov.info file
--coverage=100 sets the required coverage threshold to 100%, so the check fails if coverage is below that
I'm using the sbt-scoverage plugin to measure code (statement) coverage in our project. After months of not worrying about coverage and our tests, we decided to set a minimum coverage threshold: if you are writing code, at least try to leave the project with the same coverage percentage as when you found it. E.g. if you started your feature branch with the project at 63% coverage, you have to leave it at the same coverage value after finishing your feature.
With this we want to ensure a gradual adoption of better practices instead of setting a fixed coverage value (something like coverageMinimum := XX).
Having said that, I'm considering the possibility of storing the last value of the analysis in a file and then comparing it with a new execution triggered by the developer.
Another option that I'm considering is to retrieve this value from our SonarQube server based on the data stored there.
My question is: is there a way to do something like this with sbt-scoverage? I've dug into the docs and their Google Groups forum, but I can't find anything about it.
Thanks in advance!
The coverageMinimum setting value doesn't have to be constant; you can compute it dynamically, e.g.:
coverageMinimum := {
  val tmp = 2 + 4
  10 * tmp // returns 60 :)
}
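Building on that, a minimal sketch of the file-based approach described in the question (the file name coverage-baseline.txt and the idea of writing the value back after each run are assumptions, not anything sbt-scoverage provides) could look like this in build.sbt:

// Read the last recorded coverage percentage from a file in the repository;
// fall back to 0.0 if no baseline has been recorded yet.
coverageMinimum := {
  val baselineFile = baseDirectory.value / "coverage-baseline.txt"
  if (baselineFile.exists) IO.read(baselineFile).trim.toDouble
  else 0.0
}

After a successful coverageReport, the developer (or a small custom task) would write the new percentage back into coverage-baseline.txt, so the baseline can only ratchet upwards over time.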
I need to update my moses.ini to support incremental training. I followed the tutorial and found that I must add this line to the moses.ini file:
PhraseDictionaryDynSuffixArray source=<path-to-source-corpus> target=<path-to-target-corpus> alignment=<path-to-alignments>
but no matter how I put it in moses.ini, it just doesn't work and gives errors when I try to start the MT model.
Here is how I put it into the moses.ini:
[ttable-file]
PhraseDictionaryDynSuffixArray source=<path-to-source-corpus> target=<path-to-target-corpus> alignment=<path-to-alignments>
Then I set the appropriate paths. Can anyone help me with this? Thanks in advance.
I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing nunit-console.exe using Process/ProcessStartInfo. My question is: how can I programmatically get the total, succeeded, and failed test counts?
Did you consider using a continuous integration server like CruiseControl.NET?
It builds and runs the tests for you and displays the results on a web page. If you just want a tool, let nunit-console.exe output the results as XML and parse/transform them with an XSLT script like the ones that come with CruiseControl.
Here is an example of such an XSL file. If you run the transformation on the direct output of nunit-console.exe, you will have to adapt the select statements and remove the cruisecontrol parts.
However, it sounds like you might be interested in continuous integration.
We had a similar requirement, and what we did was to read the test result XML file that is generated by NUnit:
// Requires "using System;" and "using System.Xml;" at the top of the file.
XmlDocument testresultxmldoc = new XmlDocument();
testresultxmldoc.Load(this.nunitresultxmlfile);

// The root <test-results> element carries the summary counters as attributes.
XmlNode mainresultnode = testresultxmldoc.SelectSingleNode("test-results");
this.MachineName = mainresultnode.SelectSingleNode("environment").Attributes["machine-name"].Value;
int ignoredtests = Convert.ToInt16(mainresultnode.Attributes["ignored"].Value);
int errors = Convert.ToInt16(mainresultnode.Attributes["errors"].Value);
int failures = Convert.ToInt16(mainresultnode.Attributes["failures"].Value);
int totaltests = Convert.ToInt16(mainresultnode.Attributes["total"].Value);
int invalidtests = Convert.ToInt16(mainresultnode.Attributes["invalid"].Value);
int inconclusivetests = Convert.ToInt16(mainresultnode.Attributes["inconclusive"].Value);
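From those counters you can then report whatever summary your build needs, for example:
Console.WriteLine("Total: {0}, Failures: {1}, Errors: {2}, Ignored: {3}", totaltests, failures, errors, ignoredtests);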
We recently had a similar requirement, and wrote a small open source library to combine the results files into one aggregate set of results (as if you had run all of the tests with a single run of nunit-console).
You can find it at https://github.com/15below/NUnitMerger
I'll quote from the release notes for NUnit 2.4.3:
The console runner now uses negative return codes for errors encountered in trying to run the test. Failures or errors in the test themselves give a positive return code equal to the number of such failures or errors.
(emphasis mine). The implication here is that, as is usual in bash, a return of 0 indicates success, and non-zero indicates failure or error (as above).
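Since the question already launches nunit-console.exe via Process/ProcessStartInfo, a minimal sketch of picking up that return code could look like this (the DLL name is just a placeholder):

using System.Diagnostics;

ProcessStartInfo startInfo = new ProcessStartInfo("nunit-console.exe", "MyTests.dll");
using (Process runner = Process.Start(startInfo))
{
    runner.WaitForExit();
    int exitCode = runner.ExitCode;
    // 0  => all tests passed
    // >0 => number of failed or errored tests
    // <0 => the console runner itself hit an error before/while running the tests
}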
HTH
I've set up some NUnit tests for validating my statistical formulas within my .NET v2 application, and for company records I need to have a printed copy of this output. Is anyone aware of any command in NUnit to automatically print the XML to the default printer?
If printing isn't possible, saving to a folder may work for us.
Thanks in advance.
The NUnit console automatically gives the results as XML. To give the XML file a name of your own choosing, this is what you need to do:
nunit-console /xml:someFileNameHere.xml yourFileWithNUnitTestsHere.dll
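If you give /xml a full path, the result file should be written straight into that folder (the path below is just a placeholder), which also covers the "saving to a folder" fallback mentioned in the question:
nunit-console /xml:C:\TestReports\results.xml yourFileWithNUnitTestsHere.dll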