show flutter test coverage - flutter

I am a bit new to the testing world in Flutter.
What I want to achieve is to determine my test coverage in Flutter,
but I don't know of any way to determine it.
Any help is appreciated.
Thanks.

Running the tests with
flutter test --coverage
should generate a file
coverage/lcov.info
which holds the information you need.
You can then extract information from that file in various ways, as described here.
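For example, one common way (assuming you have the lcov tooling installed locally) is to turn that file into a browsable HTML report with genhtml; the paths below are the defaults Flutter uses:
genhtml coverage/lcov.info -o coverage/html
open coverage/html/index.html
The first command writes an HTML report into coverage/html, and the second simply opens it in a browser (use xdg-open instead of open on Linux).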

An easy way to enforce a test coverage threshold is the dlcov package: https://pub.dev/packages/dlcov
Usage example:
dlcov --lcov-gen="flutter test --coverage" --coverage=100
--lcov-gen runs the given command to generate the lcov.info file
--coverage=100 checks whether the test coverage meets the 100% threshold

Related

How to Customize HTML Reports of Gatling

I'm trying to customize the HTML report of Gatling to get results (and charts) between two specific times (I don't want all the results, only a time range).
How can I do that? I couldn't find any documentation about it.
Thanks
It's officially not possible to customize the report with the regular version of Gatling.
To get a different report there is no option but to get Gatling FrontLine: https://gatling.io/gatling-frontline/

How to get the test case results from a script?

I use a MATLAB script to create a test file (including a test suite and test cases) in the Test Manager. When my tests have finished, I need to use their results: if all test cases passed, the exit code should be 0; if any test case failed, the exit code should be 1. I want to achieve this in my script.
My MATLAB version is R2016b.
Below is my script:
try
    % some code to create my test cases in Test Manager (not posted here)
    ro = run(ts); % run the test suite
    saveToFile(tf); % save the test file
    % Get the results set object from Test Manager
    result = sltest.testmanager.getResultSets;
    % Export the results set object to a file
    sltest.testmanager.exportResults(result,'C:\result.mldatx');
    % Clear results from Test Manager
    sltest.testmanager.clearResults;
    % Close Test Manager
    sltest.testmanager.close;
    % ----- This part is where I try to achieve my goal -----
    totalfailures = 0;
    totalfailures = sum(vertcat(ro(:).Failed));
    if totalfailures == 0
        exit(0);
    else
        exit(1);
    end
    % ---------- but it doesn't work ----------------------
catch e
    disp(getReport(e,'extended'));
    exit(1);
end
exit(totalfailures>0);
When I check the exit status in Jenkins it is 0, but I put a failing test in the test file, so it is supposed to be 1.
Thanks in advance for any help!
You can consider using the MATLAB Unit Test Framework to run tests and get the test results. This will give you a results object that you can easily query to control the exit code of your MATLAB process. If you were to run your Simulink Test files like this:
import matlab.unittest.TestRunner
import matlab.unittest.TestSuite
import sltest.plugins.TestManagerResultsPlugin
try
    suite = TestSuite.fromFolder('<path to folder with Simulink Tests>');
    % Create a typical runner with text output
    runner = TestRunner.withTextOutput();
    % Add the Simulink Test Results plugin and direct its output to a file
    sltestresults = fullfile(getenv('WORKSPACE'), 'sltestresults.mldatx');
    runner.addPlugin(TestManagerResultsPlugin('ExportToFile', sltestresults));
    % Run the tests
    results = runner.run(suite);
    display(results);
catch e
    disp(getReport(e,'extended'));
    exit(1);
end
exit(any([results.Failed]));
That should do the trick. You can additionally modify this to save off the test suite or test case as you like.
You can also consider using matlab.unittest.plugins.TAPPlugin, which integrates nicely with Jenkins to publish TAP-format test results. There is MathWorks documentation available on all of the plugins and other APIs mentioned here. Here's a nice article on how to leverage the MATLAB Unit Test Framework to run Simulink Tests: https://www.mathworks.com/help/sltest/ug/tests-for-continuous-integration.html
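For example, a minimal sketch of adding TAP output to the runner above, added before runner.run(suite) is called (the output file name is just an illustration):
import matlab.unittest.plugins.TAPPlugin
import matlab.unittest.plugins.ToFile
% Write TAP results to a file that the Jenkins TAP plugin can publish
tapFile = fullfile(getenv('WORKSPACE'), 'testresults.tap');
runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile(tapFile)));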
Also, MathWorks has released a Jenkins MATLAB Plugin recently that might be helpful to you: https://plugins.jenkins.io/matlab
Hope this helps!
I think you need to check the log in Jenkins to see the error after running the job, because in Jenkins the environment needs to be set up and may differ from the machine you run on locally.

Variable code coverage threshold with sbt-scoverage

I'm using the sbt-scoverage plugin to measure the code (statement) coverage in our project. Because of months of not worrying about coverage and our tests, we decided to set a threshold for a minimum coverage percentage: if you are writing code, at least try to leave the project with the same coverage percentage as when you found it. E.g. if you started your feature branch with the project at 63% coverage, you have to leave it at the same coverage value after finishing your feature.
With this we want to ensure a gradual adoption of better practices instead of setting a fixed coverage value (something like coverageMinimum := XX).
Having said that, I'm considering the possibility of storing the last value of the analysis in a file and then comparing it against a new execution triggered by the developer.
Another option that I'm considering is to retrieve this value from our SonarQube server based on the data stored there.
My question is: is there a way to do something like this with sbt-scoverage? I've dug into the docs and their Google Groups forum but couldn't find anything about it.
Thanks in advance!
The coverageMinimum setting value doesn't have to be constant; you can compute it dynamically, e.g.:
coverageMinimum := {
  val tmp = 2 + 4
  10 * tmp // returns 60 :)
}
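Building on that, one way to implement your idea of storing the last analysis value is to read it back inside that block. A minimal sketch, assuming a plain-text file coverage-baseline.txt in the project root that you (or your CI job) rewrite after each successful run; the file name and the fallback value are just placeholders:
coverageMinimum := {
  // read the last recorded coverage percentage, fall back to 0 if the file is missing
  val baseline = baseDirectory.value / "coverage-baseline.txt"
  if (baseline.exists) IO.read(baseline).trim.toDouble else 0d
}
coverageFailOnMinimum := true
With coverageFailOnMinimum set, the coverage check should then fail whenever a run drops below the stored value.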

How to generate junit.xml using minil test?

I am using Minilla for generating my module scaffold. While it works very well for me, I'd like to get a generated junit.xml file with test results when running minil test.
I found that it is possible to specify tap_harness_args in minil.toml configuration file. I tried
[tap_harness_args]
formatter_class = "TAP::Formatter::JUnit"
which ensures that the test output is formatted in JUnit format. That works well, but the output is mixed up with Minilla's other output, so I can't easily redirect it to a file. Is there a way to get only the test results into a file? Optimally, I'd still like to see the test results in TAP format in my terminal output at the same time (but I can live without it).
Just a guess: can't you use the a<file.tgz> option in HARNESS_OPTIONS in TAP::Harness::Env?
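If that works, a minimal sketch (the archive name is just an example, and this assumes TAP::Harness::Archive is installed, since the a option delegates to it):
HARNESS_OPTIONS=aout.tgz minil test
That should leave the raw TAP streams in out.tgz, which you could post-process into junit.xml separately while keeping the normal test output in your terminal.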

MSTest Command Line Options - Test Duration

I am trying to run MSTest performance tests from the command line, and I would like to change the Test Duration and Test Iterations values. I cannot find any documentation on how to update these values, and the only way I have come up with is editing the XML, which is not ideal.
Any help would be greatly appreciated.