How to get the test case results from a script? - MATLAB

I use a MATLAB script to create a test file (including a test suite and test cases) in the Test Manager. When the tests have finished, I need to use the results: if all test cases passed, the exit code should be 0; if any test case failed, the exit code should be 1. I want to implement this in my script.
My MATLAB version is R2016b.
Below is my script:
try
    % some code to create my test cases in Test Manager (not posted here)
    ro = run(ts);   % run the test suite
    saveToFile(tf); % save the test file
    % Get the result set object from Test Manager
    result = sltest.testmanager.getResultSets;
    % Export the result set object to a file
    sltest.testmanager.exportResults(result,'C:\result.mldatx');
    % Clear results from Test Manager
    sltest.testmanager.clearResults;
    % Close Test Manager
    sltest.testmanager.close;
    %----- This part is how I want to achieve my goal -----
    totalfailures = 0;
    totalfailures = sum(vertcat(ro(:).Failed));
    if totalfailures == 0
        exit(0);
    else
        exit(1);
    end
    %----- but it doesn't work ----------------------------
catch e
    disp(getReport(e,'extended'));
    exit(1);
end
exit(totalfailures > 0);
When I check the exit status in Jenkins, it is 0, but I made one test in the test file fail on purpose, so it should be 1.
Thanks in advance for any help!

You can consider using the MATLAB Unit Test Framework to run the tests and get the test results. This gives you a results object that you can easily query to control MATLAB's exit code. You could run your Simulink Test files like this:
import matlab.unittest.TestRunner
import matlab.unittest.TestSuite
import sltest.plugins.TestManagerResultsPlugin

try
    suite = TestSuite.fromFolder('<path to folder with Simulink Tests>');
    % Create a typical runner with text output
    runner = TestRunner.withTextOutput();
    % Add the Simulink Test results plugin and direct its output to a file
    sltestresults = fullfile(getenv('WORKSPACE'), 'sltestresults.mldatx');
    runner.addPlugin(TestManagerResultsPlugin('ExportToFile', sltestresults));
    % Run the tests
    results = runner.run(suite);
    display(results);
catch e
    disp(getReport(e,'extended'));
    exit(1);
end
exit(any([results.Failed]));
That should do the trick. You can additionally modify this to save off the test suite or test case as you like.
You can also consider using matlab.unittest.plugins.TAPPlugin, which integrates nicely with Jenkins to publish TAP-format test results. There is MathWorks documentation available on all of the plugins and other APIs mentioned here. Here's a nice article on how to leverage the MATLAB Unit Test Framework to run Simulink Tests: https://www.mathworks.com/help/sltest/ug/tests-for-continuous-integration.html
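For example, here is a minimal sketch of adding TAP output to the runner from the snippet above; the output path is just an example that assumes a Jenkins WORKSPACE environment variable:
import matlab.unittest.plugins.TAPPlugin
import matlab.unittest.plugins.ToFile

% Write the TAP results where the Jenkins TAP plugin can pick them up
tapFile = fullfile(getenv('WORKSPACE'), 'results.tap');
runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile(tapFile)));
You would add this to the runner before calling runner.run(suite).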
Also, MathWorks has released a Jenkins MATLAB Plugin recently that might be helpful to you: https://plugins.jenkins.io/matlab
Hope this helps!

I think you need to check the Jenkins log to see the error after running the job, because the Jenkins environment (for example, the machine the job runs on) may be set up differently from your local machine.

Related

Simulink Design Verifier: Input argument #1 is an invalid cvdata object

I am trying to run a few tests on a very simple Simulink model in MATLAB R2020a.
I have obtained test results by using the Test Manager app, which allows me to set up a test case.
The function I created is very simple: it just checks two Boolean values and returns another Boolean value according to their values, so I have not reported it here.
My procedure is as follows:
From Simulink Test Manager: New Test File -> Test For Model Component.
Import both the Top Model and the Component to create a Harness.
Use the Design Verifier options, with the only changes from the default values being (1) Test Generation -> Model Coverage Objectives: MCDC and (2) Report -> Generate report of results, and IMPORT the test harness inputs as a source.
Use the component-under-test output as the baseline, and save the data as an Excel sheet.
Tests are then generated and everything is working fine.
I then use a small Python script to edit the Excel file, generating an oracle with a structure like this:
time   Var_A          Var_B          time   Out1:1
                                            AbsTol:0
       type:boolean   type:boolean          Type:int8
       Interp:zoh     Interp:zoh            Interp:zoh
0      0              1              0      0
0.4    1              1              0.4    1
0.8    0              0              0.8    TRUE
After this, I have to let Simulink write a PDF report of the project. To do so, I set up the following options:
From the test harness:
Inputs -> Include input data in test result; Stop simulation at last time point;
Baseline Criteria -> Include baseline data in test result;
Coverage Settings -> Record coverage for system under test; Record coverage for referenced models;
From the top level test folder:
Coverage Settings -> Record coverage for system under test; Record coverage for referenced models;
Coverage Metrics: Decision; Condition; MCDC;
Test File Options -> Close all open figures at the end of execution; Generate report after execution (with author and file path); Include MATLAB version; Results for: All tests; Test Requirements; Plots of criteria and assessments; Simulation metadata; Error log and messages; Coverage results; File format: PDF.
Then I let it run. The Test Manager tells me everything went fine, but for some reason, whenever it has to create a report, it throws an error:
X_component_test: Input argument #1 is an invalid cvdata object. CVDATA objects become invalid when their associated models are closed or modified
Now, I am sure this worked fine before with much more complex components, but I have no idea what I am doing wrong here. Anyone got a clue?
In the end, the solution was much simpler than I thought. Just delete all .cv files and clean up your project's folder of all test files or unnecessary files. MATLAB seems to have issues when there are too many present.
Also the script had to be modified to remove that TRUE value and replace it with a 1.
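If you want to script the cleanup step mentioned above (deleting the .cv files), a rough MATLAB sketch along these lines should work; the folder and the *.cv* pattern are only assumptions about where the coverage artifacts ended up, so adjust them to your project layout:
% Rough cleanup sketch: delete stale coverage files before re-running the tests
projectDir = pwd;                                    % assume the project root is the current folder
cvFiles = dir(fullfile(projectDir, '**', '*.cv*'));  % recursive search (R2016b or later)
cvFiles = cvFiles(~[cvFiles.isdir]);                 % keep files only
for k = 1:numel(cvFiles)
    delete(fullfile(cvFiles(k).folder, cvFiles(k).name));
end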

show flutter test coverage

I am a bit new to the testing world in Flutter.
What I want to achieve is to determine my test coverage in Flutter, but I don't know of any way to do that.
Any help is appreciated.
Thanks.
Running the tests with
flutter test --coverage
should generate the file
coverage/lcov.info
which holds the information you need.
You can now extract information from the file in various ways, as described here.
An easy way to enforce a test coverage threshold is the dlcov package: https://pub.dev/packages/dlcov
usage example:
dlcov --lcov-gen="flutter test --coverage" --coverage=100
--lcov-gen generates the lcov.info file
--coverage=100 checks whether the test coverage threshold of 100% is met

Referencing External Files in JModelica

I have a Modelica file that references C code during simulation through an external library (*.a) file.
For example:
model CallAdd
  input Real FirstInput(start = 0);
  input Real SecondInput(start = 0);
  output Real FMUOutput(start = 0);

  function CAdd
    input Real x(start = 0);
    input Real y(start = 0);
    output Real z(start = 0);
    external "C" annotation(Library = "CAdd", LibraryDirectory = "modelica://CallAdd");
  end CAdd;
equation
  FMUOutput = CAdd(FirstInput, SecondInput);
  annotation(uses(Modelica(version = "3.2.1")));
end CallAdd;
When I open the Modelica model in OpenModelica, the required files appear to be loaded automatically, because it simulates and gives appropriate results.
However, when I try to compile the Modelica file with JModelica-SDK-1.12, I receive an error that the library *.a file could not be found.
So my question is: What is the proper way to reference additional files when using compile_fmu in JModelica?
With no success, I've tried:
# Import the compiler function
from pymodelica import compile_fmu

model_name = "CallAdd"
mo_file = "CallAdd.mo"

# Compile the model and save the return argument, for use later if wanted
my_fmu = compile_fmu(model_name, mo_file, target="cs",
                     compiler_options={'extra_lib_dirs': 'C:/ToFolderContainingLib/'})
The strange thing is that when I was using JModelica-1.17 (non-SDK), the file compiled fine but the results didn't make sense. In my previous post here, I was recommended to try the SDK version to see if it fixed my errors.
Try positioning the external library in a sub-folder named after the platform you are currently on. So in your example, I'd position the library (libCAdd.a) in a sub-folder named linux64, as I'm on a 64-bit Linux machine, and then run the code.
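To make that concrete, with the LibraryDirectory annotation above resolving to the folder that contains CallAdd.mo, the layout would look roughly like this (win64 is the corresponding platform folder name on 64-bit Windows per the Modelica platform-directory convention, assuming a GCC-built static library there as well):
CallAdd.mo
linux64/libCAdd.a    <- picked up on 64-bit Linux
win64/libCAdd.a      <- picked up on 64-bit Windows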
If it is a small piece of C code, as a last alternative you could try to include the C code directly in the Modelica code:
external "C" annotation(Include="
// the entire C code here
");
Hopefully the JModelica people will give you a better answer soon.
You could try to ask this on their website also:
http://www.jmodelica.org/forum

Simulink Coder: How to specify custom C files from script when generating C code?

I have a script that does some post-processing of the initial code generated by Simulink Coder. It creates an additional .c file (additional.c below) that I wish to add into the build process. I want to add the file to the build process from a script, so I am following the docs here.
So my script looks like:
slbuild(gcs);
% generate additional.c file using files created by slbuild()
% ...
% now attempt to add additional.c to the build process as custom code
configs = getActiveConfigSet(gcs);
configs.set_param('SimUserSources', 'additional.c');
% now rebuild
slbuild(gcs)
I can verify that the config set is updated by:
checkConfigIsSet = configs.get_param('SimUserSources');
disp(checkConfigIsSet); % prints: additional.c
However, the coder does not appear to pick up the new setting. When I open the "Configuration settings at the time of code generation: click to open" section of the Code Generation report, I see that the config value was not updated and additional.c was not compiled into the model.
What am I doing wrong please?
SimUserSources is for simulation builds used by blocks such as the MATLAB Function block and Stateflow. For code generation, you need to set the parameter 'CustomSource'. Try:
slbuild(gcs);
% generate additional.c file using files created by slbuild()
% ...
% now attempt to add additional.c to the build process as custom code
configs = getActiveConfigSet(gcs);
configs.set_param('CustomSource', 'additional.c');
% now rebuild
slbuild(gcs)
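If the model already has custom source files configured, a small variation like the following avoids overwriting them; this sketch assumes the CustomSource value is a space-separated list of file names:
% Sketch: append additional.c to any existing custom sources instead of overwriting them
cfg = getActiveConfigSet(gcs);
existing = get_param(cfg, 'CustomSource');
if isempty(existing)
    set_param(cfg, 'CustomSource', 'additional.c');
else
    set_param(cfg, 'CustomSource', [existing ' additional.c']);
end
slbuild(gcs);  % rebuild with the updated configuration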

Programmatically Gathering NUnit results

I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing nunit-console.exe using Process/ProcessStartInfo. My question is, how can I programmatically get the numbers for total/successful/failed tests?
Did you consider using a continuous integration server like CruiseControl.NET?
It builds and runs the tests for you and displays the results in a web page. If you just want a tool, let nunit-console.exe output the results as XML and parse/transform it with an XSLT script like the ones that come with CruiseControl.
Here is an example of such an XSL file; if you run the transformation on the direct output of nunit-console.exe, you will have to adapt the select statements and remove cruisecontrol.
However, it sounds like you might be interested in continuous integration.
We had a similar requirement, and what we did was to read the test result XML file that is generated by NUnit.
XmlDocument testresultxmldoc = new XmlDocument();
testresultxmldoc.Load(this.nunitresultxmlfile);
XmlNode mainresultnode = testresultxmldoc.SelectSingleNode("test-results");
this.MachineName = mainresultnode.SelectSingleNode("environment").Attributes["machine-name"].Value;
int ignoredtests = Convert.ToInt16(mainresultnode.Attributes["ignored"].Value);
int errors = Convert.ToInt16(mainresultnode.Attributes["errors"].Value);
int failures = Convert.ToInt16(mainresultnode.Attributes["failures"].Value);
int totaltests = Convert.ToInt16(mainresultnode.Attributes["total"].Value);
int invalidtests = Convert.ToInt16(mainresultnode.Attributes["invalid"].Value);
int inconclusivetests = Convert.ToInt16(mainresultnode.Attributes["inconclusive"].Value);
We recently had a similar requirement, and wrote a small open source library to combine the results files into one aggregate set of results (as if you had run all of the tests with a single run of nunit-console).
You can find it at https://github.com/15below/NUnitMerger
I'll quote from the release notes for nunit 2.4.3:
The console runner now uses negative return codes for errors encountered in trying to run the test. Failures or errors in the test themselves give a positive return code equal to the number of such failures or errors.
(emphasis mine). The implication here is that, as is usual in bash, a return of 0 indicates success, and non-zero indicates failure or error (as above).
HTH