Simulink Design Verifier: Input argument #1 is an invalid cvdata object - matlab

I am trying to run a few tests on a very simple Simulink model in MATLAB R2020a.
I have obtained test results by using the Test Manager app, which allows me to set up a test case.
The function I created is very simple: it just checks two boolean values and returns another boolean value based on them, so I have not reported it here.
My procedure is as follows:
From the Simulink Test Manager -> New Test File -> Test For Model Component -> import both the Top Model and the Component to create a harness -> use Design Verifier options (the only changes from the default values being (1) Test Generation -> Model Coverage Objectives: MCDC, and (2) Report -> Generate report of results) and IMPORT the test harness inputs as a source -> use the component under test output as baseline -> save the data as an Excel sheet.
Tests are then generated and everything is working fine.
I then use a small Python script to edit the Excel file, generating an oracle with a structure like this:
time   Var_A          Var_B          time   Out1:1
                                            AbsTol:0
       type:boolean   type:boolean          Type:int8
       Interp:zoh     Interp:zoh            Interp:zoh
0      0              1              0      0
0.4    1              1              0.4    1
0.8    0              0              0.8    TRUE
After this, I have to let Simulink write a PDF report of the project. To do so, I set up the following options:
From the test harness:
Inputs -> Include input data in test result; Stop simulation at last time point;
Baseline Criteria -> Include baseline data in test result;
Coverage Settings -> Record coverage for system under test; Record coverage for referenced models;
From the top level test folder:
Coverage Settings -> Record coverage for system under test; Record coverage for referenced models;
Coverage Metrics: Decision; Condition; MCDC;
Test File Options-> Close all open figures at the end of execution; Generate report after execution (with author and file path); Include Matlab version; Results for: All tests; Test Requirements; Plots of criteria and assessments; Simulation metadata; Error log and messages; Coverage results; File format PDF.
Then I let it run. The Test Manager tells me everything went fine, but for some reason, whenever it has to create a report, it throws an error:
X_component_test: Input argument #1 is an invalid cvdata object. CVDATA objects become invalid when their associated models are closed or modified
Now, I am sure this worked fine before with much more complex components, but I have no idea what I am doing wrong here. Anyone got a clue?

In the end, the solution was much simpler than I thought. Just delete all .cv files and clean your project's folder of all test files or unnecessary files. MATLAB seems to have issues when too many are present.
Also, the script had to be modified to remove that TRUE value and replace it with a 1.
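For reference, here is a minimal MATLAB sketch of that cleanup step; the recursive *.cv* pattern and the assumption that everything lives under the current project folder are mine, not part of the original setup:

% Remove stale coverage data files so the report generator cannot pick up
% cvdata objects whose associated models have since been closed or modified.
covFiles = dir(fullfile(pwd, '**', '*.cv*'));   % matches .cv / .cvt coverage files
for k = 1:numel(covFiles)
    delete(fullfile(covFiles(k).folder, covFiles(k).name));
end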


Bdd Cucumber issues

I have newly started working on BDD Cucumber. I am using Scala for writing test cases. I am trying to use a Scenario Outline and pass parameters to the step definitions. My code is as follows:
Scenario Outline: Data is parsed and persisted
  Given Portal is running
  When A data of <type> is received
  Then The data of <type> with <Id> should be parsed and persisted
  Examples:
    | type         | Id |
    | Personal     | 1  |
    | Professional | 2  |
Now, in my When step definition, I am trying to get these parameters as follows:
When("""^A data of \"([^\"]*)\" is received$""") {
(type: String) =>
//My code
}
Now, on running my code, I am getting the following error every time:
io.cucumber.junit.UndefinedStepException: The step "A data of Personal is received" is undefined. You can implement it using the snippet(s) below:
When("""A data of Personal is received""") { () =>
// Write code here that turns the phrase above into concrete actions
throw new io.cucumber.scala.PendingException()
}
This happens even though I have my code in the When definition. Also, if I don't use a Scenario Outline then it works fine, but I want to use a Scenario Outline for my code.
I am using tags in my feature file to run my test cases. When I run my test cases with the command sbt test #tag1, the test cases execute fine, but once all the test cases have finished running I get the following error on the command line:
[error] Expected ';'
[error] #tag1
I tried putting ";" after the tag but I still get the same error.
What is this issue and how can I resolve it?
I have 4-5 feature files in my application. That means 4-5 tags. As of now, for the test case I want to run, I give the path of the feature file and "glue" it with the step definition in my Runner class. How can I provide all the tags in my Runner class so that my application runs all the test cases one by one when started?
You are missing the double quotes around <type>:
When A data of "<type>" is received
Just some general advice.
When cuking, keep things as simple as possible; focus on clarity and simplicity, and do not worry about repetition.
Your task would be much simpler if you wrote two simple scenarios:
Scenario: Personal data
Given Portal is running
When personal data is received
Then personal data should be persisted
Scenario: Professional data
...
Secondly, don't use tags to run your features; you don't need tags yet.
You can cuke much more effectively if you avoid scenario outlines, regexes, tags, transforms, etc. The main power of Cucumber is using natural language to express yourself clearly. Focus on that and keep it simple.

How to get the test case results from script?

I use a MATLAB script to create a test file (including a test suite and test cases) in the Test Manager. When I have finished my test, I need to use the test results: if the test cases all passed then the exit code should be 0; if one of the test cases failed then the exit code should be 1. I want to achieve this in my script.
My MATLAB version is R2016b.
Below is my script:
try
    % some code to create my test cases in Test Manager; I didn't post it here
    ro = run(ts);    % run the test suite
    saveToFile(tf);  % save the test file
    % Get the results set object from Test Manager
    result = sltest.testmanager.getResultSets;
    % Export the results set object to a file
    sltest.testmanager.exportResults(result,'C:\result.mldatx');
    % Clear results from Test Manager
    sltest.testmanager.clearResults;
    % Close Test Manager
    sltest.testmanager.close;
    %-----This part is what I want to achieve my goal----
    totalfailures = 0;
    totalfailures = sum(vertcat(ro(:).Failed));
    if totalfailures == 0
        exit(0);
    else
        exit(1);
    end
    %----------but it couldn't work----------------------
catch e
    disp(getReport(e,'extended'));
    exit(1);
end
exit(totalfailures>0);
I checked my exit status in Jenkins and it is 0, but I put a failing test in the test file, so it is supposed to be 1.
Thanks in advance for any help!
You can consider using the MATLAB Unit Test Framework to run tests and get the test results. This will give you a results object that you can easily query to control the exit code of your MATLAB process. You could run your Simulink Test files like this:
import matlab.unittest.TestRunner
import matlab.unittest.TestSuite
import sltest.plugins.TestManagerResultsPlugin

try
    suite = TestSuite.fromFolder('<path to folder with Simulink Tests>');
    % Create a typical runner with text output
    runner = TestRunner.withTextOutput();
    % Add the Simulink Test Results plugin and direct its output to a file
    sltestresults = fullfile(getenv('WORKSPACE'), 'sltestresults.mldatx');
    runner.addPlugin(TestManagerResultsPlugin('ExportToFile', sltestresults));
    % Run the tests
    results = runner.run(suite);
    display(results);
catch e
    disp(getReport(e,'extended'));
    exit(1);
end
exit(any([results.Failed]));
That should do the trick. You can additionally modify this to save off the test suite or test case as you like.
You can also consider using the matlab.unittest.plugins.TAPPlugin which integrates nicely with Jenkins to publish TAP format test results. There is MathWorks documentation available on all of the plugins and other APIs mentioned here. Here's a nice article telling you how to leverage the MATLAB Unit Test Framework to run Simulink Tests: https://www.mathworks.com/help/sltest/ug/tests-for-continuous-integration.html
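For example, here is a hedged sketch of adding that plugin to the runner above, before runner.run is called (the output file name is an assumption):

import matlab.unittest.plugins.TAPPlugin
import matlab.unittest.plugins.ToFile

% Stream TAP-format results to a file that the Jenkins TAP plugin can publish.
tapFile = fullfile(getenv('WORKSPACE'), 'testResults.tap');
runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile(tapFile)));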
Also, MathWorks has released a Jenkins MATLAB Plugin recently that might be helpful to you: https://plugins.jenkins.io/matlab
Hope this helps!
I think you need to check the log in Jenkins to see the error after running the job, because in Jenkins the environment may need to be set up differently from the machine you run on locally.

Variable code coverage threshold with sbt-scoverage

I'm using the sbt-scoverage plugin to measure code (statement) coverage in our project. After months of not worrying about coverage and our tests, we decided to set a threshold for a minimum coverage percentage: if you are writing code, at least try to leave the project with the same coverage percentage as when you found it. E.g. if you started your feature branch with the project at 63% coverage, you have to leave it with the same coverage value after finishing your feature.
With this we want to ensure a gradual adoption of better practices instead of setting a fixed coverage value (something like coverageMinimum := XX).
Having said that, I'm considering the possibility of storing the last value of the analysis in a file and then comparing it with a new execution, triggered by the developer.
Another option that I'm considering is to retrieve this value from our SonarQube server based on the data stored there.
My question is: is there a way to do something like this with sbt-scoverage? I've dug into the docs and their Google Groups forum but I can't find anything about it.
Thanks in advance!
The coverageMinimum setting value doesn't have to be constant; you can write any function that returns it dynamically, e.g.:
coverageMinimum := {
  val tmp = 2 + 4
  10 * tmp // returns 60 :)
}

Simulink Coder: How to specify custom C files from script when generating C code?

I have a script that does some post-processing of the initial code generated by Simulink Coder. It creates a file, additional.c, that I wish to add to the build process. I want to add the file to the build process from a script, so I am following the docs here.
So my script looks like:
slbuild(gcs);
% generate additional.c file using files created by slbuild()
% ...
% now attempt to add additional.c to the build process as custom code
configs = getActiveConfigSet(gcs);
configs.set_param('SimUserSources', 'additional.c');
% now rebuild
slbuild(gcs)
I can verify that the config set is updated by:
checkConfigIsSet = configs.get_param('SimUserSources');
disp(checkConfigIsSet); % prints: additional.c
However, the coder does not appear to pick up this new config. When I click on the "Configuration settings at the time of code generation: click to open" section of the Code Generation report, I see the config value was not updated and additional.c was not compiled into the model.
What am I doing wrong please?
SimUserSources is for simulation builds used by blocks like the MATLAB Function block and Stateflow. For code generation, you need to set the parameter "CustomSource". Try:
slbuild(gcs);
% generate additional.c file using files created by slbuild()
% ...
% now attempt to add additional.c to the build process as custom code
configs = getActiveConfigSet(gcs);
configs.set_param('CustomSource', 'additional.c');
% now rebuild
slbuild(gcs)
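If the CustomSource parameter may already contain other files, a variant of the same idea is to append rather than overwrite; treating the value as a space-separated list is an assumption about how the Custom Code pane stores it:

configs = getActiveConfigSet(gcs);
existing = configs.get_param('CustomSource');
% Append additional.c only if it is not already listed.
if ~contains(existing, 'additional.c')
    configs.set_param('CustomSource', strtrim([existing ' additional.c']));
end
slbuild(gcs);  % rebuild so code generation picks up the updated custom code setting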

Programmatically Gathering NUnit results

I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, then copies the built MSIs to a local folder and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing nunit-console.exe using Process/ProcessStartInfo. My question is: how can I programmatically get the numbers of total succeeded/failed tests?
Did you consider using a continuous integration server like CruiseControl.NET?
It builds and runs the tests for you and displays the results in a web page. If you just want a tool, let nunit-console.exe output the results in XML and parse/transform it with an XSLT script like the ones coming with CruiseControl.
Here is an example of such an XSL file; if you run the transformation on the direct output of nunit-console.exe, you will have to adapt the select statements and remove cruisecontrol.
However it sounds like you might be interested in continuous integration.
We had a similar requirement and what we did was to read the Test Result XML file that is generated by NUnit.
XmlDocument testresultxmldoc = new XmlDocument();
testresultxmldoc.Load(this.nunitresultxmlfile);
XmlNode mainresultnode = testresultxmldoc.SelectSingleNode("test-results");
this.MachineName = mainresultnode.SelectSingleNode("environment").Attributes["machine-name"].Value;
int ignoredtests = Convert.ToInt16(mainresultnode.Attributes["ignored"].Value);
int errors = Convert.ToInt16(mainresultnode.Attributes["errors"].Value);
int failures = Convert.ToInt16(mainresultnode.Attributes["failures"].Value);
int totaltests = Convert.ToInt16(mainresultnode.Attributes["total"].Value);
int invalidtests = Convert.ToInt16(mainresultnode.Attributes["invalid"].Value);
int inconclusivetests = Convert.ToInt16(mainresultnode.Attributes["inconclusive"].Value);
We recently had a similar requirement, and wrote a small open source library to combine the results files into one aggregate set of results (as if you had run all of the tests with a single run of nunit-console).
You can find it at https://github.com/15below/NUnitMerger
I'll quote from the release notes for nunit 2.4.3:
The console runner now uses negative return codes for errors encountered in trying to run the test. Failures or errors in the test themselves give a positive return code equal to the number of such failures or errors.
(emphasis mine). The implication here is that, as is usual in bash, a return of 0 indicates success, and non-zero indicates failure or error (as above).
HTH