I am working with already generated coverpoints and covergroups. I have a way to access all the coverpoints in the covergroup through an `include file, but cannot edit the coverpoints directly.
covergroup cg_FOO;
  apple: coverpoint Atype_e'(sample.apple.value);
  kiwi: coverpoint Ktype_e'(sample.kiwi.value);
`ifdef MY_COV
  `include "cg_FOO.svh"
`endif
endgroup : cg_FOO
cg_FOO.svh is the include file for this covergroup; each of the other generated covergroups has its own separate associated file. I define all sorts of crosses and non-generated coverpoints in these `included files. However, I also want to specify certain enum values for the coverpoints apple and kiwi that may not be valid for cg_FOO.
If I were able to change each coverpoint, I would just do the following:
apple: coverpoint Atype_e'(sample.apple.value) {
  ignore_bins ignore_FUJI = {FUJI};
}
But including a separate file in each coverpoint of every covergroup is messy and not feasible.
Everything I have found thus far makes it seem like I need to specify ignore_bins inside the coverpoint body. How could I ignore certain bins of a coverpoint from within my cg_FOO.svh file (i.e. in the covergroup, but without changing all the generated coverpoints)?
Note, I also have two other `include files to work with: one outside the covergroups but inside the class that contains them (I am using this file for macro, variable, and function definitions), and another that defines helper logic just before the covergroups get sampled (i.e. when sample gets defined).
You're out of luck unless the tool you are using gives you an API to access and modify the coverage database. Bins are very difficult to access because they have no easy names to reference them; you have to scan through them. It is, however, very easy to exclude entire coverpoints externally by setting their weights to 0 (e.g. something like cg_FOO_inst.apple.option.weight = 0; after the covergroup has been instantiated).
Without knowing exactly what your coverage model looks like, it's hard to give you a better answer.
I am interested in replacing my own PID-regulator models with MSL/Blocks/Continuous/LimPID. The problem is that this model restricts the output-signal limits to be parameters and thus does not allow time-varying limits, which I need to have.
Studying the code, I understand that the output limitation is created by the block MSL/Blocks/Nonlinear/Limiter, and I just want to change this to the block VariableLimiter.
I can imagine that you need to ensure that the output limits vary on a time scale slower than the regulator in order not to excite unwanted behaviour of the controller. Still, there is a class of problems where it would be very useful to allow these limits to vary slowly.
Thanks for the good input to my question; below is a very simple example to refine it. (The LimPID is more complicated and I will come back to that.)
Let us instead just turn the block Add into a local block in MyModel.
I copy the code from Modelica.Blocks.Math.Add and call it Addb in MyModel. Since there is a dependence on Interfaces.SI2SO, I need to add an import before the extends clause. I take this import from the ordinary MSL package instead of also copying it into MyModel. Then I introduce a new parameter "bias" and modify the equation. The annotation may need some updating as well, but we will not bother with that now.
model MyModel
  ...
  block Addb "Output the sum of the two inputs"
    import Modelica.Blocks.Interfaces;
    extends Interfaces.SI2SO;
    parameter Real k1=+1 "Gain of input signal 1";
    parameter Real k2=+1 "Gain of input signal 2";
    parameter Real bias=0 "Bias term";
  equation
    y = k1*u1 + k2*u2 + bias;
    annotation (...);
  end Addb;
end MyModel;
This code seems to work.
My added new question is whether it is enough to look up extends clauses and other references to MSL and make the proper imports, since the code is now local, or are there more aspects to think of? The LimPID code is rather complex, with procedures for initialization etc., so I just wonder if there is more to do than just bringing in a number of import clauses.
The models in the Modelica Standard Library (MSL) should only be seen as exemplary models, not as covering all possible applications. MSL is write-protected and it is not possible to replace the limiter block in LimPID (and add max/min input connectors). Also, it wouldn't work out if you shared your simulation model with others, expecting their MSL to work like your modified MSL.
Personally, I have my own libraries of components where MSL models are inadequate. For example, I have PID controllers with variable limits, manual/automatic functions and many more functions which are needed in my applications.
Often, I create a copy of an MSL model, place it in the same package in my own library and make the necessary modifications and additions, e.g. MyLibrary.Blocks.Continuous.PID.
Say I've got UNITs 1, 2, 3, 4 (either as Model references or Subsystems) for which I have unit tests ready using the matlab.unittest.TestCase framework.
What could be the easiest way to write an integration test for the entire system?
I need some way to set Global_Inputx (x = 1,2,3) and verify Global_Outy (y = 1,2) in the easiest possible way (maybe utilizing the unit tests)?
I can use MATLAB R2014a.
PS: I've already gone through this but it didn't help.
I think the question of integration testing in Simulink is a complex one that may involve formal methods like code and coverage analysis of the dynamic system under test, automatic test generation, etc. If you haven't already, you may want to check out the "Verification, Validation and Test" section of the MathWorks product line-up: http://www.mathworks.com/products/?s_tid=gn_ps.
However, to answer your specific question of how you would set the global input and verify the global output in your test:
Depending on where the global input data that feeds the inport blocks resides (MATLAB base workspace, model workspace, etc.), you could set the external input of the model to that data. For example:
set_param(modelName, 'ExternalInput', 'globalInputData')  % modelName and 'globalInputData' are placeholders for your model and the workspace variable holding the input data
This could be defined in your test class setup, test method setup or in the test depending on when the data is available and where defining it is appropriate. The data could also be passed in directly to the sim() command. Parameterized testing (http://www.mathworks.com/help/matlab/matlab_prog/create-basic-parameterized-test.html) is an option to consider if you want to test the system with different sets of inputs. The external input value becomes the parameter in this context.
If you have your model set up for output logging, then once the simulation is done, you would get the logged outputs, which you could then compare against a baseline.
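To make this concrete, below is a minimal sketch of such a test. The model name, input dimensions, output logging format (Array) and the baseline values are all assumptions here, not details of your actual system:
classdef tSystemIntegration < matlab.unittest.TestCase
    properties (Constant)
        ModelName = 'mySystemModel';   % placeholder model name
    end

    methods (Test)
        function verifyGlobalOutputs(testCase)
            % Build [t, u1, u2, u3] external input data and expose it to the model
            t = (0:0.01:10)';
            globalInputData = [t, ones(size(t)), 2*ones(size(t)), 3*ones(size(t))];
            assignin('base', 'globalInputData', globalInputData);

            load_system(testCase.ModelName);
            set_param(testCase.ModelName, ...
                'LoadExternalInput', 'on', ...
                'ExternalInput', 'globalInputData');

            % Simulate and retrieve the logged root outports
            % (assumes the model's output save format is Array)
            simOut = sim(testCase.ModelName, 'SaveOutput', 'on', ...
                'ReturnWorkspaceOutputs', 'on');
            yout = simOut.get('yout');

            % Compare the final values of Global_Out1/Global_Out2 against
            % baseline numbers (42 and 7 are placeholders)
            testCase.verifyEqual(yout(end, 1), 42, 'AbsTol', 1e-6);
            testCase.verifyEqual(yout(end, 2), 7,  'AbsTol', 1e-6);
        end
    end
end
If you want to run the same check against several input sets, the input matrix is a natural candidate for a TestParameter in a parameterized test.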
Does that help, or am I way off base here? If you can add more details, I can try again.
Suppose I have a lot of source files and I want to organize them in a folder-tree structure.
Is it possible for me to have several files with the same name and use each of them from the place where I need it, or must all my functions and classes have different names?
In C++ I have #include to bring in the functions I need; what is the equivalent here?
Just to illustrate:
.\main.m
.\Algorithms\QR\Factory.m % function Factory
.\Algorithms\QR\Algorithm.m % function Algorithm
.\Algorithms\SVD\Factory.m % function Factory
.\Algorithms\SVD\Algorithm.m % function Algorithm
MATLAB has support for namespaces. So in your example you would create the following:
C:\some\path\main.m
C:\some\path\+algorithms\+qr\factor.m
C:\some\path\+algorithms\+svd\factor.m
(Note: only the top-level package folder's parent folder must be on the MATLAB path, i.e., addpath('C:\some\path'))
Then you could invoke each using its fully qualified name:
>> y = algorithms.qr.factor(x)
>> y = algorithms.svd.factor(x)
You could also import the package inside some scope. For example:
function y = main(x)
import algorithms.svd.*;
y = factor(x)
end
To understand the problem, I need to explain some differences between how C++ source and header files relate to each other and how .m files work.
First: in MATLAB, only the function defined at the top of a .m file can be called from outside; that file defines the top of the hierarchy. Subfunctions can be implemented in the same .m file, but they can only be used inside that same .m file.
Secondly: in addition to this, MATLAB searches its path for a specific filename and assumes that the function inside the file has the same name. You will notice this through a warning if you define the function with a name other than the filename. The point here is that you cannot have two MATLAB functions with the same name if all functions are global. This would be the same as having two functions with the same name in the same namespace in C++.
Note: setting up the MATLAB path is typically done with a hardcoded script in the top folder of your program, using the MATLAB function addpath.
This is a fundamental difference from C/C++, where multiple functions are allowed to be defined in the same source file. There, the header file selects which source code is visible to the program by providing the function declarations. The important thing is that the header filename is completely disconnected from the function names, which it is not in MATLAB. This means that the analogy in your example is not exactly accurate: what you propose is to "include" two functions with the same name, which is not possible in either C/C++ (assuming the functions use the same namespace or are global) or in MATLAB.
Example: if the headers topFolder/foo/bar.h and topFolder/baz/bar.h both contained the function void myDup(int a) and both used the same namespace (or the global namespace), that would generate an error.
However, if a function is only used by a limited number of other functions, then it, e.g. Factory.m, can be placed as a private function in several different folders; that also means only functions in the folder directly above each private folder can access it, as sketched below. It is also possible to use MATLAB namespaces as described in Amro's answer.
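For illustration, a hypothetical layout (the top-level function names here are invented) where each algorithm folder keeps its own private copy of Factory.m:
.\main.m
.\Algorithms\QR\qr_algorithm.m          % can call the private Factory.m below
.\Algorithms\QR\private\Factory.m       % visible only to functions directly in ..\QR
.\Algorithms\SVD\svd_algorithm.m        % can call its own private Factory.m
.\Algorithms\SVD\private\Factory.m      % visible only to functions directly in ..\SVD
The two Factory.m copies never clash because private folders are never added to the MATLAB path; note that the two top-level functions still need distinct names.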
I have a Fortran project with some name conflicts (from doxygen's point of view). Sometimes a local variable in a procedure may have the same name as a subroutine or function. For compilation/linking there are no problems, as the different definitions live separate lives, for instance:
progA/main.f defines and uses the variable delta.
libB/delta.f defines a function named delta.
progB/main.f uses the function delta defined in libB.
progB is linked with libB, progA is not linked with libB.
In this case, when generating call/caller graphs, or linked source code, the variable delta in progA/main.f will be identified as the function delta. Is there some combination of doxygen settings I can use to inform it that progA is not supposed to use definitions in libB, or something similar?
Another issue is that I may have functions/subroutines with the same name in different subdirectories. Again, as long as they are not linked together this does not represent a problem for compilation, but doxygen cannot identify which of them is meant in links, calls, etc. Is there some way to work around this (without renaming procedures, that is)?
I'm developing a Simulink model with many C S-functions. For easier handling I want to use the same constants in the C S-functions as in the Simulink model. So I have a C header with preprocessor constants like:
#define THIS_IS_A_CONSANT 10
And there is the question:
How is it possible to include this in Simulink in such a way that I can use THIS_IS_A_CONSANT, for example in a Constant source block, like a workspace variable?
Thanks and regards
Alex
There is functionality in Simulink that will allow you to include custom C header files that define constants, variables, etc.; however, as far as I know (and as one might expect) this really is only pertinent in cases where code is being generated and compiled.
So, for the most part, this particular functionality is only relevant when you are using Simulink Coder to generate a stand-alone executable from your model. For example, this link shows how to include parameters stored in an external header file during code generation through the use of Simulink.Parameter objects with Custom Storage Classes and the Code Generation - Custom Code Pane under the model's Configuration Parameters.
This link from the Simulink doc shows how to use the #define custom storage class to achieve similar results.
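As a rough sketch of that approach (assuming Embedded Coder is available; the property names follow the Simulink.Parameter custom-storage-class API, and the header/parameter names simply mirror your example):
% A parameter whose value is taken from a macro in an external header
% file at code-generation time (ImportedDefine custom storage class)
THIS_IS_A_CONSANT = Simulink.Parameter;
THIS_IS_A_CONSANT.Value = 10;                        % value used during simulation
THIS_IS_A_CONSANT.CoderInfo.StorageClass = 'Custom';
THIS_IS_A_CONSANT.CoderInfo.CustomStorageClass = 'ImportedDefine';
THIS_IS_A_CONSANT.CoderInfo.CustomAttributes.HeaderFile = 'header.h';
The object can then be referenced by name in a Constant block, and the generated code should #include header.h and use the macro rather than a hard-coded literal.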
However, it sounds like neither of these really solve your issue, as you want to make use of the code in the header file during simulation.
That said, considering that there are elements in Simulink, such as Stateflow Charts and MATLAB Function blocks, that generate and build code "under the hood" during simulation, it's (at least hypothetically) possible that you might be able to use some of the concepts described above to access the values in your header file from one of those elements during simulation. For example, I was successfully able to access preprocessor macros in a Stateflow chart just by going to the Simulation Target->Custom Code pane under Configuration Parameters and including the text #include "header.h" under Include custom C code in generated: Header file. (In this case, header.h contained the line of C code that you included in your post)
Although it seems like you should be able to extend this functionality further, this really was the limit of what I was able to achieve as far as accessing the header file during simulation was concerned. For example, I know that running a model in Rapid Accelerator mode actually generates and builds code under the hood, so seemingly you should be able to use some combination of the techniques I described above to be able to access values from the header file during simulation. It looks like the code that Rapid Accelerator mode generates doesn't respect all of the settings defined by those techniques in the same way that Simulink/Embedded Coder do, though, so I just kept running into compilation errors. (Although maybe I'm just missing some creative combination of settings that could make that work).
Hopefully that helps explain Simulink's abilities (and limitations) regarding the inclusion of C header files. So to summarize, according to the links included above, what you are asking for is almost barely possible, but in practice... not really.
So if really all you want is to be able to create workspace variables out of the preprocessor #define's in your header file, it probably is just easiest to manually parse the file with a MATLAB script, as had previously been suggested in the comments. Here is a quick-and-dirty script that loads in a header file, iterates over each line, uses a regular expression (which you can improve upon if needed) to parse #define statements, and then calls eval to create variables from the parsed input.
% Parse lines of the form "#define NAME numeric_value" and create a
% workspace variable for each one
filename = 'header.h';
pattern = '^\s*#define\s+(\w+)\s+(\d*\.?\d+)';

fid = fopen(filename);
tline = fgetl(fid);
while ischar(tline)
    tokens = regexp(tline, pattern, 'tokens', 'once');
    if numel(tokens) == 2
        % e.g. "#define THIS_IS_A_CONSANT 10" becomes THIS_IS_A_CONSANT = 10
        eval([tokens{1} ' = ' tokens{2} ';']);
    end
    tline = fgetl(fid);
end
fclose(fid);
You could put this code in a callback so that it will execute every time you load your model. Just go to File->Model Properties->Model Properties, click on the Callbacks tab, and then place the code under whichever callback you desire (such as PreLoadFcn if you want it to run immediately before the model loads).
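Alternatively, the callback can be set from the MATLAB command line; in this hypothetical snippet the model name is a placeholder and the parser above is assumed to be saved as parse_header_defines.m:
load_system('myModel');                                      % placeholder model name
set_param('myModel', 'PreLoadFcn', 'parse_header_defines');  % run the parser before each load
save_system('myModel');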