What are the guidelines for choosing generate statements over `define macros, and vice versa, in SystemVerilog?
For example, if I want to conditionally instantiate either module1 or module2, it seems I can either do
`ifdef COND1
module1 u_m ();
`else
module2 u_m ();
`endif
or
generate
if (COND1) begin
module1 u_m ();
end else begin
module2 u_m ();
end
endgenerate
People will have differing opinions, but one big difference between the two is that generates allow different instances to be configured differently, while macros do not. This is because generate blocks are evaluated at elaboration time, whereas macros are resolved at compile time.
For example, if we have a module:
module ahwoogaa #(bit COND1) ();
generate
if (COND1) begin
module1 u_m ();
end else begin
module2 u_m ();
end
endgenerate
endmodule
I can instantiate that twice, with a different value of COND1 for each instance, like so:
module neeenaaaw();
ahwoogaa #(1'b0) alarm1();
ahwoogaa #(1'b1) alarm2();
endmodule
With a define you have to have a single value of COND1 for all instances, because you set the value once, when you compile the module.
Personally I say go for generates whenever you can.
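For contrast, here is a sketch (hypothetical, reusing the names above) of the macro version of the same module: the choice is baked in once for the whole compilation, so every instance necessarily gets the same sub-module.

```
// Hypothetical sketch: with `ifdef, the selection is global to the compilation.
module ahwoogaa ();
`ifdef COND1
  module1 u_m ();
`else
  module2 u_m ();
`endif
endmodule

// Both instances contain the same sub-module, whether COND1 is defined or not:
module neeenaaaw();
  ahwoogaa alarm1();
  ahwoogaa alarm2();
endmodule
```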
`ifdef can be used inside port lists:
module music (
output [31:0] left,
`ifdef STEREO
output [31:0] right,
`endif
...
Generate allows loops, e.g. to instantiate a sub-module X times:
genvar inst;
generate
for (inst=0; inst<3; inst=inst+1) begin : gen_block
sub_module instancex( .clk, .din(din[inst]), .dout(dout[inst]) );
end
endgenerate
NB: this introduces an extra level of hierarchy (gen_block).
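As a sketch (the top-level name `top` is hypothetical), any hierarchical reference to one of the loop-generated instances has to go through the generate block's label and index:

```
// Hypothetical sketch: peeking at an instance created by the loop above.
// The path includes the generate block label (gen_block) and its index.
initial $display("dout[1] = %b", top.gen_block[1].instancex.dout);
```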
I'm assuming that the question concerns conditional instantiation only.
We faced a similar dilemma some time ago, when our project, initially developed for integration into one specific platform, had to be adapted for an additional platform. We wanted to use IFDEFs/Generates to maintain a single source code.
In my opinion:
1) IFDEFs look much simpler than Generates (and they really are), but they are dangerous. Use them only when you're completely sure that your conditional instantiation will be exclusive (see the answer by Paul S), you can't achieve the same functionality with generates, and you're completely sure you need this conditional instantiation. The usual place to use IFDEFs is in top hierarchies.
EDIT: there is another question on StackOverflow which shows a verification engineer struggling because of a designer's abuse of `ifdefs.
2) After the warning in point 1 above, I can tell you that IFDEFs have a huge advantage (a result of their being pre-processed): you can conditionally declare nets, parameters and ports with IFDEFs. Compare the following use of IFDEF:
module module1 (
input in1,
output logic out1
`ifdef COND
, output logic out2
`endif
);
assign out1 = in1;
`ifdef COND
submodule submodule1(.in(in1), .out(out2));
`endif
endmodule
and generate:
module module1 #(parameter bit COND = 0) (
input in1,
output logic out1,
output logic out2
);
assign out1 = in1;
generate if (COND)
submodule submodule1(.in(in1), .out(out2));
else
assign out2 = in1;
endgenerate
endmodule
In the above (very simple) example, when COND is not true and you use IFDEF, you need not worry about driving "out2" - this port will simply not exist. As a downside, you'll have to use the same IFDEF when instantiating "module1".
The advantage of IFDEF may not be obvious in this example, but imagine that you have 100 ports (or nets) connecting some sub-module. If the sub-module is not instantiated, with Generate you'll still have to tie all these ports to some value, whereas with IFDEFs you can remove them entirely.
3) As Morgan mentioned, Generate introduces one extra level of hierarchy. While it is slightly irritating to see this in a design navigator, it becomes a real pain in the #ss when you want to insert generates into existing code and there are tools that rely on hierarchical paths to modules - you have to find all these tools and update the paths there.
4) I have heard the opinion that IFDEFs "age" badly: maintaining your code will be more difficult if you use IFDEFs instead of Generates.
Summary: after trying for some time to differentiate our design with IFDEFs and Generates, we realized that it is not practical. Now we maintain two source trees in parallel - one for each platform. Use IFDEFs in top modules to remove/switch between modules and to remove unnecessary ports and parameters. Use Generates when you want to use the same modules with slightly different parametrization. Do not use these constructs to apply major changes to your design.
Related
I am interested in replacing my own PID-regulator models with MSL/Blocks/Continuous/LimPID. The problem is that this model restricts the output-signal limits to be parameters, and thus does not allow the time-varying limits that I need.
Studying the code I understand that the output limitation is created by a block MSL/Blocks/Nonlinear/Limiter and I just want to change this to the block VariableLimiter.
I can imagine that you need to ensure that the output limits change on a time scale slower than that of the regulator, in order not to excite unwanted behaviour of the controller. Still, there is a class of problems where it would be very useful to allow these limits to vary slowly.
Thanks for the good input to my question; below is a very simple example to refine it. (The LimPID is more complicated and I will come back to that.)
Let us instead just modify the block Add into a local block in MyModel.
I copy the code from Modelica.Blocks.Math.Add and call it Addb in MyModel. Since it depends on Interfaces.SI2SO, I need to add an import before the extends clause. I take this import from the ordinary MSL package instead of copying that into MyModel as well. Then I introduce a new parameter "bias" and modify the equation. The annotation may need some updating too, but we will not bother with that now.
model MyModel
...
block Addb "Output the sum of the two inputs"
import Modelica.Blocks.Interfaces;
extends Interfaces.SI2SO;
parameter Real k1=+1 "Gain of input signal 1";
parameter Real k2=+1 "Gain of input signal 2";
parameter Real bias=0 "Bias term";
equation
y = k1*u1 + k2*u2 + bias;
annotation (...);
end Addb;
end MyModel;
This code seems to work.
My added question is whether it is enough to look up extends clauses and other references to MSL and add the proper imports, now that the code is local, or whether there are more aspects to think of. The LimPID code is rather complex, with procedures for initialization etc., so I wonder if there is more to do than just bring in a number of import clauses.
The models in Modelica Standard Library (MSL) should only be seen as exemplary models, not covering all possible applications. MSL is write protected and it is not possible to replace the limiter block in LimPID (and add max/min input connectors). Also, it wouldn't work out if you shared your simulation model with others, expecting their MSL to work like your modified MSL.
Personally, I have my own libraries of components where MSL models are inadequate. For example, I have PID controllers with variable limits, manual/automatic functions and many more functions which are needed in my applications.
Often, I create a copy of an MSL model, place it in the same package in my own library and make the necessary modifications and additions, e.g. MyLibrary.Blocks.Continuous.PID.
I am working with already generated coverpoints and covergroups. I have a way to access all the coverpoints in the covergroup through an `include file, but cannot edit the coverpoints directly.
covergroup cg_FOO;
apple: coverpoint Atype_e'(sample.apple.value);
kiwi: coverpoint Ktype_e'(sample.kiwi.value);
`ifdef MY_COV
`include "cg_FOO.svh"
`endif
endgroup : cg_FOO
cg_FOO.svh is an example for this covergroup, but with another generated covergroup I have a separate associated file. I define all sorts of crosses and non-generated coverpoints in these `included files. However, I also want to specify certain enum values for coverpoints apple and kiwi that may not be valid for cg_FOO.
If I was able to change each coverpoint, I would just do the following:
apple: coverpoint Atype_e'(sample.apple.value) {
ignore_bins ignore_FUJI = {FUJI};
}
But, including a separate file into each coverpoint of each covergroup is messy and not feasible.
Everything I have found thus far makes it seem like I need to specify ignore_bins inside the coverpoint construct. How can I ignore certain bins of a coverpoint from within my cg_FOO.svh file (i.e. in the covergroup, but without changing all the generated coverpoints)?
Note, I also have 2 other `include files to work with, one outside the covergroups but inside the class that contains them (I am using this file for macro, variable, and function definitions), and another file to define helping logic just before the covergroups get sampled (ie when sample gets defined).
You're out of luck unless the tool you are using gives you an API to access and modify the coverage database. Bins are very difficult to access because they have no easy names to reference; you have to scan through them. It is very easy, however, to exclude entire coverpoints externally by setting their weights to 0.
Without knowing exactly what your coverage model looks like, it's hard to give you a better answer.
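As a sketch of the weight-based exclusion (the instance name cg_inst is hypothetical): coverpoint options can be set procedurally on a covergroup instance, without touching the generated source.

```
// Hypothetical sketch: drop the "apple" coverpoint from the coverage score
// by zeroing its weight from outside the generated code.
cg_FOO cg_inst = new();

initial begin
  cg_inst.apple.option.weight = 0; // apple no longer contributes to coverage
end
```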
I have to embed DV code inside an RTL module for verification purposes. There are many (thousands of) instances of this RTL module. How do I make it controllable on a per-instance basis from the test? The testbench is in SystemVerilog UVM. I want to stay away from DPI-C.
Any suggestions would be appreciated
-Hawki
You use the bind construct to insert a module or interface into your RTL module. Inside this bound module you construct a class with methods that interact with your RTL module. The class object is set into the uvm_config_db for each instance. Then your testbench gets these objects from the uvm_config_db, and you can call those objects' methods from the testbench.
I wrote a DVCon 2012 paper The Missing Link: The Testbench to DUT Connection with a complete example for doing this.
As Dave described in his answer, bind statements are usually the best way to go.
There are other ways as well, though, which might be more convenient in some cases; they are based on parameterized modules.
1) direct instantiation with parameters
module rtl #(bit dvflag = 0) ();
...
if (dvflag)
dv_module dv_instance(...);
...
endmodule
module dut;
rtl rtl1();
rtl #(.dvflag(1)) rtl2();
...
endmodule
2) Overwriting instances with v2k config statements.
module dut;
rtl rtl1();
rtl rtl2();
endmodule
...
config dv_cfg;
design work.dut;  // a config requires a design statement; library name assumed
instance dut.rtl2 use #(.dvflag(1));
...
endconfig
As much as I can (mostly for clarity/documentation), I've been trying to say
use Some::Module;
use Another::Module qw( some namespaces );
in my Perl modules that use other modules.
I've been cleaning up some old code and see some places where I reference modules in my code without ever having used them:
my $example = Yet::Another::Module->AFunction($data); # EXAMPLE 1
my $demo = Whats::The::Difference::Here($data); # EXAMPLE 2
So my questions are:
Is there a performance impact (I'm thinking compile time) by not stating use x and simply referencing it in the code?
I assume that I shouldn't use modules that aren't utilized in the code - I'm telling the compiler to compile code that is unnecessary.
What's the difference between calling functions in example 1's style versus example 2's style?
I would say that this falls firmly into the category of premature optimisation, and if you're not sure, then leave it in. You would have to be including some vast unused libraries for removing them to help at all.
It is typical of Perl to hide a complex issue behind a simple mechanism that will generally do what you mean without too much thought
The simple mechanisms are these
use My::Module 'function' is the same as writing
BEGIN {
require My::Module;
My::Module->import( 'function' );
}
The first time perl successfully executes a require statement, it adds an element to the global %INC hash which has the "pathified" module name (in this case, My/Module.pm) for a key and the absolute location where it found the source as a value
If another require for the same module is encountered (that is, it already exists in the %INC hash) then require does nothing
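A short sketch of the %INC mechanism described above (the module choice is arbitrary):

```perl
use strict;
use warnings;

require File::Spec;                  # first require: finds and compiles the module
print "$INC{'File/Spec.pm'}\n";      # the absolute path where perl found the source
require File::Spec;                  # second require: a no-op, the %INC key already exists
```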
So your question
What happens if I reference a package but don't use/require it?
We're going to have a problem with use, utilise, include and reference here, so I'm code-quoting only use and require when I mean the Perl language words.
Keeping things simple, these are the three possibilities
As above, if require is seen more than once for the same module source, then it is ignored after the first time. The only overhead is checking to see whether there is a corresponding element in %INC
Clearly, if you use source files that aren't needed then you are doing unnecessary compilation. But Perl is damn fast, and you will be able to shave only fractions of a second from the build time unless you have a program that uses enormous libraries and looks like use Catalyst; print "Hello, world!\n";
We know what happens if you make method calls to a class library that has never been compiled. We get
Can't locate object method "new" via package "My::Class" (perhaps you forgot to load "My::Class"?)
If you're using a function library, then what matters is the part of use that says
My::Module->import( 'function' );
because the first part is require and we already know that require never does anything twice. Calling import is usually a simple function call, and you would be saving nothing significant by avoiding it
What is perhaps less obvious is the cost of big modules that pull in multiple subsidiary modules. For instance, if I write just
use LWP::UserAgent;
then LWP::UserAgent knows what it is likely to need, and these modules will also be compiled:
Carp
Config
Exporter
Exporter::Heavy
Fcntl
HTTP::Date
HTTP::Headers
HTTP::Message
HTTP::Request
HTTP::Response
HTTP::Status
LWP
LWP::MemberMixin
LWP::Protocol
LWP::UserAgent
Storable
Time::Local
URI
URI::Escape
and that's ignoring the pragmas!
Did you ever feel like you were kicking your heels, waiting for an LWP program to compile?
I would say that, in the interests of keeping your Perl code clear and tidy, it may be an idea to remove unnecessary modules from the compilation phase. But don't agonise over it, and benchmark your build times before doing any pre-handover tidy. No one will thank you for reducing the build time by 20ms and then causing them hours of work because you removed a non-obvious requirement.
You actually have a bunch of questions.
Is there a performance impact (thinking compile time) by not stating use x and simply referencing it in the code?
No, there is no performance impact, because you can't do that. Every namespace you are using in a working program gets defined somewhere. Either you used or required it earlier to where it's called, or one of your dependencies did, or another way1 was used to make Perl aware of it
Perl keeps track of those things in symbol tables. They hold all the knowledge about namespaces and variable names. So if your Some::Module is not in the referenced symbol table, Perl will complain.
I assume that I shouldn't use modules that aren't utilized in the code - I'm telling the compiler to compile code that is unnecessary.
There is no question here. But yes, you should not do that.
It's hard to say if this is a performance impact. If you have a large Catalyst application that just runs and runs for months it doesn't really matter. Startup cost is usually not relevant in that case. But if this is a cronjob that runs every minute and processes a huge pile of data, then an additional module might well be a performance impact.
That's actually also a reason why all use and require statements should be at the top. So it's easy to find them if you need to add or remove some.
What's the difference between calling functions in example 1's style versus example 2's style?
Those are for different purposes mostly.
my $example = Yet::Another::Module->AFunction($data); # EXAMPLE 1
This syntax is very similar to the following:
my $e = Yet::Another::Module::AFunction('Yet::Another::Module', $data)
It's used for class methods in OOP. The most well-known one would be new, as in Foo->new. It passes the thing in front of the -> (its class name if it's a blessed reference, or the bareword itself) as the first argument to the function named AFunction in that package. But it does more: because it's a method call, it also takes inheritance into account.
package Yet::Another::Module;
use parent 'A::First::Module';
1;
package A::First::Module;
sub AFunction { ... }
In this case, your example would also call AFunction, because it's inherited from A::First::Module. In addition to the symbol table referenced above, Perl uses @ISA to keep track of who inherits from whom. See perlobj for more details.
my $demo = Whats::The:Difference::Here($data); # EXAMPLE 2
This has a syntax error. There is a : missing after The.
my $demo = Whats::The::Difference::Here($data); # EXAMPLE 2
This is a function call. It calls the function Here in the package Whats::The::Difference and passes $data and nothing else.
Note that as Borodin points out in a comment, your function names are very atypical and confusing. Usually functions in Perl are written with all lowercase and with underscores _ instead of camel case. So AFunction should be a_function, and Here should be here.
1) for example, you can have multiple package definitions in one file, which you should not normally do, or you could assign stuff into a namespace directly with syntax like *Some::Namespace::frobnicate = sub {...}. There are other ways, but that's a bit out of scope for this answer.
My current project sets an environment variable in a perl module and then later on makes a call from a SystemVerilog file to a function that uses that variable. The requirement is that whatever we added in the perl module is present in the environment variable on time of the call.
The problem however is that something between the perl module and systemverilog call meddles with my variable. I can't figure out what it is and fixing this issue is not pertinent to my project so I just want to set the variable to whatever the perl module sets it to and move on.
There's a handy getenv function in Perl and I am able to use getenv in SV as well. But there doesn't seem to be a setenv. What is the appropriate way to set an environment variable in SV?
Is the perl code invoked from within SystemVerilog using a $system() call? If so, environment changes made by the perl code will definitely NOT propagate back to the SV world, because those changes are made only in the $system() subprocess's environment.
The setenv() system call works for me via SystemVerilog DPI-C in all the tools I use (recent Fedora OS, recent versions of Mentor/Cadence/Synopsys simulators), but there may be some older *nix systems on which it's not available. I used the prototype as given in "man 3 setenv". Looking at discussions on other StackOverflow forums, it seems that using putenv() is not a great idea, especially from the DPI where you have no idea what will happen to the memory used for the DPI string argument. setenv() makes a copy of its argument strings, and should not be at risk from that problem.
It seems to me that if your tool flow isn't correctly propagating environment variables in the way you intend, then you have bigger problems than how to mess with the env from SystemVerilog. I specifically chose NOT to add environment-modifying functions to the svlib utility library, precisely because using the environment is a very bad way to communicate information within a SV simulation. I guess it would make sense if you need to set up an environment for some external program that you would then invoke using a SV $system() call.
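As a sketch of the DPI-C route (the wrapper name sv_setenv is hypothetical): the C side can simply wrap the POSIX setenv(), which copies its argument strings and so avoids the putenv() lifetime problem mentioned above.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical DPI-C wrapper. On the SystemVerilog side this would be imported as:
 *   import "DPI-C" function int sv_setenv(string name, string value, int overwrite);
 * setenv() copies its arguments, so the DPI string storage may be reclaimed safely. */
int sv_setenv(const char *name, const char *value, int overwrite)
{
    return setenv(name, value, overwrite); /* 0 on success, -1 on error */
}
```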
Mh... Turns out the answer is trivial, but this is the only thread on this question, so I'll leave it up in case someone else finds themselves in a similar situation:
SystemVerilog does not have built-in setenv() or getenv() functions. They're actually imported from C using the following construct:
module foo();  // or: program foo();
import "DPI-C" function <return type> my_function(<function arguments>);
endmodule  // or: endprogram
Apparently in my case someone had done this for getenv() but never setenv(). The reason I didn't catch it was that the code in question was included in the following way:
**foo.sv**
if(var.bit) begin
call_function();
use_environment_variable();
end
**bar.sv**
module bar();
<do stuff>
`include "foo.sv" <-- foo code is copied in after calculations have occurred.
endmodule
Trying to put the DPI-C import in foo.sv triggers an error, because the import would land in the middle of the procedural code, after the calculations have taken place. To solve this we need to do the import in bar.sv, like so:
module bar();
import "DPI-C" function int setenv(string name, string value, int override);
<do stuff>
`include "foo.sv"
endmodule
Setting environment variables from SV is not very useful unless you are running another executable from SV.
If you want to get environment variables, you can use Verilab's svlib functions:
function automatic string sys_getEnv(string envVar);
function automatic bit sys_hasEnv(string envVar);