SystemVerilog: scope of text substitution macro

I read that text substitution macros have global scope in Verilog. How does this work in SystemVerilog? I want to use two different definitions of the same text macro in two different SystemVerilog files - is that OK to do?

In SystemVerilog, macro definitions are limited to the compilation-unit scope, but what constitutes a compilation unit depends on the tool configuration. From the specification:
The exact mechanism for defining which files constitute a compilation
unit is tool-specific. However, compliant tools shall provide use
models that allow both of the following cases:
a) All files on a given compilation command line make a single
compilation unit (in which case the declarations within those files
are accessible following normal visibility rules throughout the
entire set of files).
b) Each file is a separate compilation unit (in which case the
declarations in each compilation-unit scope are accessible only
within its corresponding file).
Therefore, if you use multiple-file compilation units (-mfcu for ModelSim), there will be collisions, since the macro namespace is global within the compilation unit. However, the specification explicitly allows redefinitions, so you may not get an error (or warning) in this case unless your tool supports such a check.
The text macro name space is global within the compilation unit.
Because text macro names are introduced and used with a leading `
character, they remain unambiguous with any other name space. The text
macro names are defined in the linear order of appearance in the set
of input files that make up the compilation unit. Subsequent
definitions of the same name override the previous definitions for the
balance of the input files.
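As a quick sketch of that override behaviour (the macro name WIDTH and the signals below are made up for illustration):
module tb;
  `define WIDTH 8
  logic [`WIDTH-1:0] a;   // expands to logic [8-1:0] a;
  `define WIDTH 16        // later definition overrides the earlier one from here on
  logic [`WIDTH-1:0] b;   // expands to logic [16-1:0] b;
endmodule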
Depending on how you are using macros, you may want to consider using parameters instead. Parameters are essentially constants that are more limited in scope than preprocessor directives. They can also be used to conditionally instantiate code using generate constructs.
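For example, a minimal sketch (the module and parameter names are invented here) of a parameter selecting between two implementations with a generate construct:
module buffer #(parameter int DEPTH = 16) (input logic clk);
  generate
    if (DEPTH > 1) begin : g_mem
      logic [7:0] mem [DEPTH];   // multi-entry storage
    end else begin : g_reg
      logic [7:0] mem_single;    // single register
    end
  endgenerate
endmodule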
You can get the SystemVerilog specification (IEEE Std 1800) for free from the IEEE.

If the desired macros have a similar structure/format, then you can use a macro with arguments. See IEEE 1800-2012 Section 22.5.1.
`define myMacro(arg1,arg2) \
prefix_``arg1 = arg2``_postfix
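As an illustration (the argument names are arbitrary, and prefix_data and init_postfix are assumed to be declared elsewhere), a call such as:
`myMacro(data, init);
expands to:
prefix_data = init_postfix;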
If the desired macro definition is used exclusively within its respective file and is unique, then you can do the following. All other files will not have a `mymacro that can be called. `undef is from Verilog, IEEE 1364-1995 Section 16.3.2, and has been included in SystemVerilog. You can read more about `undef in the latest revision, IEEE 1800-2012 Section 22.5.2.
file1.sv:
`define mymacro abcd
/* SystemVerilog code */
`undef mymacro
file2.sv:
`define mymacro wxyz
/* SystemVerilog code */
`undef mymacro

Related

Use of internal rules in Rust macros 2.0

I cannot understand where lazy_static's #TAIL and #MAKE have been defined, or what their particular use cases are.
If I've understood internal rules correctly, the primary usage of #as_expr in the example is to hide as_expr! (or, in general, previously defined macros) from being exported, i.e. it's a way of altering the global macro namespace. Following that reasoning, #TAIL and #MAKE should already be macros, yet I cannot find them in the lazy_static source.
You linked to the definitions. #TAIL is right there, three lines down, on line 137; #MAKE is on line 162.
#name is not special in any way whatsoever. There is absolutely no special behaviour. It's just a sequence of tokens that cannot show up in "normal" code, and is thus unlikely to be accidentally matched to other rules. #as_expr does not hide an as_expr! macro; it is used instead of defining a publicly visible as_expr! macro.

Macros in package

As I understand it, SystemVerilog does not support macro definitions inside a package.
If you want to implement your own macros for UVM, then you should write them in a separate file and include that file at the top, similar to including the "uvm_macros.svh" file.
Can someone please confirm this?
Macro definitions and other compiler directives are processed as part of a compilation unit before any other SystemVerilog syntax gets recognized. So the text for a macro definition might appear within the text that defines a package, but the definition is valid for any source code that appears after it in the compilation unit, and it has no relevance to any scope defined in SystemVerilog. So yes, you do want to put your macros in a separate file and include them in any compilation unit that wants to use them.
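As an illustrative sketch (the file and macro names here are invented), the common pattern is an include guard around the macro definitions, with the file included wherever the macros are needed:
// my_macros.svh
`ifndef MY_MACROS_SVH
`define MY_MACROS_SVH
`define my_info(msg) $display("[%0t] INFO: %s", $time, msg)
`endif // MY_MACROS_SVH

// my_pkg.sv
`include "my_macros.svh"
package my_pkg;
  // package code that may use `my_info
endpackage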
Please see:
https://verificationacademy.com/forums/ovm/do-you-include-or-import#reply-35286

gcc precompiler directive __attribute__ ((__cleanup__)) vs ((cleanup)) (with vs without underscores?)

I'm learning about gcc's cleanup attribute and how it calls a function to be run when a variable goes out of scope, but I don't understand why you can use the word "cleanup" with or without underscores. Where is the documentation for the version with underscores?
The gcc documentation above shows it like this:
__attribute__ ((cleanup(cleanup_function)))
However, most code samples I read, show it like this:
__attribute__ ((__cleanup__(cleanup_function)))
Ex:
http://echorand.me/site/notes/articles/c_cleanup/cleanup_attribute_c.html
http://www.nongnu.org/avr-libc/user-manual/atomic_8h_source.html
Note that the first example link states they are identical, and of course coding it proves this, but how did he know this originally? Where did this come from?
Why the difference? Where is __cleanup__ defined or documented, as opposed to cleanup?
My fundamental problem lies in the fact that I don't know what I don't know, therefore I am trying to expose some of my unknown unknowns so they become known unknowns, until I can study them and make them known knowns.
My thinking is that perhaps there is some globally-applied principle to gcc preprocessor directives, where you can arbitrarily add underscores before or after any of them? -- Or perhaps only some of them? -- Or perhaps it modifies the preprocessor directive or attribute somehow and there are cases where one method, with or without the extra underscores, is preferred over the other?
You are allowed to define a macro cleanup, as it is not a name that is reserved to the compiler. You are not allowed to define one named __cleanup__. This guarantees that your code using __cleanup__ is unaffected by other code (provided that other code behaves, of course).
As https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html#Attribute-Syntax explains:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
(But note that attributes are not preprocessor directives.)

What is the difference between 'define as' and 'define as computed' in Specman?

The difference between the two is not so clear from the Cadence documentation.
Could someone please elaborate on the difference between the two?
A define as macro is just a plain old macro that you probably know from other programming languages. It just means that at some select locations in the macro code you can substitute your own code.
A define as computed macro allows you to construct your output code programmatically, by using control flow statements (if, for, etc.). It acts kind of like a function that returns a string, with the return value being the code that will be inserted in its place by the pre-processor.
With both define as and define as computed macros you define a new syntactic construct of a given syntactic category (for example, <statement> or <action>), and you implement the replacement code that replaces a construct matching the macro match expression (or pattern).
In both cases the macro match expression can have syntactic arguments that are used inside the replacement code and are substituted with the actual code strings used in the matched code.
The difference is that with a define as macro the replacement code is just written in the macro body.
With a define as computed macro you write procedural code that computes the desired replacement code text and returns it as a string. It's effectively a method that returns a string; you can even use the result keyword to assign the resulting string, just like in any e method.
A define as computed macro is useful when the replacement code is not fixed, and can be different depending on the exact macro argument values or even semantic context (for example, in some cases a reflection query can be used to decide on the exact replacement code).
(But it's important to remember that even define as computed macros are executed during compilation and not at run time, so they cannot query actual run time values of fields or variables to decide on the resulting replacement code).
Here are some important differences between the two macro kinds.
A define as macro is more readable and usually easier to write. You just write down the code that you want to be created.
Define as computed macros are more powerful. Everything that can be implemented with define as can also be implemented with define as computed, but not vice versa. When the replacement code is not fixed, define as is not sufficient.
A define as macro can be used immediately after its definition. If the construct it introduces is used in the statement just following the macro, it will already be matched. A define as computed macro can only be used in the next file, and is not usable in the same file in which the macro is defined.

Ensuring hygiene in the absence of reify

It's easy to write hygienic macros in Scala by using reify and eval. But it's not always possible to use reify and eval.
So, if one can't use them, what are the rules that will ensure that a macro is hygienic? And is there any way to test a macro to ensure that no bad hygiene has slipped through the cracks?
Update: in later milestones of 2.10.0, Expr.eval got renamed to Expr.splice.
Reify is hygienic, because it saves symbols along with Ident and This trees.
If your macro expansion result doesn't have symbols attached to idents (e.g. you have just Ident("x") to specify a reference to something named x), then subsequent typechecking of the macro expansion will bind x to whatever is in scope of the call site (or if that scope does not have an x, you will get a compilation error).
By contrast, when your macro expansion has symbols for its idents, the typechecker doesn't attempt to re-resolve them and simply uses what it has. This means that when you reify an expression and use the result in a macro expansion, it will carry its symbols into the call site. Well, not all symbols, e.g. it is impossible to refer to local variables or to private/protected things, but references to globally accessible declarations are persisted.
The bottom line is that to check whether your macro is hygienic, check whether your idents and This trees have symbols attached to them. You can achieve this by reifying or by manually assigning symbols to your hand-crafted trees.
Since reify is a macro, I'd just look at its implementation to figure out what it does.