Verilog-2001 supports compiling modules with different implementations using the "config" facility. In my multi-chip UVM environment I need to use two different packages (chip_top_pkg.sv) which have exactly the same name but different UVM components.
Is there a way to compile them separately and use them at elaboration? Or do I necessarily have to prefix all package names with, say, a unique chip name?
-sanjeev
Unfortunately, SystemVerilog packages are processed early in compilation and must be declared before they can be referenced. Module elaboration happens much later in the process, which is what makes the later binding of the config construct possible.
So your package names must be unique across the system.
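For example, a minimal sketch of the prefixing approach (the chip and package names here are made up):

// Give each chip's package a unique, chip-specific name at the source level
package chipA_top_pkg;
  import uvm_pkg::*;
  // chip A's UVM components
endpackage : chipA_top_pkg

package chipB_top_pkg;
  import uvm_pkg::*;
  // chip B's UVM components
endpackage : chipB_top_pkg

// The multi-chip environment then refers to chipA_top_pkg::... and chipB_top_pkg::...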
Related
When compiling RTL from multiple sources it is normal to compile them into separate SystemVerilog libraries. Doing this means they cannot interfere with each other, and you can compile multiple different modules with the same name into different libraries.
SystemVerilog configurations are used to select which library a module is elaborated from, as described in the SystemVerilog LRM (IEEE 1800-2017, Clause 33 "Configuring the contents of a design"). E.g.
config cfg1; // specify rtl adder for top.a1, gate-level adder for top.a2
  design rtlLib.top;
  default liblist rtlLib;
  instance top.a2 liblist gateLib;
endconfig
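The libraries named by a config are typically declared in a library map file that the compiler reads before the design sources; a minimal sketch (the library names and file patterns are placeholders) could look like this:

// library map file: assigns source files to named libraries
library rtlLib  "rtl/*.sv";
library gateLib "gates/*.v";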
Does Verilator support compilation into separate libraries like the commercial simulators?
No and it never will.
Chapter 33 of the LRM is specifically precluded from being supported by Verilator. See here: https://github.com/verilator/verilator/blob/master/docs/internals.rst#never-features
I work with multiple grammars in the REPL. The grammars use the same names for some of their rules.
One of the documentation recipes mentions full qualification to disambiguate type annotations in function pattern matching (it's in a note on the load function, but not in the code on that page - the .jar has it correct). But that might become tedious, so maybe there is aliasing for imports (like Python's import regex as r)?! And using full qualification in the first argument of the parse function, parse(#lang::java::\syntax::Java18::CompilationUnit, src), doesn't seem to help to disambiguate all parse rules that are invoked recursively. At least it produces weird errors if I also import lang::java::\syntax::Java15.
In general, what is a safe way to handle symbols from different modules with the same names?
Alternatively, is there a way to "unload" a module in the repl?
Some background information:
Rascal modules are open for reasons of extensibility; in particular, data types, syntax definitions and overloaded functions can be extended by importing another module. In this way you can extend a language and its processing functions by adding rules and function alternatives at leisure.
There is a semantic difference between importing and extending a module. In particular, import is not transitive and fuses only the uses of a name inside the importing module, while extend is transitive and also fuses recursive uses of a name in the module that is extended (see the sketch after this background). So for extending a language you'd default to using extend, while for using a library of functions you'd use import.
We are planning to remove the fusing behavior from import completely in one of the releases of 2020. After this all conflictingly imported non-terminal names must be disambiguated by prefixing with the module name, and prefixing will not have a side-effect of fusing recursively used non-terminals from different modules anymore. Not so for extend, which will still fuse the non-terminal and functions all the way.
All the definitions in a REPL instance simulate the semantics of the members of a single anonymous module.
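As a rough sketch of the import/extend difference (the module names are made up; each module lives in its own file):

// CoolJava.rsc: extend is transitive and fuses recursive uses of non-terminals,
// so this is how you add rules to an existing language definition
module CoolJava
extend lang::java::\syntax::Java18;

// MyAnalysis.rsc: import is not transitive; names are fused only for
// uses inside this importing module
module MyAnalysis
import lang::java::\syntax::Java18;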
So to answer your questions:
it's not particularly safe to handle symbols from different imported modules with the same name, until we fix the semantics of import that is.
the module prefix trick only works "top-level", below this the types are fused anyway because the code which reifies a non-terminal as a grammar does not propagate the prefix. It wouldn't know how.
Unimporting a module:
rascal>import IO;
ok
rascal>println("x");
x
ok
rascal>:un
undeclare unimport
rascal>:unimport IO
ok
rascal>println("x");
|prompt:///|(0,7,<1,0>,<1,7>): Undeclared variable: println
This is probably one of the least used features in the environment; caveat emptor!
To work around these issues, one approach is to write the functions for every separate language/language version inside a different module, and to create a top module which imports these if you want to bundle the functionality in a single interface. This way, because import is not transitive, the namespaces stay separate and clean. Of course this does not solve the REPL issue; the only thing I can offer there is to start a fresh REPL for each language version you are playing with.
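A minimal sketch of that layout (module and function names are made up):

// Java18Facade.rsc: one facade module per language version
module Java18Facade
import lang::java::\syntax::Java18;
import ParseTree;
CompilationUnit parseJava18(str src) = parse(#CompilationUnit, src);

// Java15Facade.rsc
module Java15Facade
import lang::java::\syntax::Java15;
import ParseTree;
CompilationUnit parseJava15(str src) = parse(#CompilationUnit, src);

// AllJava.rsc: bundles the facades; because import is not transitive,
// the two grammars' rule names do not clash here
module AllJava
import Java18Facade;
import Java15Facade;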
I was trying to understand the different sections in the package declaration file (.dec) of an EDK2 module, but I can't figure out why some GUID definitions are under the [Guids] section while others are under the [Protocols] or [Ppis] section. Is there a reason why they should not all be under the same section, especially from the perspective of the EDK2 build process?
So, this is half an answer at most, but:
A GUID, ultimately, is nothing other than a 128-bit value statistically guaranteed to be unique (if generated using the defined method).
The [Guids] section of the .dec defines GUIDs that name generic things: data structures, variable namespaces, and so on.
The [Protocols] section defines discoverable UEFI APIs, whereas [Ppis] defines PEI (Pre-EFI) APIs.
Ultimately, this becomes relevant when processing module .inf files, which declare which [Guids], [Protocols] and [Ppis] they require to build. I.e., you could possibly get away with just declaring everything as GUIDs - but then you'd lose any sanity checking preventing you from using PPIs in DXE, or the other way around.
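As a rough illustration (the names and GUID values below are invented), the section a GUID is declared in is what tells the build tools how modules are allowed to use it:

[Guids]
  gExampleDataGuid     = { 0x11111111, 0x1111, 0x1111, { 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11 }}

[Protocols]
  gExampleProtocolGuid = { 0x22222222, 0x2222, 0x2222, { 0x22, 0x22, 0x22, 0x22, 0x22, 0x22, 0x22, 0x22 }}

[Ppis]
  gExamplePpiGuid      = { 0x33333333, 0x3333, 0x3333, { 0x33, 0x33, 0x33, 0x33, 0x33, 0x33, 0x33, 0x33 }}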
I'm researching how to package some of my Perl apps and better manage their dependencies, to make distribution easier for me and my customers; most likely this won't involve uploading to CPAN at all. Instead, I would provide custom repos if necessary or, more likely, access to SCMs like Subversion.
CPAN::Meta::Spec seems to provide what I need to describe my apps, their dependencies and even where to get them from, but what I'm wondering is about the level of detail of pre-requisites. The spec contains the following sentence:
The set of relations must be specified as a Map of package names to version ranges.
Requiring packages seems a little too low-level for my needs; I would prefer requiring distributions instead - pretty much the level that (from my understanding) tools like Maven and Gradle work at, e.g. Apache Commons Lang vs. Apache Commons IO etc., instead of individual classes like org.apache.commons.lang3.AnnotationUtils or org.apache.commons.io.ByteOrderMark. OTOH, the example in the docs contains the following lines:
requires => {
  'perl'       => '5.006',
  'File::Spec' => '0.86',
  'JSON'       => '2.16',
},
The line containing perl doesn't look like a package to me, and I didn't find any package perl or perl.pm anywhere on my system. It seems to me that this is handled differently from the other entries in the example.
I have a system-wide folder containing e.g. some utility packages, which seems comparable to that abstract perl to me. That folder should be defined as one distribution, maintain one version number for all of the packages in it, and therefore should allow other apps to require the whole thing. If I understand the docs correctly, I would need to create not only the META.yml in that folder, but additionally some file, e.g. sysutils.pm, containing package sysutils; and defining some version.
Is there some way to avoid creating that file and really require the distribution itself only?
The META.yml already contains a name and version on its own, so it looks like an abstract thing one could require in theory. I don't see the need for an additional .pm file representing the distribution only to make require work. It wouldn't contain any business logic in my case.
Thanks!
That's really not what you want to do. You want to pre-req what you actually require. So, for example, if you need File::Spec, that's what you need, regardless of whether it comes from perl core or from a separate CPAN distribution.
I've seen cases where certain modules have moved from CPAN to core, or vice versa. By requiring the module directly, you don't need to ship new releases of your dependent distributions simply because someone you depend on changed their method of distribution.
I've also seen cases where certain modules are split off from their original distributions when it was determined they were valuable as standalone modules. Depending on the module means that you no longer drag in a bunch of other modules for a simple dependency.
What you're more or less looking for is akin to the Task::* modules. No real logic in most of them, just a list of further dependencies.
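For instance, a Task-style module is usually little more than this (a hypothetical sketch; the actual dependency list lives in the distribution's metadata, not in the code):

package Task::MyApp::Deps;
# Intentionally contains no business logic: installing this distribution
# only pulls in the prerequisites it declares.
use strict;
use warnings;
our $VERSION = '1.00';
1;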
The Perl dependency system works entirely on package names, on multiple levels. When a CPAN distribution is uploaded, each package within is indexed by PAUSE, which also checks if the uploader has permissions for that package and that the package has a newer version than the currently indexed package. None of these checks care about the distribution as a whole (though the indexer does do other checks at that level).
Then, when a CPAN client sees a dependency, or you tell it to install something, it checks the index for that package name, which tells it what distribution release to install. If it depends on a certain version, that is checked against the $VERSION declared in that package if you have it installed; whereas once a distribution is installed, its "version" is no longer tracked. The distribution level is almost entirely meaningless except that it is what is ultimately downloaded and installed to satisfy these dependencies. This is important, because modules can and do move between distributions, maintaining their version increments, and the package index will always tell you which distribution to get the version you need.
As you noticed, the perl dependency is weird. It's a special case that has been there forever: as a convention, to declare what version of Perl you require, you declare a runtime requirement on perl. It is not an indexed module, and every CPAN client and other consumer of CPAN metadata special-cases this to either ignore it or treat it as a minimum Perl version, rather than as something that can be installed. There's no way to extend this to work for distributions in general, and it would be a bad idea to try.
As an additional note, the CPAN meta spec is a specification for the file named META.json included in CPAN distributions (META.yml is the legacy version), but this file is automatically generated by your authoring tool. It should never be manually created, though you may have your authoring tool manually add certain keys (in which case reading the spec is important), including prereqs. See neilb's blog post for how to specify dependencies for various authoring tools, which will then transpose these into the generated META file, and also how to use cpanfiles to specify dependencies in general.
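For example, a cpanfile declares prerequisites by package name (the modules and versions below are only illustrative):

# cpanfile
requires 'perl', '5.010';
requires 'File::Spec', '0.86';
requires 'JSON', '2.16';

on 'test' => sub {
    requires 'Test::More', '0.96';
};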
If I am not mistaken, once a package has been analyzed, its visibility is global (like that of a module, for example).
If the design and verification teams each have their own "common_pkg" package, is it possible to somehow compile them both and use design's common_pkg for design and verification's common_pkg for verification?
My idea was to limit their scope by encapsulating them in design/verification packages, like so:
package design_pkg;
  package common_pkg;
    typedef enum {<something>} my_type;
  endpackage : common_pkg
endpackage : design_pkg

package verification_pkg;
  package common_pkg;
    typedef enum {<something_else>} my_type;
  endpackage : common_pkg
endpackage : verification_pkg

// In design:
design_pkg::common_pkg::my_type my_design_var;

// In verification:
verification_pkg::common_pkg::my_type my_verification_var;
But it seems that package nesting is illegal in SystemVerilog, which is strange since module definitions can be nested.
Is there a solution for this problem, other than renaming the packages and avoiding too "broad" names such as "common_pkg" which might conflict with other areas?
Welcome to SystemVerilog :-(. Unfortunately SV does not provide any solution for package name collisions, nor any good way to handle any other kind of name collision in the global scope.
This is why you might hear the term 'uniquification' from some companies. It really means modifying source code in an automated way to uniquely name global scope items, like packages, modules, macros, ... for verilog building blocks coming from multiple teams (IPs).
So, the best solution is for your teams to talk to each other and agree on the names they use.
Wouldn't it be great if engineering teams could work together and common_pkg really meant a common package common to all?
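As a sketch of what such an agreement can look like (the prefixes are made up), each team keeps its own uniquely named package and the shared name simply goes away:

// Design team's package
package dsn_common_pkg;
  typedef enum {DSN_A, DSN_B} my_type;
endpackage : dsn_common_pkg

// Verification team's package
package vrf_common_pkg;
  typedef enum {VRF_X, VRF_Y} my_type;
endpackage : vrf_common_pkg

// In design:        dsn_common_pkg::my_type my_design_var;
// In verification:  vrf_common_pkg::my_type my_verification_var;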
SystemVerilog has no built-in way of having multiple packages with the same name. Even if you could nest packages, the visibility of the package would be limited to the package it was enclosed in.
Some tools might allow you to compile and optimize the design with its package, and then bind the design unit as a whole.