Does Verilator support SystemVerilog libraries?

When compiling RTL from multiple sources, it is common to compile them into separate SystemVerilog libraries. This keeps them from interfering with one another, and lets you compile different modules that share the same name into different libraries.
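For example, a library map file can assign source files to named libraries (a sketch based on LRM Clause 33; the file paths are hypothetical):

library rtlLib  "rtl/*.sv";
library gateLib "gates/*.v";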
SystemVerilog configurations are then used to select which library a module is elaborated from, as described in the SystemVerilog LRM (IEEE 1800-2017, Clause 33, "Configuring the contents of a design"). For example:
config cfg1; // specify rtl adder for top.a1, gate-level adder for top.a2
  design rtlLib.top;
  default liblist rtlLib;
  instance top.a2 liblist gateLib;
endconfig
Does Verilator support compilation into separate libraries like the commercial simulators?

No, and it never will.
Chapter 33 of the LRM is specifically precluded from being supported by Verilator. See here: https://github.com/verilator/verilator/blob/master/docs/internals.rst#never-features

Related

What are the differences between different Modelica Simulation Environments?

There are different Modelica simulation environments, including Dymola, Wolfram SystemModeler, OpenModelica, and JModelica. I tried to load a thermal fluid library (ThermoSysPro, https://github.com/Dwarf-Planet-Project/ThermoSysPro), but except for Dymola, all the other tools end with errors.
If the library and the simulation environments are all based on the Modelica language specification, why is there a compatibility issue? I think the library may include some features that are only supported by Dymola. Could anyone clarify the differences between these simulation environments?
In general:
The tool you use might not support certain Modelica language elements
Just because a tool supports Modelica does not mean it has implemented everything the Modelica standard defines yet. Take OpenModelica, for example, which did not fully support synchronous features before v1.12.
The code of the library might not conform to the version of the Modelica Language Specification (Modelica spec) used by your tool
Some tools allow certain things that are not defined in the Modelica spec: maybe because the spec was not precise enough on a topic, or maybe they are a bit ahead and already support things that might become part of future spec versions.
In Dymola you have two options to check more strictly whether your code conforms to the current Modelica Language Specification: use the pedantic mode for checking, and set the flag Advanced.EnableAnnotationCheck=true to make Dymola check annotations as well.
In your concrete example: there are various problems with the ThermoSysPro library, which might explain your troubles.
The library was written with the rather old Modelica Standard Library (MSL) 3.2.1, which is based on the Modelica Language Specification 3.2.
The current Dymola version (2020) uses the Modelica Language Specification 3.4 (see the Dymola release notes of each version to find that out).
OpenModelica apparently supports Modelica 3.3 (as noted in the release notes).
The MSL has also evolved a bit in the meantime, with the current version being 3.2.3.
Hence, ThermoSysPro needs to be updated to the latest MSL version 3.2.3 and to the Modelica spec version your tool supports. Then you can start comparing in which tools it works and in which it does not.
The library does not fully work in Dymola either
I tested with the latest Dymola version and with Dymola 2016 FD01, which shipped with MSL 3.2.1.
Dymola 2016 FD01: 31 errors, 62 warnings
Dymola 2020: 175 errors, 95 warnings
The library contains invalid language elements. Two examples:
In ThermoSysPro.Examples.SimpleExamples.TestCentrifugalPump, OpenModelica v1.14 beta 2 complains that cardinality is not used in a legal way. Apparently Dymola 2020 does not care (even in pedantic mode), but it is against the Modelica Spec 3.4.
Many models contain the annotation DymolaStoredErrors, which is not standard-conformant: custom tool annotations must start with '__'.
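For illustration, a minimal sketch of a standard-conformant vendor annotation (the annotation name is just one example of Dymola's "__Dymola_" prefix):

model M
  Real x = 1;
  // Legal: the vendor-specific annotation name starts with "__"
  annotation (__Dymola_Commands(file="run.mos"));
  // Not conformant would be e.g.: annotation (DymolaStoredErrors);
end M;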

GraalVM: How to implement compiler optimizations?

I want to develop a tool that performs certain optimizations in a program based on the program's structure. For example, let's say I want to identify an if-else inside a loop and have my tool rewrite it into two loops.
I want the tool to be able to rewrite programs from a wide range of languages, for example Java, C++, Python, JavaScript, etc.
I am exploring whether GraalVM can be used for this purpose, acting as a common platform on which I can implement the same optimizations for various languages.
Does GraalVM have a common intermediate representation (something like the LLVM IR)? I looked at the documentation but I am not sure where to get started. Any pointers?
Note: I am not looking for inter-operability between languages. You can assume that the programs I want to rewrite are written in one single language; the language may be different for different programs.
GraalVM has two components that are relevant here:
the Graal compiler, which compiles Java bytecode to native code
Truffle, a framework for implementing other programming languages on top of GraalVM.
Languages implemented with the Truffle framework are partially evaluated to Java bytecode, which is then compiled by the Graal compiler. This article/talk gives more details, including the IR used by the Graal compiler: https://chrisseaton.com/truffleruby/jokerconf17/. Depending on your concrete use case, you may want to hook into Truffle, the Truffle partial evaluator, or the Graal compiler.
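For a feel of what you would hook into on the Truffle side, here is a minimal, hypothetical sketch of a Truffle AST node (it assumes the Truffle API is on the classpath; the node classes are made up). A guest language is represented as a tree of such nodes, so a rewrite like your loop splitting would be a transformation over these trees:

import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.Node;
import com.oracle.truffle.api.nodes.Node.Child;

// Base class for expression nodes of a hypothetical guest language.
abstract class ExprNode extends Node {
    abstract long execute(VirtualFrame frame);
}

// Adds the results of its two child expressions.
final class AddNode extends ExprNode {
    @Child private ExprNode left;
    @Child private ExprNode right;

    AddNode(ExprNode left, ExprNode right) {
        this.left = left;
        this.right = right;
    }

    @Override
    long execute(VirtualFrame frame) {
        return left.execute(frame) + right.execute(frame);
    }
}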

Compiling Verilog packages with the same name

Verilog 2001 has support for compiling modules with different implementations using the "config" facility. In my multi-chip UVM env I need to use two different packages (chip_top_pkg.sv) which have exactly the same name but different UVM components.
Is there a way to compile them separately and use them at elaboration? Or do I necessarily have to prefix all package names with, say, a unique chip name?
-sanjeev
Unfortunately, SystemVerilog packages are processed early in the compilation process and must be declared before they can be referenced. Module elaboration happens much later in the process, which is what allows the late binding of the config construct.
So your package names must be unique across the system.
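For example, a hypothetical renaming along the lines the question suggests (the uvm_pkg import is shown only for context):

package chipA_top_pkg;
  import uvm_pkg::*;
  // UVM components for chip A
endpackage

package chipB_top_pkg;
  import uvm_pkg::*;
  // UVM components for chip B
endpackage

// Each chip's env then imports the package it needs:
import chipA_top_pkg::*;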

What is the best way to implement optional library dependencies in Rust?

I am writing a toy software library in Rust that needs to be able to load images of almost any type into an internal data structure for the image. It is early days for the Rust ecosystem, and there is no one library/set of bindings that I would trust for this task.
I would ideally like:
Support multiple redundant external libraries that may or may not be available at runtime.
Support multiple redundant external libraries that may or may not be available at compile-time.
Include at least one fallback implementation that ships with my code.
Fully encapsulate all of the file loading stuff behind a function that does path -> InternalImage loading.
Is there a best practice way to implement optional dependencies like this in Rust? Some of the libraries will be Rust, and some of them will probably be C libraries with Rust bindings.
Cargo, the Rust package manager, can help with that. It lets you declare optional compile-time dependencies; see the [features] section of Cargo's documentation.
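A minimal sketch of how that can look (the image crate and the feature name are just examples). In Cargo.toml, an optional dependency is only compiled in when its feature is enabled:

# Cargo.toml
[dependencies]
image = { version = "0.24", optional = true }  # opt-in external backend

In the library itself, cfg attributes select the backend at compile time while the public API stays a single path -> InternalImage function:

use std::path::Path;

pub struct InternalImage {
    pub width: u32,
    pub height: u32,
    pub pixels: Vec<u8>, // RGBA8
}

#[derive(Debug)]
pub struct LoadError(pub String);

// Single public entry point: optional backends compiled in via Cargo
// features are tried first, then the bundled fallback decoder.
pub fn load(path: &Path) -> Result<InternalImage, LoadError> {
    if let Some(img) = try_image_backend(path) {
        return Ok(img);
    }
    fallback::load(path)
}

#[cfg(feature = "image")]
fn try_image_backend(path: &Path) -> Option<InternalImage> {
    let img = image::open(path).ok()?.to_rgba8();
    Some(InternalImage {
        width: img.width(),
        height: img.height(),
        pixels: img.into_raw(),
    })
}

#[cfg(not(feature = "image"))]
fn try_image_backend(_path: &Path) -> Option<InternalImage> {
    None // backend not compiled in
}

mod fallback {
    use super::{InternalImage, LoadError};
    use std::path::Path;

    // Bundled decoder that always ships with the library; a real
    // implementation would parse at least one simple format (e.g. PPM).
    pub fn load(_path: &Path) -> Result<InternalImage, LoadError> {
        Err(LoadError("unsupported format".to_string()))
    }
}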
For runtime dependencies I'm not sure. std::dynamic_lib could be helpful; see the example of using DynamicLibrary in a previous SO question. (Note that std::dynamic_lib was an unstable API and has since been removed from the standard library.)

Avoid namespace conflicts in Java MPI bindings

I am using the MPJ API for my current project. The two implementations I am using are MPJ Express and FastMPJ. However, since they both implement the same API, namely the MPJ API, I cannot simultaneously support both implementations due to namespace collisions.
Is there any way to wrap two different libraries that use the same package and class names so that both can be supported at the same time in Java or Scala?
So far, the only way I can think of is to move the module into separate projects, but I am not sure this would be the way to go.
If your code uses only a subset of MPI functions (like most of the MPI code I've reviewed), you can write an abstraction layer (traits or even the Cake pattern) that defines the operations you actually use. You can then implement a concrete adapter for each implementation.
This approach will also work with non-MPI communication layers (think Akka, JGroups, etc.).
As a bonus, you could use the SLF4J approach: the correct implementation is chosen at runtime according to what is actually on the classpath.
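A minimal Scala sketch of such a layer (all names are hypothetical):

// The operations the application actually needs, nothing more.
trait Comm {
  def rank: Int
  def send(dest: Int, tag: Int, payload: Array[Byte]): Unit
  def recv(src: Int, tag: Int): Array[Byte]
}

// One concrete adapter per MPJ implementation, each kept in its own
// project/module so the colliding mpi.* classes never share a classpath:
//   final class MpjExpressComm extends Comm { ... }
//   final class FastMpjComm extends Comm { ... }

object Comm {
  // SLF4J-style runtime selection: instantiate whichever adapter the
  // deployment actually put on the classpath.
  def load(): Comm =
    Class.forName("myapp.comm.AdapterComm") // hypothetical adapter class
      .getDeclaredConstructor()
      .newInstance()
      .asInstanceOf[Comm]
}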