Why is import package outside module visible from other files/modules? - system-verilog

file A:
`timescale 1ns/10ps
import mypkg::*;
module test_A (
...
);
file B:
`timescale 1ns/10ps
module test_B (
...
);
Why are the typedefs from mypkg visible in file B?
Isn't the import supposed to be scoped to the compilation unit (i.e., the file here)?

Section 3.12.1 (Compilation units) of the IEEE 1800-2017 SystemVerilog LRM gives tools the choice of treating each file as a separate compilation unit, as in C/C++, or treating all files compiled in a single command line as one compilation unit, as in Verilog. All tools treat each file as a separate compilation unit when the files are compiled with separate command lines.
Unfortunately there is no standard default choice; you need to read your tool's User Manual.
I know that Modelsim/Questa treats each SystemVerilog file as a separate compilation unit by default.
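To make this concrete, here is a hedged sketch using Questa/ModelSim's vlog (the -mfcu flag is Questa-specific; other tools spell this differently, so check your manual):

```shell
# Separate command lines: fileA.sv and fileB.sv are always separate
# compilation units, so the wildcard import of mypkg in fileA.sv
# cannot leak into fileB.sv.
vlog fileA.sv
vlog fileB.sv

# One command line with -mfcu ("multi-file compilation unit"): all
# listed files form a single compilation unit, so declarations made
# visible in fileA.sv are also visible in fileB.sv.
vlog -mfcu fileA.sv fileB.sv
```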

TIMESCALEMOD verilator error when attempting to add a new black box in chisel

I'm trying to add a new blackboxed verilog module to the chipyard hardware generation framework and simulate it with verilator.
My changes pass chipyard's Scala compilation phase, in which the Chisel hardware specification is compiled into Verilog. However, during the "verilation" process, in which that Verilog is translated into a C++ executable, I'm encountering an error:
%Error-TIMESCALEMOD: [filename].v:238066:15: Timescale missing on this
                     module as other modules have it (IEEE 1800-2017 3.14.2.2)
chipyard/sims/verilator/generated-src/. . ./ClockDividerN.sv:8:8: ... Location of module with timescale
    8 | module ClockDividerN #(parameter DIV = 1)(output logic clk_out = 1'b0, input clk_in);
      |        ^~~~~~~~~~~~~
%Error: Exiting due to 1 error(s)
Searching around, the "timescale" this is referring to appears to be a configuration option for verilator simulations to do with how much time (usually in picoseconds) advances during one step of the simulation.
What's strange is the error claims that this module "ClockDividerN" (also a blackboxed verilog module included in the chipyard generator's vsrc directory) has a timescale, but the verilog source for ClockDividerN does not contain anything that appears to be a timescale.
Likewise, adding a timescale directive to the Verilog source I'm trying to integrate produces the same error message. There are some verilator command-line options to do with timescales, but they're difficult to add in the chipyard framework (it uses a pretty opaque makefile to run verilator).
Any help?
Update: the documentation for handling a TIMESCALEMOD error recommends using the "--timescale" command-line argument, but it turns out chipyard's Makefile for verilator simulations already uses that argument!
Something went wrong when the blackbox resources were added. Make sure the addResource path is correct. The TIMESCALEMOD error has nothing to do with timescales themselves; a blackbox path being included incorrectly is what triggers it.
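For reference, a minimal Chisel blackbox-with-resource sketch (the module and port names are hypothetical; the key point is that the addResource path must match the file's location under src/main/resources):

```scala
import chisel3._
import chisel3.util.HasBlackBoxResource

// Hypothetical blackbox: the Verilog source must live at
// src/main/resources/vsrc/MyBlackBox.v for this path to resolve.
class MyBlackBox extends BlackBox with HasBlackBoxResource {
  val io = IO(new Bundle {
    val in  = Input(Bool())
    val out = Output(Bool())
  })
  addResource("/vsrc/MyBlackBox.v")
}
```

If the path passed to addResource does not resolve to the file, the Verilog never makes it into the verilated sources correctly, and the confusing TIMESCALEMOD error appears downstream.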

How to import in F#

I have a file called Parser.fs with module Parser at the top of the file. It compiles. I have another module in the same directory, Main, that looks like this:
module Main
open Parser
let _ = //do stuff
I tried to compile Main.fs with $ fsharpc Main.fs (I don't know if there's another way to compile). The first error is module or namespace 'Parser' is not defined; all the other errors stem from the functions in Parser not being in scope.
I don't know if it matters, but I did try compiling Main after Parser, and it still didn't work. What am I doing wrong?
F#, unlike Haskell, does not have separate compilation. Well, it does at the assembly level, but not at the module level. If you want both modules to be in the same assembly, you need to compile them together:
fsharpc Parser.fs Main.fs
Another difference from Haskell: order of compilation matters. If you reverse the files, it won't compile.
Alternatively, you could compile Parser into its own assembly:
fsharpc Parser.fs -o:Parser.dll
And then reference that assembly when compiling Main:
fsharpc Main.fs -r:Parser.dll
That said, I would recommend using an fsproj project file (analog of cabal file). Less headache, more control.
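For reference, here is a minimal .fsproj sketch (file names taken from the question; the TargetFramework value is an assumption). Note that, as above, compile order matters, so Parser.fs must be listed before Main.fs:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Order matters: Parser.fs must come before Main.fs -->
    <Compile Include="Parser.fs" />
    <Compile Include="Main.fs" />
  </ItemGroup>
</Project>
```

With this in place, dotnet build compiles both files together into one assembly.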

SwiftPM: How to setup Swift module.map referring to two connected C libraries

I'm trying to build a Swift Package Manager system package (a module.modulemap)
making available two system C libraries where one includes the other.
That is, one (say libcurl) is a base module and the other C library is including
that (like so: #include "libcurl.h"). On the regular C side this works, because
the makefiles pass in proper -I flags and all is good (and I could presumably
do the same in SPM, but I'd like to avoid extra flags to SPM).
So what I came up with is this module map:
module CBase [system] {
  header "/usr/include/curl.h"
  link "curl"
  export *
}
module CMyLib [system] {
  use CBase
  header "/usr/include/mylib.h"
  link "mylib"
  export *
}
I got importing CBase in a Swift package working fine.
But when I try to import CMyLib, the compiler complains:
error: 'curl.h' file not found
Which is kinda understandable because the compiler doesn't know where to look
(though I assumed that use CBase would help).
Is there a way to get this to work w/o having to add -Xcc -I flags to the
build process?
Update 1: To a degree this is covered in
Swift SR-145
and
SE-0063: SwiftPM System Module Search Paths.
The recommendation is to use the Package.swift pkgConfig setting. This seems to work OK for my specific setup. However, it is a chicken and egg if there is no .pc file. I tried embedding an own .pc file in the package, but the system package directory isn't added to the PKG_CONFIG_PATH (and hence won't be considered during the compilation of a dependent module). So the question stands: how to accomplish that in an environment where there libs are installed, but w/o a .pc file (just header and lib).

how does scala interpreter execute source file directly without creating class file

I have this scala code
object S extends App{
println("This is trait program")
}
When I execute scala S.scala it runs fine.
Now I want to know how it can execute the code without compiling and creating a class file.
Scala is a compiled language: the code needs to be compiled, and the resulting .class file is what gets executed.
Maybe you are thinking in using the REPL, where you can interactively code: https://www.scala-lang.org/documentation/getting-started.html#run-it-interactively
But, under the hood, the REPL is compiling your code, and executing the compiled .class
The scala command that you are launching is used to start the Scala REPL, and if you provide a file as an argument, it will execute the content of the file as if it had been bulk-pasted into a REPL.
It's true that Scala is a compiled language, but that does not mean a .class file is necessary. All the Scala compiler needs to do is generate the relevant JVM bytecode and call the JVM with it. This does not mean it explicitly has to create a .class file in the directory from which you called it; it can do everything in memory and temporary storage and just call the JVM with the generated bytecode.
If you are looking to explicitly generate class files with Scala that you can later execute by calling java manually, you should use the Scala compiler CLI (command: scalac).
Please note that the Scala compiler has interfaces to check and potentially compile Scala code on the fly, which is very useful for IDEs (check out IntelliJ and Ensime).
Just call main() on the object (which inherits this method from App):
S.main(Array())
main() expects an Array[String], so you can just provide an empty array.
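Putting the two pieces together, a minimal sketch (the Runner object is mine, added for illustration):

```scala
// The asker's object: extending App supplies main(args: Array[String])
object S extends App {
  println("This is trait program")
}

// Any other code can launch it explicitly:
object Runner {
  def main(args: Array[String]): Unit =
    S.main(Array.empty[String]) // runs S's body, printing the message
}
```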
Scala is a compiled language in the sense of the source-code-to-Java-bytecode transition; however, some tricks can be used to make it resemble an interpreted language. A naive implementation of scala Myscript.scala would follow these steps:
1. scalac Myscript.scala. This generates S.class (the entry class that contains the main method) and potentially other class files.
2. scala -cp . S. This runs the main entry of the class file. -cp . specifies the classpath, and S is the entry class without the .class file extension. Note that "running" here means interpreting (Java) bytecode, rather than Scala/Java source code, which is done by the JVM runtime.
3. Remove all the temporarily generated class files. This step is optional as long as the users are not aware of the temporary files (i.e., they are transparent to users).
That is to say, scala acts as a driver that may handle 0) initialization, 1) compilation (scalac), 2) execution (scala), and 3) cleanup.
The actual procedure may differ (e.g., due to performance concerns some files are kept only in memory or a cache, not generated, or not deleted, using lower-level APIs of the scala driver, etc.), but the general idea should be similar.
On Linux machines, you might find some evidences in /tmp folder. For me,
$ tree /tmp
/tmp
├── hsperfdata_hongxu
│   └── 64143
└── scala-develhongxu
├── output-redirects
│   ├── scala-compile-server-err.log
│   └── scala-compile-server-out.log
└── scalac-compile-server-port
└── 34751
4 directories, 4 files
It is also noteworthy that this way of running Scala is not full-fledged. For example, package declarations are not supported.
// MyScript.scala
package hw
object S extends App {
println("Hello, world!")
}
And it emits an error:
$ scala Myscript.scala
.../Myscript.scala:1: error: illegal start of definition
package hw
^
one error found
Others have also mentioned the REPL (read–eval–print loop), which is quite similar. Essentially, almost every language can have an (interactive) interpreter. Here is an excerpt from Wikipedia:
REPLs can be created to support any language. REPL support for compiled languages is usually achieved by implementing an interpreter on top of a virtual machine which provides an interface to the compiler. Examples of REPLs for compiled languages include CINT (and its successor Cling), Ch, and BeanShell
However, interpreted languages (Python, Ruby, etc.) typically lend themselves to this more naturally because of their dynamic nature and runtime VMs/interpreters.
Additionally, the gap between compiled and interpreted is not that big, and you can see that Scala actually has some interpreted features (at least in appearance), since it makes you feel that you can execute it like a scripting language.

CMake macro inclusion issue

The aim is to have small and useful libraries included into a main application.
I created a CMakeLists.txt file to build three different libraries: image, utils_dir and utils_geom. The thing that bothers me is the horrible redundancy in the target definitions. So I tried to create some macros, and I'm confronted with an inclusion issue.
The pattern of my project is presented below.
src/CMakeLists.txt (main CMakeLists including subdirs)
src/cmake/Macro.cmake (containing macro)
src/libs/core/CMakeLists.txt (library def and macro use)
I can't include my Macro.cmake file, which contains the macro definition.
With the following code in the top-level CMakeLists.txt (in src/):
include(Macro.cmake)
test_macro()
And in the Macro.cmake :
macro( test_macro )
  MESSAGE("Success !")
endmacro()
I get:
CMake Error at libs/core/CMakeLists.txt:8 (include):
include could not find load file:
Macro.cmake
CMake Error at libs/core/CMakeLists.txt:9 (test_macro):
Unknown CMake command "test_macro".
Is anyone using a similar configuration?
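A likely cause, sketched under the assumption of the layout above: include() with a bare file name is resolved relative to the current source directory (or against CMAKE_MODULE_PATH), so include(Macro.cmake) fails when evaluated from src/libs/core/. One common fix is to register the cmake/ directory once in the top-level CMakeLists.txt:

```cmake
# src/CMakeLists.txt -- hypothetical sketch
cmake_minimum_required(VERSION 3.10)
project(MyApp)

# Make cmake/ a search location for include(<name>) ...
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")

# ... so the macro file can be included by name from anywhere,
# including src/libs/core/CMakeLists.txt:
include(Macro)   # finds ${CMAKE_SOURCE_DIR}/cmake/Macro.cmake

test_macro()
add_subdirectory(libs/core)
```

Also note that endmacro needs parentheses -- a bare endmacro without () is itself a syntax error.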