TIMESCALEMOD verilator error when attempting to add a new black box in chisel

I'm trying to add a new blackboxed verilog module to the chipyard hardware generation framework and simulate it with verilator.
My changes pass Chipyard's Scala compilation phase, in which the Chisel hardware specification is elaborated into Verilog. However, during the "verilation" step, in which that Verilog is translated into a C++ executable, I encounter an error:
%Error-TIMESCALEMOD: [filename].v:238066:15: Timescale missing on this module as other modules have it (IEEE 1800-2017 3.14.2.2)
chipyard/sims/verilator/generated-src/. . ./ClockDividerN.sv:8:8: ... Location of module with timescale
    8 | module ClockDividerN #(parameter DIV = 1)(output logic clk_out = 1'b0, input clk_in);
      |        ^~~~~~~~~~~~~
%Error: Exiting due to 1 error(s)
Searching around, the "timescale" this refers to appears to be a configuration option for Verilator simulations governing how much time (usually in picoseconds) advances during one step of the simulation.
What's strange is the error claims that this module "ClockDividerN" (also a blackboxed verilog module included in the chipyard generator's vsrc directory) has a timescale, but the verilog source for ClockDividerN does not contain anything that appears to be a timescale.
Likewise, adding a timescale directive to the verilog source I'm trying to integrate produces the same error message. There are some Verilator command-line options to do with timescales, but they're difficult to add in the Chipyard framework (it uses a pretty opaque Makefile to run Verilator).
Any help?
Update: the documentation for handling a TIMESCALEMOD error recommends using the "--timescale" command-line argument, but it turns out chipyard's Makefile for verilator simulations already uses that argument!
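For reference, the relevant options from Verilator's documentation look like this (the values here are illustrative):
verilator --timescale 1ns/1ps --timescale-override 1ns/1ps ...
--timescale sets the default time unit/precision for modules that don't declare one, while --timescale-override forces it even on modules that do.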

Something has gone wrong in how your blackbox resources are added: make sure the path you pass to addResource is correct. The TIMESCALEMOD error itself has nothing to do with timescales in your code; a blackbox path that is included incorrectly is what sets it off.
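For reference, a minimal sketch of the usual Chisel pattern (the module and file names here are hypothetical; the path must match where the Verilog file actually sits under src/main/resources):
import chisel3._
import chisel3.util.HasBlackBoxResource

// Hypothetical blackbox wrapping a Verilog module MyAdder defined in
// src/main/resources/vsrc/MyAdder.v
class MyAdder extends BlackBox with HasBlackBoxResource {
  val io = IO(new Bundle {
    val a   = Input(UInt(32.W))
    val b   = Input(UInt(32.W))
    val out = Output(UInt(32.W))
  })
  // The path is relative to the resources directory. A wrong path here
  // still passes Scala compilation; it only blows up later, during
  // elaboration or verilation.
  addResource("/vsrc/MyAdder.v")
}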

Related

Converting Modelica models from Dymola to JModelica - Addressing Errors in Log File

I am currently trying to compile the netCDF-DataReader in JModelica but it appears the package has been developed in Dymola. The process fails at the compilation stage:
netCD = compile_fmu('NcDataReader2.Examples.Simple', r'H:\Modelica\Modelica Libraries\NcDataReader2', compiler_log_level='w,i:log.txt')
CcodeCompilationError: Compilation of generated C code failed.
The log file created contains 326 lines. Midway it says
====== Model compiled successfully =======
But there are many errors after. Some of the errors include:
Warning: .drectve `/DEFAULTLIB:"LIBCMT" /DEFAULTLIB:"OLDNAMES" ' unrecognized
collect2.exe: error: ld returned 1 exit status
mingw32-make[1]: *** [ceval_] Error 1
Cannot export ??_C#_01LFCBOECM#?4?$AA#: symbol not found
Cannot export ??_C#_01NOFIACDB#w?$AA#: symbol not found
C:\JModelica.org-2.1\install\Makefiles\MakeFile:190: recipe for target 'fmume10' failed
I don't have much experience with compilers and debugging C-code and would prefer to spend my time focused on creating models; therefore this leads to a number of questions:
Are there patterns in this error log that could be addressed in such a way as to make Dymola libraries usable in other Modelica-based programs, such as JModelica?
Are there any other compilers that may be better suited to cross-compatible models?
Am I wasting my time trying to make Dymola models run in JModelica? Would it be more sensible to recreate the model separately in JModelica? The lack of a front-end interface makes this tricky.
The problem is that the external libraries shipped with netCDF-DataReader need to be compiled with the GCC compiler available in the JModelica distribution. Try recompiling the libraries using GCC and put them in NcDataReader2\Resources\Library\win32 (or, even better, in NcDataReader2\Resources\Library\win32\gcc472).
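As a rough sketch of that recompilation (the file and library names here are illustrative; the actual sources ship with netCDF-DataReader), using the MinGW GCC bundled with JModelica:
# compile the library sources and archive them into a static library
gcc -c -O2 -Iinclude src/*.c
ar rcs libncDataReader2.a *.o
# place the result where the Modelica library annotation will find it
cp libncDataReader2.a NcDataReader2/Resources/Library/win32/gcc472/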

Installing LibLinear on MATLAB R2014a

I'm trying to install LibLinear for MATLAB R2014a on Linux. When compiling in MATLAB, the read.mexa64 and write.mexa64 are created just fine; it's on the train.mexa64 that it fails. The error I used to get was:
/home/admin/Documents/Project/Software/liblinear-1.94/linear.cpp:2739:19: warning: ignoring return value of ‘int fscanf(FILE*, const char*, ...)’, declared with attribute warn_unused_result [-Wunused-result]
fscanf(fp,"\n");
               ^
If make.m fails, please check README about detailed instructions.
So I sorted out the handling of the return value in linear.cpp, and it still fails, providing only the last line directing me to the README. I've also tried installing gcc-4.7, g++-4.7 and cpp-4.7, but the make process still terminates without any information. As the README suggests, I have the MATLAB directory set to:
MATLABDIR = /usr/local/matlab
and I have included
-U_FORTIFY_SOURCE
in the CFLAGS just in case it was the ignored values causing a fuss. I managed to compile it from the command line in the main directory and it works fine, but it would be awesome to have a nice MATLAB interface :) If anyone's managed to get it up and running I'd be super grateful for any help!
If anyone stumbles across this: I found a method that seems to work. I tried using the Makefile in the matlab folder directly but kept getting strange compiler errors asking me to change the source directory. However, if you go into the Makefile and change the line
MEX_OPTION = CC\#$(CXX) CXX\#$(CXX) CFLAGS\#"$(CFLAGS)" CXXFLAGS\#"$(CFLAGS)"
to
MEX_OPTION = CC=$(CXX) CXX=$(CXX) CFLAGS="$(CFLAGS)" CXXFLAGS="$(CFLAGS)"
it should work.
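With that change, running make from the matlab subdirectory should produce the MEX files; the Makefile's MATLABDIR variable can be overridden on the command line (path assumed from the question):
make MATLABDIR=/usr/local/matlab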

How do I get information about compiler (version) that is used by Cython and f2py in IPython?

Does anyone know if there is a way to print the compiler (and its version) that is used when I use the Fortran magic and Cython magic in IPython?
For example, like the compiler that was used to build Python: platform.python_compiler()
There are probably better ways to do this, but here are two quick ones.
For Cython, the first thing that came to mind was to make a Cython file that passes the Cython compiler and causes an error at the C level.
Here's a simple one.
cdef extern from "nosuchheader.h":
    void myfakefunction(int a, double b)
On my computer IPython shows an error from distutils saying that "gcc failed with exit status 1".
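In an IPython session that looks something like this (assuming the Cython extension is installed; the header name is deliberately fake):
%load_ext cython

%%cython
cdef extern from "nosuchheader.h":
    void myfakefunction(int a, double b)
The traceback from the failed compilation names the C compiler command that was invoked.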
I don't currently use the %%fortran magic, but you should be able to see what f2py is doing based on its output.
f2py usually shows which compiler it is using, both when it searches for a compiler, and when it actually calls the Fortran compiler.
To figure that out, I'd recommend compiling some snippet of Fortran code via f2py and looking at the output.
On my Windows machine it shows the output as f2py searches for a Fortran compiler, and prints the lines
'Found executable C:\\mingw64\\bin\\gfortran.exe',
'Found executable C:\\mingw64\\bin\\gfortran.exe',
This tells me it is using gfortran.
Further down in the output it also shows the commands used to build the Fortran source code.
The documentation for the fortran magic mentions how to get verbose output.
If you pass the flag -vvv to the fortran magic, it will print the output from f2py.
You could also try looking at the %fortran_config magic mentioned in the documentation.
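For example, with the fortranmagic extension installed, a tiny test cell is enough (the subroutine here is a throwaway example):
%load_ext fortranmagic

%%fortran -vvv
subroutine add_one(x, y)
    real, intent(in) :: x
    real, intent(out) :: y
    y = x + 1.0
end subroutine add_one
The verbose output includes the compiler search ("Found executable ...") and the exact commands used to compile and link.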

Translate F2PY compile steps into setup.py

I've inherited a Fortran 77 code that implements several subroutines, run through a program block that requires a significant amount of user input via an interactive command prompt every time the program is run. Since I'd like to automate running the code, I moved all the subroutines into a module and wrote a wrapper code through F2PY. Everything works fine after a 2-step compilation:
gfortran -c my_module.f90 -o my_module.o -ffixed-form
f2py -c my_module.o -m my_wrapper my_wrapper.f90
This ultimately creates four files: my_module.o, my_wrapper.o, my_module.mod, and my_wrapper.so. The my_wrapper.so is the module which I import into Python to access the legacy Fortran code.
My goal is to include this code in a larger package of scientific codes, which already has a setup.py using distutils to build a Cython module. Totally ignoring the Cython code for the moment, how am I supposed to translate the 2-step build into an extension in the setup.py? The closest I've been able to figure out looks like:
from numpy.distutils.core import setup, Extension

wrapper = Extension('my_wrapper', ['my_wrapper.f90'])

setup(
    libraries=[('my_module', dict(sources=['my_module.f90'],
                                  extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules=[wrapper],
)
This doesn't work, though. My compiler throws many warnings on my_module.f90, but it still compiles (it throws no warnings if I use the compiler invocation above). When it tries to compile the wrapper, though, it fails to find my_module.mod, even though it is successfully created.
Any thoughts? I have a feeling I'm missing something trivial, but the documentation just doesn't seem fleshed out enough to indicate what it might be.
It might be a little late, but your problem is that you are not linking in my_module when building my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90'], libraries=['my_module'])

setup(
    libraries=[('my_module', dict(sources=['my_module.f90'],
                                  extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules=[wrapper],
)
If your only use of my_module is for my_wrapper, you could simply add it to the sources of my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90', 'my_module.f90'],
                    extra_f90_compile_args=["-ffixed-form"])

setup(
    ext_modules=[wrapper],
)
Note that this will also export everything in my_module to Python, which you probably don't want.
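Either way, the extension then builds with the usual distutils invocation, after which the wrapper is importable:
python setup.py build_ext --inplace
python -c "import my_wrapper"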
I am dealing with such a two-layer library structure outside of Python, using CMake as the top-level build system. I have it set up so that make python calls distutils to build the Python wrappers. The setup.pys can safely assume that all external libraries are already built and installed. This strategy is advantageous if one wants general-purpose libraries that are installed system-wide and then wrapped for different applications such as Python, Matlab, Octave, IDL, ..., which all have different ways to build extensions.
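A minimal sketch of such a target in CMake (the paths and the setup.py location are assumptions about the project layout):
# CMakeLists.txt fragment: "make python" delegates to distutils
find_package(Python COMPONENTS Interpreter REQUIRED)
add_custom_target(python
    COMMAND ${Python_EXECUTABLE} setup.py build_ext --inplace
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/python
    COMMENT "Building Python wrappers via distutils")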

Unresolved __builtin_ia32_stmxcsr

I have inherited code that I'm trying to compile with gcc on Linux.
What library am I looking for that has __builtin_ia32_stmxcsr?
Apologies -- I was too fast to submit; I'm running gcc inside of Nvidia Eclipse. The actual error message is "Function . . . could not be resolved", so I jumped to the conclusion that I needed to reference some lib. As the offending lines have a #if defined(SSE), I take it to mean that the -msse2 switch is present, although I cannot seem to find a copy of the compile command line. [Just learning this Eclipse tool -- very new!]
You don't need to link with anything - the "builtin" in the name is a clue that it's a gcc built-in (intrinsic) compiler function.
However you do need to be compiling for an x86 target with SSE enabled for this to be recognised, e.g. gcc -msse2 ....
Note that you can use the _mm_getcsr intrinsic from <xmmintrin.h> instead of __builtin_ia32_stmxcsr - this would be a little more portable.
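For instance, a tiny sketch using the intrinsic (requires an SSE-capable x86 target):
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    /* _mm_getcsr reads the MXCSR control/status register, the same
       value __builtin_ia32_stmxcsr returns */
    unsigned int csr = _mm_getcsr();
    printf("MXCSR = 0x%08x\n", csr);
    return 0;
}
Built with something like gcc -msse2 demo.c -o demo.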
This is a bug in Eclipse's indexer with gcc's __builtin* functions. The bug report is at https://bugs.eclipse.org/bugs/show_bug.cgi?id=352537
The problem is that even the glibc/gcc headers themselves use these __builtin* functions, so Eclipse complains about a faulty xmmintrin.h etc., which is of course nonsense.
There is a workaround given in the bug report: you can add the function prototypes as user-defined macros for the indexer, but of course this becomes tedious if there are more than a few, and some type-checking abilities are lost.