MPI Autocompletion in Visual Studio Code

I'm trying to use Visual Studio Code to develop Fortran MPI programs. I can build and run them just fine, but it would be very helpful if I could use IntelliSense/autocompletion features for MPI (as well as other external modules). I have /usr/lib/openmpi/ (which contains mpi_f08.mod) as part of fortran.includePaths in my settings.json. However, when I use mpi_f08, VS Code reports the problem Module "mpi_f08" not found in project. Here is a minimal CMake build example:
! hello.f90
program hello
use mpi_f08
implicit none
integer :: ierror, nproc, my_rank
call MPI_Init()
call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierror)
call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierror)
print*, "hello from rank ", my_rank
call MPI_Finalize()
end program hello
# CMakeLists.txt
cmake_minimum_required(VERSION 3.12)
project(hello_mpi)
enable_language(Fortran)
find_package(MPI REQUIRED)
add_executable(hello_mpi hello.f90)
include_directories(${MPI_Fortran_INCLUDE_PATH})
target_link_libraries(hello_mpi PUBLIC ${MPI_Fortran_LIBRARIES})
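For reference, here is a minimal sketch of the relevant part of my settings.json (the include path is the one described above):
{
    "fortran.includePaths": [
        "/usr/lib/openmpi/"
    ]
}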
I would like to (i) get rid of the warning/message and, more importantly, (ii) get suggestions from MPI when I press Ctrl+Space, as I would when calling an internal module.

I'll post a partial answer since it's better than nothing; hopefully it helps someone else and/or enables someone to answer the question fully.
It seems the issue relates to the Fortran language server, which can be configured by adding a .fortls JSON file, as explained in its GitHub README: https://github.com/hansec/fortran-language-server
I added the following, which allowed it to find not only local modules but also MPI (and the external module json-fortran):
{
  "source_dirs": ["src", "."],
  "ext_source_dirs": [
    "/path/to/json-fortran/src",
    "/path/to/openmpi-4.1.2/ompi/mpi/fortran/use-mpi-f08"
  ]
}
This doesn't capture all functions in json-fortran, which I think is because of its .inc files; for instance, autocomplete doesn't offer me type-bound procedures like json_file::get.
As for MPI, this mostly works: it gives me all the functions I can think of needing, but with _f08 appended to the end of each name. I don't know the inner workings of Open MPI, but I guess e.g. MPI_Init wraps MPI_Init_f08 for backward-compatibility reasons. For now I can simply autocomplete to the _f08 version and remove that suffix manually. (I also tried adding openmpi-4.1.2/ompi/mpi/fortran/use-mpi-tkr and openmpi-4.1.2/ompi/mpi/fortran/mpif.h, but no luck.)
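If that guess is right, the interface blocks under use-mpi-f08 would look something like the following (a sketch, not the actual Open MPI source), which would explain why the language server surfaces the _f08 specific names instead of the generics:
interface MPI_Init
    subroutine MPI_Init_f08(ierror)
        integer, optional, intent(out) :: ierror
    end subroutine MPI_Init_f08
end interface MPI_Init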
It would be nice to get this detail sorted, though. It is also mildly annoying that I must now list the source dirs manually (removing them makes local modules unresolvable).

Related

How to tell MakeMaker to add exactly the libraries I want?

I'm using XS to create a Perl Module which uses a C library.
For testing purposes, I've created a test library which has two simple functions:
void kzA() (does a simple printf)
void kzB(int i, char *str) (does a printf of the received parameters)
I've also created some glue in XS in order to access the kzA() function (for now). I'm only showing the function itself, but the includes are there too in the XS:
void
ka()
    CODE:
        printf("Before kzA()\n");
        kzA();
        printf("After kzA()\n");
So, I compiled the test library as fc.so, and it sits in the same directory as my .xs file and my Makefile.PL (/workspace/LirePivots).
In my Makefile.PL, I set the LIBS key to ['-L/workspace/LirePivots -l:fc.so'], but when executing it with perl (perl Makefile.PL), it says "Warning (mostly harmless): No library found for -l:fc.so"
It then writes a Makefile which does NOT mention said library. After compiling (with "make") and installing (with "sudo make install"), when I run my test script which calls the ka() function from my module, I get the "Before" line, but kzA() obviously isn't called: the kzA symbol cannot be found, and the program stops there.
Creating a C test program which I link with the very same arguments (-l:fc.so -L/workspace/LirePivots) does work, and, as long as I put the path in LD_LIBRARY_PATH, it finds the function and runs it correctly.
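The sanity check looked roughly like this (file names are placeholders):
cc -o tryfc tryfc.c -L/workspace/LirePivots -l:fc.so
LD_LIBRARY_PATH=/workspace/LirePivots ./tryfc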
I also tried renaming the library to libfc.so and changing the -l part to "-lfc", but that didn't work either; it never manages to find the library.
Does anyone know what I'm doing wrong?
EDIT:
As requested, I created a minimum example: https://github.com/kzwix/xsTest
To run it, you'll need Linux with Perl 5, along with XS (package perl-devel on Red Hat), plus a C compiler and make, obviously.
Ok, I still don't know the reason why it wouldn't find the library. But I found a workaround:
By copying the test library to /usr/lib as "libfc.so" and then running "sudo ldconfig", the library got added to the cache. (When named fc.so, ldconfig would not add it, even from /usr/lib. Symbolic links didn't work either; I had to actually copy the file there.)
Even after the library was in the cache, "perl Makefile.PL" would still not find it. I had to use "-L/usr/lib" in addition to "-lfc" for it to at long last find the library and "agree" to add the parameters to the module's link step.
After this happened, I could compile with "make", and the test program worked as originally intended (I saw both the "Before" and "After" printfs, as well as the output from kzA() in libfc.so).
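For reference, a minimal sketch of the Makefile.PL that finally linked (module name and version are assumptions on my part):
use ExtUtils::MakeMaker;
WriteMakefile(
    NAME    => 'LirePivots',
    VERSION => '0.01',
    # Only works once the library sits in /usr/lib as libfc.so
    # and "sudo ldconfig" has been run:
    LIBS    => ['-L/usr/lib -lfc'],
);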
Thanks Håkon Hægland for helping.
(I can't provide the Dockerfiles, they link to images internal to my organization, which I'm not allowed to share)

Require module in Atom's init.coffee

I've already Googled for an answer, since this is a common problem, but all the replies point to alternatives instead of explaining why this doesn't work, so I'm asking here.
I put this code in my Atom's init.coffee script:
beautify = require('js-beautify').html
But Atom fails with Failed to load init.coffee and Cannot find module 'js-beautify'. Curiously enough, the same code works from a package, and it works if I type it into Atom's console.
Of course, I could write a package for this; in fact there are a couple available. This is just an example, because I want to learn how to require modules from init.coffee for future tweaks.
Thanks a lot!
When you require() from init.coffee, Atom looks for those modules in its own path. An example of where you might want to do that is if you had oni = require('oniguruma') to get access to regular expression functions.
In order to get to js-beautify, you have to specify its complete path. So far, only explicitly declaring the entire absolute path has worked for me:
beaut = require 'C:\\Users\\<username>\\.atom\\packages\\atom-beautify\\node_modules\\js-beautify'
console.log beaut
In practice, the most reliable way to use a module like this is to install it globally and require it from your global NPM folder. Requiring a module from inside another package will break if that package is ever uninstalled.
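That approach looks something like this in init.coffee; the global node_modules location varies by platform, so treat the path below as an assumption (this one is typical on Linux/macOS):
# Assumes a prior global install: npm install -g js-beautify
beautify = require('/usr/local/lib/node_modules/js-beautify').html
console.log beautify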

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been kept up to date as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear: you cannot generically get a list of modules the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the program with ALL the possible $module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Print %INC at every possible exit point in the code, as slu's answer suggests (see the sketch below). This should probably be done in an END {} block as well as in a __DIE__ handler to cover all exit points, and even then it may not be fully comprehensive in the general case, e.g. if your __DIE__ handler gets overwritten somewhere within the program.
Devel::Modlist (also mentioned in slu's answer) - the downside compared to Devel::Cover is that it does NOT seem able to aggregate a database across multiple sample runs the way Devel::Cover does. On the plus side, it's purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow runtime analysis based on arbitrary command-line arguments (at first glance it only lets you execute the program with no arguments), and if that's true, it is inferior to all three methods above for any code that may load modules dynamically.
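A minimal sketch of the %INC dump mentioned above (the output format is just illustrative):
# Dump every module loaded so far, with the file each one resolved to.
END {
    print STDERR "$_ => $INC{$_}\n" for sort keys %INC;
}
$SIG{__DIE__} = sub {
    # Capture the load state at the point of death as well.
    print STDERR "DIE: $_ => $INC{$_}\n" for sort keys %INC;
    die @_;    # re-throw so normal error handling continues
};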
Module::ScanDeps - Recursively scan Perl code for dependencies
Does both static and runtime scanning. Just modules, though - I don't know of an exact way to verify which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
You could look at %INC; see http://www.perlmonks.org/?node_id=681911, which also mentions Devel::Modlist.
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to see where they are being loaded.
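It needs no changes to the code; you run it from the command line (script name is a placeholder):
perl -d:TraceUse your_script.pl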

Dependencies in Perl code

I've been assigned to pick up a web application written in some old legacy Perl code, get it working on our server, and later extend it. The code was written 10 years ago by a solitary self-taught developer...
The code has weird stuff going on - they are not afraid to do 'lib-param.pl' on line one, and later in the file do '/lib-pl/lib-param.pl' - which is of course a different file.
Including a.pl with subs b() and c(), and later including d.pl with subs c() and e(), seems to be quite popular too... Packages appear to be unknown to them, so you'll just find &c() somewhere in the code later.
Interesting questions:
Is there a tool that can draw relations between perl-files? Show a list of files used by each other file?
The same for MySQL databases and tables? Can it show which schemas/tables are used by which files?
Is there an IDE that knows which c() is called - the one in a.pl or the one in d.pl?
How would you start to try to understand the code?
I'm inclined to go through each file and refactor it, but am not allowed to do that - only the strict minimum to get the code working. (But since the code never uses strict, I don't know if I'm gonna...)
Not using strict is a mistake -- don't continue it. Move the stuff in d.pl to D.pm (or perhaps a better name altogether), and if the code is procedural, use Sub::Exporter to get those subs back into the calling package. strict is lexical; you can turn it on for just one package, such as your new package D;. To find out which code is being called, use Devel::SimpleTrace.
perl -MDevel::SimpleTrace ./foo.pl
Now any warnings will be accompanied by a full backtrace -- sprinkle warnings around the code and run it.
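A minimal sketch of the D.pm move (sub bodies elided; c and e are the sub names from the question):
package D;
use strict;
use warnings;
use Sub::Exporter -setup => { exports => [qw(c e)] };

sub c { ... }    # formerly a bare sub in d.pl
sub e { ... }

1;
Then the calling code says use D qw(c e); instead of pulling in d.pl.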
I think the MySQL question should be removed from this one. Schema/table mappings have nothing to do with Perl; it seems an out-of-place distraction in this question.
I would write a utility to scan a complete list of all subs and which file they live in; then I would write a utility to give me a list of all function calls and which file they come from.
By the way - it is not terribly hard to write a fairly mindless static analysis tool to generate a call graph.
For many cases, in well-written code, that will be enough to help me out...
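A crude, fairly mindless version of the first utility might look like this (regex-based, so it will miss anything generated at runtime):
use strict;
use warnings;

# List every "sub name" definition and the file it lives in.
for my $file (@ARGV) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    while (my $line = <$fh>) {
        print "$file: $1\n" if $line =~ /^\s*sub\s+(\w+)/;
    }
}
Run it as perl list_subs.pl *.pl (the script name is arbitrary); the call-site scanner is the same loop with a pattern matching &c()-style calls instead.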

In a project using GNU Autotools, is there a task to launch xgettext?

Summary:
I have a project using GNU Autotools. I have a pot file. I need to update it. Is there a magical "make" task that runs xgettext for me (I'm lazy)?
Verbose version:
Hi
I am trying to setup a project using GNU autotools and gettext.
I'm trying to follow the 'lazy' path (that is, only writing configure.ac, Makefile.am, and such, and letting the tools generate the rest for me as much as possible).
I used gettextize once on my package, so I got a package.pot file created, and from it I derived a fr.po file (I'm translating to French).
I never managed to get my code translated, but I figured it might be because the code was not in the proper place. The translated string is in a lib instead of the main, and the documentation is quite unclear about what I must do in this case. If my main calls a function in a lib, and the function from the lib uses _(), should I use gettext or dgettext? My lib is just there for organisation purposes, so I'm okay with using the same domain (only one package.pot file for the whole app).
So, to try something simpler, I moved my string to the main (it's really just a hello world for the moment). I therefore need to update the package.pot file, at the very least so it registers that the string's position changed, don't I? In this case, would I run xgettext manually (painfully passing it the list of all interesting cpp files, which will be a pain in the ass when I have more than one file), or is there a 'make whatever' task somewhere that I can run?
This may look stupid, but I've not been able to find it.
Also, any help on finding out why my code is not translated (anything not covered in http://www.gnu.org/software/gettext/FAQ.html#integrating_noop) is welcome!
Thanks
PH
Ok, it turns out that:
there is an update-po task in the generated Makefile of the po/ folder, which does just what I want;
this task looks at the files referenced in the POTFILES.in file, which I had forgotten to update.
So it was something stupid.
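For anyone else landing here, the sequence looks roughly like this (the file name is an example):
# 1. Make sure every source file containing translatable strings is listed:
echo "src/main.cpp" >> po/POTFILES.in
# 2. Regenerate the pot file and merge the changes into the .po files:
cd po && make update-po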