Making ReasonML/BuckleScript output ES5-compatible code

While using ReasonML and BuckleScript, is it possible to configure BuckleScript so it won't generate export statements? I'd prefer it if the generated code could be used as-is in a browser, that is, be ES5 (or ES6) compatible.
Edit: OK, while trying out the tool chain a bit more, I realize just turning off the export is not enough. See example below:
function foo(x, y) {
  return x + y | 0;
}
var Test = /* module */[
  /* foo */foo
];
exports.Test = Test;
This code will pollute the global namespace if exports is removed, and it is simply broken from an ES5-compatibility standpoint.
Edit 2: Reading BuckleScript's blog, this does not seem to be possible:
one OCaml module compiled into one JavaScript module (AMDJS, CommonJS, or Google module) without name mangling.
Source.

BuckleScript can output modules in a number of different module formats, which can then be bundled up along with their dependencies using a bundler such as webpack or rollup. The output is not really intended to be used as a stand-alone unit, since you'd be rather limited in what you could do in any case, as the standard and runtime libraries are separate modules. And even something as trivial as multiplication will involve the runtime library.
You can configure BuckleScript to output es6 modules, which can be run directly in the browser as long as your browser supports it. But that would still require manually extracting the standard and runtime libraries from your bs-platform installation.
The module format is configured through the package-specs property in bsconfig.json:
{
  ...
  "package-specs": ["es6-global"] /* Or "es6" */
}
Having said all that, you actually can turn off exports by putting [@@@bs.config { no_export }] at the top of the file. But this is undocumented, since it's of very limited use in practice for the above-mentioned reasons.
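For reference, in OCaml syntax that would look roughly like this (a minimal sketch relying on the undocumented flag described above; the attribute must be the first item in the file):
(* sketch: no exports are generated for this module *)
[@@@bs.config { no_export }]

let foo x y = x + y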


MPI Autocompletion in Visual Studio Code

I'm trying to use Visual Studio Code to develop Fortran MPI programs. However, while I can build and run them just fine, it would be very helpful for me to have intellisense/autocompletion for MPI (as well as other external modules). I have /usr/lib/openmpi/ (which contains mpi_f08.mod) as part of fortran.includePaths in my settings.json. However, when I use mpi_f08, I get the problem message from VS Code: Module "mpi_f08" not found in project. Here is a minimal CMake build example:
! hello.f90
program hello
  use mpi_f08
  implicit none
  integer :: ierror, nproc, my_rank
  call MPI_Init()
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierror)
  print*, "hello from rank ", my_rank
  call MPI_Finalize()
end program hello
# CMakeLists.txt
cmake_minimum_required(VERSION 3.12)
project(hello_mpi)
enable_language(Fortran)
find_package(MPI REQUIRED)
add_executable(hello_mpi hello.f90)
include_directories(${MPI_Fortran_INCLUDE_PATH})
target_link_libraries(hello_mpi PUBLIC ${MPI_Fortran_LIBRARIES})
I would like to be able to (i) get rid of the warning/message and more importantly (ii) enable suggestions from MPI when I press CTRL+space as it would if I was calling from an internal module.
I'll post a partial answer since it's better than nothing; hopefully this helps someone else and/or enables someone else to answer my question fully.
It seems the issue relates to the Fortran language server, which can be configured by adding a .fortls JSON file, as explained in its GitHub README: https://github.com/hansec/fortran-language-server
I added the following, which allowed it to find not only local modules but also MPI (and the external module json-fortran):
{
  "source_dirs": ["src", "."],
  "ext_source_dirs": [
    "/path/to/json-fortran/src",
    "/path/to/openmpi-4.1.2/ompi/mpi/fortran/use-mpi-f08"
  ]
}
This doesn't capture all functions in json-fortran, which I think is because of its .inc files, as it doesn't give me function pointers like json_file::get at autocomplete.
As for MPI, this kind of works, as it gives me all the functions I can think of needing, but with _f08 appended to the end of it. I don't know the inner workings of OpenMPI but I guess e.g. MPI_Init wraps MPI_Init_f08 for reasons of backward compatibility. For now I can simply autocomplete to the _f08 version and remove that bit manually. (I also tried adding openmpi-4.1.2/ompi/mpi/fortran/use-mpi-tkr and openmpi-4.1.2/ompi/mpi/fortran/mpif.h but no luck).
Would be nice to get this detail sorted out, though. It is also mildly annoying that I must now list the source dirs manually (removing them makes it unable to find local modules).

Why does Erlang offer both `import` for modules and `include` for headers?

Erlang's -import() directive lets you import code from other modules. Its -include() directive lets you include code from headers. What reasons are there to prefer one over the other?
My hunch is that headers are good for short, easy-on-the-compiler kinds of code, such as record definitions, when you don't want to have to qualify the names involved.
Learn You Some Erlang states[1] that "Erlang header files are pretty similar to their C counterpart: they're nothing but a snippet of code that gets added to the module as if it were written there in the first place." Thus inclusion seems to make the compiler duplicate effort across different modules. And header files appear to be an optional complication on top of the mandatory module system. So why would I ever use a header file?
[1] https://learnyousomeerlang.com/a-short-visit-to-common-data-structures
Erlang's -import just allows you to call imported functions without the Module: prefix. It hurts legibility and should not be used: you need to check the import directive to know whether a function is local or external to the module.
With header files you get the same functionality as in C: you can use them to share -record definitions instead of having a DTO-like module (example 1 below), and to share -define macros (example 2 below).
1:
-record(position, {x, y}).
Imagine that you use #position{} throughout the code. Instead of defining the record everywhere and updating all of the copies when the record definition changes, you define it once in a header (or in a DTO-like module with opaque types, but that's for another question).
And let's just hope that you remember to update all the copies, otherwise chaos ensues.
2:
-define(ENUM01, enum01).
-define(DEFAULT_TIMEOUT, 1000).
Instead of using enum01 and 1000 everywhere, which is error-prone and requires multiple updates if you ever need to change them, you define them in a header and use them as ?ENUM01 and ?DEFAULT_TIMEOUT.
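For instance, a small usage sketch (the header name defaults.hrl and the function are illustrative; the macros are the ones defined above):
%% defaults.hrl holds the -define lines from example 2
-include("defaults.hrl").

wait_for_ack(Pid) ->
    receive
        {Pid, ?ENUM01} -> ok
    after ?DEFAULT_TIMEOUT ->
        timeout
    end.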
Or you can be more thorough when testing:
-ifdef(TEST).
-define(assert(A), true = A).
-else.
-define(assert(A), A).
-endif.
Or you can include some useful information:
-define(LOG(Level, X), logger:log(Level, X, #{line => ?LINE})).
The Erlang standard library uses header files to provide the ability to add metadata to your code.
For instance, EUnit functionality:
-include_lib("eunit/include/eunit.hrl").
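As a small sketch of what that include buys you (module name is illustrative): any zero-arity function whose name ends in _test becomes a test case, and the assertion macros become available.
-module(arith_tests).
-include_lib("eunit/include/eunit.hrl").

%% picked up automatically by EUnit because of the _test suffix
add_test() ->
    ?assertEqual(4, 2 + 2).
You can then run it with eunit:test(arith_tests).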
-import is helpful in building encapsulations, whereas -include is a kind of pre-processing (the included code becomes part of the compilation unit before it goes through the compiler).
An import establishes a dependency between two modules: a module A importing module B has B as a dependency. An include, on the other hand, is textual: the included code becomes part of the module itself, and that is what header files do.
Modules and headers are two semantically different things and serve different purposes. With modules, we can build abstractions by exporting some things, keep others confined by not exporting them, import from other modules (imported things are not exported by default), re-export imported things, and so on. When we import something, we can call functions from the other module, but only those which that module exports. That is not the case with header files: everything inside a header file becomes part of the module that includes it, and there is no notion of export or import inside a header file. Header files are quite useful for writing and distributing definitions which would otherwise lead to redundancy in large programs.
So essentially they are two different things, with two different directives at our disposal. Don't prefer one over the other; learn both, because we need both.
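To make the contrast concrete, here is a small sketch (module, header, and function names are illustrative): -import only removes the lists: prefix from calls, while -include pastes the header's record definition straight into the module.
-module(example).
-include("position.hrl").   %% pastes in -record(position, {x, y}) from example 1
-import(lists, [map/2]).    %% lets us write map/2 instead of lists:map/2
-export([shift_all/2]).

shift_all(Positions, DX) ->
    map(fun(P = #position{x = X}) -> P#position{x = X + DX} end, Positions).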

Using classes in TCL using Simple Agent Pro

I am using this software called Simple Agent Pro, and it primarily uses TCL code. I was wondering if anybody familiar with TCL or Sapro would be kind enough to tell me how to import the modules into the .tcl file for Sapro.
When I try this:
package require tclOO.h
The program stops working.
Any help would be appreciated.
I don't know Simple Agent Pro at all, but if you're doing a “guerilla install” of TclOO then you need a few things:
Make sure you're using Tcl 8.5 (see what package require Tcl returns).
If you're using 8.4 (note: 8.4 EOLed this month), TclOO will not work at all (and it cannot be backported).
If you're using 8.6, it already provides the TclOO package and you shouldn't need to fuss around with all this.
Do a build of TclOO and install it to a location you prefer.
This will require Tcl's internal source files; TclOO explicitly pokes its nose into places where most code shouldn't.
You probably don't need to have a custom build of 8.5; just the configured sources somewhere will do. (You might need to hack the configure scripts a little bit.)
Add the location where you installed TclOO to the search path inside your Tcl 8.5 program.
lappend auto_path /the_dir/you_put/it_in
If you're using Windows, it's probably easiest to use forward slashes for this path anyway (this is a directory name that is always highly protected before it hits the OS, so that's OK).
Now you should be able to require/use the package.
package require TclOO
oo::class create Foo {
# etc.
}
Note that the case and exactly how you write it matter. The version you get ought to be at least 1.0 (earlier versions were for development only), which corresponds exactly to the API supported in Tcl 8.6 (modulo a few things that require 8.6 for other reasons, such as being able to yield inside a method, which only works in 8.6 because that's where yield was first defined).
You probably mean
package require TclOO
Case and other stuff is important there.
Next time you should also include the stack trace. If the program stops working, it should display it either as a dialog or on stdout.
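For example, a minimal way to capture that trace yourself around the failing call (adjust the package name to whatever you are loading):
if {[catch {package require TclOO} err]} {
    # print both the short error and the full stack trace
    puts stderr "package require failed: $err"
    puts stderr $::errorInfo
}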

Perl shallow syntax check? ie. do not check syntax of imports

How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also checks the syntax of imports. This is sometimes nice, but not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment. It fails when checking a file because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
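To illustrate the point with Try::Tiny (a sketch, not from the original question):
use strict;
use warnings;
use Try::Tiny;   # exports try/catch/finally as subs with & prototypes

# With the import in effect, this parses as sub calls taking code blocks;
# without loading Try::Tiny, the same lines do not compile under strict.
try {
    die "boom\n";
} catch {
    warn "caught: $_";
};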
There are two problems with this:
1. How to not fail -c if the required modules are missing?
There are two solutions:
1a. Add a fake/stub module in production.
1b. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here); a minimal sketch of such a hook appears at the end of this answer. This obviously has the problem of the module NOT failing at real production runtime if the libraries are missing - DoublePlusNotGood in my book.
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to #1a and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface. E.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export in runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
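For reference, a minimal sketch of the catch-all @INC hook mentioned in 1b (the SYNTAX_CHECK_ONLY environment variable is a made-up guard; you would only set it while running perl -c):
BEGIN {
    push @INC, sub {
        my (undef, $filename) = @_;            # e.g. "Custom/Project/Lib.pm"
        return unless $ENV{SYNTAX_CHECK_ONLY}; # hypothetical opt-in guard
        my $stub = "1;\n";                     # empty module body that returns true
        open my $fh, '<', \$stub or return;
        return $fh;                            # perl reads the stub source from this handle
    };
}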
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been updated as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear - you cannot generically get a list of modules that the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the specific command-line version with ALL the module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
1. Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
2. Print %INC at every single possible exit point in the code, as slu's answer suggests. This should probably be done in an END{} block as well as a __DIE__ handler to cover all possible exit points, and even then it may not be fully 100% covering in the generic case if somewhere within the program your __DIE__ handler gets overwritten. (A sketch of such an END block appears at the end of this answer.)
3. Devel::Modlist (also mentioned in slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs the way Devel::Cover does. On the plus side, it's purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow you to do runtime analysis based on arbitrary command-line arguments (at first glance it seems to only let you execute the program with no arguments), and if that's true, it is inferior to all 3 methods above for any code that may load modules dynamically.
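For reference, a sketch of such an END block that dumps every loaded module with whatever version it reports (output format is arbitrary):
END {
    for my $file (sort keys %INC) {
        (my $mod = $file) =~ s{/}{::}g;      # "Foo/Bar.pm" -> "Foo::Bar.pm"
        $mod =~ s{\.pm\z}{};
        my $version = eval { $mod->VERSION } // 'unknown';   # // needs perl 5.10+
        print STDERR "$mod\t$version\t$INC{$file}\n";
    }
}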
Module::ScanDeps - Recursively scan Perl code for dependencies
Does both static and runtime scanning. Just modules; I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
You could look at %INC, see http://www.perlmonks.org/?node_id=681911 which also mentions Devel::Modlist
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
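For reference, Devel::TraceUse is typically run without modifying the script, e.g. (script name is a placeholder):
perl -d:TraceUse your_script.pl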