I work with multiple grammars in the REPL. The grammars use the same names for some of their rules.
One of the documentation recipes mentions full qualification to disambiguate type annotations in function pattern matching (it's in a note on the load function, but not in the code on that page; the .jar has it correct). That can become tedious, though, so is there aliasing for imports (like Python's import regex as r)? Also, using full qualification in the first argument of the parse function, as in parse(#lang::java::\syntax::Java18::CompilationUnit, src), doesn't seem to disambiguate the parse rules that are invoked recursively; at least it produces strange errors if I also import lang::java::\syntax::Java15.
In general, what is a safe way to handle symbols from different modules that have the same names?
Alternatively, is there a way to "unload" a module in the repl?
Some background information:
Rascal modules are open for reasons of extensibility: in particular, data, syntax definitions and overloaded functions can be extended by importing another module. In this way you can extend a language and its processing functions by importing another module and adding rules and function alternatives at leisure.
There is a semantic difference between importing and extending a module. In particular, import is not transitive and fuses only the uses of a name inside the importing module, while extend is transitive and also fuses recursive uses of a name in the module that is extended. So for extending a language you'd default to extend, while for using a library of functions you'd use import.
We are planning to remove the fusing behavior from import completely in one of the releases of 2020. After that, all conflictingly imported non-terminal names must be disambiguated by prefixing with the module name, and prefixing will no longer have the side effect of fusing recursively used non-terminals from different modules. Not so for extend, which will still fuse the non-terminals and functions all the way.
All the definitions in a REPL instance simulate the semantics of the members of a single anonymous module.
So to answer your questions:
It's not particularly safe to handle symbols from different imported modules with the same name, at least until we fix the semantics of import.
The module prefix trick only works at the top level; below that, the types are fused anyway, because the code that reifies a non-terminal as a grammar does not propagate the prefix. It wouldn't know how.
Unimporting a module:
rascal>import IO;
ok
rascal>println("x");
x
ok
rascal>:un
undeclare unimport
rascal>:unimport IO
ok
rascal>println("x");
|prompt:///|(0,7,<1,0>,<1,7>): Undeclared variable: println
Probably one of the least used features in the environment; caveat emptor!
One way to work around these issues is to put the functions for every separate language/language version in its own module, and to create a top module which imports these if you want to bundle the functionality in a single interface. Because import is not transitive, the namespaces stay separate and clean this way. Of course this does not solve the REPL issue; the only thing I can offer there is to start a fresh REPL for each language version you are playing with.
Related
I am reading through O'Reilly's Perl Objects, References & Modules, more specifically its section about modules. It states that when using use Some::Module you can specify an import list. From its explanation it seems that the only benefit of using this list is keeping your namespace clean. In other words, if you have a subroutine some_sub in your main package and the loaded module has a sub with the same name, your subroutine will be overridden. However, if you specify an import list and leave some_sub out of it, you won't have this conflict. You can then still run the module's some_sub by calling it with its fully qualified name: Some::Module::some_sub.
Is there any other benefit than the one I described above? I am asking because in some cases you load modules with loads of functionality even though you are only interested in a few of their methods. At first I thought that by specifying an import list you would load only those methods and not bloat memory with methods you wouldn't use anyway. However, from the explanation above that does not seem to be the case. Can you selectively save resources by loading only parts of a module? Or is Perl smart enough to do this at compile time without the programmer's intervention?
From use we see that use Module LIST; means exactly
BEGIN { require Module; Module->import( LIST ); }
On the other hand, from require
Otherwise, require demands that a library file be included if it hasn't already been included. The file is included via the do-FILE mechanism, [...]
and do 'file' executes 'file' as a Perl script. Thus with use we load the whole module.
"Importing" a sub means that its name is added (or overwritten) in the caller's symbol table (via the CODE slot for the typeglob, normally aliased), by the package's import function. The sub's code isn't copied. Now, import can be written any way the author wants to, but generally the import list in the use statement merely controls what symbols are brought into the namespace. The preferred way to provide import in a module is to use the Exporter's import method.
Selective importing relieves the symbol table (and perhaps some related mechanisms), but I am not aware of practical performance benefits from this. The real benefits are on the programming side, via a reduced chance of name collisions.
Another clear benefit is that it nicely documents what is used in the code.
Note that "import list" is just a convention. Module's import function is free to do whatever it pleases with this list and you can see it (ab)used by many so-called pragma modules. Therefore partial loading is NOT bound to use in any way. For example module can load heavy function stubs WHEREVER you've imported them or not and dynamically load heavy implementation on actual first call.
Therefore use with a partial import list may or may not actually save any resources; it all depends on the actual implementation of the module being used.
While require and use do indeed load the entire .pm file, that file could well be just a lightweight stub and loader for the actual code located elsewhere. There is a convention of naming such modules ::Heavy.
Modules are free to implement partial loading in any way they please as well. Here are just some of the ways a module can save resources:
AUTOLOAD (with its complementary AutoLoader, AutoSplit, and SelfLoader modules).
Use stubs that load necessary submodules.
Dynamically load heavy data (e.g. dictionaries or encoding maps) when it is first accessed by name.
If you depend on other heavy modules, dynamically require them at runtime in the functions that depend on them instead of with a compile-time use at the very start (see the sketch after this list).
Everything on this list could work automatically behind the scenes, be exposed through the use import list, or be invoked in some other, completely arbitrary way. Once again, it's entirely up to the module's implementation.
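As a minimal sketch of the last point, deferring a heavy dependency until it is first needed (the package and sub names are made up; JSON::PP merely stands in for some expensive dependency):

package My::Report;
use strict;
use warnings;

sub render_json {
    my ($data) = @_;
    # The heavy module is loaded on first use at runtime,
    # not with a compile-time 'use' at the top of the file.
    require JSON::PP;
    return JSON::PP->new->pretty->encode($data);
}

1;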
In Java, there is a way to import a class and all of its children in one line:
import java.util.*;
In Perl, I've found I can only import specific classes:
use Perl::Utils::Folder;
use Perl::Utils::Classes qw(new run_class);
Is there a similar way, like in Java, to import everything that falls under a tree structure, only in Perl?
No, there is not a way to easily do what you are after.
You could walk the relevant paths in your Perl library's filesystem and use every .pm file you came across (that's what Module::Find, as suggested by @Daniel Böhmer, does; sketched after this list), but that can miss a few things:
Packages that are declared in funny ways/at runtime.
Multiple packages per module file.
Other cases I haven't thought of.
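For completeness, a small sketch of what the Module::Find route looks like (Perl::Utils is the hypothetical namespace from the question):

use strict;
use warnings;
use Module::Find;

my @found = findallmod('Perl::Utils');   # every Perl::Utils::* module on @INC
useall('Perl::Utils');                   # or load ('use') all of them at once
print "$_\n" for @found;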
This is also a bad idea, for a few reasons:
You mentioned "classes" in your question, rather than just packages. Perl packages and subpackages do not necessarily represent classes/instantiable object-oriented code. If you were to programmatically generate a list of all packages in a hierarchy and then call $packagename->new() on each of them, you might get a runtime error if one of the packages was just a library of functions.
Packages and subpackages often are not directly related, developed by the same people, or used for similar things. Just because a package starts with Net:: doesn't mean that it will obey standard conventions that other Net::-prefixed packages expect. For example, File::Find and File::Tail share a prefix, but have very little to do with each other; the prefix is in common because both utilities work with files as their goal.
Lots of packages do things at BEGIN/INIT/etc time when they're compiled. Some of them (sadly) do different things depending on the order in which they're used relative to other modules. The solution to this problem for module developers is "don't do that", but for module users, it's "use sparingly, and only when needed".
It clutters your local namespace with lots of potentially-exported symbols you don't necessarily need (to conditionally import symbols, you'll have to use import arguments like you're doing in your example; there's no programmatic way to define "symbols I'm interested in", since Perl doesn't have that kind of static analysis at compile time... not for lots of call styles, at least).
It slows down your program's startup time by compiling things you might not necessarily need. This might not seem important at the early phase of a project, but for larger projects it is very easy to end up in situations where you're pulling in over a thousand CPAN modules when you start Apache (or launch your main script, or whatever), and your app takes more than a minute just to start.
I have a hunch that you're trying to reduce boilerplate (as in: all of your modules have a big block of use statements at the top, and that's duplicated everywhere). There are a few ways to do this, starting with:
Don't: import things in each module as you need them, and use strict/warnings and lots of tests to be told early on if you're calling functionality that you haven't imported yet.
You could also make your own Exporter subclass that uses all of your standard modules and adds the functions that you frequently use from them to its @EXPORT (or splices their @EXPORT lists onto its own, or uses Exporter sub-sub-classing, or something); see the sketch after this list.
Factor your code so that the parts that depend on multiple imported modules live in a single utility module, and import that.
Factor your code so that the parts that depend on the imported modules live in a parent class, and address its methods via instances of subclasses (or SUPER), so your subclasses don't have to explicitly contain the imports, e.g. $instance->method_that_calls_an_imported_function_in_the_parent();
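As a sketch of the second option, one possible shape of such a bundling module (the package name and the choice of re-exported functions are invented; a real one would list whatever you actually use everywhere):

package My::Prelude;
use strict;
use warnings;

use List::Util   ();    # load the modules we want to bundle ...
use Scalar::Util ();

use Exporter 'import';
our @EXPORT_OK = qw(first sum blessed);

# ... and alias the chosen symbols into this package so Exporter
# can hand them on to our callers.
no warnings 'once';
*first   = \&List::Util::first;
*sum     = \&List::Util::sum;
*blessed = \&Scalar::Util::blessed;

1;

A caller then only needs: use My::Prelude qw(first blessed);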
Also, as an aside, using package.* imports in Java is debatable, and has many of the same drawbacks as doing it in Perl.
In Perl, the class Foo::Bar::Foo may not be a subclass of Foo::Bar. Nor is there any guarantee that a subclass module even has the same class prefix. IO::File is a subclass of IO::Handle, and not of IO, which isn't even a module.
There also isn't an easy way to tell if a Perl module is a subclass of another Perl module. There are (at least) three ways to declare a subclass's relationship to its parent class (sketched below):
use parent
use base
The @ISA package variable
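A sketch of the three forms side by side (My::Base and My::Subclass are hypothetical names; a real module would pick exactly one of them):

package My::Subclass;
use strict;
use warnings;

use parent 'My::Base';         # modern: loads My::Base and pushes it onto @ISA

# use base 'My::Base';         # older equivalent (also supports 'fields')

# use My::Base;                # fully manual: load the parent module ...
# our @ISA = ('My::Base');     # ... and set @ISA yourself

1;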
It is possible to use @INC to find all modules, then look at the source for use parent, use base, and @ISA declarations, build a Perl class matrix, and then go through that matrix to load the classes you do need. That would probably be slow and cumbersome, and it doesn't even cover Moose-based classes.
You're asking the wrong question. You're asking "Find all of the subclasses of a particular class." That will include classes you're probably not even interested in. I know (with LWP, for example) that there can be dozens of classes and subclasses that include stuff you're not even interested in.
What you should be asking is "What do I need to do?", and then find the classes that fulfill your needs. If these classes happen to be child classes of a particular parent class, these subclasses will load the required class.
We do Java programming here, and one of our standards is not to use asterisks in import statements; this is considered sloppy programming. If you need a particular class, you should import it explicitly rather than simply pulling in everything with a wildcard. Many of our reporting tools have problems with asterisk declarations in import statements.
There is a Module::Find module, but I am not sure exactly how it works. I believe it simply assumes that subclasses are in the same module hierarchy as the superclass, but that's far from true in Perl.
In general, I think it is a bad idea to load a whole 'tree' of modules (or subclasses so to speak).
There is definitely something wrong with your design if you need to know each and every subclass/module. It breaks encapsulation; you should not need to know how a class handles its responsibilities.
I wonder whether macros have only pros in any programming language, as there must be a limit to how many macros we can create and how often we use them.
Suppose we create 100 macros in a class and import that class into 100 classes of an application project. Is this the right approach?
#define is useful for:
header include guards for C and C++ headers (idiom: #include C and C++ headers, #import ObjC headers)
conditional compilation for language compatibility (e.g. UIKIT_EXTERN) which must be resolved during preprocessing.
conditional compilation for platform/compiler/version specific declarations (e.g. NS_AVAILABLE_IOS()) which must be resolved during preprocessing.
Assertion macros. Specifying the information such as the file, line and expression is just too noisy and error-prone.
For everything else, there is an alternative that will save you headaches along the way.
Suppose we create 100 macros in a class and import that class into 100 classes of an application project. Is this the right approach?
No, that is terrifying :)
Cons
There are many. Too often, the program becomes difficult to understand for humans and programs (compiler, parser, indexer) alike. Macros are a good way to introduce bugs and to completely replace the text of unrelated programs (via a simple #import). There are more modern replacements for pretty much everything (e.g. functions, inline, typedef, constants, enums); let the compiler make your life easier!
#define creates a preprocessor macro. You do not create a macro in a class: the macro is textually expanded at its uses, and its definition is not part of the class even if you happen to write it lexically within the class.
Also, you do not import a class into other classes; you import the class's declaration header. Imports only happen once per compilation unit even if you specify them multiple times. Therefore you'd still have at most one definition of each macro per compilation unit, and since macros are preprocessor-only, they exist only during the early phases of compilation.
You are not saying exactly what you want to use the macros for, so it's hard to say whether they are the best solution for you or whether there is a better alternative. But I would not worry about their number; the standard header files define hundreds of macros.
Does it make sense, when loading a module, to import the needed functions explicitly when the module exports those functions by default and when it is used through an object-oriented interface?
I think this is subjective, but yes, it often makes sense. Default imports are more convenient, but explicit imports are somewhat safer, in that you're less likely to accidentally import something without knowing about it.
[…] and when it is used through an object-oriented interface?
If a module only has an object-oriented interface, then it shouldn't export very much by default (since method calls don't benefit from the method names having been imported). If a module offers both an object-oriented interface and a procedural one, and you're only using the object-oriented interface, then it's very likely to be a good idea to specify your imports explicitly, since you'll need very few imports (or none at all). This depends, of course, on whether the module exports any of its procedural function-names by default.
Explicitly declaring the functions that you want to import, even when they are exported by default, also stops the module from importing the other functions it would have exported by default but that you may not be using.
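A small sketch of what that looks like in practice (POSIX and File::Temp are just stand-ins for a module with default exports and a module with both an OO and a procedural interface):

use strict;
use warnings;

# Purely OO usage: an empty import list keeps the module's default
# exports out of your namespace entirely.
use POSIX ();
my $rounded = POSIX::floor(2.7);            # fully qualified calls still work

# Mixed module: import only the procedural bits you actually need;
# the OO interface works without importing anything.
use File::Temp qw(tempdir);
my $dir = tempdir( CLEANUP => 1 );
my $fh  = File::Temp->new( DIR => $dir );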
What's the Pascal way to do C's #include "code.h", Python's import code, etc.?
Pascal uses the uses clause to import other modules.
While you can explicitly {$INCLUDE} a file, it's rarely done other than for configuration files containing compiler switches. The only time I've ever done it was long ago, when I wanted two versions of the code identical except that one used coprocessor-only datatypes and the other didn't. (And how many people these days even know that the single and double types used to require either an expensive additional chip or a slow emulator?)
If you include the same code in two places, you will get two copies of it in your .EXE. If you include the same type definition in two places, you'll get two types with the same name, and since Pascal uses strict typing they will not match.
The normal mechanism is, as Greg Hewgill says, to use the unit you want. Anything that appears in the interface section of the unit you use is visible; anything that's only in the implementation section is not. This is an all-or-nothing process: you don't specify what you are bringing in. Think of the C# using directive.
Unlike the C# version, it's absolutely mandatory. You can't use fully qualified names to get around it.