Why does Erlang offer both `import` for modules and `include` for headers?

Erlang's -import() directive lets you import code from other modules. Its -include() directive lets you include code from headers. What reasons are there to prefer one over the other?
My hunch is that headers are good for short, easy-on-the-compiler kinds of code, such as record definitions, when you don't want to have to qualify the names with a module prefix.
Learn You Some Erlang states[1] that "Erlang header files are pretty similar to their C counter-part: they're nothing but a snippet of code that gets added to the module as if it were written there in the first place." Thus inclusion seems to cause the compiler to duplicate effort across different modules. And header files appear to be an optional complication on top of the mandatory module system. So why would I ever use a header file?
[1] https://learnyousomeerlang.com/a-short-visit-to-common-data-structures

Erlang's -import just allows you to call imported functions without the module prefix. It hurts legibility and should not be used: you need to check the import directive to know whether a function is local or external to the module.
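A minimal sketch of that trade-off (the module names with_import and without_import are made up; lists:sum/1 is a real stdlib function):
%% with_import.erl
-module(with_import).
-export([total/1]).
-import(lists, [sum/1]).

%% Reads like a local call, but sum/1 actually lives in the lists module.
total(Numbers) ->
    sum(Numbers).

%% without_import.erl
-module(without_import).
-export([total/1]).

%% The module-qualified call makes the origin obvious at the call site.
total(Numbers) ->
    lists:sum(Numbers).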
With header files you get the same functionality as in C: you can use them to share -record definitions instead of having a DTO-like module (1), and you can use them to include -defines so several modules use the same macros (2).
1:
-record(position, {x, y}).
Imagine that you have #position{} throughout the code. Instead of defining the record everywhere and updating all of the copies when the record definition changes, you use a header (or a DTO module with opaque types, but that's for another question).
And let's just hope that you remember to update all the copies, otherwise chaos ensues.
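A sketch of how the header removes that duplication (the file, module and function names are made up for illustration):
%% position.hrl -- the single definition of the record
-record(position, {x, y}).

%% mover.erl -- any module that needs #position{} just includes the header
-module(mover).
-include("position.hrl").
-export([move_right/1]).

move_right(Pos = #position{x = X}) ->
    Pos#position{x = X + 1}.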
2:
-define(ENUM01, enum01).
-define(DEFAULT_TIMEOUT, 1000).
Instead of using enum01 and 1000 everywhere, which is error-prone and requires multiple updates if you need to change them, you define them in a header and use them as ?ENUM01 and ?DEFAULT_TIMEOUT.
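For example, a gen_server-style init/1 could then read (a sketch; the callback and the state map are made up):
init(_Args) ->
    State = #{mode => ?ENUM01},
    {ok, State, ?DEFAULT_TIMEOUT}.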
Or you can be more thorough when testing:
-ifdef(TEST).
-define(assert(A), true = A).
-else.
-define(assert(A), A).
-endif.
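The same call site then behaves differently per build (a sketch; handle_result/1 is made up, and TEST can be defined with erlc -DTEST):
handle_result(Result) ->
    %% test build: expands to true = is_map(Result) and crashes on a bad result;
    %% production build: expands to just is_map(Result), so nothing is enforced
    ?assert(is_map(Result)),
    Result.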
Or you can include some useful information:
-define(LOG(Level, X), logger:log(Level, X, #{line => ?LINE})).

The Erlang standard library uses header files to provide the ability to add metadata to your code.
For instance, EUnit functionality:
-include_lib("eunit/include/eunit.hrl").

import is helpful in building encapsulations, whereas include is a kind of preprocessing (meaning the included code becomes part of the compilation unit before it reaches the compiler).
An import introduces a dependency between two modules: a module A importing module B has B as a dependency. An include, on the other hand, is textual: the included code becomes part of the module itself, and that is what header files are for.
Modules and headers are two semantically different things and serve different purposes. With modules we can build abstractions: we expose things via exports, keep things confined by not exporting them, import from other modules (imported things are not re-exported by default, though they can be), and so on. So when we import, we can only call the functions that the other module exports. That is not the case with header files: everything inside a header becomes part of the module that includes it, and there is no notion of export/import inside a header. Header files are quite useful for writing and distributing definitions that would otherwise be duplicated across a large program.
So essentially they are two different things, hence two different directives at our disposal. Don't prefer one over the other; learn both, because we need both.

Related

How to find unused imports in umbrella header architecture?

I am looking to find unused imports in an umbrella architecture. The imports are used within Swift files. These files receive their imports from Objective-C headers, so a text-based search would not work. I wrote a script that searches for imports and checks, via text matching, whether they are used within the class. But the issue is that some imports do not have to be explicitly stated. For example, if we have a framework foo that we use by:
import foo
and that framework has
#import <Example/Example.h>
in its own umbrella header foo.h, then I have access to Example without explicitly importing it.
So the issue is that we cannot see if an import is unused because it does not have to be explicitly imported. Is there a way we can detect unused imports?
I created a text-based script in Python that searches through a code base and checks for imports that are not used. It goes through directories and looks for specific imports, and if they are present, checks whether they are actually being used within the class. I expected that this would work without considering the umbrella architecture, but because there are scenarios where imports do not have to be explicitly stated, it did not.

Absolute and relative path conflict in Modelica

I want to build up a tests library and keep it separated from the libraries under development. My first thought is to go for a structure like the following:
PensLib
--Variants
----BallPoint
----FountainPen
----Tests
------TB_BallPoint
HammocksLib
--Variants
----SingleHammock
----DoubleHammock
----Tests
------TB_DoubleHammock
--Systems
----IndoorWalls
----OutdoorWallAndTree
----CoconutPalms
----Tests
------TB_IndoorWalls
Tests
--PensLib
----Variants
------Test_BallPoint // extends PensLib.Variants.Tests.TB_BallPoint
--HammocksLib
----Variants
------Test_DoubleHammock // extends HammocksLib.Variants.Tests.TB_DoubleHammock
----Systems
------Test_IndoorWalls // extends HammocksLib.Systems.Tests.TB_IndoorWalls
For now let's assume that the way I structure my libraries makes sense (which it most likely doesn't). I will soon ask more questions on good practices in setting up the testing environment in Dymola and with the Testing Library.
My question is about the correct way to handle relative and absolute paths within models, if possible at all.
The model PensLib.Variants.Tests.TB_BallPoint is used for developing the variant BallPoint
The model Tests.PensLib.Variants.Test_BallPoint is used for automated testing
I want the model Test_BallPoint to extend the model TB_BallPoint, but I cannot link them. I guess the absolute path PensLib.Variants.Tests.TB_BallPoint is treated as a relative one, since PensLib is found "on the way out" of the Tests library, and from there it goes looking for the rest of the path. Is there perhaps a way to control the path, kind of ..\..\..\PensLib\Variants\Tests\TB_BallPoint?
As you already noted, such a setup causes trouble. There are ways around that, namely global name lookup and imports, which I explain briefly further below.
Both solutions are fine when you need them in a few isolated situations. But if you have to use them all the time, you make your setup unnecessarily complicated.
Hence, I suggest you make your life easier and change your package structure:
Either create a dedicated test library for every library, maybe PensLib_Tests and HammocksLib_Tests
Or rename the packages in the Tests library and don't use the exact library names
Global name lookup
You can use absolute class paths. They are denoted with a leading ., so this should work:
extends .PensLib.Variants.Tests.TB_BallPoint;
See Modelica Specification chapter 5: Scoping, Name Lookup, and Flattening for details, especially 5.3.3 Global Name Lookup
Importing
You can simply import the library. Lookup of imports is always performed globally.
import PensLib;
extends PensLib.Variants.Tests.TB_BallPoint;
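A renamed import also works and keeps the extends clause short (a sketch; the alias PT is an arbitrary name):
import PT = PensLib.Variants.Tests;
extends PT.TB_BallPoint;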

Prism modules: reference assembly once

EDIT: made it shorter.
We created three modules following the Prism docs and our requirements.
We sliced horizontally with modules:
SharedServices
BusinessLogic
UserInterface
In the UserInterface module we are using Syncfusion components and other packages. It would be great to put everything in the UserInterface module, but how can we reference NuGet assemblies from that module in the shell (to apply theming, for example) and avoid having references in each module and the shell?
Should we add the NuGet package to each module and the shell (is that bad...?), or is it possible to have one module that defines base classes referencing the external assemblies, which would then be themable (with a ResourceDictionary) and usable in the whole solution (shell and other modules)?
Thanks.
Very broad question, it might well be closed, but I'll try to give you a few guiding thoughts:
Generally, you either slice horizontally (as you did, UI-module with all the views plus logic-module with all the services) or vertically (as your Product-module suggests: views, view models, services for the product in one module, those for the user in another).
You can do both, but then you should "slice through", so one module for product-ui, one for user-ui, one for product-services, one for user-services... you get the idea. That means a lot of modules, though.
Also, when creating your modules, have an idea of what you want to achieve. Modules can encapsulate components to be reused in another app. Or they can encapsulate exchangeable components, so you could create a car-sharing app today and tomorrow swap out the car-module for a bike-module and have a bike-sharing app. Or they can be used to enforce segregation of code based on risk analysis in a regulated environment. What I'm trying to convey: don't create modules just to have modules, make each module have a defined purpose.
Also, define the interfaces for the modules. I don't like modules to reference each other, as it effectively destroys all segregation that would otherwise be there. Create separate non-module assemblies that only contain public interfaces. Then make your modules contain the implementations as internal types. In an ideal world, no module assembly contains a public type. The interface-assemblies can be per module, per consumer, or per link between modules (those checked boxes in your N2-chart, you have one, don't you?).
You want to keep the number of modules reasonable, as well as the dependencies between them (not as in "assembly references" but through interface-assembly).
how can we reference nuget assemblies from that module in the shell (to apply theming for example) to avoid having references in each modules & the shell ?
You should separate the "interface" part (e.g. base classes or DTOs, not part of the module) and the actual services part (that's the module). Example: Unity has a NuGet package for the interfaces (Unity.Abstractions) and one that contains the container implementation (Unity.Container). There's nothing wrong with everyone referencing the interface; basically, that's saying "I want to use that interface".

Request for clarification on Yocto inheritance

I've recently made a foray into building Linux-based embedded systems, a far cry from my usual embedded stuff where I have total control over everything.
As part of that, I'm looking into the Yocto/bitbake/OpenEmbedded build system.
There's one thing I'm grappling with and that's the layering concept, so I'm trying to figure out the way in which layers use/affect other layers.
From my understanding to date, a .bb recipe file uses require to simply include another file, similar to C's #include "myheader.h" which generally looks locally.
A .bbappend file in an "upper" layer will auto-magically include the base file then make changes to it, sort of an inherent require.
In contrast, the inherit keyword looks for a .bbclass class file in much the same way as it locates the .bb files, and inherits all the details from it (sort of like #include <stdio.h> which, again generally, looks in the system area(a)).
So the first part of my question is: is my understanding correct? Or am I being too simplistic?
The second part of my question then involves the use of BBEXTENDS in the light of my current understanding. If we already have the ability to extend a recipe by using require, what is the purpose of listing said recipes in a BBEXTENDS variable?
(a) Yes, I'm aware they're both totally implementation dependent in terms of where the headers come from, I'm simply talking about their common use.
The learning curve for Yocto is different from that of other build systems, which is why I understand your confusion. But trust me, it's worth it. Your questions are related to BitBake, so I recommend the BitBake User Manual. Just ensure that you're reading the same version as your poky revision.
require and include.
require is similar to include and can be compared to #include from C and C++ just like you have written.
Generally, both of them should be used to add extensions to a recipe (*.bb) that are common to a number of recipes (simply: can be reused).
For instance: definitions of paths, or custom tasks used by a couple of recipes. The common purpose is to make recipes cleaner and to separate constants for reuse.
The very important thing -> difference between include and require (from BitBake manual):
The include directive does not produce an error when the file cannot be found. Consequently, it is recommended that if the file you are including is expected to exist, you should use require instead of include. Doing so makes sure that an error is produced if the file cannot be found.
As a result: when you include a file in a *.bb and it isn't found, BitBake will not raise an error while parsing the recipe.
If you used require instead, an error would be raised. Use require when the referenced file must exist because it contains important variables/tasks that are mandatory for processing.
*.bbappend mechanism.
In the case of *.bbappend - it's very powerful. The typical usage is when you add custom modifications to a recipe from another layer, via a *.bbappend in your own layer (located above the layer where the original recipe is), because e.g. you are not the maintainer of the original recipe, or the modifications are only used in your project (then they should live in your meta-layer). But you can also bbappend a recipe on the same layer. BitBake parses all layers and then 'creates' an output and executes it. More in the Execution chapter of the BitBake manual.
inherit.
The inherit mechanism can be used to inherit a *.bbclass where common tasks for some specific purpose are defined, so you don't need to write them on your own. E.g. you use inherit cmake or inherit autotools in your recipe when its sources are built with, correspondingly, CMake (and you have a CMakeLists.txt) or autotools (Makefile.am etc.).
The definitions of classes provided by OpenEmbedded are located under meta/classes/ if you are using a Yocto release with poky.
You can check them and you will see that, for example, autotools.bbclass defines (among others) the task autotools_do_configure(), so you don't need to write it from scratch.
However, you can redefine it in your recipe (by just providing your own definition of the function). If the recipe can't be changed, you can simply create a *.bbappend file and write your own do_configure() function, which will override the one from the *.bbclass - just like overriding in OO languages such as C++ or Java.
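A minimal sketch of such an override (the recipe name is made up; oe_runconf is a helper provided by autotools.bbclass):
# my-app_1.0.bbappend, placed in your own layer
do_configure() {
    # overrides the autotools_do_configure() that the original recipe gets from autotools.bbclass
    oe_runconf
}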

Perl shallow syntax check? i.e. do not check syntax of imports

How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it checks the syntax of imports. This is sometimes nice but not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment. The check fails because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
There are two problems with this:
Problem 1: how to not fail -c if the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has the problem that the module will NOT fail at real production runtime if the libraries are missing - DoublePlusNotGood in my book.
Problem 2: even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to solution A of problem 1 and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface - e.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named, predefined public subs and methods, and accesses exported variables.
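A minimal sketch of such a stub for the module from the question, assuming foo, bar and baz are its public subs; put it on a path that perl -c searches before the real installation (e.g. via PERL5LIB):
package Custom::Project::Lib;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(foo bar baz);   # mirror the real module's exportable names
sub foo {}                          # do-nothing stubs so calling code still parses
sub bar {}
sub baz {}
1;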
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.