What is the difference between babel-polyfill and babel-es2015? Does babel-es2015 include babel-polyfill?
Babel-polyfill
This module emulates an ES2015 environment by adding new built-ins to the global scope (like Promise and WeakMap) and methods to prototypes (like Array.prototype.includes). For example, if your environment doesn’t have a Promise object, then once you require babel-polyfill you have the Promise object, because it was added to the global scope.
Babel-runtime
This module does something very similar, but it doesn’t change the global namespace or pollute prototypes. Instead, babel-runtime can be included as a dependency of your application, just like any other module, and you can require the ES2015 features from it.
For example, the Promise object can be included with:
require('babel-runtime/core-js/promise');
This, however, is too much work, so we use babel-plugin-transform-runtime. Adding it to the Babel config automatically rewrites your code: you write against the standard Promise API, and the plugin transforms your code to use the Promise-like object exported by babel-runtime.
In your Babel config (.babelrc):
{
  "plugins": ["transform-runtime"]
}
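To illustrate, roughly (a simplified sketch of the Babel 6 output; the real generated code also includes interop helpers and different variable names):

// What you write:
new Promise(function (resolve) { resolve(42); });

// Roughly what transform-runtime rewrites it to:
var _Promise = require('babel-runtime/core-js/promise').default;
new _Promise(function (resolve) { resolve(42); });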
More details can be found here.
So when to use what?
If you don’t care about polluting the global scope, babel-polyfill is the better option: the runtime does not work with instance methods such as myArray.includes(15).
If you are writing a public module: use babel-runtime
If you are writing an app then babel-polyfill is the better option
There will be cases when you don’t need either of these: if you’re writing a library where size matters, you may be better off avoiding them.
Related
While compiling an Xcode Swift project for macOS, unused functions are removed from the binary (removed by the optimizer, I guess). Is there a way to tell the compiler not to remove unused functions, perhaps with a compiler option (--force-attribute?), so that even with optimization enabled those functions remain in the binary?
I know that if a global function is declared as public (public func test()) then it's not removed even if unused (since it can be used by other modules), but I can't use public because that would export the symbol for that function.
Any suggestion?
If this is indeed removed by the optimiser, then the answer is two-fold.
The optimiser removes only things that it can prove are safely removable. So to make it keep something, you remove whatever lets the optimiser prove that the thing is removable.
For example, you can change the visibility of a function in the .bc file by running a pass that makes it externally callable. When a function is private to the .bc file (also called a module) and not used, the compiler can prove that nothing will ever call it. If it's visible beyond the .bc file, the compiler must assume that something may come along later and call it, so the function has to be left alive.
This is really the generic answer to how to prevent an optimisation: Prevent the compiler from inferring that the optimisation is safe.
Writing and invoking a suitable pass is left as an exercise for the reader. Writing it should take perhaps 20 lines of code; invoking it might be simple or not, depending on your setting. Quite often you can add a call to opt to your build system.
As I discovered, the solution here is to use a magic compiler flag, -enable-private-imports (described here: "...Allows this module's internal and private API to be accessed"), which basically adds the function name to the @llvm.used list that you can see in the bitcode. The purpose of that list is:
"If a symbol appears in the @llvm.used list, then the compiler, assembler, and linker are required to treat the symbol as if there is a reference to the symbol that it cannot see (which is why they have to be named)"
as described here.
This solves my original problem: the private function, even though it is not used anywhere and not public, is no longer stripped out by the optimiser during compilation.
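As an illustrative sketch (the function name is a placeholder, and depending on the toolchain version the flag may need to be forwarded to the frontend with -Xfrontend):

// main.swift
// Private and unused: normally a prime candidate for removal by the optimizer.
private func instrumentationHook() {
    print("kept alive via @llvm.used")
}

Built, for example, with:

swiftc -O -Xfrontend -enable-private-imports main.swift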
I have a Swift application that uses a module, and I need to call a global function that lives in the application from the module. Is this possible?
To perhaps explain a little better, this is a test app structure:
CallbackTestApp contains a function foo(); I would like to call it from Module1 or File. Will Swift allow this?
edit #1
More details have been requested about the background of my issue; hopefully this will not turn out to be an XY situation.
There's a tool developed by my company that processes the application source* code and in some places adds function calls (ignore the why etc., I have to be generic here). Those function calls are exactly to foo(), which then does some magic (btw, no return value and no arguments are allowed). If the application does not use modules, or if modules are excluded from the processing, then all is fine (the linker does not complain that the function is not defined); if there are modules, then nothing works, since I have not (yet) found a way to inject foo().
*Not exactly the source code; actually the bitcode is processed. The tool gets the source, uses the llvm toolchain to generate bitcode, does some more magic, and then adds the call to foo() by generating its mangled name and adding a swiftcall.
Not actually sure those additional details will help.
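One standard workaround for this kind of app-to-module call (just a dependency-inversion sketch, not from the original thread; all names here are hypothetical) is to have the module expose a hook that the app fills in:

// Inside Module1: the module cannot see the app's symbols,
// so it exposes a settable hook instead.
public var fooHook: (() -> Void)? = nil

public func callFoo() {
    fooHook?()  // invokes the app-provided function, if one was set
}

// Inside CallbackTestApp, at startup:
import Module1

func foo() { print("foo called from the module") }
Module1.fooHook = foo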
How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also checks the syntax of imports. This is sometimes nice, but not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment. The check fails because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
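For instance, a small illustration of that parse-time effect (a self-contained sketch, not from the original answer):

use strict;
use warnings;
# "use constant" defines FOO as a constant sub at compile time;
# without this line, the bareword FOO below would die under strict.
use constant FOO => 42;
my $x = FOO + 1;   # parses as FOO() + 1 because FOO is a known sub
print "$x\n";      # prints 43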
There are two problems with this:
1. How do you not fail -c if the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production.
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has the problem of having the module NOT fail in real production runtime if the libraries are missing - DoublePlusNotGood in my book.
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to solution 1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
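For the Custom::Project::Lib example from the question, such a stub might look like this (a minimal sketch; the sub names come from the question's import list):

package Custom::Project::Lib;
use strict;
use warnings;
use Exporter 'import';

# Do-nothing stand-ins for the real interface: just enough for perl -c.
our @EXPORT_OK = qw(foo bar baz);
sub foo { }
sub bar { }
sub baz { }

1;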
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.
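For what it's worth, a minimal PPI sketch (assuming PPI is installed; the filename is a placeholder):

use strict;
use warnings;
use PPI;

# Parse the file as a document, without running it.
my $doc = PPI::Document->new('script.pl')
    or die PPI::Document->errstr;

# Count the sub definitions found in the file.
my $subs = $doc->find('PPI::Statement::Sub') || [];
printf "Found %d sub definitions\n", scalar @$subs;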
Do you use
require "name"
or
local name = require "name"
Also, do you explicitly declare system modules as local variables? E.g.
local io = require "io"
Please explain your choice.
Programming in Lua 2ed says "if she prefers to use a shorter name for a module, she can set a local name for it" and nothing about local m = require "mod" being faster than require "mod". If there's no difference I'd rather use the cleaner require "mod" declaration and wouldn't bother writing declarations for pre-loaded system modules.
Either of them works. Storing it in a local will not change the fact that the module may have registered global functions.
Some libraries work only one way, and some only work the other way.
The require "name" syntax was the one introduced in Lua 5.1. As a note, this call does not always return the module; rather, it was expected that a global would be created with the name of the library (so you then have a _G.name to use the library with). E.g., in earlier versions of gsl, if you did local draw = require"draw", the local draw would contain true and shadow the global draw created by the library.
The behaviour above is encouraged by the module function, which is now relatively deprecated; any well-written new code will not use it.
The local name = require"name" syntax became preferred more recently (around 2008?), when it was decided that modules setting any globals for you was a bad thing.
As a point: none of my libraries set globals; they just return a table of functions, or in some other cases a function that works as an initialiser for the root object.
tldr;
In new code, you should use the latter local name = require"name" syntax; it works in the vast majority of cases, but if you're working with some older modules, they may not support it, and you'll have to just use require"module".
To answer your added question:
Do you require the system modules? No; you just assume they are already required. But I do localise all functions I use (usually grouped into lines by the module they came from), including those from external libraries. This allows you to easily see which functions your code actually relies on, as well as removing all GETGLOBALs from your bytecode.
Edit: localising functions is now discouraged. To find accidental globals, use a linter like luacheck.
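For illustration, the grouped-by-module localising described above looks something like this (keeping in mind the edit above, which notes the style is now discouraged):

-- Localise functions, grouped by the module they come from.
local insert, concat = table.insert, table.concat
local format         = string.format

local t = {}
insert(t, format("%d", 42))
insert(t, "hello")
print(concat(t, ", "))  --> 42, hello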
Sample module in (my) preferred style; will only work with the local name = require"name" syntax.
local my_lib = require"my_lib"

local function foo()
  print("foo")
end

local function bar()
  print("bar", my_lib.new())
end

-- Export the public interface as a plain table; no globals are set.
return {
  foo = foo;
  bar = bar;
}
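Assuming the module above is saved as, say, my_mod.lua somewhere on the package path (the name is hypothetical), it is used like:

local my_mod = require"my_mod"
my_mod.foo()  --> prints "foo"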
I would say it mainly boils down to what you prefer, but there are some exceptions depending on what you are writing. Creating a local reference will gain you some speed, but in most cases this isn't a meaningful optimization. You should also point your local at the function you are using, not at the module table.
If you are writing a library, then I would recommend creating local references for all the global variables you use, even if you aren't using the module() function (which changes the global scope for the code that follows). The reason is that it prevents your library from calling globals that have been altered after the library was loaded.
Doing local io = require'io' ensures that you get the table from package.loaded.io, even if _G.io has been replaced by another table. I generally don't do this myself; I expect io to already be loaded and unmodified when I write Lua.
You also have to remember that there are several ways to write a Lua module. Some modules don't return their module table, while others don't create any global variable. The most common approach is to both create and return a global table, but you can't always rely on this.
In my project I'm currently preparing a step-by-step move from legacy code to new, properly-designed and tested modules. Since not every fellow programmer follows closely what I do, I would like to emit warnings when old code is used. I would also strongly prefer being able to output recommendations on how to port old code.
I've found two ways of doing it:
Attribute::Deprecated, which is fine for functions but rather cumbersome if a complete module is deprecated. Also, it provides no additional information apart from warnings.
Perl::Critic::Policy::Modules::ProhibitEvilModules for modules, or maybe a custom Perl::Critic rule for finer deprecation at the function or method level. This method is fine, but it's not immediately obvious from the code itself that it's deprecated.
Any other suggestions or tricks for how to do this properly and easily?
For methods and functions, you can just replace the body of the function with a warning and a call to the preferred function.
The perllexwarn documentation gives the following example:
package MyMod::Abc;

sub open {
    # Warn in the "deprecated" category, honouring the caller's warnings pragma.
    warnings::warnif("deprecated",
        "open is deprecated, use new instead");
    new(@_);
}

sub new {
    # ...
}

1;
If you are deprecating a whole module, put the warning in a BEGIN block in the module.
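A minimal sketch of that (module and replacement names are placeholders):

package MyMod::Abc;

BEGIN {
    # Fires once, when the module is first loaded.
    warnings::warnif("deprecated",
        "MyMod::Abc is deprecated, use MyMod::Xyz instead");
}

1;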
You can also put the warnings in the import method (e.g. Win32::GUI::import); it all depends on exactly what you want to do.