As performance is always important, I was wondering how best to include polyfills. I'd prefer to use code splitting and then lazy-load each polyfill if the current environment doesn't support it.
Doing this manually requires more work, which I would happily put in. eslint-plugin-compat would be a good helper in this case, I guess.
Automating this process with something like babel-preset-env reduces the effort, but does it bring the same performance benefits as doing it "manually"?
From my understanding, babel-preset-env transpiles all that stuff out of the box, but when you use useBuiltIns it adds the corresponding polyfills, right? -> https://babeljs.io/docs/plugins/preset-env/#usebuiltins
But then again, does it always load all the polyfills (files/modules) and check within those whether each polyfill is needed, or does it load the polyfill modules conditionally, only if the function isn't available on the window or prototype?
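For reference, here is roughly the setup I have in mind for useBuiltIns (a minimal sketch; the browser targets are just an example):

    {
      "presets": [
        ["env", {
          "targets": { "browsers": ["last 2 versions", "ie >= 11"] },
          "useBuiltIns": true
        }]
      ]
    }

Combined with a single import of babel-polyfill at the entry point, this is supposed to be rewritten into only the individual core-js imports that those targets actually need.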
If you could point me in a direction or explain the way babel-preset-env works that would be great.
Help is much appreciated :)
Related
I mentioned on Twitter that I was moving from es6-shim to babel. Someone else mentioned:
the shims are still needed even with babel. they fix broken builtins, ones babel's output uses.
So:
Does babel need es6-shim or similar?
If it does, why doesn't babel require these things as a dependency?
Answers with references preferred over 'yes / no' with no supporting arguments!
Babel, at its core, does a single thing: convert syntax from one form to another.
Some of Babel's syntax transformations introduce dependencies on ES6 library functionality. It doesn't concern itself with how that functionality got there because:
The system might already provide it
The user might only want to load specific pieces of a library
There are many polyfills and the user might have a specific one they want to use.
It is the developer's job to ensure that the transpiled code runs in an environment where all the functions it needs actually exist.
Babel should work fine with es6-shim if you'd like to keep using it.
Babel also exposes babel/polyfill as a dead-simple way to load a polyfill; it loads core-js, another polyfill along the same lines as es6-shim. Just:
require('babel/polyfill');
Some Babel transformations rely on objects or methods that may not be available in your runtime environment and which you therefore would want to polyfill for those environments. Those dependencies are documented at https://babeljs.io/docs/usage/caveats/.
Babel ships with a polyfill that satisfies all of those requirements, which you can opt into if you want, and it doesn't attempt to automatically insert polyfills, for the reasons that @loganfsmyth explained.
I've been diving into some of the more advanced features of PowerShell modules and manifests recently, with a view to handling scenarios more advanced than just a basic export of a few functions. It sounds like it should be obvious, but I'm struggling to find a nice solution for sharing common 'helper' type functions across several large, non-trivial modules. In particular, I'm looking for a solution that:
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Allows installation via PsGet from a local repo path
Let me go into some of the challenges I see.
First of all, as far as I can tell, PsGet does not handle module dependencies well. This implies sharing between modules is going to be a struggle. Maybe a solution to this is to avoid PsGet, and use a custom script to 'install' modules to the local module path, which might be more tolerant of dependencies and load order.
Not using module exports to share helper functions also seems to be an issue. The reason is that I want aliases, helpers etc. for common internal actions (needed inside the useful functions) that are either useless or unsafe to expose. For example, a nice brief alias for getting the local script path (commonly used, noisier than it should be). Or the nice simple wrapper I recently made around PromptForChoice, with fewer options. Maybe this whole thing isn't a real issue. But I can't help feeling that shipping a 'utils' module that exports low-level functions that are useful inside real modules, but not to an end user, seems like the wrong way to go.
What I've been playing with is a small build structure that tests and then packs modules, and I want to make some code sharing possible. I've been looking at an alternative using ScriptsToProcess in the manifest, but those paths seem to be absolute, not relative.
Imagine a folder structure:
modules
    utils
        console_helpers.ps1
    moduleA
        moduleA.psm1
        moduleA.psd1
    moduleB
        moduleB.psm1
        moduleB.psd1
packed_modules
    moduleA.zip
    moduleB.zip
What I was considering was that you could list relative paths in each ScriptsToProcess, and then my pack phase would go and drag those relative paths into each zip.
Is this a horrible crazy idea? Am I right that ps modules and PsGet really don't have decent dependency support? I would love to hear feedback from anyone who has looked into this space. I think the answer I'm hoping to get in rough priority might be:
Here's an example of sharing code without exposing it (probably a build/pack level solution)
Here's how to make module dependencies work nicely, using PsGet
Here's how to make module dependencies work nicely, but you can't use PsGet
Just expose everything from modules
This is a terrible idea and you're terrible
Thanks!
UPDATE as suggested by CalebB
Here's another example to illustrate what I'm trying to resolve. I find it useful to wrap up '&'-style execution of commands with a wrapper function, to deal with stuff like checking exit codes etc. If I'm building half a dozen modules, many of them will want to make use of that helper (obviously).
My options today seem to be to put it in a module and export it, but maybe I don't want it exported; I want more of a dot-source style of access. And if I've got a family of modules all trying to use this stuff, the options for module dependency management are limited (the PsGet limitation etc.).
If I'm 'building' all the modules at once (with some decent psake and pester infrastructure), maybe I can use a hack at this point to embed scripts into my zipped modules to 'solve' all these problems?
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Mhm... what is wrong with dot-sourcing the scripts you need within a particular module (a minimal example follows this list)? You could:
Keep your folder structure and symlink the desired functions into module folder.
Try to use an absolute path in ScriptsToProcess that has a "relative part" in it, for example $PSScriptRoot\..\utils (not tried in that context, but it generally works). If not, you can always add a preprocessor to fix the paths for you.
Delete undesired imported elements manually via the function:, alias:, and variable: providers.
Import extra utilities only when you use them, then remove them at the end? If the desire is that the user can't see them, you can encrypt them.
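For the dot-sourcing option, a minimal sketch (assuming the helper script from the question's folder structure has been copied or symlinked into the module folder):

    # inside moduleA.psm1 - pull in the shared helpers without exporting them
    . "$PSScriptRoot\console_helpers.ps1"

Whatever the helper defines is then usable inside the module, and it stays private as long as your manifest's FunctionsToExport (or an Export-ModuleMember call) doesn't include it.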
Here's how to make module dependencies work nicely, but you can't use PsGet
Chocolatey uses NuGet, so it handles dependencies and can load from a local store. As a benefit, OneGet supports it, which is something everybody will use eventually.
I've posted the solution I've come up with on GitHub. I've rolled in a few other features I want when building modules, but the key solution for the question here reads and updates the psd1 of each module.
You include the scripts that you want to embed in the NestedModules property of your manifest. My build phase will find each script and copy it into the module folder for packing and zipping. The manifest that ships in the package has the script paths converted to the now-local file names.
I'm still not sure if this is ideal, but it seems to be a nice compromise for dealing with the issues here.
A key issue I encountered along the way was that the ScriptsToProcess list is executed literally at module import time, so it is only useful for bootstrapping the import of your functionality. The NestedModules property is actually the list of additional scripts you want to be dot-sourced and available when your module is used.
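To illustrate, here is roughly what a manifest looks like under this approach (a sketch only; the module layout is the one from the question, and the exported function name is made up):

    # moduleA.psd1 as kept in source control - relative path to the shared helper
    @{
        RootModule        = 'moduleA.psm1'
        ModuleVersion     = '1.0.0'
        NestedModules     = @('..\utils\console_helpers.ps1')
        FunctionsToExport = @('Invoke-SomethingUseful')
    }

    # After the pack phase copies console_helpers.ps1 into the module folder,
    # the shipped manifest ends up with:
    #     NestedModules = @('console_helpers.ps1')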
Can Go code be loaded dynamically, in order to be used for a plugin-based application?
In Eclipse, we can create plugins that Eclipse runs dynamically.
Would the same thing be possible in Go ?
I'll argue that those are two separate problems:
having dynamic load
having plugins
The first one is simply no: a Go program is statically linked, which means you can't add code to a running program, and which also means you must compile the program to let it integrate plugins.
Fortunately, you can define a program accepting plugins in Go as in most languages, and Go, with its interfaces and fast compilation, doesn't make that task hard.
Here are two possible approaches:
Solution 1: Plugin integrated in the main program
Similarly to Eclipse plugins, we can integrate the "plugins" in the main program memory, by simply recompiling the program. In this sense we can for example say that database drivers are plugins.
This may not feel as simple as in Java, as you must recompile and you must, at some point in your code, import the "plugin" (see how it's done for database drivers), but given Go's standardization regarding directories and imports, it seems easy to handle that with a simple makefile that imports the plugin and recompiles the application.
Given the ease and speed of compilation in Go, and the standardization of package structure, this seems to me to be a very viable solution.
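To make Solution 1 concrete, here is a minimal sketch of the registration pattern database drivers use (package paths and names are purely illustrative):

    // plugin/plugin.go - the host application defines the contract and a registry.
    package plugin

    // Plugin is what every compiled-in plugin implements.
    type Plugin interface {
        Name() string
        Run() error
    }

    var registry = map[string]Plugin{}

    // Register is called from each plugin package's init function.
    func Register(p Plugin) { registry[p.Name()] = p }

    // All returns every plugin that was compiled in.
    func All() map[string]Plugin { return registry }

    // hello/hello.go - a plugin registers itself when its package is imported.
    package hello

    import (
        "fmt"

        "example.com/app/plugin"
    )

    type greeter struct{}

    func (greeter) Name() string { return "hello" }
    func (greeter) Run() error   { fmt.Println("hello"); return nil }

    func init() { plugin.Register(greeter{}) }

    // main.go - the only "plugin management" is a blank import,
    // which a makefile could add or remove before recompiling:
    //     import _ "example.com/app/hello"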
Solution 2: Separate process
It's especially easy in Go to communicate between processes and to handle asynchronous calls, which means you could define a solution based on many processes communicating by named pipes (or any networking solution). Note that there is an rpc package in Go. This would probably be efficient enough for most programs, and the main program would be able to start and stop the plugin processes. This could very well feel similar to what you have in Eclipse, with the added benefit of memory-space protection.
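A rough sketch of the separate-process approach with the standard net/rpc package (the Greeter service and the address are made up for illustration):

    // The plugin runs as its own process and exposes one method over RPC.
    package main

    import (
        "net"
        "net/rpc"
    )

    type Greeter struct{}

    // Greet is invoked remotely by the host process.
    func (Greeter) Greet(name string, reply *string) error {
        *reply = "hello, " + name
        return nil
    }

    func main() {
        rpc.Register(Greeter{})
        ln, err := net.Listen("tcp", "127.0.0.1:7777")
        if err != nil {
            panic(err)
        }
        rpc.Accept(ln) // serve until the host kills the process
    }

    // Host side, after starting the plugin process:
    //     client, err := rpc.Dial("tcp", "127.0.0.1:7777")
    //     var reply string
    //     err = client.Call("Greeter.Greet", "world", &reply)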
A last note from somebody who wrote several Eclipse plugins: you don't want that mess; keep it simple.
Go 1.8 supports plugins (to be released soon, in Feb 2017):
https://tip.golang.org/pkg/plugin/
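Usage looks roughly like this (a sketch; the .so name and the Greet symbol are made up, and at the moment this only works on Linux):

    package main

    import (
        "fmt"
        "log"
        "plugin"
    )

    func main() {
        // The plugin is built beforehand with:
        //     go build -buildmode=plugin -o greeter.so greeter.go
        p, err := plugin.Open("greeter.so")
        if err != nil {
            log.Fatal(err)
        }
        sym, err := p.Lookup("Greet") // Greet must be an exported symbol in the plugin
        if err != nil {
            log.Fatal(err)
        }
        greet := sym.(func(string) string)
        fmt.Println(greet("world"))
    }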
As dystroy already said, it's not possible to load packages at runtime.
In the future (or today with limitations) it may be possible to have this feature with projects like go-eval, which is "the beginning of an interpreter for Go".
A few packages I found to do this:
https://golang.org/pkg/net/rpc/
https://github.com/hashicorp/go-plugin
https://github.com/natefinch/pie
In order to provide nice URLs between parts of our app we split everything up into several modules which are compiled independently. For example, there is a "manager" portion and an "editor" portion. The editor launches in a new window. By doing this we can link to the editor directly:
/com.example.EditorApp?id=1
The EditorApp module just gets the value for id and loads up the document.
The problem with this is that ALL of the code which is common between the two modules is duplicated in the output. This includes any static content (graphics), stylesheets, etc.
Another problem is that the compile time to generate the JavaScript is nearly doubled, because we have some complex code shared between both modules which has to be processed twice.
Has anyone dealt with this? I'm considering scrapping the separate modules and merging it all back into one compile target. The only drawback is the URLs between our "apps" become something like:
/com.example.MainApp?mode=editor&id=1
Every window loads the main module, checks the value of the mode parameter, and calls the appropriate module init code.
I have built a few very large applications in GWT, and I find it best to split things up into modules and move the common code into its own area, like you've done. The reason in our case was simple: we had some parts of our application that were very different from the rest, so it made sense from a compile-size point of view. Our application compiled down to 300kb for the main section, and about 25-40kb for other sections. Had we just put them all in one, the user would have been left with a 600kb download, which for us was not acceptable.
It also makes more sense from a design and re-usability point of view to separate things out as much as possible, as we have since re-used a lot of the modules that we built on this project.
Compile time is not something you should generally worry about, because you can actually make it faster if you have separate modules. We use Ant to build our project, and we set it to only compile the GWT that has changed, and during development to only build for one browser; typical compile times on our project are 20 seconds, and we have a lot of code. You can see an example of this here.
One other minor thing: I assume you know that you don't have to use the default GWT paths that it generates? So instead of com.MyPackage.Package you could just put it into a folder with a nice name like 'ui' or something. Once compiled GWT doesn't care where you put it, and is not sensitive to path changes, because it all runs from the same directory.
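If you want to do that declaratively rather than by moving folders, the rename-to attribute on the module definition does a similar job (a sketch; module and class names are placeholders):

    <!-- com/example/EditorApp.gwt.xml -->
    <module rename-to="ui">
        <inherits name="com.google.gwt.user.User"/>
        <entry-point class="com.example.client.EditorApp"/>
    </module>

The compiled output then ends up under war/ui/ instead of the long package-style folder name.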
From my experience building GWT apps, there are a few things to consider when deciding whether you want multiple modules (with or without entry points) or all in one: download time (JavaScript bundle size), compile time, navigation/URLs, and maintainability/re-usability.
...per download time, code splitting (GWT.runAsync; see the sketch after this list) pretty much obviates the need to break into different modules for performance reasons.
...per compile time, even big apps are pretty quick to compile, but it might help breaking things up for huge apps.
...per navigation/url, it can be a pain to navigate from one module to another (assuming different EntryPoints), since each module has its own client-side state...and navigation isn't seamless across modules.
...per maintainability/re-usability, it can be helpful from an organization/structure perspective to split into separate modules (even if there's only one EntryPoint).
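For the first point, the code-splitting call looks roughly like this (a sketch; EditorScreen is a placeholder for whatever you load lazily):

    GWT.runAsync(new RunAsyncCallback() {
        @Override
        public void onFailure(Throwable reason) {
            Window.alert("Failed to load the editor code");
        }

        @Override
        public void onSuccess() {
            // Code only reachable from here is compiled into a separate
            // fragment, downloaded the first time this call site runs.
            new EditorScreen().show();
        }
    });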
I wrote a blog post about using GWT Modules, in case it helps.
OK. I really get the sense there is no "right" answer, because projects vary so much. It's very much dependent on the nature of the application.
Our main build is composed of a number of in-house modules and 3rd-party modules. They are all managed in separate projects. That makes sense, since they are used in different places.
But having more than one module in a single project designed to operate as one complete application seems to have overcomplicated things. The original reason for the two modules was to keep the URL simple when opening different screens in a new window. Even though we had multiple build targets, they all used a very large common subset of code (including a custom XML/POJO marshalling library).
About size... for us, one module was 280KB and the other was just over 300KB.
I just got finished merging everything back into one single module. The new combined module is around 380KB. So it's actually a bit less to download since most everyone would use both screens.
Also remember there is perfect caching, so that 380KB should only ever be downloaded once, unless the app is changed.
Are there any libraries out there that do this? Playing around with Common Lisp, it seems like this would be one of the most useful things to lower the barrier of entry for newcomers. ASDF seems mostly designed for deployment, not for rapid prototyping and development. Following threads on comp.lang.lisp, it seems like people agree that CL's package system is powerful, but lacks the ease of something like Python's dead-simple module system. It is FAIL in the sense that it's designed for power, not usability.
Glad to know if I'm wrong. If I'm right, I'm stunned that no one has tried to build a Python-module-like system on top of ASDF.
Zach Beane wrote how he nowadays starts new Common Lisp projects by using Quicklisp and Quickproject. This might be along the lines you want.
Not sure if it's ready for prime time or whether it fits your requirements at all, but here's a link to XCVB.
I don't know. I mostly use ASDF for my in-development compilation needs. Once you notice that you'd benefit from more than one source file, open <projectname>.asd, slap in a basic ASDF system definition template and start slapping filenames in. As and when you notice a cross-file dependency, update the dependency list.
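The template in question is tiny; something along these lines (the system name, files and dependency are just examples):

    (asdf:defsystem "my-project"
      :depends-on ("alexandria")
      :components ((:file "package")
                   (:file "utils" :depends-on ("package"))
                   (:file "main"  :depends-on ("utils"))))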
But, then, I use the exact same method when dealing with Makefiles (yes, I know there are automatic dependency checkers that can do it for you, but since I mostly code on my own, it's easier to just amend the Makefile/ASDF definition as I go).
In SBCL, there's a hook on REQUIRE that checks for ASDF systems, so you end up with something that is about as convenient as Python's import, but somehow I suspect that is not what you meant.
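Concretely, once the .asd file is somewhere ASDF can find it, this is all it takes at the SBCL REPL (the system name is a placeholder):

    (require :my-project)   ; SBCL's REQUIRE falls through to the ASDF hook and loads the system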
This may not be the answer you want, but clearly you have some idea of what you want in a module system. Have you considered creating one yourself? That is, taking your limited domain, your limited requirements, your environment and simply pounding out whatever abstractions will quickly make your life easier?
That's one of the key benefits of Lisp, as I'm sure you know: these simple abstractions and little tools are typically very easy to craft in Lisp.
I'm not suggesting solving it for everyone who has a problem with the package system or ASDF; I'm simply suggesting solving your own problem as you understand it, which is likely simpler and smaller than some more powerful, larger-scope solution.
There is Mudballs now, too.
If you're looking for a piece of software to add this functionality to then it's a good bet.
If you want a command-line tool that just uses bash to generate a new Common Lisp project directory and file layout, you may find one that I created for myself useful: lispproject. If it doesn't match your needs, go ahead and fork it, or the repo it gets its templates from, to suit your needs: lisp-project-template. Look at the sh file in the lispproject repo to see how the templates are used. Also, please note that you may need to adjust the calls to sed to fit your platform, as I am using this on macOS. Alternative sed calls are in the main script, just commented out, if you need them.
it's designed for power not usability
that's how most Lisp gurus like it.