I mentioned on Twitter that I was moving from es6-shim to Babel. Someone replied:
the shims are still needed even with Babel; they fix broken builtins that Babel's output uses.
So:
Does Babel need es6-shim or similar?
If it does, why doesn't Babel require these things as a dependency?
Answers with references preferred over 'yes / no' with no supporting arguments!
Babel, at its core, does a single thing: convert syntax from one form to another.
Some of Babel's syntax transformations introduce dependencies on ES6 library functionality. Babel doesn't concern itself with how that functionality gets there, because:
The system might already provide it
The user might only want to load specific pieces of a library
There are many polyfills, and the user might have a specific one they want to use.
It is the developer's job to ensure that the transpiled code runs in an environment where all the functions it needs actually exist.
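For example (an illustrative sketch, simplified from Babel's real helper output): spreading an iterable compiles to a helper that calls Array.from, an ES6 builtin that older environments lack without a polyfill.

    // ES6 source:
    //   const chars = [..."abc"];
    //
    // Roughly what the transpiled ES5 relies on (simplified sketch):
    function _toConsumableArray(arr) {
      if (Array.isArray(arr)) return arr.slice();
      return Array.from(arr); // ES6 builtin: must exist at runtime
    }
    var chars = _toConsumableArray("abc");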
Babel should work fine with es6-shim if you'd like to keep using it.
Babel also exposes babel/polyfill as a dead-simple way to load a polyfill; it loads core-js, another polyfill similar to es6-shim. Just:
require('babel/polyfill');
Some Babel transformations rely on objects or methods that may not be available in your runtime environment and which you therefore would want to polyfill for those environments. Those dependencies are documented at https://babeljs.io/docs/usage/caveats/.
Babel ships with a polyfill that satisfies all of those requirements, which you can opt in to if you want; it doesn't attempt to insert polyfills automatically, for the reasons that @loganfsmyth explained.
Related
The current Babel plugin for decorators is based on the corresponding TC39 proposal. However, it only implements what is explicitly proposed, even though the proposal also includes extensions that should be looked at (illustrated after the list below), including:
Decorators on functions
Parameter decorators
let/const decorators
A few metadata plans that are no longer relevant
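To illustrate (sketch code; the log decorator is made up for the example), the current plugin accepts class decorators but rejects the extended forms:

    // Accepted by the current plugin: decorators on classes.
    function log(target) {
      console.log('decorated:', target.name);
      return target;
    }

    @log
    class Service {}

    // The proposal extensions would additionally allow forms like
    // these, which the current plugin rejects:
    //
    //   @log
    //   function helper() {}   // decorator on a function
    //
    //   @log
    //   const answer = 42;     // let/const decorator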
Is there a recommended plugin or another build system that allows the use of at least the first three of these?
If not, I'd assume the only way would be to write a custom plugin; however, considering it would be so similar to the existing plugin, it would be easier to fork that. The problem is that it seems a lot more integrated into Babel than other public transformers are, so how would someone go about this?
I am looking for a way to control the output format of the JavaScript generated from PureScript code.
"spago bundle-app" generates JavaScript code for ES5 version.
spago/pulp/purs --help doesn't tell much.
googling by keywords like "Purescript codegen target ES6" is not helpful either.
Some discussions regarding ES6 and Purescript popped up among results but nothing practically useful.
I've found lebab tool translating ES5 to ES6,
but I guess it is not right way to go.
The PureScript compiler creates mostly ES3 code. This is on purpose: ECMAScript is strictly backwards compatible, so ES3 code runs in ES5, ES2015, ES2016 (and so on) environments. This means that code created by the PureScript compiler runs even in older browsers.
If you are coming from TypeScript or Babel, you might be used to being able to choose the target. That is because these compilers work with plugins that run one after another. The PureScript compiler does not have such a feature, since it is not transforming JS to JS the way those compilers are.
So what can be the benefit of targeting a newer version? Code size, performance, and features. If you want to use new ES2015 features in your FFI code, there is great news: you can make use of them now in PureScript 0.13. There has also been talk of making the compiler target newer JavaScript environments in the future, for the benefits mentioned above. Anyone who still had to support older environments could then add Babel to their toolchain. But PureScript is a small community project, and neither performance nor code size is a very high priority for it (and if they were, there are probably other optimisations that would yield much greater results).
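For example (a sketch; the module and function names are illustrative), an FFI file for a PureScript 0.13 module can already use ES2015 syntax such as arrow functions:

    // src/Main.js -- FFI file for a module named Main
    "use strict";

    // ES2015 arrow function: fine in FFI code as of PureScript 0.13,
    // even though the compiler's own output stays close to ES3.
    exports.add = a => b => a + b;

with the matching declaration on the PureScript side:

    -- src/Main.purs
    module Main where

    foreign import add :: Int -> Int -> Int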
I am new to ReactJS. Should I go with JSXTransformer or Babel for writing ReactJS code?
I understand that ReactJS does not depend on JSXTransformer or Babel; we can use ReactJS without these two. Still, I would like to know which of them is preferable with respect to ReactJS.
This is opinion-based, so it should probably be closed.
Pretty sure the React team have deprecated the use of the JSX Transformer (outside of in-browser development usage) in favour of Babel. Babel now supports everything that React needs (and more) in a convenient and standard way and should be considered the preferred method of JSX transformation.
More information on React tooling can be found on their tooling page.
Matt Styles is right, it's being deprecated:
from here
JSXTransformer is another tool we built specifically for consuming JSX in the browser. It was always intended as a quick way to prototype code before setting up a build process. It would look for <script> tags with type="text/jsx" and then transform and run. This ran the same code that react-tools ran on the server. Babel ships with a nearly identical tool, which has already been integrated into JS Bin. We'll be deprecating JSXTransformer, however the current version will still be available from various CDNs and Bower.
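For reference, that in-browser flow looked roughly like this (a sketch; it assumes the React 0.13-era React.render API):

    <!-- Deprecated JSXTransformer flow: for prototyping only -->
    <script src="react.js"></script>
    <script src="JSXTransformer.js"></script>
    <script type="text/jsx">
      // JSXTransformer finds this tag, transforms the JSX, then runs it.
      React.render(<h1>Hello</h1>, document.body);
    </script>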
However, it is great for learning the basics of React (focusing on component methods, passing props, etc.) with an easy integration.
But I believe you won't be able to require any Node modules, which will block you sooner or later.
To learn React and the Node environment, I suggest doing a few tutorials, and testing and reading the code of simple boilerplate projects like these:
react-hot-boilerplate
react-transform-boilerplate
I've been diving into some of the more advanced features of PowerShell modules and manifests recently, with a view to handling scenarios more advanced than just a basic export of a few functions. It sounds like it should be obvious, but I'm struggling to find a nice solution for sharing common 'helper'-type functions across several large, non-trivial modules. In particular, I'm looking for a solution that:
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Allows installation via PsGet from a local repo path
Let me go into some of the challenges I see.
First of all, as far as I can tell, PsGet does not handle module dependencies well. This implies sharing between modules is going to be a struggle. Maybe a solution to this is to avoid PsGet, and use a custom script to 'install' modules to the local module path, which might be more tolerant of dependencies and load order.
My point about not using module exports to share helper functions also seems to be an issue. The reason I see for it is wanting aliases, helpers etc. for common internal actions (needed inside the useful functions) that are either useless or unsafe to expose. For example, a nice brief alias for getting the local script path (commonly used, and noisier than it should be), or the simple wrapper I recently made around PromptForChoice with fewer options. Maybe this whole thing isn't a real issue, but I can't help feeling that shipping a 'utils' module that exports low-level functions (useful inside real modules, but not to an end user) is the wrong way to go.
What I've been playing with is a small build structure that tests and then packs modules, and I want to make some code sharing possible. I've been looking for an alternative using ScriptsToProcess in the manifest, but those paths seem to have to be absolute, not relative.
Imagine a folder structure:
modules
utils
console_helpers.ps1
moduleA
moduleA.psm1
moduleA.psd1
moduleB
moduleB.psm1
moduleB.psd1
packed_modules
moduleA.zip
moduleB.zip
What I was considering was that you could list relative paths in each module's ScriptsToProcess, and then my pack phase would go and drag those relative paths into each zip.
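A sketch of that pack phase (assumes PowerShell 5+ for Import-PowerShellDataFile and Compress-Archive; the layout matches the tree above):

    # For each module: stage it, pull in the scripts its manifest references
    # by relative path, then zip the result into packed_modules.
    $root = $PSScriptRoot
    foreach ($moduleDir in Get-ChildItem (Join-Path $root 'modules') -Directory) {
        if ($moduleDir.Name -eq 'utils') { continue }  # shared scripts, not a module

        $manifest = Import-PowerShellDataFile (Join-Path $moduleDir.FullName "$($moduleDir.Name).psd1")
        $staging  = Join-Path $root "packed_modules\$($moduleDir.Name)"
        Copy-Item $moduleDir.FullName $staging -Recurse -Force

        # Drag each relative ScriptsToProcess entry into the staged module.
        if ($manifest.ScriptsToProcess) {
            foreach ($script in $manifest.ScriptsToProcess) {
                Copy-Item (Join-Path $moduleDir.FullName $script) $staging -Force
            }
        }

        Compress-Archive -Path "$staging\*" -DestinationPath "$staging.zip" -Force
    }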
Is this a horrible, crazy idea? Am I right that PS modules and PsGet really don't have decent dependency support? I would love to hear feedback from anyone who has looked into this space. The answers I'm hoping for, in rough priority, might be:
Here's an example of sharing code without exposing it (probably a build/pack level solution)
Here's how to make module dependencies work nicely, using PsGet
Here's how to make module dependencies work nicely, but you can't use PsGet
Just expose everything from modules
This is a terrible idea and you're terrible
Thanks!
UPDATE as suggested by CalebB
Here's another example to illustrate what I'm trying to resolve. I find it useful to wrap up '&'-style execution of commands in a wrapper function, to deal with things like checking exit codes. If I'm building half a dozen modules, many of them will want to make use of that helper (obviously).
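Something like this, for instance (the name Invoke-Checked is just for illustration):

    # Wrap '&' execution and fail fast on a non-zero exit code.
    function Invoke-Checked {
        param(
            [Parameter(Mandatory)] [string] $Command,
            [string[]] $Arguments = @()
        )
        & $Command @Arguments
        if ($LASTEXITCODE -ne 0) {
            throw "'$Command' failed with exit code $LASTEXITCODE"
        }
    }

    # Usage: Invoke-Checked git @('fetch', '--all')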
My options today seem to be to put it in a module and export it, but maybe I don't want it exported; I want more of a dot-source style of access. And if I've got a family of modules all trying to use this stuff, the options for module dependency management are limited (the PsGet limitation etc.).
If I'm 'building' all the modules at once (with some decent psake and pester infrastructure), maybe I can use a hack at this point to embed scripts into my zipped modules to 'solve' all these problems?
Allows sharing of 'helper' type functions without necessarily being exported by anyone
Mhm... what is wrong with dot-sourcing the scripts you need within a particular module? You could (see the sketch after this list):
Keep your folder structure and symlink the desired functions into module folder.
Try to use an absolute path with ScriptsToProcess that has a "relative part" in it, for example $PSScriptRoot\..\utils (not tried in that exact context, but it generally works). If not, you can always add a preprocessor that fixes the paths for you.
Delete undesired imported elements manually via the function:, alias:, and variable: providers.
Import extra utilities only when you use them, then remove them at the end. If the desire is that the user can't see them, you can encrypt them.
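For the dot-sourcing option, a minimal sketch of what moduleA.psm1 might look like (paths follow the tree in the question):

    # Dot-source the shared helpers; they become module-private functions.
    . (Join-Path $PSScriptRoot '..\utils\console_helpers.ps1')

    function Get-PublicThing {
        # ...free to call helpers from console_helpers.ps1 internally...
    }

    # Export only the public surface; the dot-sourced helpers stay hidden.
    Export-ModuleMember -Function Get-PublicThing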
Here's how to make module dependencies work nicely, but you can't use PsGet
Chocolatey uses NuGet, so it handles dependencies and can load from a local store. As a bonus, OneGet supports it, which is something everybody will be using eventually.
I've posted the solution I came up with on GitHub. I've rolled in a few other features I want when building modules, but the key part of the solution to this question is reading and updating the psd1 of each module.
You include the scripts you want to embed in the NestedModules property of your manifest. My build phase finds each script and copies it into the module folder for packing and zipping. The manifest that ships in the package has the script paths converted to the now-local file names.
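A sketch of the relevant fragment of moduleA.psd1, before and after the build rewrites the path:

    @{
        RootModule    = 'moduleA.psm1'
        ModuleVersion = '1.0.0'
        # In the source tree: a relative path to the shared script...
        NestedModules = @('..\utils\console_helpers.ps1')
        # ...which the pack phase rewrites to the embedded local copy:
        # NestedModules = @('console_helpers.ps1')
    }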
I'm still not sure if this is ideal, but it seems to be a nice compromise for the issues here.
A key issue I encountered along the way was that the ScriptsToProcess list is executed literally at module import time, so it is only useful for bootstrapping the import of your functionality. The NestedModules property is actually the list of additional scripts you want dot-sourced and available when your module is used.
I am developing an end-user software program (shipped as an uberjar), and I am wondering what my options are for letting the user download a plugin and load it at runtime.
The plugin(s) should come compiled and without source code, so something like load is not an option.
What existing libraries (or plain Java mechanisms?) exist to build this on?
EDIT: If you are not sure, I would also be satisfied with an approach that costs a restart of the main program. However, what is important is that the source code won't be included in any JAR file (neither the main application nor the plugin JARs; see :omit-source in the Leiningen documentation).
To add a jar at runtime, use pomegranate, which lets you add a .jar file to the classpath. Plugins of your software should be regular Clojure libs that follow certain conventions that you need to establish (a sketch follows the list below):
Make them provide (e.g. in an edn file) a symbol naming an object that implements a constructor/destructor mechanism, such as the Lifecycle protocol in Stuart Sierra's Component library. At runtime, require and resolve that symbol, start the resulting object, and hand it over to the rest of your program's plugin coordination facilities.
Provide a public API in your program that allows the plugins to interact with it in ways that you coordinate asynchronously, e.g. with clojure.core.async (don't let one plugin block the entire program).
Make sure that the plugins have a coordinated way to expose their functionality to each other, but only if they want to, to enable a high degree of modularity among your plugins. Also make sure that your plugin loader can detect dependencies among plugins and can load and unload them in the right order.
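A minimal sketch of the loading side (the plugin.edn file and its {:entry ...} shape are conventions invented for this example, not a fixed API):

    (require '[cemerick.pomegranate :as pom]
             '[clojure.edn :as edn]
             '[clojure.java.io :as io])

    (defn load-plugin!
      "Add the plugin jar to the classpath, then require and resolve
      the entry symbol it declares in its plugin.edn."
      [jar-path]
      (pom/add-classpath jar-path)
      ;; e.g. plugin.edn contains {:entry my.plugin.core/component}
      ;; (io/resource finds the first plugin.edn on the classpath, so a
      ;; real loader would want per-plugin naming).
      (let [{:keys [entry]} (edn/read-string (slurp (io/resource "plugin.edn")))]
        (require (symbol (namespace entry)))
        @(resolve entry)))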
I've not tried it myself, but you should in theory be able to get OSGi to work with Clojure.
There's a Clojure / OSGi integration library here:
https://github.com/aav/clojure.osgi
If I were to attempt to roll my own solution, I would try using tools.namespace to load and unload plugins. I'm not entirely sure it would work, but it's certainly in the right direction. I think one key piece is that the plugin jars would have to be "installed" in a location that's already on the classpath.
Again, this is only the start of one possible solution. I haven't tried doing this.