Why shouldn't babel-node be used in production?

The babel-node docs carry a stern warning:
Not meant for production use
You should not be using babel-node in production. It is unnecessarily heavy, with high memory usage due to the cache being stored in memory. You will also always experience a startup performance penalty as the entire app needs to be compiled on the fly.
Let's break this down:
Memory usage – huh? All modules are 'cached' in memory for the lifetime of your application anyway. What are they getting at here?
Startup penalty – how is this a performance problem? Deploying a web app already takes several seconds (or minutes if you're testing in CI). Adding half a second to startup means nothing. In fact if startup time matters anywhere, it matters more in development than production. If you're restarting your web server frequently enough that the startup time is an issue, you've got much bigger problems.
Also, there is no such warning about using Babel's require hook (require('babel-register')) in production, even though this presumably does pretty much exactly the same thing as babel-node. For example, you can do node -r babel-register server.js and get the same behaviour as babel-node server.js. (My company does exactly this in hundreds of microservices, with no problems.)
Is Babel's warning just FUD, or am I missing something? And if the warning is valid, why doesn't it also apply to the Babel require hook?
Related: Is it okay to use babel-node in production
– but that question just asks if production use is recommended, and the answers just quote the official advice, i.e. "No". In contrast, I am questioning the reasoning behind the official advice.
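For context, the alternative the warning points toward is compiling ahead of time with the Babel CLI and running the output with plain node. A sketch of the two workflows, assuming your source lives in a src/ directory (the directory names are illustrative):

```shell
# Development: compile on the fly, every start
babel-node src/server.js

# Production: compile once at build time, then run plain node
babel src --out-dir build
node build/server.js
```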

babel-node
The production warning was added to resolve this issue:
Without the kexec module, you can get into a really ugly situation where the child_process dies but its death or error never bubbles up. For more info see https://github.com/babel/babel/issues/2137.
It would be great if the docs on babel-node explained that it is not aimed for production and that without kexec installed that it has bad behaviour.
(emphasis mine)
The link for the original issue #2137 is dead, but you can find it here.
So there seem to be two problems here:
"very high memory usage on large apps"
"without kexec installed that it has bad behaviour"
These problems lead to the production warning.
babel-register
Also, there is no such warning about using Babel's require hook (require('babel-register')) in production
There may be no warning, but it is not recommended either. See this issue:
babel-register is primarily recommended for simple cases. If you're running into issues with it, it seems like changing your workflow to one built around a file watcher would be ideal. Note that we also never recommend babel-register for production cases.

I don't know enough about babel's and node's internals to give a full answer; some of this is speculation, but the caching babel-node would do is not the same thing as the cache node does.
babel-node's cache would be another cache on top of node's require cache, and it would have to, at best, cache the resulting source code (before it's fed to node).
I believe node's cache, after evaluating a module, only keeps what is reachable from the exports; anything that is no longer reachable will eventually be GCed.
The startup penalty will depend on the contents of your .babelrc, but you're forcing babel to do the legwork to translate your entire source code every time it is executed. Even if you implement a persistent cache, babel-node would still need to do a cache fetch and validation for each file of your app.
In development, more appropriate tools like webpack in watch mode can, after the cold start, re-translate only modified files, which would be much faster than even a babel-node with perfectly optimized cache.

Related

Troubleshooting "Out of Memory Error: Metaspace" in Play for Scala

While working in development mode in Play for Scala (2.5.x), after around three hours of changing code and hot-deploying, Play hangs with the error java.lang.OutOfMemoryError: Metaspace.
After some research, the problem seems to be that the application instantiates Java objects (such as factories and connections) that Play is not aware of, and when Play restarts these objects stay in memory causing leaks. The solution is to clean up components when Play shuts down as explained here, or to destroy them after they are used.
The problem is that I clean up all these objects and still get OutOfMemoryError. I tried with Java's jconsole to find out what classes are creating the leak and how much memory they are taking, but couldn't find much. Any ideas how to deal with this situation? I don't want to simply increase memory without knowing what's going on.
PS: This seems to be a common issue, it would be great if Play itself provided the tools to detect the problem.
Unfortunately, this problem currently seems unavoidable. Although it got better in Play 2.6, I still run into this.
And this has nothing to do with components not getting cleaned up: the metaspace is where classes are loaded. Play (dynamically) creates a lot of classes (e.g. anonymous classes) when compiling, and each of those classes adds to the metaspace, which eventually fills up.
My suggestion would be to just increase the -XX:MaxMetaspaceSize until you can work for a few hours without this exception. Then, restart sbt once in a while. I use 500 MB currently, which seems to be fine (it is usually 128MB with the SBT launcher):
sbt -J-XX:MaxMetaspaceSize=500m
This is usually no problem in production, since you have a fixed number of classes loaded (no compilation in production).

Ionic 2 / Ionic 3 - Garbage Collection

I'm trying to get a better understanding of ionic2 and ionic3.
How does the Garbage Collection work in ionic?
What gets cached and when?
How can we clear this cache?
How do we set up elements for G.C.?
Do we even need to setup elements for G.C?
Can we/Do we need to setup pages for G.C.?
As seen in this picture (source):
Some of the memory gets GC'd when going to a new page. However, the memory is still significantly higher than before any video was played.
OK I'm gonna give this one a try:
Ionic itself doesn't have much to do with GC; there are no scheduled tasks that clean up behind you. The only thing ionic (or more specifically the dev team behind ionic) has to do is design and implement their UI components in a way that they do not eat up too much memory and also release unused memory. Especially with Virtual-Scroll there have been issues with memory leaks and so on.
So let's go a level deeper: Angular! Same point as with ionic: the devs of Angular are responsible for how much memory is used by their framework. But Angular provides a very useful method, ngOnDestroy(). Why is this method important to you as an app developer? Because it gives you the chance to clean up behind yourself. This method is called just before your component is destroyed, which means you no longer need your allocated objects, arrays, video elements (set src='' and then call load()), etc., and you can release that memory. This and this are good reads on how to free memory. However, as the docs for ngOnDestroy() mention, you only have to release memory that is not cleaned up by the automatic GC (subscriptions, media elements, ...). Which brings us to the next level:
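The cleanup pattern above can be sketched without Angular itself. Everything here is illustrative: makeEventSource and fakeVideo are stand-ins for an observable and a DOM video element, so the example runs outside the framework and the browser.

```javascript
// Stand-in for an observable event source (e.g. an RxJS Observable).
function makeEventSource() {
  const listeners = new Set();
  return {
    subscribe(fn) {
      listeners.add(fn);
      return { unsubscribe: () => listeners.delete(fn) };
    },
    count: () => listeners.size, // how many callbacks are still referenced
  };
}

// Stand-in for a DOM <video> element.
const fakeVideo = { src: 'big-video.mp4', load() { /* would reload the element */ } };

class VideoPage {
  constructor(source, video) {
    this.video = video;
    // The source holds a reference to this callback until we unsubscribe,
    // so the GC alone cannot reclaim the component.
    this.subscription = source.subscribe(() => { /* handle event */ });
  }

  // Angular calls this just before the component is destroyed.
  ngOnDestroy() {
    this.subscription.unsubscribe(); // release the subscription
    this.video.src = '';             // drop the media source...
    this.video.load();               // ...and make the element let go of it
    this.video = null;
  }
}

const source = makeEventSource();
const page = new VideoPage(source, fakeVideo);
page.ngOnDestroy();
console.log(source.count()); // → 0
```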
Javascript/Browser: This is where the "real" GC happens. Javascript uses a mark-and-sweep garbage collector (all modern browsers ship with one); you can read about it here. It runs every now and then and releases every object that is unreachable/not referenced anymore. Note that delete only removes a property (i.e. a reference) from an object; it is dropping the last reference that actually makes an object eligible for collection. The following image visualizes the mark-and-sweep process:
Image taken from this article, which explains how javascript memory management works in great detail; I strongly recommend reading it.
And of course you always have the native GC of Java/Obj-C which cleans up the native part of the app.

Wait for eglSwapBuffers posting to complete

I need to know when posting completes after eglSwapBuffers. I was thinking eglWaitNative might halt execution until posting is complete, but I find it unclear reading the spec, chapter 3.8:
https://www.khronos.org/registry/egl/specs/eglspec.1.5.pdf
It would appear eglWaitNative is used to synchronize with "native" rendering APIs such as Xlib and GDI. However, as far as I know eglSwapBuffers might be running on top of Wayland, which can't do native rendering at all. Still, it would seem reasonable to believe the EGL_CORE_NATIVE_ENGINE engine always points out the "marking engine" doing buffer swaps...
From 3.10.3 I read:
Subsequent client API commands can be issued immediately, but will not
be executed until posting is completed.
I suppose I could do something like this, but I'd rather use "pure" EGL if possible:
eglSwapBuffers(...);
glClear(...); // "Dummy" command.
My project is using OpenGL Safety Critical profile 1.0.1, EGL 1.3 and some vendor specific extensions. Sync objects are not available.

OSB: Analyzing memory of proxy service

I have multiple proxies in a message flow. Is there a way in OSB by which I can monitor the memory utilization of each proxy? I'm getting OOM and want to investigate which proxy is eating away all/most memory.
Thanks !
If you're getting OOME then it's either because a proxy is not freeing up all the memory it uses (so will eventually fail even with one request at a time), or you use too much memory per invocation and it dies over a certain threshold but is fine under low load. Do you know which it is?
Either way, you will want to generate a heap dump on OOME so you can investigate what's going on. It's annoying but sometimes necessary. A colleague had to do that recently to fix some issues (one problem was an SB-transport platform bug, one was a thread starvation issue due to a platform work manager bug, the last one due to a Muxer bug when used in exalogic).
If it just performs poorly under load, then you'll need to do the usual OSB optimisations, like use fewer Assign steps (but assign more variables per step), do a lot more in xquery rather than proxy steps, especially loops that don't need a service callout, since they can easily be rolled into a for loop in xquery; you know, all the standard stuff.

NUnit CPU usage very high - How to resolve?

On numerous unrelated projects, the CPU usage of NUnit has often ended up being about 50% even when I'm not running my tests. From other information I've read, this supposedly has more to do with my code than with NUnit.
Does anyone know how I can isolate the problems in my code that will be causing this and fix them?
Thanks
I have the same problem and it seems to be rather consistently affecting only one test project doing integration testing (calling web services, checking stuff over HTTP, etc). I'm very careful to dispose of networked objects (with using(...){ }), so I don't quite understand why NUnit should continue to use 90% CPU days after the test is done with and all objects in use by the test should be disposed of.
The really strange thing is that while running the test, NUnit uses no more than 10%-50% CPU. It's only after the test has completed that CPU usage surges and stays constantly at 80%-100% forever. It's really strange. Reloading or closing the test project (File > Close) doesn't help either. NUnit itself needs to be closed.