How much slower is it really to run Babel standalone in the browser compared to precompiling?

I don't know if anyone has done a benchmark, but it would be interesting to see whether Babel standalone is 2x, 10x, or 100x slower running in the browser compared to precompiling the code. I know it's not recommended; that's not what this question is about.
Would transforming a React component with JSX work as a benchmark, say?
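A rough way to measure it yourself (just a sketch, assuming @babel/standalone's in-browser transform API; the JSX snippet is made up):

    <script src="https://unpkg.com/@babel/standalone/babel.min.js"></script>
    <script>
      // Hypothetical JSX input; any real component source would do.
      var src = 'const App = () => <div className="app">Hello</div>;';

      var t0 = performance.now();
      var out = Babel.transform(src, { presets: ['react'] }); // JSX -> plain JS
      var t1 = performance.now();

      console.log('transform took ' + (t1 - t0).toFixed(1) + ' ms');
      console.log(out.code);
    </script>

A precompiled build pays this cost once at build time, so whatever this prints -- plus the time to download and parse the standalone bundle itself, which is several megabytes -- is roughly the per-page-load overhead.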

Related

How to skip/ignore babel optimization for specific modules

I'm currently working on a preact app that connects with firebase.
My problem is this:
Since adding firebase to the project, watch build times have been seriously affected, jumping from an average of 10s to almost a minute.
From what I read online, this note is merely informational and not an error (kinda obvious from the 'Note:' bit).
Question: Is there any way to disable optimization for specific modules?
Thanks
Enabling compact mode on its own will not have any obvious performance benefits, certainly not of the magnitude you're looking for.
Babel will still parse and transpile every module you throw at it -- what compact mode does is strip whitespace from the output, making it smaller but no longer human-readable. That's all the warning is saying: the module is so large that Babel is enabling compact mode for it by itself.
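If the goal is to stop Babel from processing firebase at all, one common approach -- sketched here assuming a webpack + babel-loader setup, since the question doesn't say which bundler is in use -- is to exclude node_modules (or just the firebase package) from the loader:

    // webpack.config.js (sketch -- merge into your actual config)
    module.exports = {
      module: {
        rules: [
          {
            test: /\.jsx?$/,
            // Skip already-compiled dependencies such as firebase so Babel
            // never parses them at all; this removes the large, slow modules
            // from the watch rebuild entirely.
            exclude: /node_modules/,
            use: 'babel-loader',
          },
        ],
      },
    };

Published packages like firebase normally ship already-compiled code, so leaving them out of the Babel pass is usually safe.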

Why does Eclipse load faster than Canopy?

I get that Canopy "is its own ecosystem" as my friend told me (when he explained to me that I didn't have Python on my computer and that's why my command line instructions weren't doing anything). Could this be the reason that it takes longer to load than Eclipse, which (I guess) I downloaded separately from Java? Or is it some other reason?
Good question! Fast loading was not a design goal of Canopy. Most Canopy users leave it open for many hours at a time, so load time (while always nice, of course!) was not as high a priority as shipping a robust scientific Python distribution. The technical reason it loads slowly (especially when the OS's cache is "cold") is that it opens and interprets many tens of Python modules, some quite complex. Unlike C, Python is not compiled ahead of time to native code, so the startup time of Python programs is typically slower, though when it matters there are ways to counteract that.

Analyzing coverage of numba-wrapped functions

I've written a Python module, much of which is wrapped in @numba.jit decorators for speed. I've also written lots of tests for this module, which I run (on Travis-CI) with py.test. Now I'm trying to look at the coverage of these tests using pytest-cov, which is just a plugin that relies on coverage (with hopes of integrating all of this with coveralls).
Unfortunately, it seems that using numba.jit on all those functions makes coverage think the functions are never used -- which is kind of the case. So I'm getting basically no reported coverage from my tests. This isn't a huge surprise, since numba takes that code and compiles it, so the Python code itself really never runs. But I was hoping there'd be some of that magic you sometimes see with Python...
Is there any useful way to combine these two excellent tools? Failing that, is there any other tool I could use to measure coverage with numba?
(I've made a minimal working example showing the difference here.)
The best thing might be to disable the numba JIT during coverage measurement. That relies on you trusting the correspondence between the Python code and the JIT'ed code, but you need to trust that to some extent anyway.
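numba has a switch for exactly this: its documented NUMBA_DISABLE_JIT environment variable makes @numba.jit-decorated functions run as plain Python, so coverage can trace them (the module name below is just a placeholder):

    NUMBA_DISABLE_JIT=1 py.test --cov=mymodule

You'd still run the normal test job with the JIT enabled, so the compiled paths get exercised too.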
Not that this answers the question, but I thought I should advertise another approach that someone might be interested in working on. There's probably something really beautiful that could be done using llvm-cov. Presumably this would have to be implemented within numba, and the LLVM code would have to be instrumented, which would require a flag somewhere. But since numba knows the correspondence between lines of Python code and LLVM code, there must be something that could be implemented by somebody more clever than I am.

How reliable is HtmlUnitDriver?

Obviously, the answer to the question depends on a number of environmental factors.
In general, I'm wondering what people's experiences are with HtmlUnitDriver as a reliable tool that can be "trusted" to navigate a website basically the same way other browsers do.
Of course, I realize "the way other browsers do" is pretty nebulous; naturally every browser will have its quirks. But I am on a project where we have hundreds of acceptance test scenarios (written in JBehave) and using FirefoxDriver and InternetExplorerDriver, running all of them takes over two hours, which is kind of rough from a continuous integration standpoint. So I'm wondering if it's at least feasible that we could switch our acceptance tests over to use HtmlUnitDriver and expect much faster times with mostly the same behavior (and perhaps we could expect a handful of tests to fail using HtmlUnitDriver and specifically run those tests with a browser-based driver).
Our UI uses GWT, which may or may not complicate things (I don't know).
Basically, in others' experience, does HtmlUnitDriver operate about as well as another browser, or is it really only appropriate for very simple HTML websites with minimal JavaScript and should not be used for an enterprise web application?
From my experience with HtmlUnitDriver, I would say that if you don't use it as your baseline browser when writing your tests, then converting them to use it becomes a bit of a nightmare. This is especially true for JavaScript-heavy sites.
The main reason for this is the underlying use of HtmlUnit, which by default uses the Rhino JavaScript engine. In the past I've always had to specify that HtmlUnitDriver start HtmlUnit with Firefox's JavaScript emulation. This, for the most part, solved the JavaScript issues I was seeing while running tests with HtmlUnitDriver.
One of the biggest issues I faced when using the same test code for each browser was when, on the site under test, the UI developers had attached JavaScript events such as onClick() to HTML elements such as a <span>.
The reason is that if you used WebDriver's .click() method on a WebElement representing the <span>, HtmlUnit would not do anything (it expects an onClick() to be fired on elements such as an <input>).
To get around this I had to fire the click() event manually in JavaScript. You can do this either by using WebDriver's JavascriptExecutor or by using a WebDriverBackedSelenium and Selenium's .fireEvent() method.
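In code, that JavaScript-click route looks roughly like this (shown here with the Node selenium-webdriver bindings rather than Java; the URL and selector are hypothetical):

    // Fire the element's click handler directly instead of WebDriver's .click(),
    // using executeScript (the equivalent of Java's JavascriptExecutor).
    const { Builder, By } = require('selenium-webdriver');

    (async () => {
      const driver = await new Builder().forBrowser('firefox').build();
      await driver.get('http://example.com/page-under-test'); // hypothetical URL

      const span = await driver.findElement(By.css('span.clickable')); // hypothetical selector
      await driver.executeScript('arguments[0].click();', span);

      await driver.quit();
    })();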
So if your site uses such events then I'd say switching to use HtmlUnitDriver could be a big task.
Despite this, I actually use HtmlUnitDriver for all my tests. However, I went through the pains of discovering all of the above a while back, so now use HtmlUnitDriver as my baseline browser when writing tests.

GCC/XCode speedup suggestions?

I have a stock Mac mini server (2.53GHz, 4GB RAM, 5400RPM drive) which I use for iPhone development. I am working on a smallish app. Compilation seems to be taking longer and longer (now about 20 min). The app is about 20 small files totaling 4000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper will be better.
Sounds like the system is swapping, which is a common problem when compiling C++ that is STL heavy. GCC can be a complete memory pig in such a situation and two GCCs at once can easily consume all memory.
Try:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds -- depending on your setup it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a gigabit network then you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable a precompiled header, and put both the Boost and Cocoa headers in it.
If things are still slow then it's worth running Activity Monitor to make sure you don't have some other process interfering, or even Instruments to get more detailed information. You could find that you're trying to compile files on a network drive, for instance, which will certainly slow things down. Also, inside Xcode, try the option to preprocess a file (available via right-click) and check the output to make sure you're not compiling in loads of extra code.
Also, with Boost, try slimming it down to only the bits you need. The bcp tool that comes with the distribution will copy only the packages you ask for, plus all their dependencies, so you're not building stuff you won't use.
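bcp usage looks roughly like this (the module names are only examples; list whatever your code actually includes):

    bcp --boost=/path/to/boost shared_ptr bind regex /path/to/boost-subset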
Include all your boost headers in the precompiled header for your project.
I concur with Eiko. We're compiling projects that are vastly larger (including boost libraries) in much shorter time.
I'd start by checking logs. You can also view the raw compiler output in the build window (Shift+CMD+B or in the Build menu->Build Results). Xcode just makes straight gcc or llvm calls, so this may give you a better idea of what's actually happening during the build process.
Something is messed up; we are compiling much bigger projects in a fraction of that time.