How to skip/ignore babel optimization for specific modules - babeljs

I'm currently working on a preact app that connects with firebase.
My problem is this
Since adding firebase to the project, watch build times have been seriously affected, jumping from an average of 10s to almost a minute, and Babel now prints a 'Note:' saying it has deoptimised the styling of the firebase module because it is so large.
From what I read online, this is merely informational and not an error (kind of obvious from the 'Note:' prefix).
Question: Is there any way to disable optimization for specific modules?
Thanks

Enabling or disabling compact mode yourself will not have any obvious performance benefits, certainly not of the magnitude you're looking for.
Babel will still parse and transpile every module you throw at it -- all compact mode does is strip whitespace from the output, so it is no longer nicely human-readable. And that's all the warning is saying: the module is so large that Babel is enabling compact mode by itself for that module.
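If the real goal is to stop Babel from spending time on firebase at all, the usual approach is to exclude it (or all of node_modules) from transpilation in your bundler config. A minimal sketch, assuming a webpack + babel-loader setup -- the question doesn't say which bundler is in use, so the file name and rule below are illustrative:

// webpack.config.js (sketch)
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        // Don't run Babel over pre-built dependencies such as firebase;
        // they are already published pre-compiled, so transpiling them again only burns time.
        exclude: /node_modules/,
        use: 'babel-loader',
      },
    ],
  },
};

If you're not going through webpack, Babel itself also has an ignore option in its config that can serve the same purpose.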

Related

Evaluate the impact of upgrading SAPUI5 libraries

I am an SAP Fiori developer and have been reading for some days about the best way to plan an upgrade with the maximum level of guarantees against unexpected errors.
I know that to evaluate the impact of changes on our applications, I have to read "What's New" and do a careful read and analysis of the "Changelog".
My idea is to create a manual step-by-step procedure, so that if we follow all the steps the impact is evaluated with a very high percentage of coverage. I have assumed that there isn't an automatic process for evaluating this -- is that true?
We have a table recording which controls and components are used in every application and view/controller, for evaluating the "direct" impact of upgrades.
My doubt is how I can be sure whether a fix could cause wrong behavior. I will explain it with an example: "1.71.21 - [FIX] format/NumberFormat: parse special values '00000'". Analyzing it, I know that sap.ui.comp.smarttable.SmartTable uses NumberFormat to display the number of records in the header title, but from the API alone it is impossible to know that. This is only one example; reading the "Changelog", a lot of doubts like this appear, and many are even harder to associate with concrete controls.
To give you more info, I have thought about using the CDN with the new version, but this could put us in a scenario where we would have to manually test everything and look for errors, warnings, and wrong behavior.
How do you analyze an upgrade before doing it, and how do you avoid human testing of everything, with the risk of forgetting things? Are you using some tool?
Thanks in advance
Best regards
Upgrading the library carries a risk of introducing defects comparable to that of regular development. Therefore, conventional quality assurance guidelines apply to an upgrade as well: setting up and running your tests, manually or automatically.
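To run those tests against the candidate version without touching the productive deployment, one cheap option is a separate test page whose bootstrap points at the target version on the public CDN, so your existing automated suites (QUnit/OPA5 or whatever you use) run against it. A sketch only -- the version is the one from the question, and the theme and library list are illustrative:

<!-- test page bootstrap pointing at the candidate version -->
<script id="sap-ui-bootstrap"
    src="https://sapui5.hana.ondemand.com/1.71.21/resources/sap-ui-core.js"
    data-sap-ui-libs="sap.m, sap.ui.comp"
    data-sap-ui-theme="sap_fiori_3"
    data-sap-ui-async="true">
</script>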

Analyzing coverage of numba-wrapped functions

I've written a python module, much of which is wrapped in @numba.jit decorators for speed. I've also written lots of tests for this module, which I run (on Travis-CI) with py.test. Now I'm trying to look at the coverage of these tests using pytest-cov, which is just a plugin that relies on coverage (with hopes of integrating all of this with coveralls).
Unfortunately, it seems that using numba.jit on all those functions makes coverage think that the functions are never used -- which is kind of the case. So I'm getting basically no reported coverage from my tests. This isn't a huge surprise, since numba is taking that code and compiling it, so the code itself really never is used. But I was hoping there'd be some of that magic you sometimes see with python...
Is there any useful way to combine these two excellent tools? Failing that, is there any other tool I could use to measure coverage with numba?
(I've made a minimal working example showing the difference here.)
The best thing might be to disable the numba JIT during coverage measurement. That relies on you trusting the correspondence between the Python code and the JIT'ed code, but you need to trust that to some extent anyway.
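Concretely, numba reads the NUMBA_DISABLE_JIT environment variable; with it set, @numba.jit decorators fall back to calling the original Python functions, so coverage.py can trace them again. A sketch of how the coverage job could look ("mymodule" is a placeholder for your package name):

# run the test suite with the JIT disabled, only for the coverage run
NUMBA_DISABLE_JIT=1 py.test --cov=mymodule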
Not that this answers the question, but I thought I should advertise another way that someone might be interested in working on. There's probably something really beautiful that could be done using llvm-cov. Presumably, this would have to be implemented within numba, and the llvm code would have to be instrumented, which would require some flag somewhere. But since numba knows about the correspondence between lines of python code and llvm code, there must be something that could be implemented by somebody more clever than I am.

Eclipse plugin to measure programmer performance/stats

Does anyone know of an Eclipse plugin that can give me some stats about my behaviour/usage of the Eclipse IDE?
There are quite a few things I would like to know:
How often/when do I invoke the "Build All" command (through Ctrl+B)?
How often does compilation fail/succeed (+ number of errors/warnings)?
How often do I hit Backspace? (I do that way too often; if pressing that key gave a nasty sound, I would in time learn to type correctly in the first place.)
How many characters/lines of code that I typed do I delete (possibly quite immediately)?
How (effective/efficient/...) is my mouse/keyboard/IDE usage? (Kinda like measuring APM in StarCraft; this could be fun.)
If there is no such Eclipse plugin around, how complex and time consuming would it be to write a plugin that can accomplish the above?
edit:
I am interested in these usage stats since I noticed that a good IDE and a fast computer strongly influenced my behaviour when coding over the past years.
I use Content Assist all the time; it's very practical, but now I notice that I would be totally unable to get anything done without it.
Hitting Backspace has become almost a reflex :) and so has pressing Ctrl+B.
I now routinely hit Ctrl+B to do an incremental build, sometimes even after changing just a few lines, so that the compiler gives immediate feedback; since compiling is quite fast nowadays, this works quite well. It also keeps me from actually having to think on my own: when I do something wrong, I rely on the compiler to spot the errors, and I have become worse at spotting them myself since I don't really need to any longer.
Did you guys notice these changes in yourself too?
The best tool in this area (in my opinion) for Eclipse is Lack of Progress Bar. It doesn't have all the features that you request, but it does let you measure developer performance and the bottlenecks of the development process.
What is Lopb?
Lack of Progress Bar (Lopb) is an Eclipse plugin that tracks how long developers wait for background jobs to complete. By benchmarking the performance of background jobs, Lopb provides developers with metrics on how much of their day was wasted due to overhead introduced by the development tools and infrastructure that they depend on or access through their IDE.
Have a look at MouseFeed; it will help you move from using the mouse to the keyboard. Not sure if it can keep stats of your usage.
As for other stats, look at the Usage Data Collector; it will keep track of all Eclipse usage: favorite views, perspective usage, most common errors, etc.
Don't know of anything that actually keeps track of how you type in the editor, but why would you want that? Part of the development process is to change things a lot, imho. Focus on the end result instead and look at some static code analyzers.
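On the second part of the question (how complex it would be to write such a plugin yourself): at least the command-counting items, like how often you trigger Build All, are fairly cheap, because the workbench command framework lets you register an execution listener. A rough sketch only -- plugin.xml wiring, persistence and the keystroke-level stats are omitted, and the class and field names are made up:

import java.util.HashMap;
import java.util.Map;

import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.core.commands.IExecutionListener;
import org.eclipse.core.commands.NotHandledException;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.commands.ICommandService;

public class CommandStats implements IExecutionListener {

    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    public void install() {
        ICommandService service = (ICommandService)
                PlatformUI.getWorkbench().getService(ICommandService.class);
        service.addExecutionListener(this);
    }

    public void preExecute(String commandId, ExecutionEvent event) {
        // "org.eclipse.ui.project.buildAll" is the id behind Ctrl+B / Build All
        Integer current = counts.get(commandId);
        counts.put(commandId, current == null ? 1 : current + 1);
    }

    public void postExecuteSuccess(String commandId, Object returnValue) { }
    public void postExecuteFailure(String commandId, ExecutionException e) { }
    public void notHandled(String commandId, NotHandledException e) { }
}

Compile success/failure counts could be gathered similarly from a resource change listener after builds, but the backspace and typing stats would need a key listener on the editor, which is where it starts to get fiddly.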

GCC/XCode speedup suggestions?

I have a stock Mac mini server (2.53GHz, 4GB, 5400RPM drive) which I use for iPhone development. I am working on a smallish app. The compilation seems to be taking longer and longer (now about 20 min). The app is about 20 small files totaling 4000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper will be better.
Sounds like the system is swapping, which is a common problem when compiling C++ that is STL-heavy. GCC can be a complete memory pig in such a situation, and two GCCs at once can easily consume all available memory.
Try:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds -- depending on your setup it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a 1Gb network then you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable precompiled headers, and put both the Boost and Cocoa headers in them.
If things are still slow then it's worth running Activity Monitor to make sure you don't have some other process interfering, or even Instruments to get more detailed information. You could find that you're trying to compile files on a network drive, for instance, which will certainly slow things down. Also, inside Xcode, try the option to preprocess a file and look at the resulting contents (available via right-click) to check that you're not compiling in loads of extra code.
Also with Boost, try slimming it down to just the bits you need. The bcp tool that comes with the distribution will copy only the packages you ask for, plus all their dependencies, so you're not building stuff you won't use.
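For reference, a bcp run could look roughly like this (the library names are only examples; run it from the root of your Boost source tree or pass --boost=<path>):

mkdir slimmed-boost
bcp smart_ptr regex slimmed-boost   # copies those libraries plus all their dependencies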
Include all your boost headers in the precompiled header for your project.
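For example, the project's prefix header could pull the heavy, rarely-changing headers in once. The file name and the specific Boost includes below are illustrative; list whatever your code actually uses:

// MyApp_Prefix.pch (sketch)
#ifdef __cplusplus
    #include <boost/shared_ptr.hpp>
    #include <boost/lexical_cast.hpp>
#endif
#ifdef __OBJC__
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>
#endif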
I concur with Eiko. We're compiling projects that are vastly larger (including boost libraries) in much shorter time.
I'd start by checking logs. You can also view the raw compiler output in the build window (Shift+CMD+B or in the Build menu->Build Results). Xcode just makes straight gcc or llvm calls, so this may give you a better idea of what's actually happening during the build process.
There is something messed up. We are compiling bigger projects in fractions of the time.

When generating code, what language should you generate?

I've worked on a number of products that make use of code generation. It seems to be the only way to achieve both a high degree of user-customizability and high execution speed.
The downside is that we are requiring users to install a compiler (primarily on MS Windows).
This has been an on-going headache, because vendors like MS keep obsoleting compilers, and some users tend to have more than one compiler installed.
We're considering using GNU C, and possibly C++, but even there, there are continual version issues.
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
Ideally there would be some way to produce generated code that would be flexible, run fast, and not expose us to the whims of third-party providers.
Maybe I'm overlooking something simple, like Java. Any ideas would be appreciated. Thanks.
If you're considering C and even assembler, take a look at LLVM first: http://llvm.org
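The attraction is that the IR you would generate is textual and machine-independent, so it side-steps both the compiler-version treadmill and the per-CPU assembly problem. For a feel of what generated code looks like, a trivial function in LLVM IR:

define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b
  ret i32 %sum
}

That text can then be JIT-compiled in-process or handed to llc/clang to produce a native object file.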
I might be missing some context here, but could you just pin yourself to a specific version? E.g., .NET 2.0 can be installed side by side with .NET 1.1 and .NET 3.5, as well as other versions that will come out in the future. So as long as your code makes use of a specific version of a compiler, what's the problem?
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
That would be called a compiler :)
Why don't you stick to C90?
I haven't heard of many severe standards violations on gcc's side, as long as you don't use extensions.
And you can always distribute a certain version of gcc along with your product, say 4.3.2, giving users the option to use their own compiler at their own risk.
As long as all code is generated by you (i.e. you don't embed your instructions into others' code), there shouldn't be any problems in testing against this version and using it to compile your libraries.
If you want to generate assembly language code, you may take a look at asmjit.
One option would be to use a language/environment that provides access to the compiler in code; for example, here is a C# example.
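The linked example isn't reproduced here, but the idea is roughly the following -- a sketch using the classic System.CodeDom.Compiler API, where the generated class and method names are made up:

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class Demo
{
    static void Main()
    {
        // C# source text produced by your generator
        string source = @"
            public static class Generated
            {
                public static int Square(int x) { return x * x; }
            }";

        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("generated code did not compile");

            var type = results.CompiledAssembly.GetType("Generated");
            int value = (int)type.GetMethod("Square").Invoke(null, new object[] { 5 });
            Console.WriteLine(value); // 25
        }
    }
}

The compiler ships with the runtime, so users don't have to install anything extra.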
Why not ship a GNU C compiler with your code generator? That way you have no version issues, and the client can always generate usable code.
It sounds like you're looking for LLVM.
Start here: The Code Generation conference
In the spirit of "it might not be too late to add my 2 cents", as in @Alvin's answer, here is something I'd think about: if your application is meant to last for some years, it is going to face several changes in how applications and systems work.
For instance, let's say you were thinking about this 10 years ago. I was watching Dexter back then, but I guess you actually have memories of how things were at that time. From what I can tell, multithreading was not much of an issue to developers in 2000, and now it is. So Moore's law broke for them. Before that, people didn't even care about what would happen in "Y2K".
Speaking of Moore's law, processors are indeed getting quite fast, so maybe certain optimizations won't even be that necessary. And possibly the array of optimizations will be much bigger; some processors are getting optimizations for several server-centric tasks (XML, cryptography, compression and regex! I am surprised such things can get done on a chip) and also use less energy (which is probably very important for warfare hardware...).
My point is that focusing on what exists today as a platform for tomorrow is not a good idea. Make it work today, and it will most likely keep working tomorrow (backward compatibility is especially valued by Microsoft, Apple doesn't seem bad either, and Linux is very liberal about making things work the way you want).
There is, though, one thing that you can do: attach your technology to something that just won't (likely) die, such as JavaScript. I'm serious -- JavaScript VMs are getting terribly efficient nowadays and are only going to get better, plus everyone loves it, so it's not going to disappear suddenly. If you need more efficiency/features, maybe target the CLR or the JVM?
Also, I believe multithreading will become more and more of an issue. I have a gut feeling the number of processor cores will follow a Moore's law of its own. And architectures are more than likely to change, from the looks of the cloud buzz.
PS: In any case, I believe C optimizations of the past are still quite valid under modern compilers!
I would stick to the same language you are generating from. You can generate and compile Java code in Java, Python code in Python, C# in C#, and even Lisp in Lisp, etc.
But it is not clear whether such languages are fast enough for you. For top speed I would choose to generate C++ and use GCC for compilation.
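As an illustration of the first point, in Python the generated source can be compiled and loaded in-process without any external toolchain (the function name and namespace dict below are just for the example):

# Generate Python source, compile it, and call the result in-process.
generated_src = """
def fast_transform(x):
    return x * x + 1
"""

namespace = {}
code = compile(generated_src, "<generated>", "exec")  # built-in compile()
exec(code, namespace)

print(namespace["fast_transform"](10))  # -> 101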
Why not use something like SpiderMonkey or Rhino (embeddable JavaScript engines for C++ and Java, respectively)? You can export your objects into JavaScript namespaces, and your users don't have to compile anything.
Embed an interpreter for a language like Lua/Scheme into your program, and generate code in that language.
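A minimal sketch of that approach with Lua's C API (error handling trimmed; the generated chunk is just a stand-in for whatever your generator emits):

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();   /* fresh interpreter */
    luaL_openlibs(L);                 /* standard libraries */

    /* code produced by your generator */
    const char *generated = "local x = 6 * 7\nprint('answer', x)";

    if (luaL_dostring(L, generated) != 0)
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}

Since the generated code is interpreted (or JIT-compiled by LuaJIT), users never need a compiler installed at all.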