Tradeoffs for using soft permutations in GWT?

I am exploring using soft permutations in my GWT build because the total file-system size of the compiled app matters to me (read: the sum of all permutations). Aside from increasing the file size the user has to download and potential runtime performance decreases, are there any other drawbacks to using soft permutations? Any loss of localization functionality (number formatting and the like)?
For clarification, this is what I am calling soft permutations.
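For reference, soft permutations are enabled per deferred-binding property with the collapse-property directive in the module's gwt.xml; a minimal sketch (collapsing user.agent is just a common example):

```xml
<!-- Collapse the user.agent permutations into one "soft" permutation:
     the browser-specific implementation is selected at runtime instead
     of producing a separate compiled output per browser. -->
<collapse-property name="user.agent" values="*" />

<!-- Or collapse every property at once: -->
<collapse-all-properties />
```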
Thanks in advance.

I don't think there are other drawbacks, except that there might be incompatibilities with existing generators/linkers (I recently proposed a patch to GWT using soft permutations and it got rolled back at least twice, before being revisited with a runtime check but no soft permutation).
See commits r9970 through r10257.

Related

Evaluate the impact of upgrade SAPUI5 libraries

I am an SAP Fiori developer and have been reading for some days about the best way to plan an upgrade with the maximum level of guarantees, to avoid unexpected errors.
I know that to evaluate the impact of changes on our applications, I have to read the "What's New" notes and do a careful read and analysis of the "Changelog."
My idea is to create a manual step-by-step procedure, so that if we follow all the steps the impact will be evaluated with a very high percentage of coverage. I have assumed that there isn't an automated process for evaluating this; is that true?
We have a table recording which controls and components are used in every application and view/controller, for evaluating the "direct" impact of upgrades.
Table
My doubt is how I can be sure whether a fix could cause wrong behavior. I will explain with an example: "1.71.21 - [FIX] format/NumberFormat: parse special values '00000'". Analyzing it, I know that sap.ui.comp.smarttable.SmartTable uses NumberFormat to display the number of records in the header title, but from the API alone it is impossible to know that. This is only one example; reading the "Changelog," many doubts like this appear, and some are even more complicated to associate with our code.
To give you more info, I have thought about pointing to the CDN with the new version, but that would put us in a scenario where we would have to manually test everything and look for errors, warnings, and wrong behavior.
How do you analyze an upgrade before doing it, and how do you avoid manually testing everything, with the risk of forgetting things? Are you using some tool?
Thanks in advance
Best regards
Upgrading the library carries a risk of introducing defects comparable to that of regular development. Therefore, conventional quality-assurance guidelines apply to an upgrade as well: set up and run your tests, manually or automatically.

Using <collapse-all-properties /> in production?

From my understanding, with collapse-all-properties in gwt.xml, the compiler produces one permutation for all browsers. And the resulting files are 15% to 20% larger.
Other than the increased file size, are there other reasons why I shouldn't use collapse-all-properties for production?
For example, does it strip browser-dependent logic and css, thus causing the app to potentially work and/or look differently than when compiled with default permutations?
In my app, I noticed about a 100 KB size increase in cache.js and a combined 50 KB increase across all deferredJs files with collapse-all-properties.
But when combined with gzip, code splitting, and caching, the benefit of the smaller file size seems trivial compared to the significantly faster compilation time and general ease of use.
Got me wondering if I could use it for production.
There is no reason you can't use it in production, aside from the reasons you've already stated. If you expect most of your users to usually arrive with populated caches (the app doesn't change often, and most users open it frequently), then you are correct that the size point is less meaningful. There is still a cost to loading a large JS app into memory and building all of the required methods, but I suspect it is not meaningful compared to loading an extra 100 KB from the server.
I don't believe that collapse-all-properties by itself disables your split points (the deferredJs files); or perhaps I misunderstood, and you were saying that the split points grew by around 50 KB in total.
If the performance seems acceptable on the lowest power machine you expect your users to be running the app on, I wouldn't be concerned - no need to optimize for cases that don't really apply to you.
I would be somewhat wary of additional locales (especially when different locales use different images) and of 'form factor'-based properties (you probably want to keep mobile/tablet downloads fast at the cost of build times). I would also look into disabling unused browsers: while most modern browsers are converging on just a handful of required implementations, older browsers are still around which require extra code and additional ways to handle features like ClientBundle (they can't inline images into a data URL). If you are able to remove those browsers from your application, you may recover a large amount of the increase you are seeing.
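If you go that route, unused browsers are dropped by redefining the user.agent property in the module file; a sketch (the exact value list is an example and depends on your GWT version):

```xml
<!-- Keep only the WebKit and Gecko permutations; drop the IE ones.
     Valid user.agent values vary by GWT version, so check your release. -->
<set-property name="user.agent" value="safari,gecko1_8" />
```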
There has been discussion in the general GWT roadmap of removing permutations entirely, since they are largely a holdover from the time when each browser behaved very differently from others, but while we still have IE8/9 support, it will be difficult. A future modern-browser-only GWT will likely leave permutations behind entirely, and encourage solving problems like locales in a different way.

GWT dev mode performance problems?

I'm working on a small GWT project in IntelliJ using their built-in support. Dev mode functions, but performance is really spotty, and I can only reload the app a handful of times before getting an OutOfMemoryError (using -Xmx512M).
What should I be able to expect out of dev mode? Do others experience consistent reload times and long running processes?
I'm running GWT 2.2 with IDEA 10.0.3. My app is small, but I do include several other modules like Activity, Place, Resources, Guava Collect + Base, UiBinder, Gin Inject, etc. I believe the performance problems started before many of these dependencies were added, though.
Thanks!
You can try increasing the PermGen memory size via -XX:MaxPermSize=256m; it should help. I had the same problem, analyzed what was being exhausted with VisualVM, and it turned out that PermGen was the problem. Of course, -Xmx also helps.
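As a sketch, the dev-mode VM options (set in the IDEA run configuration) might then look like this; the exact sizes are assumptions to be tuned against what VisualVM shows:

```
# Hypothetical dev-mode VM options: a bigger heap plus a larger PermGen.
# (PermGen and -XX:MaxPermSize only exist on Java 7 and earlier JVMs.)
-Xmx1024m -XX:MaxPermSize=256m
```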

When to use the Xcode Distributed Build Feature

I work in a small iPhone development team; in our office, we have at least 4 copies of Xcode running on the network at any one time, and we are contemplating getting everyone to run it.
We're networked together using a standard Wi-Fi switch, so network speed and latency aren't as good as on a wired network...
Just wondering: is there any real time saving to be had from using distributed builds, once the relevant data has been passed back and forth over the network? At least for relatively small projects.
It depends on your project, its dependencies, and the amount of data that must be transferred.
15-20 seconds is not terrible. Certainly, there is more work overall to be done. It may be a good idea for everyone to farm builds out to a single very fast Mac Pro, rather than to each other, if you're all using dual-core machines (that info was not given).
As far as project configuration goes: if you have a bunch of dependent libraries in your projects, then it may help to disable precompiled headers. Much of the equation is the average number of dependencies and the number of objects to generate.
At 15-20 seconds, it would help many developers to optimize their build times before farming builds out. If builds took a few minutes, then you might want to jump straight into distributed builds with an 8- or 12-core machine.
One easily overlooked aspect of slow builds on small projects: disable static analysis on every build, and just run it manually every hour or two, fixing every issue then.
Otherwise, your project could probably be divided into smaller projects/libraries. Chances are, you won't always be editing the same dependencies.
Assuming compilation, linking, etc. are where the time is spent at this point: much of the rest falls into the typical issues involved with building C and C++ programs. Minimize your dependencies and include graphs. This is actually quite easy to accomplish with Objective-C; since much of an interface uses Objective-C types, you can use forward declarations.
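A minimal sketch of that technique, with hypothetical class names: forward-declare Objective-C types in headers with @class and move the real #import into the .m file, so edits to Engine.h no longer rebuild everything that merely includes Car.h.

```objc
// Car.h -- hypothetical header: no #import of Engine.h needed here.
#import <Foundation/Foundation.h>

@class Engine;  // forward declaration suffices for a pointer-typed property

@interface Car : NSObject
@property (nonatomic, retain) Engine *engine;
@end

// Car.m -- only the implementation pulls in Engine's full interface.
#import "Car.h"
#import "Engine.h"
```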
If your library is small (e.g. fewer than 50 objects generated), then you may also gain a speedup by not using precompiled headers. If everything already depends on your pch including 12 system frameworks... then try it in the next project.
Of course, you could just try timing a clean rebuild, a build with generated pch files, and several incremental builds in order to come to a conclusion.

GCC/Xcode speedup suggestions?

I have a stock Mac mini server (2.53 GHz, 4 GB RAM, 5400 RPM drive) which I use for iPhone development. I am working on a smallish app. Compilation seems to be taking longer and longer (now about 20 minutes). The app is about 20 small files totaling 4000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper will be better.
Sounds like the system is swapping, which is a common problem when compiling STL-heavy C++. GCC can be a complete memory pig in such a situation, and two GCC processes at once can easily consume all memory.
Try:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds; depending on your setup it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a 1 Gb network then you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable precompiled headers, and put both the Boost and Cocoa headers in the prefix header.
If things are still slow then it's worth running activity monitor to ensure that you don't have some other process interfering, or even Instruments to get some more detailed information. You could find that you're trying to compile files on a network drive for instance, which will certainly slow things down. Also inside Xcode try running the option to preprocess a file and see what's in the file contents (available via right-click) to check that you're not compiling in loads of extra code.
Also with Boost, try slimming it down to only the bits you need. The bcp tool that comes with the distribution will copy only the modules you ask for, plus all their dependencies, so you're not building stuff you won't use.
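A sketch of that workflow (the module name and output path here are just examples):

```
# From the Boost source tree: copy shared_ptr plus everything it depends on
# into a slimmed-down subset that the project builds against instead.
bcp shared_ptr /path/to/boost-subset
```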
Include all your boost headers in the precompiled header for your project.
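For instance, a prefix header along these lines (a sketch; which Boost headers to list depends on what the project actually uses):

```objc
// Prefix.pch -- hypothetical prefix header for an Objective-C++/Boost project.
#ifdef __cplusplus
    // Boost headers are expensive to parse; precompile them once here.
    #include <boost/shared_ptr.hpp>
#endif
#ifdef __OBJC__
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>
#endif
```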
I concur with Eiko. We're compiling projects that are vastly larger (including boost libraries) in much shorter time.
I'd start by checking the logs. You can also view the raw compiler output in the build window (Shift+Cmd+B, or Build menu -> Build Results). Xcode just makes straight gcc or llvm calls, so this may give you a better idea of what's actually happening during the build process.
There is something messed up. We are compiling bigger projects in fractions of the time.