From my understanding, with collapse-all-properties in gwt.xml the compiler produces a single permutation covering all browsers, and the resulting files are 15% to 20% larger.
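For reference, this is the directive I mean in the module XML (the surrounding module contents here are made up):

<module>
  <inherits name="com.google.gwt.user.User" />
  <!-- Collapse all deferred-binding properties into one permutation -->
  <collapse-all-properties />
  <entry-point class="com.example.client.MyApp" />
</module>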
Other than the increased file size, are there other reasons why I shouldn't use collapse-all-properties for production?
For example, does it strip browser-dependent logic and CSS, thus causing the app to potentially work and/or look different than when compiled with the default permutations?
In my app, I noticed about a 100KB increase in cache.js and a combined 50KB increase across all deferredJs files with collapse-all-properties.
But when combined with gzip, code splitting, and caching, the benefit of the smaller file size seems trivial compared to the significantly faster compilation time and general ease of use.
This got me wondering whether I could use it for production.
There is no reason you can't use it in production, aside from the points you've already raised. If you expect most of your users to usually arrive with populated caches (the app doesn't change often, and most users bring it up frequently), then you are correct that the size point is less meaningful. There is still a cost to loading a large JS app into memory and building all of the required methods, but I suspect that it is not meaningful compared to loading an extra 100KB from the server.
I don't believe that collapse-all-properties by itself disables your split points (deferredJs files); or perhaps I misunderstood, and you were saying that the split points grew by around 50KB.
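For anyone following along, a split point is just a GWT.runAsync call; the compiler emits each one as a separate deferredJs fragment. A minimal sketch (the view class is made up):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

GWT.runAsync(new RunAsyncCallback() {
    public void onFailure(Throwable reason) {
        Window.alert("Failed to load code fragment");
    }
    public void onSuccess() {
        // Code reachable only from here compiles into its own deferredJs file
        new ReportsView().show();  // hypothetical heavyweight view
    }
});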
If the performance seems acceptable on the lowest power machine you expect your users to be running the app on, I wouldn't be concerned - no need to optimize for cases that don't really apply to you.
I would be somewhat wary of additional locales (especially when different locales use different images) and 'form factor'-based properties (you probably want to keep mobile/tablets fast at the cost of build times). I would also look into disabling unused browsers. While most modern browsers are converging on just a handful of required implementations, older browsers are still around which require extra code and additional ways to handle features like ClientBundle (they can't inline images into a data URL). If you are able to remove those browsers from your application, you may recover a large amount of the increase you are seeing.
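For example, if you can drop old IE entirely, restricting user.agent in the module XML trims those code paths (pick the values that actually match your audience; these two are just an illustration):

<!-- Only compile permutations for WebKit- and Gecko-based browsers -->
<set-property name="user.agent" value="safari,gecko1_8" />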
There has been discussion in the general GWT roadmap of removing permutations entirely, since they are largely a holdover from the time when each browser behaved very differently from others, but while we still have IE8/9 support, it will be difficult. A future modern-browser-only GWT will likely leave permutations behind entirely, and encourage solving problems like locales in a different way.
In trying to get our web app (written in GWT) set up for automated testing, I've read/heard that using ensureDebugId() to set element IDs causes the app to take a performance hit. Of course, setting element attributes would cause a small performance hit for the final application, but does ensureDebugId() really cause noticeable differences in performance? Does it do anything else under the hood besides set element IDs?
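For reference, the kind of usage I mean (the widget and ID are made up):

import com.google.gwt.user.client.ui.Button;

Button saveButton = new Button("Save");
saveButton.ensureDebugId("saveButton");  // renders as id="gwt-debug-saveButton"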
It does not cause any noticeable performance hit in our 100+ screen application.
Do not micro-optimize your application before even writing the code.
Turn on ensureDebugId and profile your application.
Turn off ensureDebugId and profile your application.
If you truly find that it affects performance to an intolerable degree, then make a call.
ensureDebugId will only cause performance issues if you render tons of elements in your UI, and even then I don't think they are going to be noticeable. If you have such a UI, you will probably notice performance problems from other causes first.
Anyway, removing any single unneeded thing from your app will improve it.
I would enable ensureDebugId only for the test environment, so you could have one compilation profile to test your app with Jenkins or whatever continuous-integration software you use, and another profile to produce your deliverable product.
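A sketch of that split, with made-up module names: ensureDebugId calls compile to no-ops unless the Debug module is inherited, so you can leave the calls in your code and inherit Debug only in the module you compile for testing:

<!-- MyAppTest.gwt.xml: inherits the production module and enables debug IDs -->
<module rename-to="myapp">
  <inherits name="com.example.MyApp" />
  <inherits name="com.google.gwt.user.Debug" />
</module>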
This thread in the GWT list could be useful if you have not read it yet.
I am exploring using soft permutations in my GWT build because the total file-system size of the compiled app is important to me (read: the sum of all permutations). Aside from increasing the file size the user has to download and potential runtime performance decreases, are there any other drawbacks to using soft permutations? Any loss of localization functionality (number formatting and the like)?
For clarification, this is what I am calling soft permutations.
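In module XML terms, I mean collapsing several values of a deferred-binding property into one compiled permutation, with the choice made at runtime, e.g. (collapsing user.agent is just an example):

<!-- Fold all user.agent values into a single permutation -->
<collapse-property name="user.agent" values="*" />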
Thanks in advance.
I don't think there are others, except that there might be incompatibilities with existing generators/linkers (I recently proposed a patch to GWT using soft permutations, and it got rolled back at least twice before being revisited with a runtime check but no soft permutation).
See commits r9970 through r10257.
I work in a small iPhone development team; in our office, we have at least 4 copies of Xcode running on the network at any one time. We're contemplating getting distributed builds running across everyone's machines.
We're networked together over standard Wi-Fi, so network speed and latency aren't as good as on a wired network...
Just wondering: is there any real time gain to be had from using distributed builds, once the relevant data has been passed back and forth over the network? At least for relatively small projects.
It depends on your project, its dependencies, and the amount of data that must be transferred.
15-20 seconds is not terrible. Certainly, there is more work overall to be done. It may be a good idea for everyone to farm builds out to a very fast Mac Pro, rather than to each other, if you're all using dual cores (that info was not given).
As far as project configuration goes: if you have a bunch of dependent libraries in your projects, then it may help to disable precompiled headers. Much of the equation is the average number of dependencies and the number of objects to generate.
At 15-20 seconds, it would serve many developers better to optimize their build times before farming builds out. If it were a few minutes, then you may want to jump straight into distributed builds with an 8- or 12-core machine.
One easily overlooked aspect of slow builds on small projects: disable static analysis per build, and just run it manually every hour or two, fixing every issue then.
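Both of those tweaks (no precompiled prefix header, no per-build analysis) can go in an xcconfig file so they're easy to toggle; verify the setting names against your Xcode version:

// Debug.xcconfig (hypothetical file name)
GCC_PRECOMPILE_PREFIX_HEADER = NO
RUN_CLANG_STATIC_ANALYZER = NO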
Otherwise, your project could probably be divided into smaller projects/libraries. Chances are, you won't always be editing the same dependencies.
Assuming compilation, linking, etc. are where the time is spent at this point: much of the rest falls into the typical issues involved with building C and C++ programs. Minimize your dependencies and include graphs. It's actually quite easy to accomplish with Objective-C; since most of the interfaces use Objective-C types, you can use forward declarations.
If your library is small (e.g. fewer than 50 objects generated), then you may also gain a speedup by not using precompiled headers. If everything already depends on your inclusion of 12 system frameworks via the PCH... then try it in the next project.
Of course, you could just try timing a clean rebuild, a build with generated PCH files, and several incremental builds in order to come to a conclusion.
UPDATE: I also found NCache, which seems useful, and I came to know that Stack Overflow uses Redis for caching. I have also come across memcached, which seems like one of the better alternatives.
I have found this, but I need to know the ways in which I can cache some of my LINQ queries and use them efficiently. I found output caching in ASP.NET MVC; are there other ways to do caching?
I am kind of a newbie and have never done caching before, so I would appreciate it if anyone could point me in the right direction. Mainly, I want answers to: when is caching necessary, and how do I go about caching in ASP.NET MVC?
In my experience, application-level caching is rarely the correct approach to fixing performance issues, and it nearly always causes more problems than it solves.
Before you embark on any caching, you should first:
(i) profile your application and the queries it makes to see if you can address them directly. For example, query patterns might be too wide (fetching columns that aren't displayed), too deep (fetching more rows than you display), too frequent (lazy loading might be causing more round trips than you need), or too expensive (poor table design might mean more joins than you need), or the tables themselves might not be indexed correctly;
(ii) take a holistic look at your web site and the user experience to see how you can improve the perceived performance (e.g. set proper browser-level cache headers on static content; see the config sketch below). Using AJAX and a paged grid view like jqGrid might eliminate many database accesses while a user is paging through records, because the rest of the page content is not changing.
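For example, if you happen to be hosting on IIS 7+, far-future cache headers for static files are a one-time config change (the max-age value here is arbitrary):

<!-- web.config: tell browsers to cache static content for 7 days -->
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>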
After you've exhausted fixing the real problem, you may then be ready to consider caching.
But before you do, make a simple calculation: how much does an upgraded server cost vs the development and testing time you will spend implementing caching and tracking down odd stale-cache issues? Sometimes it's cheaper just to upgrade...
I have a stock Mac mini server (2.53GHz, 4GB, 5400RPM drive) which I use for iPhone development. I am working on a smallish app. Compilation seems to be taking longer and longer (now about 20 minutes). The app is about 20 small files totaling 4,000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper will be better.
Sounds like the system is swapping, which is a common problem when compiling C++ that is STL-heavy. GCC can be a complete memory pig in that situation, and two GCCs at once can easily consume all available memory.
Try:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds; depending on your setup, it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a gigabit network, then you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable precompiled headers, and put both the Boost and Cocoa headers in them.
If things are still slow, then it's worth running Activity Monitor to ensure that you don't have some other process interfering, or even Instruments to get more detailed information. You could find that you're trying to compile files on a network drive, for instance, which will certainly slow things down. Also, inside Xcode, try running the option to preprocess a file and check the file contents (available via right-click) to make sure you're not compiling in loads of extra code.
Also, with Boost, try slimming it down to just the bits you need. The bcp tool that comes with the distribution will copy only the packages you ask for, plus all their dependencies, so you're not building stuff you won't use.
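The invocation looks roughly like this (the paths and module names are placeholders):

bcp --boost=/path/to/boost_1_45_0 smart_ptr bind function /path/to/project/boost-subset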
Include all your Boost headers in the precompiled header for your project.
I concur with Eiko. We're compiling projects that are vastly larger (including boost libraries) in much shorter time.
I'd start by checking the logs. You can also view the raw compiler output in the build window (Shift+CMD+B, or in the Build menu -> Build Results). Xcode just makes straight gcc or llvm calls, so this may give you a better idea of what's actually happening during the build process.
There is something messed up. We are compiling bigger projects in fractions of the time.