I was wondering if there is an 'easy' way to see the memory footprint of the objects created by JSF. For instance, I have some @SessionScoped objects and some @ViewScoped objects when going to a certain page.
I would like to know how many KB (or MB) they are using. This way, we can make an estimate of the per-user memory footprint of the JSF application.
I am using Eclipse and EAP 7 together with JSF 2.3. I tried using jvisualvm, but no specific class information or in-memory sizes were available. I remember that a long time ago we had some tool to visualize this kind of information.
Any ideas on how to find out? I guess some Eclipse plugins could work, but I am totally new to this area and have no clue which ones are best...
I personally use JVisualVM, which is found in the /bin folder of every JDK installation. You can attach it to your running process and watch the memory and the objects and their sizes.
There is no magic bullet for profiling. Say you have a SessionScoped bean named FooBean: I open up VisualVM, go to the Classes tab, and filter by FooBean. I then use my load testing tool to simulate real-world use and I monitor: how many instances of FooBean are there? How much heap are those instances taking? Are they being garbage collected? And so on.
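For reference, here is a minimal sketch of the kind of bean you would filter on; the name FooBean and its field are placeholders, assuming CDI-managed beans as is usual with JSF 2.3:

import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Named;

// Hypothetical session-scoped bean: filter VisualVM's Classes view by "FooBean"
// to see how many instances exist and how much heap they retain across sessions.
@Named
@SessionScoped
public class FooBean implements Serializable {

    private static final long serialVersionUID = 1L;

    // Per-user state kept for the whole session; large collections here are
    // multiplied by the number of concurrent sessions.
    private String lastVisitedPage;

    public String getLastVisitedPage() {
        return lastVisitedPage;
    }

    public void setLastVisitedPage(String lastVisitedPage) {
        this.lastVisitedPage = lastVisitedPage;
    }
}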
It takes a little bit of learning but you will get better at it the more you use it.
I would like to ask when to use ECS. My logic tells me that I should only use it when I need to control a large number of objects. Am I right, or should I use it everywhere?
My logic tells me that I should only use it when I need to control a large number of objects.
Not really, but it does help when used to control many objects. That is not the only or main purpose of ECS, though. See the list of reasons to use ECS below:
Performance
Memory Management
Build Size
It's not only for when you are working with many objects; it also helps manage memory. ECS reduces the amount of memory required compared to Unity's classic component system. It also reduces the build size: ECS makes it possible to treat Unity's APIs as modules, and you can decide which modules to include in the build, thereby reducing the project size. For example, you can remove the physics system if you don't need it in your project.
Finally, ECS is really important when it comes to building lightweight projects. For example, if you want to build a lightweight game or interactive program that runs on smaller devices as an ad, ECS should be used, since it will reduce the ad's size, loading time, and memory footprint. You should watch this video and visit Unity's ECS page to learn more about it.
So, when should you use ECS?
When you need performance, want to conserve memory, want to reduce the build size of your project, or want to create a lightweight program. Of course, there are many other uses for it. It's still new, and we will find out more about it very soon.
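To make the performance and memory points concrete, here is a minimal sketch of the underlying entity/component/system data layout; it is written in Java purely for illustration, and Unity's actual ECS API (in C#) looks quite different:

// Minimal ECS-style sketch: entities are just indices, components are plain
// data stored in tightly packed parallel arrays, and a "system" is a loop
// over those arrays. This layout is what gives ECS its cache-friendliness
// and low per-object memory overhead compared to one heap object per entity.
public class MovementDemo {

    static final int ENTITY_COUNT = 10_000;

    // Component data: one packed array per field instead of one object per entity.
    static final float[] posX = new float[ENTITY_COUNT];
    static final float[] posY = new float[ENTITY_COUNT];
    static final float[] velX = new float[ENTITY_COUNT];
    static final float[] velY = new float[ENTITY_COUNT];

    // A "system": iterates linearly over the component arrays once per frame.
    static void movementSystem(float deltaTime) {
        for (int entity = 0; entity < ENTITY_COUNT; entity++) {
            posX[entity] += velX[entity] * deltaTime;
            posY[entity] += velY[entity] * deltaTime;
        }
    }

    public static void main(String[] args) {
        velX[0] = 1.5f;           // give entity 0 some velocity
        movementSystem(0.016f);   // simulate one ~60 fps frame
        System.out.println("entity 0 x = " + posX[0]);
    }
}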
I'm trying to find a way to get rid of redundant compilation and JS in a client's GWT code. The problem is that they have a site with multiple EntryPoints and a massive model that gets compiled for every module. We're talking about 30 GWT modules and entry points, each compiling the entire model package of the app separately. It takes about 15 minutes on my 8-core monster just to GWT-compile this beast. And yes, compilation is parallelized and uses all cores (I can hardly move my mouse in Ubuntu :) )
Changing the architecture to a single module is not really an option, I think. Is there no way to have inherits shared between modules? Not all of the modules are necessarily that big, but the problem, again, is that all inherits are compiled redundantly for each module. This of course also has negative effects for the end user, since every page basically has to load the entire model JS again and again.
According to
http://www.gwtproject.org/doc/latest/DevGuideOrganizingProjects.html#DevGuideModuleXml
the suggestion still seems to be to just make one great monolithic module. Isn't there any better way?
Any tips highly appreciated!
As stated in the GWT documentation you refer to, GWT's mechanism for avoiding redundant code is to merge all modules into a single super GWT module which inherits all the sub-modules you have in your application.
I suppose you are producing a module per page or feature of your website, so using a single module, as I said, implies that you will need a mechanism to run the appropriate application code per page, based on the URL or something similar.
You can take advantage of code splitting, so your modules become RunAsyncCallbacks instead of EntryPoints, and each module is compiled into one JS fragment which is loaded asynchronously.
Note that you will include the same initial JavaScript fragment in all pages, and it will load the other fragments depending on the page.
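As a rough sketch of what one former entry point could look like as a split point inside the single module (the class and page names here are invented, not taken from your project):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.RootPanel;

public class PageDispatcher {

    // Hypothetical dispatcher living in the single EntryPoint: each former
    // module becomes a split point, so its code is only downloaded on demand.
    public static void loadOrdersPage() {
        GWT.runAsync(new RunAsyncCallback() {
            @Override
            public void onFailure(Throwable reason) {
                Window.alert("Failed to load this part of the app: " + reason);
            }

            @Override
            public void onSuccess() {
                // Everything reachable only from here is compiled into its own
                // JS fragment; in a real app you would build the page's widgets here.
                RootPanel.get().add(new Label("Orders page loaded"));
            }
        });
    }
}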
The advantages of this solution are many:
You only have one compilation process. It may take a long time, but certainly much less than compiling all modules individually, because the redundant code is compiled only once.
You can maintain different .gwt.xml files: one per individual module with its own EntryPoint, so you can keep developing them separately, and another one without an EntryPoint which will be inherited by your super module.
Once compiled, the first fragment loaded (shared by all apps) is very small and is cached just once, so all apps load very fast.
Much of the code shared by the modules (gwt-core, JRE emulation, etc.) can go into the first fragment and be shared by all the modules, decreasing the final downloaded size of each app.
This is an out-of-the-box solution: the GWT compiler does a good job splitting the code, merging shared code into intermediate fragments, and adding the methods to load fragments asynchronously on demand.
The Java ecosystem facilitates modular apps (dependencies, Maven, etc.).
Otherwise, if you still want individual modules, the way to compile them all is what you are already doing: executing the GWT compiler once per module (and permutation). You can improve your compilation time, though, by using a continuous integration cluster like Jenkins and running jobs in parallel, or by applying more brute force (memory, CPU, ...).
As you probably know, GWT compiles each module into one big JavaScript file and optimizes everything based on all available information about the whole module. This is why everything needs to be compiled for each module.
One solution might be to create one big module, but use code splitting along the lines of the existing module structure. Then you don't get one very large monolithic JavaScript file; instead, the 'modules' are loaded as needed.
Did you try compiling with fewer local workers, instead of using all available cores? I've had the best results with -localWorkers set to 4 (even on a 6-core machine).
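If you invoke the compiler directly, the flag is passed like this (the classpath and module name below are placeholders):

java -Xmx2g -cp src:gwt-user.jar:gwt-dev.jar com.google.gwt.dev.Compiler -localWorkers 4 com.example.MyModule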
Can anyone please explain the possible ways memory leaks can happen while using GWT, in development mode as well as in production?
I am referring to the following question:
How to resolve a memory leak in GWT?
I found the link below in one of its answers:
https://developers.google.com/web-toolkit/articles/dom_events_memory_leaks_and_you
These mostly deal with widget creation and browser events. Are there any other possible black holes where memory leaks can happen, for example while doing RPCs, using lots of rendering methods, etc.?
The link you listed talks about how GWT avoids the kinds of memory leaks that browsers are known for: cases where nothing anywhere is referencing the DOM element or the widget, but the browser still won't free it from memory. The only case you need to be careful about is when you create your own widgets that hold other widgets; in that situation you need to make sure the attach and detach methods are invoked on each child. Sticking with known-good containers and panels, and never invoking attach or detach methods directly, will totally save you from this.
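As a hedged sketch of that safe pattern, a custom widget built on Composite and a standard panel lets GWT handle child attach/detach for you (the class and widget names are invented):

import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.VerticalPanel;

// Hypothetical custom widget: by wrapping a standard panel with Composite,
// the children are attached and detached by the panel itself, so there is no
// need to call onAttach()/onDetach() manually and no leak from forgetting to.
public class UserCard extends Composite {

    public UserCard(String userName) {
        VerticalPanel panel = new VerticalPanel();
        panel.add(new Label(userName));
        panel.add(new Button("Details"));
        initWidget(panel); // Composite takes ownership of the panel
    }
}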
Beyond that, memory leaks are just like in every other kind of development: you need to be careful not to keep references to things you aren't using any more. I disagree with some of the answers in the linked question: Java memory leak tools aren't great at helping you track those objects in dev mode, since they think in terms of the JVM and Java objects, not browsers and JavaScript.
Instead, compile with the output style set to PRETTY, and use a tool like Chrome's inspector. That can be used to look at the objects that are consuming memory and to tell what is holding on to them. The strategy is the same as with any other heap analysis tool (JProfiler, VisualVM, etc.). And the standard rules for writing code apply: if you are holding on to an object you don't need, null it out or remove it from the collection that is holding it. If you still need it, keep it, and instead have something higher up let go of it.
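For reference, the output style is a plain compiler flag; a hypothetical invocation (the classpath and module name are placeholders) would look like:

java -cp src:gwt-user.jar:gwt-dev.jar com.google.gwt.dev.Compiler -style PRETTY com.example.MyModule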
I'm working on a small GWT project in IntelliJ using its built-in support. Dev mode works, but the performance is really spotty, and I can only reload the app a handful of times before getting an OutOfMemoryError (using -Xmx512M).
What should I be able to expect out of dev mode? Do others experience consistent reload times and long-running processes?
I'm running GWT 2.2 with IDEA 10.0.3. My app is small, but I do include several other modules like Activity, Place, Resources, Guava Collect + Base, UiBinder, Gin Inject, etc. I believe the performance problems started before many of these dependencies were added, though.
Thanks!
You can try increasing the PermGen memory size via -XX:MaxPermSize=256m. It should help. I had the same problem, analyzed what was being exhausted with VisualVM, and it turned out that PermGen was the culprit. Of course, -Xmx also helps.
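For example, the VM options of the dev mode run configuration could look like this (the sizes are just a starting point, tune them for your machine):

-Xmx1024m -XX:MaxPermSize=256m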
I have a stock Mac mini server (2.53 GHz, 4 GB, 5400 RPM drive) which I use for iPhone development. I am working on a smallish app. Compilation seems to be taking longer and longer (now about 20 minutes). The app is about 20 small files totaling 4000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper is better.
It sounds like the system is swapping, which is a common problem when compiling STL-heavy C++. GCC can be a complete memory pig in such a situation, and two GCCs at once can easily consume all memory.
Try:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds; depending on your setup it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a 1 Gb network then you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable precompiled headers, and put both the Boost and Cocoa headers in them.
If things are still slow, then it's worth running Activity Monitor to make sure that you don't have some other process interfering, or even Instruments to get more detailed information. You could find that you're compiling files on a network drive, for instance, which will certainly slow things down. Also, inside Xcode, try the option to preprocess a file and look at the resulting file contents (available via right-click) to check that you're not compiling in loads of extra code.
Also, with Boost, try slimming it down to only the bits you need. The bcp tool that comes with the distribution will copy only the packages you ask for, plus all their dependencies, so you're not building stuff you won't use.
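For example, to copy just a couple of headers you actually use (the header names and target directory below are only illustrative) into a local Boost subset:

bcp boost/shared_ptr.hpp boost/bind.hpp ../boost-subset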
Include all your Boost headers in the precompiled header for your project.
I concur with Eiko. We're compiling vastly larger projects (including Boost libraries) in much less time.
I'd start by checking the logs. You can also view the raw compiler output in the build window (Shift+Cmd+B, or in the Build menu -> Build Results). Xcode just makes straight GCC or LLVM calls, so this may give you a better idea of what's actually happening during the build process.
Something is messed up. We are compiling bigger projects in a fraction of the time.