I'm new to GWT and have a few questions about memory usage in GWT clients.
Is it possible to detect how much memory is left in the GWT client (browser)?
Is there an event that fires when the browser's memory gets low, as a signal to free resources?
Is there a known approximate value for how much memory can be used in different browsers, especially mobile ones?
Thanks!
I have not come across a browser API to do so. You can approach this problem another way, by designing an app with a low memory footprint. The profiling techniques to achieve a performance-optimal app would be as follows:
Track the memory footprint of the GWT app on a Windows dev machine with a primitive approach: open Task Manager -> Performance -> PF Usage (see the sketch after this list for an in-page alternative).
Use the memory profiler in Chrome:
A. https://developers.google.com/chrome-developer-tools/docs/profiles
B. https://developers.google.com/chrome-developer-tools/docs/memory-analysis-101
Use the memory profiler in Firefox (the JavaScript memory profiler for Firefox).
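If all you need during development is a rough in-page number, Chrome also exposes a non-standard performance.memory object with JS heap statistics. A minimal TypeScript sketch, assuming a Chrome dev build and accepting that other browsers will simply report nothing:

```typescript
// Minimal sketch: log Chrome's non-standard JS heap statistics while profiling.
// performance.memory is Chrome-only and not part of any web standard, so guard for it.
interface ChromeMemoryInfo {
  usedJSHeapSize: number;
  totalJSHeapSize: number;
  jsHeapSizeLimit: number;
}

function logHeapUsage(): void {
  const memory = (performance as any).memory as ChromeMemoryInfo | undefined;
  if (!memory) {
    console.log('performance.memory is not available in this browser');
    return;
  }
  const toMB = (bytes: number) => (bytes / (1024 * 1024)).toFixed(1);
  console.log(
    `JS heap: ${toMB(memory.usedJSHeapSize)} MB used / ` +
    `${toMB(memory.totalJSHeapSize)} MB allocated ` +
    `(limit ${toMB(memory.jsHeapSizeLimit)} MB)`
  );
}

// Poll every 10 seconds while exercising the app.
setInterval(logHeapUsage, 10000);
```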
Your GWT code is compiled and sent to the client as JavaScript. JS is constrained in a sandbox and doesn't provide a way to find out how much memory the client browser is using.
But you can apply a trick: estimate the weight of the page content and measure its load speed. That gives you a very rough evaluation of the client browser's performance.
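A minimal sketch of that trick, using the standard Navigation Timing and Resource Timing APIs (the 5-second and 2 MB thresholds below are arbitrary illustration values, not recommendations):

```typescript
// Minimal sketch: estimate page weight and load speed from the browser's timing APIs.
function roughClientBenchmark(): void {
  // Total bytes transferred for all resources captured so far
  // (transferSize is 0 for cross-origin resources without Timing-Allow-Origin).
  const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
  const totalBytes = resources.reduce((sum, r) => sum + (r.transferSize || 0), 0);

  // Time from navigation start until the load event finished.
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  const loadMs = nav ? nav.loadEventEnd - nav.startTime : 0;

  console.log(`~${(totalBytes / 1024).toFixed(0)} KB transferred, load took ~${loadMs.toFixed(0)} ms`);

  // Very rough heuristic: a slow load for a modest payload hints at a weak client.
  if (loadMs > 5000 && totalBytes < 2 * 1024 * 1024) {
    console.log('Client looks slow; consider freeing caches or loading fewer features.');
  }
}

// Run after the load event so the navigation entry is complete.
window.addEventListener('load', () => setTimeout(roughClientBenchmark, 0));
```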
I have a webpage I'm trying to debug in Chrome v67.0.3396.62. Intermittently the page becomes unresponsive, and both Windows Task Manager and Chrome's built-in Task Manager show 100% CPU usage (on a single core).
This problem does not happen in any reliable or reproducible way. I can go days without a problem, then have issues for hours and have to end the tab process to regain use of it every time the page loads.
I have managed to capture a performance profile using the Performance tab in Chrome DevTools during a time when the problem was occurring. However, the profile shows the process as mostly idle.
What could cause 100% CPU usage that is not captured by DevTools performance profiling?
(I have already pored over my own code for infinite loops/recursion, but haven't found anything suspect. I am using jQuery, Bootstrap and Popper, but they're all being served from reputable CDNs and match their integrity hashes, so I'm at a loss how to debug any further. Any suggestions would be much appreciated.)
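One hedged suggestion for a case like this: the Long Tasks API in Chromium can record main-thread tasks longer than 50 ms even while no profile is being recorded, so an observer left running can leave a trace to inspect the next time the tab locks up. A minimal sketch, assuming a Chromium browser that supports the 'longtask' entry type:

```typescript
// Minimal sketch: log long main-thread tasks (>50 ms) so that intermittent
// CPU spikes leave a trace even when no profile is being recorded.
const longTaskLog: { start: number; duration: number }[] = [];

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    longTaskLog.push({ start: entry.startTime, duration: entry.duration });
    if (entry.duration > 1000) {
      // A task this long will make the tab feel frozen.
      console.warn(`Long task: ${entry.duration.toFixed(0)} ms at t=${entry.startTime.toFixed(0)} ms`);
    }
  }
});

// 'longtask' entries are only reported by Chromium-based browsers.
observer.observe({ entryTypes: ['longtask'] });
```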
In the Chrome browser's developer tools, there are various ways to throttle your network connection to emulate different connection types.
Those menus for selecting a connection type used to show the speeds and latency that would be used to simulate each connection type.
Now, as of at least Chrome 64, the useful information about speed and latency has been removed.
I tried duplicating the numbers from the first image for Regular 3G and Good 3G in my own custom profiles, to see if they matched the Slow 3G and Fast 3G presets, but I got significantly different results for the DOMContentLoaded and Load event times between the presets and my custom profiles.
DevTools tech writer and developer advocate here. The history behind the change is that DevTools now tries to emulate the real conditions of what a fast 3G network really feels like. Previously, DevTools showed you upload / download / RTT values, as you have shown in your screenshot of the old UI. But those values were misleading. They may be technically correct, but when DevTools was benchmarked against other throttling tools, DevTools didn't throttle enough. E.g. if you loaded a page with tool A that throttles for 3G, and then loaded that same page with DevTools (also throttling for 3G), the page loaded faster with DevTools.
So DevTools doesn't show the exact values anymore, but if you measure the load performance of DevTools against other throttling tools, you can see that they all perform similarly now.
The reason that DevTools doesn't show values anymore is that they don't map well to reality. For example, maybe you look up that a certain connection speed is defined as X download rate, Y upload rate, and Z RTT. So you put those values into DevTools. Those values aren't going to approximate the real-world conditions well; DevTools is going to load faster than the real-world experience. It's better to benchmark how a certain page really loads on that connection, and then tweak the input values until DevTools loads your benchmark page in around the same amount of time as your real-world benchmark.
Of course, another approach would be to get a Chrome engineer to tweak Chrome's throttling engine so that the values you input actually do map to reality well. But for whatever reason, that's not happening.
Since it's possible to add custom throttles, I'm aware that we need to update the DevTools UI to explain this limitation. In other words, when you create custom throttles, you should benchmark a page and then tweak DevTools inputs until it matches the benchmark, rather than relying on the values.
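One way to make that benchmarking repeatable is to drive Chrome from a script and apply candidate throttle values through the DevTools Protocol (the same mechanism the Network panel uses), then compare the measured load time against a real device. A rough Puppeteer sketch; the throughput and latency numbers are placeholders to tweak, not recommended values:

```typescript
import puppeteer from 'puppeteer';

// Rough sketch: apply candidate throttle values via the DevTools Protocol
// and measure how long the page takes to fire its load event.
async function measureLoad(url: string, downKbps: number, upKbps: number, latencyMs: number): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const client = await page.target().createCDPSession();
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: latencyMs,                        // extra round-trip time in ms
    downloadThroughput: (downKbps * 1024) / 8, // bytes per second
    uploadThroughput: (upKbps * 1024) / 8,
  });

  const start = Date.now();
  await page.goto(url, { waitUntil: 'load' });
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}

// Placeholder values: tweak them until the measured time matches how the page
// actually loads on the real connection you are trying to emulate.
measureLoad('https://example.com', 400, 400, 400).then((ms) =>
  console.log(`Loaded in ${ms} ms under the candidate throttle`)
);
```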
Hope that makes sense.
Did a simple Google search:
https://www.google.com/search?q=advantages+of+ionic+lazy+loading
And couldn't really find a detailed description of the advantages of lazy loading. Anyone care to explain?
Long story short: (startup)-performance!
The underlying problem:
When you do a cold start of your app (no resume), the WebView engine needs to load, parse and interpret a lot of JavaScript before the app becomes usable. Top high-end devices are mostly capable of doing so in a somewhat acceptable timeframe, but on hardware that is a few years old, or simply not equipped with enough CPU power, this may take a while.
Another problem (especially when developing PWAs) is network speed. With WiFi or 4G it is no problem at all (though still far from ideal!) to quickly download a few MB of JavaScript, but on a slow 3G connection you can go and drink a coffee while waiting for your app to become interactive.
Lazy-loading to the rescue!
So how can we make the app interactive faster with less work up front? We split our heavy main bundle into many smaller bundles. When we start the app, only the bare minimum of JavaScript needed for the first page has to be fetched and parsed. Every time we need a specific feature (a page), we load it just in time (lazy) instead of ahead of time (eager). By always fetching only a small chunk of JavaScript when it is needed, the performance gain will be huge on some devices and definitely noticeable on every device.
If you implement lazy loading in Ionic 3, your code also becomes more modular and maintainable: you create a self-contained Angular module for every IonicPage, and by pushing a string onto the nav stack instead of an actual page instance you remove a lot of dependencies (imports) from your code.
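A minimal Ionic 3 sketch of what that looks like (the page and module names are made up for illustration):

```typescript
// details.ts -- the lazily loaded page, marked with @IonicPage()
import { Component } from '@angular/core';
import { IonicPage, NavController } from 'ionic-angular';

@IonicPage()
@Component({
  selector: 'page-details',
  templateUrl: 'details.html',
})
export class DetailsPage {
  constructor(public navCtrl: NavController) {}
}
```

```typescript
// details.module.ts -- a self-contained Angular module, compiled into its own chunk
import { NgModule } from '@angular/core';
import { IonicPageModule } from 'ionic-angular';
import { DetailsPage } from './details';

@NgModule({
  declarations: [DetailsPage],
  imports: [IonicPageModule.forChild(DetailsPage)],
})
export class DetailsPageModule {}
```

Elsewhere in the app you then push the page by its string name, so nothing needs to import DetailsPage eagerly:

```typescript
// No import of DetailsPage here -- the string is resolved to the lazy module at runtime.
this.navCtrl.push('DetailsPage');
```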
I am using the Allocations instrument to monitor my app's memory. The VM Tracker statistics puzzle me: why is there so much dirty memory (for my app it reaches 32 MB)? I googled this and learned that dirty memory should be cleared first when the app receives a memory warning in background mode.
Could you tell me what the VM Tracker statistics mean? And how can I handle the dirty regions such as VM_ALLOCATE and Core Animation?
Thanks in advance!
There is often very little you can do about VM usage directly; much of that will be due to use of various system APIs, etc...
Your time will be far more productively spent by focusing on the objects in the Allocations instrument itself, working towards both eliminating any leaks (and accretion, too, not just leaks) and reducing allocation bandwidth.
What are the best practices, tricks, and tutorials for using Xcode's performance tools, such as the Leak Monitor and the CPU sampler, for someone trying to debug and enhance the performance of an iPhone application?
Thanks!
It depends entirely on the application and on what you are trying to do. Are you trying to optimize the whole application or are you focused on a particular problem area? Are you trying to reduce memory usage, reduce CPU usage, and/or make the app more responsive?
Before you start the performance analysis, use the static analyzer to analyze your code. It will often find memory management problems that would lead to leaks that would cause your app to potentially crash on the device.
Once all of the analyzer-identified problems have been fixed, the best approach is to start by identifying perceived performance problems; that is, focus on performance problems that the user would notice, then analyze those. If you can get away with it, do the analysis with the app running in the simulator, as the turnaround time is faster.
If the problem is one of bloat, use Object Alloc and Leaks to figure out why.
If it is one of laggy/sluggish behavior, use the CPU tools to figure out where the cycles are going. Keep in mind, though, that sluggish behavior may not be because of CPU usage, but may be because the main event loop is blocked by something, most likely incorrect concurrency patterns. In that case, you'll see all samples on the main thread in some kind of a lock or wait function.
Beyond that, you'll need to identify specific scenarios to yield specific answers.
Use Instruments, and within it use:
Object Allocations
Activity Monitor
Leaks
Memory Monitor
and test your app.