What does Fast 3G actually mean? - google-chrome-devtools

In the Chrome browser's developer tools, there are various ways to throttle your network connection to emulate different connection types.
Those menus for selecting a connection type used to show the speeds and latency that would be used to simulate each connection type.
Now, as of at least Chrome 64, the useful information about speed and latency has been removed.
I tried duplicating the numbers shown in the first image for Regular 3G and Good 3G in my own custom profiles, to see whether they matched the Slow 3G and Fast 3G presets, but I got significantly different DOMContentLoaded and Load event times between the presets and my custom profiles.

DevTools tech writer and developer advocate here. The history behind the change is that DevTools now tries to emulate what a fast 3G network actually feels like. Previously, DevTools showed you upload / download / RTT values, as you have shown in your screenshot of the old UI. But those values were misleading. They may have been technically correct, but when DevTools was benchmarked against other throttling tools, it didn't throttle enough. For example, if you loaded a page with tool A throttling for 3G, and then loaded the same page with DevTools (also throttling for 3G), the page loaded faster with DevTools.
So DevTools doesn't show the exact values anymore, but if you measure the load performance of DevTools against other throttling tools, you can see that they all perform similarly now.
The reason that DevTools doesn't show values anymore is that they don't map well to reality. For example, maybe you look up that a certain connection type is defined as X download rate, Y upload rate, and Z RTT. So you put those values into DevTools. Those values aren't going to approximate the real-world conditions well; DevTools is going to load faster than the real-world experience. It's better to benchmark how a certain page really loads on that connection, and then tweak the input values until DevTools loads your benchmark page in around the same amount of time as your real-world benchmark.
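To make that workflow concrete, here's a rough sketch (assuming Puppeteer; the latency and throughput numbers are placeholders, not recommended values) that applies custom throttling through the same Chrome DevTools Protocol command DevTools uses, and times a benchmark page so you can tune the numbers until they agree with a real-device run:

```typescript
// Rough sketch: drive Chrome with Puppeteer, apply custom throttling via
// the CDP command that underlies DevTools' network throttling, and time
// the load so the values can be tuned against a real-world benchmark.
import puppeteer from 'puppeteer';

async function measureThrottledLoad(url: string): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const client = await page.target().createCDPSession();
  await client.send('Network.enable');
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 150,                                 // added RTT in ms (placeholder)
    downloadThroughput: (1.6 * 1024 * 1024) / 8,  // bytes per second (placeholder)
    uploadThroughput: (750 * 1024) / 8,           // bytes per second (placeholder)
  });

  const start = Date.now();
  await page.goto(url, { waitUntil: 'load' });
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}

measureThrottledLoad('https://example.com')
  .then((ms) => console.log(`Load event after ${ms} ms`));
```

The idea is the same whether you use a script like this or a custom profile in the DevTools UI: measure the page on the real connection first, then adjust latency and throughput until the two load times roughly line up.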
Of course, another approach would be to get a Chrome engineer to tweak Chrome's throttling engine so that the values you input actually do map to reality well. But for whatever reason, that's not happening.
Since it's possible to add custom throttles, I'm aware that we need to update the DevTools UI to explain this limitation. In other words, when you create custom throttles, you should benchmark a page and then tweak the DevTools inputs until they match the benchmark, rather than relying on the nominal values.
Hope that makes sense.

Related

How to do CPU profiling / Performance profiling for startup phase of Flutter?

My Flutter application is a bit slow when booting, so I need to know what is happening during startup. More specifically, I want to use the Performance and CPU Profiler tabs of Flutter DevTools.
However, to use those DevTools tabs I need to open the DevTools page, click "record", and then click "start recording" / "refresh". I am not quick enough (and IMHO it is also not a good idea) to do these steps manually while the app is starting up, in order to capture information about startup.
So I wonder: what should I do? Thanks!
P.S. This does not work, because it does not talk about CPU profiling. This also does not work, because it is not a "real" restart and much state is not re-initialized, etc.
P.S. A sample screenshot of Performance and CPU Profiler tab:

Advantages of Ionic lazy loading

Did a simple Google search:
https://www.google.com/search?q=advantages+of+ionic+lazy+loading
And couldn't really find a detailed description of the advantages of lazy loading. Anyone care to explain?
Long story short: (startup)-performance!
The underlying problem:
When you do a cold start of your app (no resume), the webview engine needs to load, parse and interpret a lot of JavaScript before the app becomes usable. Top high-end devices are mostly capable of doing this in a reasonably acceptable timeframe, but on hardware that is a few years old, or simply not equipped with enough CPU power, this can take a while.
Another problem (especially when developing PWAs) is network speed. With WiFi or 4G it is no problem at all (though still far from ideal!) to quickly download a few MB of JavaScript, but on a slow 3G connection you can go and drink a coffee while waiting for your app to become interactive.
Lazy-loading to the rescue!
So how can we minimize the work needed to make the app interactive faster? We split our heavy main bundle into many smaller bundles. When we start the app, only the bare minimum of JavaScript needed for the first page has to be fetched and parsed. Every time we need a specific feature (a page), we do the loading just in time (lazy) instead of ahead of time (eager). By always fetching just a small chunk of JavaScript when needed, the performance gain will be huge on some devices and definitely noticeable on every device.
If you implement lazy loading in Ionic 3, your code also becomes more modular and maintainable: you create a self-contained Angular module for every IonicPage, and by pushing a string onto the nav stack instead of an actual page instance, you remove a lot of dependencies (imports) from your code.
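For illustration, here is roughly what that looks like in Ionic 3 (a minimal sketch; the AboutPage name and file layout are hypothetical):

```typescript
// about.ts - the lazily loaded page declares itself with @IonicPage.
import { Component } from '@angular/core';
import { IonicPage } from 'ionic-angular';

@IonicPage()
@Component({
  selector: 'page-about',
  templateUrl: 'about.html',
})
export class AboutPage {}

// about.module.ts - a self-contained Angular module per page, which the
// build emits as a separate chunk that is only fetched when the page opens.
import { NgModule } from '@angular/core';
import { IonicPageModule } from 'ionic-angular';
import { AboutPage } from './about';

@NgModule({
  declarations: [AboutPage],
  imports: [IonicPageModule.forChild(AboutPage)],
})
export class AboutPageModule {}

// Anywhere else in the app: navigate by string, so the caller never has
// to import AboutPage (and therefore never pulls its chunk in eagerly).
// this.navCtrl.push('AboutPage');
```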

Chrome DevTools Network Waterfall - gaps between requests?

I've been doing some refactoring on a slow-running web application, and managed to reduce the number of requests and the size of the downloads to help improve the situation. Now the loading time is consistently shorter. However, where previously there was hardly any gap before the last 2 requests, there now consistently is one.
Q1: What do these 'gaps' indicate in Chrome Network view?
Q2: Looking at the screenshots, the DOMContentLoaded time vs. the overall Finish time, are there any conclusions I can draw that could help me optimise further?
Record the page load in the Performance panel. See Get Started With Analyzing Runtime Performance to get the gist of how to use the panel. Understanding the network bottleneck can also help get you oriented.
However, to record the page load performance you'll want to press the Reload page button (like Sam does in the "understanding the network bottleneck" video) rather than the Record button that "get started with analyzing runtime performance" instructs you to use.
Once you've got a recording, the Main section shows you all of the main thread activity that occurs while the page is loading. The Network section shows you all of the Network requests. You'll probably be able to visually verify that there's a bunch of JavaScript work going on during the gap that you're seeing in your screenshots.
If it's still not clear to you, post a screenshot of your Performance panel recording and I'll help you decode the results.
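If you'd rather capture the same page-load recording from a script instead of clicking Reload in the panel, here's a rough sketch (assuming Puppeteer; the URL and output file name are placeholders). The resulting trace.json can be loaded into the Performance panel:

```typescript
// Rough sketch: record a page-load trace from a script; load trace.json
// into the DevTools Performance panel to see main-thread (JavaScript)
// activity lining up with the gap in the network waterfall.
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.tracing.start({ path: 'trace.json', screenshots: true });
  await page.goto('https://example.com', { waitUntil: 'load' });
  await page.tracing.stop();

  await browser.close();
})();
```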

Memory usage in gwt

I'm new to GWT and have a few questions about memory usage in GWT.
Is it possible to detect, how much memory is left in the gwt client (browser)?
Is there an event if the browsers memory gets low, as a signal to free resources?
Is there a known approximate value for how much memory can be used in different browsers, especially mobile ones?
Tnx
I have not come across a browser API to do so. You can approach this problem another way, by designing an app with a low memory footprint. The profiling techniques to achieve a performance-optimal app are as follows:
Track the memory footprint of the GWT app during development on Windows with a primitive approach: open Task Manager -> Performance -> PF Usage.
Use the memory profiler from Chrome
A. https://developers.google.com/chrome-developer-tools/docs/profiles
B. https://developers.google.com/chrome-developer-tools/docs/memory-analysis-101
Use the memory profiler from Firefox - JavaScript memory profiler for Firefox
Your GWT code will be compiled and sent to the client as JavaScript code. JS is constrained in a sandbox and doesn't provide a way to find out how much memory the client browser is using.
But you can apply a trick: estimate the weight of the page's content and measure its load speed, which gives you a very rough evaluation of the client browser's performance.
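As a rough sketch of that trick, using the standard Resource Timing and Navigation Timing APIs from the compiled page (transferSize isn't reported by every browser, so treat 0 as "unknown"):

```typescript
// Very rough client-side estimate of page weight and load speed.
function estimatePageWeight(): { totalBytes: number; loadMs: number } {
  const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
  const totalBytes = resources.reduce((sum, r) => sum + (r.transferSize || 0), 0);

  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  const loadMs = nav ? nav.loadEventEnd - nav.startTime : performance.now();

  return { totalBytes, loadMs };
}

const { totalBytes, loadMs } = estimatePageWeight();
console.log(`~${(totalBytes / 1024).toFixed(0)} KB transferred in ~${loadMs.toFixed(0)} ms`);
```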

High latency in an iPhone mmorpg

Right now I'm trying to make an MMORPG for the iPhone. I have it set up so that the iPhone requests the player positions several times a second. The client sends a request, using an asynchronous NSURLConnection, to a PHP page that loads the positions from a MySQL database and returns them as JSON. However, it takes about 0.5 seconds from when the positions are requested to when they actually get loaded. This seems really high; are there any obvious things that could cause this?
This also makes the player movement on the client really choppy. Are there any algorithms or ways to reduce the choppiness of the player movement?
Start measuring how long the database query takes when you run it outside your iPhone.
Then measure how long it takes when you send the same HTTP request from something other than your iPhone (a 10-15 line C# program, for example, is enough to figure this out).
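For example, a sketch of that kind of standalone check (in TypeScript on a recent Node with built-in fetch, rather than C#; the URL is a placeholder for your positions endpoint):

```typescript
// Rough sketch: time the same position request from a desktop machine,
// outside the phone, to separate server/network latency from client cost.
const url = 'https://example.com/positions.php?player=42'; // placeholder endpoint

async function timeRequest(): Promise<number> {
  const start = Date.now();
  const res = await fetch(url);
  await res.text(); // make sure the whole body has arrived
  return Date.now() - start;
}

(async () => {
  const samples: number[] = [];
  for (let i = 0; i < 20; i++) {
    samples.push(await timeRequest());
  }
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  console.log(`average ${avg.toFixed(0)} ms over ${samples.length} requests`);
})();
```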
If none of the above show any sort of significant latency, the improvements need to be done on the iPhone side. Some things to look out for:
GPRS/3G has rather high latency
GPRS/3G has rather high bit error rates - meaning there's going to be quite a few dropped packets now and then which will cause tcp to retransmit and you'll experience even higher latency
HTTP has a lot of overhead.
JSON adds a lot of overhead.
Maybe you'll need to come up with a compact binary format for your messages, and drop HTTP in favor of a custom protocol - maybe even revert to UDP.
The above points generally don't apply, but they do if you need to provide a smooth experience over high-latency, low-bandwidth, flaky connections.
At the very least, make sure you're not setting up a new TCP connection for every request. You need to use http keep-alive.
I don't have any specific info on player movement algorithms, but what is often used is some sort of movement prediction.
You know the direction the player is moving, and you can derive the speed even if it isn't constant - this means you can extrapolate over time and guess the new position, adjust the on-screen position while you're querying for the actual position, and adjust back to the actual position when you get the query response.
The trick is to always interpolate over time within certain boundaries. If your prediction was a bit off compared to what the query returned, don't immediately snap the position back to the real position. Interpolate between the current position and the desired position over a handful of frames.
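A bare-bones sketch of that idea (dead reckoning plus smoothing; the class, field names, and smoothing factor are all arbitrary illustration):

```typescript
// Sketch of client-side prediction: keep extrapolating from the last known
// velocity every frame, and when a fresh server position arrives, ease
// toward it over several frames instead of snapping.
interface Vec2 { x: number; y: number; }

class PredictedPlayer {
  position: Vec2 = { x: 0, y: 0 };          // what gets drawn
  private velocity: Vec2 = { x: 0, y: 0 };
  private target: Vec2 = { x: 0, y: 0 };    // best guess of the true position

  // Call this whenever a query response arrives (a few times per second).
  onServerUpdate(pos: Vec2, vel: Vec2): void {
    this.target = { ...pos };
    this.velocity = { ...vel };
  }

  // Call this every frame with the elapsed time in seconds.
  tick(dt: number): void {
    // Dead reckoning: keep the target moving at the current velocity.
    this.target.x += this.velocity.x * dt;
    this.target.y += this.velocity.y * dt;

    // Smooth correction: move a fraction of the way toward the target each
    // frame, so corrections are spread over a handful of frames.
    const smoothing = 0.2; // arbitrary, tune to taste
    this.position.x += (this.target.x - this.position.x) * smoothing;
    this.position.y += (this.target.y - this.position.y) * smoothing;
  }
}
```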
On the server side you should be using a system that keeps running and keeps the database connection open all the time. Preferably it would also cache things instead of requesting them from the database every time.
Also, do not make a new HTTP request for every update. It would be best if you didn't need to use HTTP at all, as it really isn't suitable for real-time communication.
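A rough sketch of that server-side shape (a long-running Node process with the mysql2 library, purely as an illustration; the table and column names are assumptions):

```typescript
// Rough sketch: a long-running process that keeps a pool of database
// connections open and serves positions from an in-memory cache, instead
// of opening a connection and querying MySQL on every HTTP request.
import http from 'http';
import mysql from 'mysql2/promise';

const pool = mysql.createPool({ host: 'localhost', user: 'game', database: 'game' });

let cachedPositions: unknown[] = [];

// Refresh the cache a few times per second; client requests never wait
// on the database directly.
setInterval(async () => {
  const [rows] = await pool.query('SELECT id, x, y FROM positions'); // assumed schema
  cachedPositions = rows as unknown[];
}, 250);

http.createServer((_req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify(cachedPositions));
}).listen(8080);
```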
GPRS typically has a 600 ms ping time, 3G has 300 ms and HSPA has 100 ms. Check which mode is being used. Note that some devices (I don't know about the iPhone) drop from HSPA to regular 3G for power-saving reasons whenever there is not enough traffic to justify the faster mode.
As for position, a rather common practice is to apply a linear prediction, i.e. make the character continue movement in current direction, at the current speed, even when no data from server is available yet.
Most importantly: benchmark/profile to see where the latencies are. Is it your server, the network connection, or the application?
Loading the player positions that fast has downsides.
It hammers your server.
3G isn't really meant to support low-latency applications.
So I don't see an MMORPG working without some necessary shortcuts at this time, e.g. extrapolating paths based on velocity and position. Loading positions will not work as fast as you want, especially with a server based on PHP, of all things.
Either way, when developing for a mobile platform you're going to have to make sacrifices in terms of features versus a fully-featured desktop implementation.
I might also reimplement some of the more critical parts, if not the whole server, in a faster language, e.g. C++.