How is Google PageSpeed Round Trip represented in Chrome Dev Tools Network Tab - google-chrome-devtools

In Google PageSpeed Insights, under the mobile render-blocking JavaScript and CSS section, the comments may refer to the number of server round trips required before content is displayed.
What does a server round trip look like in the Google Chrome Developer Tools Network tab? When I look at the network overview, the TCP requests appear to be grouped into blocks of 10. Is a round trip one of these blocks of 10? If so, is the round-trip measure only applicable to the start of a page load, since the blocks of 10 start to merge as the various elements take different times to load?

According to the official documentation:
"Time spent waiting for the initial response, also known as the Time To First Byte. This time captures the latency of a round trip to the server in addition to the time spent waiting for the server to deliver the response."
This is displayed as a green bar (TTFB) in the resource timing views.
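If you want to read the same number programmatically, the Navigation Timing API exposes it. A minimal sketch for the DevTools console (the property names are standard PerformanceNavigationTiming fields; the output format is just illustrative):

    // Read TTFB from the Navigation Timing API.
    const [nav] = performance.getEntriesByType('navigation');

    // responseStart - requestStart ≈ one round trip to the server plus
    // server think time, i.e. the green TTFB bar in the Network tab.
    const ttfb = nav.responseStart - nav.requestStart;
    console.log(`TTFB: ${ttfb.toFixed(1)} ms`);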

Related

Difference values between PageSpeed Insights and Google Search Console - Speed (experimental)

I like your website and it does a good job, but when I analyze my website in PageSpeed Insights I get a 96 for mobile and a 98 for desktop, yet when I look in Google Search Console (GSC) it rates my mobile website as moderate (presumably between 50 and 89) and the desktop as "not enough data".
Why is there that much of a difference between PageSpeed Insights and GSC? And is Google ranking my site poorly because GSC appears to be giving it a poor score? Does the location of my server make any difference to the score? Should it be near the Search Console's server to receive a better score/rank?
The issue you are experiencing comes down to how PSI processes data to calculate your score vs. how Search Console does.
If real-world data is available, Search Console will prioritise it over the simulated data when calculating your scores (which makes sense); PSI will always use the speeds it calculates under 'lab data'.
The real-world data is more accurate, but you need to read it correctly to know how to improve it.
The three bars (green, orange and red) show data as follows for First Contentful Paint (FCP) and First Input Delay (FID):
FCP green: less than 1 second
FCP orange: 1 to 3 seconds
FCP red: over 3 seconds
and
FID green: less than 100ms
FID orange: 100ms to 300ms
FID red: over 300ms.
These are calculated at the 75th percentile for FCP and the 95th percentile for FID. (Although not technically correct, think of it as: 3 in 4 people will have this experience or better for FCP, and 19 out of 20 people will have a better experience than shown for FID.)
This is where your 'moderate' score in Search Console comes from.
The average breakdown for FCP is around 23%, 58% and 19% respectively. You get 36%, 45% and 19%, so you are pretty close to the average.
Similar story for FID.
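If you want to see where your own visitors fall against those buckets, you can log FCP and FID yourself with a PerformanceObserver. A minimal sketch (the thresholds in the comments are hard-coded from the list above):

    // Log First Contentful Paint (FCP).
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (entry.name === 'first-contentful-paint') {
          console.log(`FCP: ${entry.startTime.toFixed(0)} ms`); // <1000 green, <3000 orange, else red
        }
      }
    }).observe({ type: 'paint', buffered: true });

    // Log First Input Delay (FID): the gap between the first user input
    // and the moment the browser could start processing it.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        const fid = entry.processingStart - entry.startTime;
        console.log(`FID: ${fid.toFixed(0)} ms`); // <100 green, <300 orange, else red
      }
    }).observe({ type: 'first-input', buffered: true });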
What to look at
You have quite a variance on FCP. There are lots of possible causes of this, but the most likely ones are:
You have a lot of visitors from other countries and aren't using a CDN (or at least not to its full potential).
The site is receiving spikes in traffic and your server is hitting capacity; check your resource logs / fault logs on the server.
Your JavaScript is CPU-heavy (a 230ms FID suggests it might be) and part of your page render depends on the JS loading. The simulated runs apply a 4x CPU slowdown; in the real world some mobile devices can be 6-8 times slower than desktop PCs, so JS differences start to add up quickly.
Test it in the real world
Simulated tests are great but they are artificial at the end of the day.
Go and buy a £50 Android device and test your site on 4G and 3G to see how it responds.
Another thing to try is to open up DevTools and use the Performance tab. Set 'Network' to '3G' and 'CPU' to '6x slowdown', press the record button, refresh the page, and observe how the site loads.
If you have never used this tab before you may need to search for a couple of tutorials on how to interpret the data, but it will show JS bottlenecks and rendering issues.
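If you want to repeat that throttled run reproducibly rather than clicking through the UI each time, the same knobs are exposed through the Chrome DevTools Protocol. A hedged sketch using Puppeteer (the latency and throughput numbers are rough 3G-like approximations, not an official profile, and the URL is a placeholder):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const client = await page.target().createCDPSession();

      // 6x CPU slowdown, same as the DevTools Performance tab setting.
      await client.send('Emulation.setCPUThrottlingRate', { rate: 6 });

      // Rough 3G-like network conditions (illustrative values).
      await client.send('Network.enable');
      await client.send('Network.emulateNetworkConditions', {
        offline: false,
        latency: 300,                        // ms of added round-trip latency
        downloadThroughput: 750 * 1024 / 8,  // ~750 kbit/s
        uploadThroughput: 250 * 1024 / 8,    // ~250 kbit/s
      });

      await page.goto('https://example.com', { waitUntil: 'load' });
      const loadTime = await page.evaluate(
        () => performance.getEntriesByType('navigation')[0].loadEventEnd
      );
      console.log(`load event end: ${Math.round(loadTime)} ms`);
      await browser.close();
    })();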
Put some load-time monitoring JS into the page and utilise your server logs / server monitoring software. You will soon start to see patterns. (Is it certain screen sizes that have an issue? Is your caching mechanism not functioning correctly under certain circumstances? Is your JS misbehaving on certain devices?) A minimal sketch of such a snippet follows.
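This is only a sketch; /perf-log is a hypothetical endpoint on your own server that records whatever JSON it receives:

    // Send basic timing + environment data to your server after load.
    window.addEventListener('load', () => {
      // Wait one tick so loadEventEnd has been populated.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType('navigation');
        navigator.sendBeacon('/perf-log', JSON.stringify({
          url: location.pathname,
          loadTime: nav.loadEventEnd,             // ms since navigation start
          ttfb: nav.responseStart,                // ms since navigation start
          screen: `${screen.width}x${screen.height}`,
          userAgent: navigator.userAgent,
        }));
      }, 0);
    });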
All of the above have one thing in common: more data, to pinpoint issues that a synthetic test cannot find.
Summary / TL;DR
Search Console uses real-world data when you have enough of it; PSI always uses lab data from the run you just completed.
PSI is a useful tool but is only there for guidance. If Search Console says your site is average, you need to examine your speed for bottlenecks using other real-world methods.

How can we replicate the user experience of an e-commerce application with multiple images without loading the images?

1. Recorded the script through JMeter without images.
2. Ran the script with 10 users.
3. JMeter shows the execution and response times.
But how can we justify and show evidence to top-level management that, even without capturing the images, the application response time is the same as the live user experience?
The better way to do it would be to check the option "Retrieve All Embedded Resources" in: Thread Group Right Click -> Sampler -> HTTP Request -> Advanced -> Retrieve All Embedded Resources, so that all resources are loaded.
If you don't want to measure the embedded resources' response times, for example if you are using a CDN or 3rd parties, you can use a "View Results Table" listener and enable the option "Child samples". This way you can see the response times of the main requests and of the embedded resources separately.
The issue is that secondary requests are made in parallel threads, so the sum of the response times is larger than the response time registered by the Transaction Controller. To avoid this you can select the option "Parallel downloads. Number", next to "Retrieve All Embedded Resources" in the "HTTP Request" sampler, and enter the number of parallel downloads.
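For reference, those GUI checkboxes map onto properties of the HTTP Request sampler in the saved .jmx test plan. A hedged fragment (the property names below are the ones recent JMeter versions write for these options; verify against your own saved plan):

    <!-- Inside the HTTPSamplerProxy element of your HTTP Request sampler -->
    <boolProp name="HTTPSampler.image_parser">true</boolProp>    <!-- Retrieve All Embedded Resources -->
    <boolProp name="HTTPSampler.concurrentDwn">true</boolProp>   <!-- Parallel downloads -->
    <stringProp name="HTTPSampler.concurrentPool">6</stringProp> <!-- Number: 6 mimics a real browser -->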
You may also find this blog useful:
https://www.redline13.com/blog/
Comparing the two directly is not an apples-to-apples comparison, because they measure different things. Load Tester does measure a lot of the same things for each, but what is generally considered the most important metric – Page Duration – is measuring a different aspect of performance in each case.
Virtual Browsers:
Virtual Browsers work at the HTTP layer – they send the same HTTP messages to the server that real browsers would send. The Page Duration measures the time from the beginning of the first request that is sent to the server to the end of the last response for a resource on that page.
Our Virtual Browsers (JMeter) will use the same number of connections to the server as a real browser, and will distribute the requests amongst those connections in a very similar way: inactive connections are used first, connections remain open for a while, etc. When done correctly, the target application cannot tell the difference between our Virtual Browser and a human operating a real browser.
Real Browsers:
Well, REAL browsers that are driven by our virtual user instead of a human user. The driving takes place via APIs into the browser that are designed for automation (e.g. the JMeter Selenium WebDriver plugin).
For example, a Go To URL step instructs the browser to navigate to a URL. The Duration of that step measures from the time the command is sent to the browser until the browser reports completion (or failure). In the case of the Go To URL command, the command is complete when the browser fires the “On Load” event. This step will include the amount of time to get all the resources from the server – which is what is measured by the Virtual Browsers. It will ALSO include the amount of time the browser takes to render the page on the screen, which is not measured by the Virtual Browsers (since they never render the page).
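For example, with the JMeter WebDriver Sampler plugin, a minimal "go to URL and time it" script looks roughly like this (the WDS object is the plugin's scripting API; the URL is a placeholder):

    // JMeter WebDriver Sampler script (script language: javascript).
    WDS.sampleResult.sampleStart();          // start the timer
    WDS.browser.get('https://example.com');  // returns after the browser fires its load event
    WDS.sampleResult.sampleEnd();            // duration now includes fetching AND rendering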

Website load time differs by more than 2 seconds on various tries

I am working on figuring out the performance bottleneck of a website. I am using Chrome's empty-cache-and-hard-reload option and hitting the site in incognito mode (with no extensions enabled).
To determine the page load time I am using the Network tab of Chrome DevTools, and it reports huge variation when I hit the same page using the same (empty cache and hard reload) option.
For example:
The first hit shows a load time of 8.2 sec with 25 requests and 477KB of data.
Just after, on the 2nd hit, I get the same number of requests and the same size, but the load time increases to 9.25 sec.
And on the 3rd hit, it drops to just 6.89 seconds.
Now, my question is: I am doing the same thing each time, so why does the load time vary so much?
Maybe you have some requests (styles, scripts, images) to 3rd-party servers which take a different time to download each time (because a 3rd-party server may be more heavily loaded at that specific moment).
If that's the case and your code depends on them (via the load event), it is quite possible that you get different page load times.
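One way to check this theory is to list the slowest resources after each reload with the Resource Timing API and see whether the slow ones are third-party. A minimal sketch for the DevTools console:

    // List the 5 slowest resources and flag third-party ones.
    performance.getEntriesByType('resource')
      .sort((a, b) => b.duration - a.duration)
      .slice(0, 5)
      .forEach((r) => {
        const thirdParty = new URL(r.name).host !== location.host;
        console.log(`${r.duration.toFixed(0)} ms ${thirdParty ? '[3rd party]' : ''} ${r.name}`);
      });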

Chrome hangs after a certain amount of data is transferred - waiting for available socket

I've got a browser game and I have recently started adding audio to the game.
Chrome does not load the whole page: it gets stuck at "91 requests | 8.1 MB transferred" and does not load any more content, and it even breaks the website in all other tabs, saying Waiting for available socket.
After 5 minutes (exactly) the data loads.
This does not happen in any other browser.
Removing one MP3 file (the most recently added one) fixed the problem, so is it perhaps a data limit problem?
Explanation:
This problem occurs because Chrome allows up to 6 open connections by default. So if you're streaming multiple media files simultaneously from 6 <video> or <audio> tags, the 7th connection (for example, an image) will just hang, until one of the sockets opens up. Usually, an open connection will close after 5 minutes of inactivity, and that's why you're seeing your .pngs finally loading at that point.
Solution 1:
You can avoid this by minimizing the number of media tags that keep an open connection. And if you need to have more than 6, make sure that you load them last, or that they don't have attributes like preload="auto".
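For instance, a small snippet (a sketch, assuming your media elements are already in the DOM) that turns off eager preloading everywhere:

    // Stop <audio>/<video> elements from eagerly opening connections.
    document.querySelectorAll('audio, video').forEach((el) => {
      el.preload = 'none'; // fetch nothing until the user actually presses play
    });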
Solution 2:
If you're trying to use multiple sound effects for a web game, you could use the Web Audio API. Or to simplify things, just use a library like SoundJS, which is a great tool for playing a large amount of sound effects / music tracks simultaneously.
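A minimal Web Audio API sketch of that approach: each sound is fetched once, decoded into a buffer, and then played from memory, so playback does not keep an HTTP socket open per sound (the URL is a placeholder; note browsers may require a user gesture before audio can start):

    const ctx = new AudioContext();
    const buffers = new Map();

    // Fetch and decode a sound once; the HTTP connection is released
    // as soon as the download finishes.
    async function loadSound(url) {
      const response = await fetch(url);
      const data = await response.arrayBuffer();
      buffers.set(url, await ctx.decodeAudioData(data));
    }

    // Play from the in-memory buffer; no socket involved.
    function playSound(url) {
      const source = ctx.createBufferSource();
      source.buffer = buffers.get(url);
      source.connect(ctx.destination);
      source.start();
    }

    loadSound('/sounds/laser.mp3').then(() => playSound('/sounds/laser.mp3'));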
Solution 3: Force-open Sockets (Not recommended)
If you must, you can force-open the sockets in your browser (In Chrome only):
Go to the address bar and type chrome://net-internals.
Select Sockets from the menu.
Click on the Flush socket pools button.
This solution is not recommended because you shouldn't expect your visitors to follow these instructions to be able to view your site.
Looks like you are hitting the limit on connections per server. I see you are loading a lot of static files, and my advice is to separate them onto subdomains and serve them directly, with Nginx for example.
Create a subdomain called img.yoursite.com and load all your images from there.
Create a subdomain called scripts.yoursite.com and load all your JS and CSS files from there.
Create a subdomain called sounds.yoursite.com and load all your MP3s from there... etc..
Nginx has great options for directly serving static files and managing the static files caching.
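A hedged sketch of what one such subdomain server block might look like in nginx (paths and cache lifetimes are illustrative, not a recommendation):

    # Serve images for img.yoursite.com straight from disk.
    server {
        listen 80;
        server_name img.yoursite.com;
        root /var/www/yoursite/static/img;   # illustrative path

        location / {
            expires 30d;                     # let browsers cache aggressively
            add_header Cache-Control "public";
            try_files $uri =404;
        }
    }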
The message:
Waiting for available socket...
is shown because you've reached a limit on the ssl_socket_pool, either per Host, Proxy or Group.
Here are the maximum numbers of HTTP connections you can make with a Chrome browser:
The maximum number of connections per proxy is 32 connections. This can be changed in Policy List.
Maximum per Host: 6 connections.
This is likely hardcoded in the source code of the web browser, so you can't change it.
Total 256 HTTP connections pooled per browser.
Source: Enterprise networking for Chrome devices
The above limits can be checked or flushed at chrome://net-internals/#sockets (or in real-time at chrome://net-internals/#events&q=type:SOCKET%20is:active).
Your issue with audio can be related to Chrome bug 162627, where HTML5 audio fails to load because it hits the maximum simultaneous connections per server:proxy. This is still an active issue at the time of writing (2016).
A much older issue, with HTML5 video requests staying pending, is probably related to Issue #234779, which was fixed in 2014. Related to SPDY is Issue 324653: SPDY issue: waiting for available sockets, but this was also fixed in 2014, so it's probably not relevant.
Another related issue, now marked as a duplicate, is Issue 401845: Failure to preload audio metadata. Loaded only 6 of 10+, which was caused by the media player code leaving a bunch of paused requests hanging around.
This may also be related to some Chrome adware or antivirus extensions using your sockets in the background (like Sophos or Kaspersky), so check for network activity in DevTools.
A simple and correct solution is to turn off preloading of your audio and video files in their settings and recheck your page; your "waiting for available socket" problem will be resolved.
If you use jPlayer, then replace preload: "metadata" with preload: "none" in the jPlayer JS file.
preload: "metadata" is the default value, which makes the browser fetch each audio/video file's metadata on page load; that's why Google Chrome shows the "waiting for available socket" error.
Our first thought is that the site is down or the like, but the truth is that this is not the problem. Nor is it a connection problem: when tested under Firefox, Opera or Internet Explorer the same services open as normal.
The error in Chrome displays a sign that says "This site is not available", with the legend "Error 15 (net::ERR_SOCKET_NOT_CONNECTED): Unknown error". The error is quite common in Google Chrome, more precisely after its updates, and one workaround is to restart the computer.
Since partial solutions are not much help, here is a tutorial to solve the fault in less than a minute.
To avoid this problem and ensure that services open normally in Google Chrome, insert the following into the address bar: chrome://net-internals (then press Enter). Then go to "Sockets" in the left menu and choose "Flush socket pools" (see the screenshots at http://www.fixotip.com/how-to-fix-error-waiting-for-available-sockets-in-google-chrome/ for guidance).
This solves the problem, and you will no longer experience issues accessing Gmail, Google or any of the other services of the Mountain View giant. I hope you found it useful; share the tutorial with whoever needs it, on Facebook, Twitter or Google+.
Chrome is a Chromium-based browser, and Chromium-based browsers only allow a maximum of 6 open socket connections at a time; when a 7th connection starts up, it will just sit idle and wait for one of the 6 running connections to finish before it can run.
Hence the error "waiting for available sockets": the 7th request waits for one of those 6 sockets to become available and then it starts running.
You can try one of the following:
Clear browser cache & cookies (https://geekdroids.com/waiting-for-available-socket/#1_Clear_browser_cache_cookies)
Flush socket pools (https://geekdroids.com/waiting-for-available-socket/#2_Flush_socket_pools)
Flush DNS (https://geekdroids.com/waiting-for-available-socket/#3_Flush_DNS)

Any significance in Fiddler of 'Aggregate Session' time and 'Sequence (clock)' time being very different?

We have an ASP.NET C# website (not MVC) that consists of a half-dozen pages. One of them is very large, consisting of several 3rd-party controls (from Telerik, FarPoint Spread and a few from the Ajax Control Toolkit) and about 15,000 lines of code.
This particular page, which is invoked by way of a Response.Redirect from a prior page, has always loaded very slowly. Once we click the button, it takes quite a while (perhaps 10 seconds) for the new page to appear. While this is not especially acceptable, what is much worse is that when the new page actually does load, it takes quite a bit more time (perhaps another 10 seconds) for the various elements of the page (drop-down lists, buttons, scroll bars and the like) to become available to the user.
Recently we started to use Fiddler to try to get some statistical information to help us improve on this. One of my associates, who has access to one of our web servers, has been using Fiddler to monitor the performance of this program. His findings are:
• Our compression routines seem to be working. Much of the static information required by the program is coming from cache.
• Some of the images come back with a return code of 401, but these images are eventually made available.
• Fiddler reports an ‘Aggregate Session’ time of approximately 4 seconds.
• It also reports a ‘Sequence (clock)’ time of approximately 16 seconds.
• When we use Fiddler to acquire statistics for any of the other programs (which are all much smaller and don't have the issues this larger program has), we do not see the large difference between the 'Aggregate Session' time and the 'Sequence (clock)' time.
A Sequence (clock) time longer than the Aggregate Session time means that the client is idle for some period of time and not making requests, perhaps due to slow-running JavaScript on the client.
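One way to confirm the slow-JavaScript theory is to log long tasks on the client while the page initialises. A minimal sketch using the Long Tasks API (place it in a script near the top of the page so the observer is registered before the slow work runs):

    // Log every task that blocks the main thread for more than 50 ms.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(`long task: ${entry.duration.toFixed(0)} ms at ${entry.startTime.toFixed(0)} ms`);
      }
    }).observe({ entryTypes: ['longtask'] });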
The 401s are from HTTP Authentication requests from the server.