Difference in values between PageSpeed Insights and Google Search Console's Speed (experimental) report

I like your website and it does a good job, but when I analyze my site in PageSpeed Insights I get 96 for mobile and 98 for desktop, yet Google Search Console (GSC) rates my mobile site as "moderate" (presumably between 50 and 89) and my desktop site as "not enough data".
Why is there that much of a difference between PageSpeed Insights and GSC? Is Google ranking my site poorly because GSC appears to give it a poor score? Does the location of my server make any difference to the score? Should it be near Search Console's servers to receive a better score/rank?

The difference you are seeing comes down to how PSI calculates your score versus how Search Console does.
If real-world data is available, Search Console prioritises it over the simulated data when calculating your scores (which makes sense); PSI always uses the speeds it calculates under 'Lab Data'.
The real-world data is more accurate, but you need to read it correctly to know how to improve it.
The three bars (green, orange, and red) show data for First Contentful Paint (FCP) and First Input Delay (FID) as follows:
FCP green: less than 1 second
FCP orange: 1 to 3 seconds
FCP red: over 3 seconds
and
FID green: less than 100ms
FID orange: 100ms to 300ms
FID red: over 300ms.
These are calculated at the 75th percentile for FCP and the 95th percentile for FID. (Although not technically correct, think of it as: 3 in 4 visitors will have the shown FCP experience or better, and 19 out of 20 visitors will have a better FID experience than shown.)
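As a rough sketch of what a percentile cut-off means here (the sample values and the nearest-rank method below are illustrative, not how CrUX actually aggregates its field data):

```javascript
// Nearest-rank percentile: the smallest value with at least p% of the
// samples at or below it. Sample FCP times are hypothetical, in ms.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const fcpSamples = [800, 950, 1200, 1400, 1700, 2100, 2600, 3400];
console.log(percentile(fcpSamples, 75)); // 2100 -> reported as 2.1s, an "orange" FCP
```

Note how a handful of slow loads can pull the reported figure into the orange band even when most visits are fast.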
This is where your 'moderate' score in Search Console comes from.
The average breakdown for FCP is around 23%, 58%, and 19% respectively; you get 36%, 45%, and 19%, so you are pretty close to the average.
It is a similar story for FID.
What to look at
You have quite a variance on FCP. There are lots of possible causes, but the most likely ones are:
You have a lot of visitors from other countries and aren't using a CDN (or at least not to its full potential).
The site is receiving spikes in traffic and your server is hitting capacity; check your resource logs / fault logs on the server.
Your JavaScript is CPU-heavy (a 230ms FID says it might be) and part of your page render depends on the JS loading. The simulated runs apply a 4x CPU slowdown, but in the real world some mobile devices can be 6-8 times slower than desktop PCs, so JS differences start to add up quickly.
Test it in the real world
Simulated tests are great but they are artificial at the end of the day.
Go and buy a £50 Android device and test your site on 4G and 3G to see how it responds.
Another thing to try: open Dev Tools and use the Performance tab. Set 'Network' to '3G' and 'CPU' to '6x slowdown', press the record button, refresh the page, and observe how the site loads.
If you have never used this tab before you may need to search for a couple of tutorials on how to interpret the data, but it will show JS bottlenecks and rendering issues.
Put some load-time monitoring JS into the page and utilise your server logs / server monitoring software. You will soon start to see patterns. (Is it certain screen sizes that have an issue? Is your caching mechanism not functioning correctly under certain circumstances? Is your JS misbehaving on certain devices?)
All of the above have one thing in common, more data to pinpoint issues that a synthetic test cannot find.
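The home-grown monitoring idea might be sketched like this (the function names and the beacon helper are mine; the commented PerformanceObserver call is the standard browser paint-timing API, and the bucketing mirrors the Search Console bands described earlier):

```javascript
// Minimal sketch of home-grown load-time monitoring. In the browser you
// would collect samples with the standard paint-timing API, e.g.:
//   new PerformanceObserver((list) => {
//     for (const e of list.getEntriesByName('first-contentful-paint')) {
//       sendBeaconToYourServer(e.startTime); // hypothetical helper
//     }
//   }).observe({ type: 'paint', buffered: true });
// Server-side, bucket the beacons into the same bands Search Console uses:
function bucketFcp(samplesMs) {
  const counts = { green: 0, orange: 0, red: 0 };
  for (const ms of samplesMs) {
    if (ms < 1000) counts.green++;
    else if (ms <= 3000) counts.orange++;
    else counts.red++;
  }
  const total = samplesMs.length || 1;
  return {
    green: Math.round((100 * counts.green) / total),
    orange: Math.round((100 * counts.orange) / total),
    red: Math.round((100 * counts.red) / total),
  };
}

console.log(bucketFcp([800, 1500, 2500, 3500])); // { green: 25, orange: 50, red: 25 }
```

Segmenting these buckets by device, screen size, or country is what lets you spot the patterns a single synthetic run cannot.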
Summary / TL;DR
Search Console uses real-world data when you have enough of it; PSI always uses lab data from the run you just completed.
PSI is a useful tool but is only there for guidance. If Search Console says your site is average, you need to examine your speed for bottlenecks using other real-world methods.

Related

PageSpeed Insights went through an upgrade on 27th May 2020

https://developers.google.com/speed/pagespeed/insights/
On 27th May 2020 I used PageSpeed Insights and got a pretty good score for desktop (90+) and around 85+ for mobile, but as of 28th May 2020 the metrics seem to have changed drastically. I can see that PageSpeed has a new version (v6), but no proper release notes are provided here: https://developers.google.com/speed/docs/insights/release_notes.
If anyone has faced a similar issue and found that Google PageSpeed underwent certain upgrades, please provide me some references if possible.
After some digging I managed to find the draft of the proposed scoring weights.
https://web.dev/performance-scoring/?utm_source=lighthouse&utm_medium=wpt
The large shift in scores is down to how the weightings have changed:
Lighthouse v6
First Contentful Paint 15%
Speed Index 15%
Largest Contentful Paint 25%
Time to Interactive 15%
Total Blocking Time 25%
Cumulative Layout Shift 5%
Lighthouse v5
First Contentful Paint 20%
Speed Index 27%
First Meaningful Paint 7%
Time to Interactive 33%
First CPU Idle 13%
As you can see, there is a massive shift in emphasis towards Total Blocking Time (TBT), which mainly reflects JavaScript execution time, and towards when the Largest Contentful Paint (LCP) occurs (presumably because large shifts in visible content can be distracting, and because LCP is a good indicator of when the above-the-fold content is fully loaded, as opposed to showing a 'spinner').
They have also added a third new metric, Cumulative Layout Shift (CLS), which works out how much the page layout 'moves around'. This has a low weighting at the moment, but I imagine it is part of a larger plan to capture all late-loading assets that affect the above-the-fold content and may cause frustration (trying to click a link only to find an advert loaded in and moved it, for example).
These huge changes in weightings and the introduction of new metrics are the cause of the massive score decreases you may experience.
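For intuition, the weighting change can be modelled as a simple weighted average (the metric scores below are made up for illustration; real Lighthouse first maps each raw metric onto a 0-1 score via a log-normal curve before weighting):

```javascript
// Lighthouse v6 weights, as listed above.
const V6_WEIGHTS = {
  fcp: 0.15, si: 0.15, lcp: 0.25, tti: 0.15, tbt: 0.25, cls: 0.05,
};

// Combine per-metric scores (each 0-1) into an overall 0-100 score.
function overallScore(metricScores, weights) {
  let score = 0;
  for (const [metric, weight] of Object.entries(weights)) {
    score += weight * metricScores[metric];
  }
  return Math.round(score * 100);
}

// A hypothetical page with perfect paint metrics but heavy JavaScript:
const scores = { fcp: 1, si: 1, lcp: 1, tti: 0.7, tbt: 0.5, cls: 1 };
console.log(overallScore(scores, V6_WEIGHTS)); // 83 - TBT's 25% weight drags it down
```

A page like this would have scored in the high 90s under v5, which had no TBT weighting at all; that is the kind of drop people saw overnight.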
I can confirm that my site, which used to score 99 or 100, now only scores 87, so it is indeed a large shift in how they are scoring. They also now seem to take SVG rendering into account: my site is 100% SVG-driven yet scores low on LCP, which is something the LCP stats did not initially account for.
For now, focus on the two articles I linked on TBT and LCP, as those are the new metrics they are choosing to emphasise, making up 50% of your score.
Update
As the OP pointed out in the comments, the main changes for the new PSI v6 are located here

What is the expected Google home query request frequency for a device?

I have a Google smart home app released that supports various light bulb brands. One of my users has 5 Philips Hue light bulbs, and there are approximately 1360 QUERY state requests per day for the 5 bulbs. Is this frequency of query requests common and expected for all devices?
That's one query request per bulb every ~5 minutes.
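A quick back-of-envelope check of that figure, using the numbers from the question:

```javascript
// 1360 QUERY requests per day across 5 bulbs works out to roughly one
// poll per bulb every five minutes.
const queriesPerDay = 1360;
const devices = 5;
const minutesPerDay = 24 * 60;
const minutesBetweenQueries = minutesPerDay / (queriesPerDay / devices);
console.log(minutesBetweenQueries.toFixed(1)); // "5.3"
```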
https://developers.google.com/actions/smarthome/develop/process-intents#QUERY
It is normal for Google to periodically send QUERY intents to your service to ensure that the data in Home Graph is up to date. You can mitigate this process by making sure that you have implemented Report State to publish all relevant state changes to Google in real time, as this also directly updates the state in Home Graph.
The actual frequency is a bit more difficult to pin down, as it relates not only to how often you report state for devices but also to user activity on those devices. Generally speaking, the more often you report state to Google, the less QUERY polling you should see.
We are also actively working on ways to reduce the need for QUERY polling, so in the future you should see the frequency of this reduced so long as you have Report State implemented for all your users' devices.

How is Google PageSpeed Round Trip represented in Chrome Dev Tools Network Tab

In Google PageSpeed Insights under mobile render-blocking JavaScript and CSS section, the comments may refer to the number of server Round Trips before content is displayed.
What does a server Round Trip look like in Google Chrome's Developer Tools Network tab? When I look at the network overview, it looks like the TCP requests are grouped into blocks of 10. Is a Round Trip one of these blocks of 10? If so, does the Round Trip measure only apply to the start of a page load, since the blocks of 10 start to merge as the various elements take different times to load?
According to the official documentation,
Time spent waiting for the initial response, also known as the Time To First Byte. This time captures the latency of a round trip to the server in addition to the time spent waiting for the server to deliver the response.
This is displayed as a green bar (TTFB) in the resourcing timing views.
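That green bar can also be read programmatically via the Navigation Timing API; here is a sketch (the sample entry values are made up for illustration):

```javascript
// In a browser you would get a real entry with:
//   const [nav] = performance.getEntriesByType('navigation');
// requestStart -> responseStart spans one round trip to the server plus the
// server's processing time, which is what the green "Waiting (TTFB)" bar shows.
function ttfb(entry) {
  return entry.responseStart - entry.requestStart;
}

// Illustrative timing values, in ms since navigation start:
console.log(ttfb({ requestStart: 120, responseStart: 380 })); // 260 (ms)
```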

Wordpress in waiting state

I built a website for someone and I used https://gtmetrix.com to get some analytics, mainly because the wait time is huge (~20 sec) without having any heavy images. Please find attached a screenshot here:
http://img42.com/05yvZ
One of my problems is that it takes quite a long time to perform the 301 redirect. I'm not sure why, but if someone has a key to the solution I would really appreciate it. At least some hints on what to search for would be nice.
The second problem is that after the redirection, the waiting time is still huge. As expected, I have a few plugins; their JavaScript files are called approx. 6 seconds after the redirection. Would someone please show me some directions on where to search?
P.S. I have disabled all plugins and started from a plain Twenty Eleven theme, but I still have waiting times during redirection and a smaller delay after redirection.
Thanks in advance
A few suggestions:
1 and 2.) If the redirect is adding noticeable delays, test different redirect methods. There are several approaches, including HTML meta and server-side (i.e. PHP) methods; I typically stick to server-side. If it's showing noticeable delays even with a server-side method, that may be a good indicator that you're experiencing server issues, and it may very well be your server causing your speed issues all along; contact your hosting provider.
3.) Take a look at the size of your media: images and video, and also Flash if you're using any. Often it's giant images that were sliced/saved poorly and not optimized for the web in image-editing software like Photoshop. Optimize your images for the web and re-save them at a lower weight to save significantly on load time. Also, in many cases nowadays you can avoid clunky images altogether by building the area out in pure CSS3 (e.g. odd repeatable .gifs used to create gradients or borders).
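On the redirect point, the decision itself can be sketched as a pure function (Node-style JavaScript purely for illustration; in WordPress this would normally live in .htaccess rules or a PHP header() call, and example.com is a placeholder):

```javascript
// Return the canonical URL to 301 to, or null if no redirect is needed.
// A single-hop redirect is the goal: chained redirects multiply the
// waiting time described in the question.
function redirectTarget(host, path) {
  if (host === 'example.com') {
    return `https://www.example.com${path}`;
  }
  return null; // already on the canonical host
}

console.log(redirectTarget('example.com', '/about')); // https://www.example.com/about
console.log(redirectTarget('www.example.com', '/about')); // null
```

If the redirect logic is this simple but still takes seconds, the delay is in the server (DNS, slow PHP bootstrap, overloaded host), not the redirect itself.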

How should I benchmark a system to determine the overall best architecture choice?

This is a bit of an open ended question, but I'm looking for an open ended answer. I'm looking for a resource that can help explain how to benchmark different systems, but more importantly how to analyze the data and make intelligent choices based on the results.
In my specific case, I have a 4-server setup that includes Mongo, which serves as the backend for an iOS game. All servers are running Ubuntu 11.10. I've read numerous articles that make suggestions like "if CPU utilization is high, make this change." As a newcomer to backend architecture, I have no concept of what "high CPU utilization" is.
I am using Mongo's monitoring service (MMS), and I am gathering some information about it, but I don't know how to make choices or identify bottlenecks. Other servers serve requests from the game client to mongo and back, but I'm not quite sure how I should be benchmarking or logging important information from them. I'm also using Amazon's EC2 to host all of my instances, which also provides some information.
So, some questions:
What statistics are important to log on a backend setup? (CPU, RAM, etc)
What is a good way to monitor those statistics?
How do I analyze the statistics? (RAM usage is high/read requests are low, etc)
What tips should I know before trying to create a stress-test or benchmarking script for my architecture?
Again, if there is a resource that answers many of these questions, I don't need an explanation here, I was just unable to find one on my own.
If more details regarding my setup are helpful, I can provide those as well.
Thanks!
I like to think of performance testing as a mini-project that is undertaken because there is a real-world need. Start with the problem to be solved: is the concern that users will have a poor gaming experience if the response time is too slow? Or is the concern that too much money will be spent on unnecessary server hardware?
In short, what is driving the need for the performance testing? This exercise is sometimes called "establishing the problem to be solved." It is about the goal to be achieved, because if there is no goal, why go through all the work of testing the performance? Establishing the problem to be solved will eventually drive what to measure and how to measure it.
After the problem is established, the next step is to write down what questions have to be answered to know when the goal is met. For example, if the goal is to ensure the response times are low enough to provide a good gaming experience, some questions that come to mind are:
What is the maximum response time before the gaming experience becomes unacceptably bad?
What is the maximum response time that is indistinguishable from zero? That is, if 200 ms response time feels the same to a user as a 1 ms response time, then the lower bound for response time is 200 ms.
What client hardware must be considered? For example, if the game only runs on iOS 5 devices, then testing an original iPhone is not necessary because the original iPhone cannot run iOS 5.
These are just a few questions I came up with as examples. A full, thoughtful list might look a lot different.
After writing down the questions, the next step is to decide what metrics will provide answers to them. You have probably come across a lot of metrics already: response time, transactions per second, RAM usage, CPU utilization, and so on.
After choosing some appropriate metrics, write some test scenarios. These are the plain English descriptions of the tests. For example, a test scenario might involve simulating a certain number of games simultaneously with specific devices or specific versions of iOS for a particular combination of game settings on a particular level of the game.
Once the scenarios are written, consider writing the test scripts for whatever tool is simulating the server work loads. Then run the scripts to establish a baseline for the selected metrics.
After a baseline is established, change parameters and chart the results. For example, if one of the selected metrics is CPU utilization versus the number of TCP packets entering the server per second, make a graph to find out how utilization changes as packets/second goes from 0 to 10,000.
In general, observe what happens to performance as the independent variables of the experiment are adjusted. Use this hard data to answer the questions created earlier in the process.
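The baseline-then-vary loop described above might be sketched like this (everything here is illustrative: runScenario stubs out whatever load tool you use, with a fake saturation model standing in for real measurements):

```javascript
// Drive one scenario at a given load level and record the metrics.
// The latency model below is fake; in practice this function would call
// your load-testing tool and read back measured values.
function runScenario(packetsPerSecond) {
  const utilization = Math.min(1, packetsPerSecond / 8000); // pretend saturation at 8000 pps
  const responseMs = 20 + 500 * Math.pow(utilization, 4);   // latency climbs near saturation
  return { packetsPerSecond, utilization, responseMs };
}

// Sweep the independent variable from 0 to 10,000 packets/second.
const results = [];
for (let pps = 0; pps <= 10000; pps += 2000) {
  results.push(runScenario(pps));
}
// Charting responseMs against packetsPerSecond shows where the knee is.
console.log(results.map((r) => `${r.packetsPerSecond} pps: ${r.responseMs.toFixed(0)} ms`));
```

The shape of the curve, not any single number, is what answers questions like "at what load does response time exceed our 200 ms bound?"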
I did a Google search on "software performance testing methodology" and found a couple of good links:
Check out this white paper Performance Testing Methodology by Johann du Plessis
Have a look at the Methodology section of this Wikipedia article.