Need Help: Can you identify the Desktop CLS (Cumulative Layout Shift) on my URL? - google-search-console

URL: tinyurl.com/36utmdnc
In Google Search Console, I am seeing thousands of errors for Core Web Vitals, particularly CLS in Desktop mode. I've made a number of changes, such as removing all ads and tweaking the CSS, HTML and other code. I've actually been making changes for a year, but I've really tried tackling this problem over the past 2-3 months, since "Page Experience" is showing a big reduction in "good URLs" and I believe it's now affecting my traffic volume from Google Search. It's been weeks now and the CLS is not changing in Search Console; I've been trying to validate the fix for a couple of months.
The example URL above (a product page) has a CLS score of 0.33 according to Google Search Console / Core Web Vitals. It seems all of the errors are on my product pages like the example above. I've run tests in PageSpeed Insights, which shows a CLS of "0". I understand the reports shown in Google Search Console come from the "Chrome User Experience Report", which is different from PageSpeed Insights' lab setting.
Here are things I've done:
Opened Chrome Developer Tools and enabled the "Layout Shift Regions" checkbox, which flashes regions blue during a layout shift. Also enabled the "Core Web Vitals" overlay, which adds up CLS throughout the entire browsing session.
Opened the Network tab and throttled the connection to a very slow preset, then carefully watched for layout shifts while the page loaded slowly.
Carefully read the guides on web.dev/cls/ , ran Chrome's Lighthouse test, and ran tests on various sites like webvitals.dev/cls , defaced.dev/tools/layout-shift-gif-generator/ , webpagetest.org/webvitals, etc.
Manually tested different screen widths/heights in Chrome Developer Tools (800px width, 1400px width, 8000px height, etc.).
Asked tech-savvy friends/users to also check and help identify the CLS.
I can't find anything that could cause a large CLS of 0.33. The number of CLS errors in Search Console is staying steady, going up and down by about 100 URLs every day, but the same example URL above has been stuck there for months. So I was hoping someone with knowledge could find it or identify the underlying issue.
Thanks

I cannot see a CLS score of 0.33 for that page, and agree that testing it myself shows very little CLS.
You are correct that PageSpeed Insights is lab-based, but it also shows field-based data at the top, and in this case it is saying it does not have enough field data for this URL and so is displaying the origin-level data for all the pages on your site.
Google Search Console also shows field-based data, but it groups pages it thinks might be similar and gives them all the same CLS score. This grouping is done at a lower level than the whole origin, so it may, for example, group all your product pages together if it has data for some of them.
This page, for example, has a hover effect that causes a huge CLS when it's used - though I can't quite see why, as I'm finding it difficult to trigger manually in dev tools.
It is possible those pages are the ones with the actual CLS issue and they are being lumped in with your good page under the product-pages group. I would investigate whether you can reduce the CLS of that hover effect.
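If it helps to pin down that hover effect, a small snippet like the one below, pasted into the DevTools console before interacting with the page, will log every layout shift together with the elements that moved. This is just a sketch using the standard Layout Instability API; nothing in it is specific to your page.

    // Log each layout shift and the elements that moved, so the hover-triggered
    // shift can be caught even when it is hard to reproduce by hand.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // entry.hadRecentInput flags shifts within 500ms of discrete user input;
        // those are excluded from CLS, but are logged here anyway.
        console.log('layout-shift value:', entry.value, 'hadRecentInput:', entry.hadRecentInput);
        for (const source of entry.sources || []) {
          console.log('  moved:', source.node, source.previousRect, source.currentRect);
        }
      }
    }).observe({ type: 'layout-shift', buffered: true });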
Separately, I notice you are lazy-loading your images but not specifying width and height on them. This can lead to CLS if the images are not loaded by the time that area scrolls into view - for example, in this test, when linking to a section near the bottom of the page. It is recommended to always include image dimensions on lazy-loaded images (and in fact on all images!) to avoid this. This could be another reason for your high CLS, one which is not as evident in lab-based tools that typically only load the top of the page without lazy-loaded content.
I also recommend reading Debugging Web Vitals in the Field, and the more advanced Measure and debug performance with Google Analytics 4 and BigQuery, for more information on why your field-based metrics may differ from what you can observe yourself.
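A quick way to find the affected images is a console snippet along these lines (a sketch; it just checks the standard loading, width and height attributes):

    // List lazy-loaded images that have no explicit width/height attributes,
    // since these can shift surrounding content when they finally load.
    document.querySelectorAll('img[loading="lazy"]').forEach((img) => {
      if (!img.hasAttribute('width') || !img.hasAttribute('height')) {
        console.warn('Missing dimensions:', img.currentSrc || img.src);
      }
    });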
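As a rough illustration of what those guides describe, you can accumulate layout shifts on real visits and beacon the result to your own endpoint when the page is hidden. The sketch below simply sums all shifts rather than using the session-window logic of the official CLS definition, and '/analytics' is a placeholder endpoint; the web-vitals library is the production-ready way to do this.

    // Sketch: accumulate layout shifts in the field and beacon them on page hide.
    let clsTotal = 0;
    let worstShift = null;

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (entry.hadRecentInput) continue; // excluded from CLS
        clsTotal += entry.value;
        if (!worstShift || entry.value > worstShift.value) worstShift = entry;
      }
    }).observe({ type: 'layout-shift', buffered: true });

    addEventListener('visibilitychange', () => {
      if (document.visibilityState !== 'hidden') return;
      const node = worstShift && worstShift.sources && worstShift.sources[0] && worstShift.sources[0].node;
      const target = node && node.outerHTML ? node.outerHTML.slice(0, 100) : 'unknown';
      // '/analytics' is a placeholder; point this at your own collection endpoint.
      navigator.sendBeacon('/analytics', JSON.stringify({ cls: clsTotal, target: target }));
    });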

Related

EWW: very slow opening of certain pages

I'm using the EWW Emacs browser to open various remote pages (mostly documentation), which is very handy most of the time.
I'm still trying to understand why certain pages take 4-6+ seconds to render in eww (they take <1s in Chrome, for comparison).
For my tasks, I only care about the loaded content - images and fancy styles are not needed.
Is there any simple way to speed it up?
Like setting readable mode/disabling images before calling eww? If that's possible at all.
Update from a few weeks later
I ran a few experiments, and from what I found, the biggest contributing factor in my case is pages with lots of third-party fonts.
I wasn't able to find a way to disable font fetching in the eww source code, so a true text-based browser like w3m was probably a better solution in the first place.
Any clarification comments and answers are still very welcome.

How can I download the statistics (or time series) of Facebook messages exchanged between myself and my significant other?

I have tried searching for a way to download some basic stats, or even a time-series plot, tallying the amount of Facebook Messenger activity between myself and my significant other. This is a much-needed step to expedite some paperwork we need for a government application. Scrolling through a multi-year history and counting which days we used Messenger seems torturous. Even if there were just a way to download a vector of timestamps of messages between us, that would help a lot; I could code my own plot for the paperwork. Some blogs have mentioned using the "download a copy of your Facebook data" link from the support inbox, but I do not see that link on the appropriate page anymore. Does anyone know where it moved to?
Thank you.
I haven't tried this myself, but hopefully it works:
Go to your settings (the little downward-pointing arrow, then choose Settings), and right below all the options within the main box there should be a link saying "Download a copy of your Facebook data." Click the link and start the archive, and apparently it should work. Whether or not it includes timestamps and such is a mystery to me (but likely).
Edit: If this does not work, then depending on your computer and the volume of messages, you could also open Messenger in a mobile browser (to reduce the processing power needed), scroll up within your conversation all the way to the top, then Ctrl+A and copy everything into a Word document. But obviously this would not work for conversations in the multiple-thousand-message territory.
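If the archive does include per-conversation JSON files (recent exports put each thread in its own folder with a message_1.json containing a messages array of objects with a timestamp_ms field; I'm assuming that format here, so check your own download), a short Node script could tally messages per day for the paperwork:

    // Sketch: count Messenger messages per day from a Facebook data export.
    // Assumes the JSON export format with messages[].timestamp_ms; adjust the
    // file name/path to match your archive.
    const fs = require('fs');

    const data = JSON.parse(fs.readFileSync('message_1.json', 'utf8'));
    const perDay = {};
    for (const msg of data.messages || []) {
      const day = new Date(msg.timestamp_ms).toISOString().slice(0, 10); // YYYY-MM-DD
      perDay[day] = (perDay[day] || 0) + 1;
    }
    console.log(perDay);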

Testing Facebook Messenger Scan Code

Facebook recently announced the introduction of Messenger codes, which can be used to add new contacts and, more importantly, to communicate directly with businesses and business pages (which is why I'm interested in them).
It took me ages to find it, but on the bottom left of the Messages tab on my Facebook page I have the option to download my code in three different sizes: clicking the disc opens a modal window where you can click the Download button and choose from 300, 600 or 1000px PNG file downloads.
NOTE: While they are PNG files, the background is not transparent, which seems like a bit of an oversight to me, but hey ho, that's what Photoshop is for, I guess.
The problem is that while I can download my code, I can't find any way to test it on printed materials (or even electronically at the moment!). The scanning feature doesn't seem to have been rolled out for me yet (I tried re-installing the Messenger app to see if I got a newer version, but that didn't work), nor for anyone I know (I'm in the UK). The codes are bespoke to Messenger, so they can't be scanned or tested using any other app.
I'm probably too far ahead of the game, but is there any way I can test whether my code scans correctly, or anywhere I can go to find out? I would like to use it on some promotional material that is likely to be long-term and that I don't want to have to update in the near future (several years, by which time it's likely these codes will be more commonplace).
I also need to know what the redundancy is like. For example, the high-redundancy QR codes I generate can have up to 30% of the code covered while still being usable, which is great for design purposes. I can't find any official documentation for these codes at all yet, let alone what is required, what the spec is, etc.
I know the most likely option is 'sit and wait' but I really would rather not if possible. I've never been very patient...
Thanks
UPDATE: My Messenger app has now been updated so I can test, but I'm leaving this here in case anyone knows of another way to test perhaps? If someone doesn't have Messenger on their phone for example.

Mysterious severe performance issue on mobile Safari for just one web page

I have a very large (as in feature-rich) responsive website. It consists of over 150 different UI pages, and so far both rendering and performance on mobile are fine (I'm using an iPhone 5 to test, and occasionally other devices).
Except for one page, which I am coding now. Here's the temporary dev URL:
http://www.jungledragon.org/apps/jd3/daylight
On Mobile Safari, this page performs extremely poorly:
- It takes several seconds to load, much slower than all other pages
- Once loaded, a touch scroll can take 5-10 secs to do anything
- Mobile Safari as a whole becomes non-responsive, or close to it
I'm trying to troubleshoot the root cause of the issue, but no luck so far. I cannot reproduce this on any desktop browser using a small viewport, not even on desktop Safari. On the desktop, I've used several web debuggers to check for any long-running processes, but found none.
Some explanation on what the page does:
It will try to detect your current location (using alerts, I discovered this takes little time)
Based on your current location and the current date, it will calculate the sun times for the day. This too is nearly instant
Based on the suntimes, it will dynamically generate a table, and then finally show it on screen
Here's what I am seeing in detail on mobile Safari:
The server response is fine, the page loads quickly and shows the site header soon
Next, the content body is blank and stays blank for several seconds (which I cannot explain)
Finally, the suntimes table renders.
This completes the page, yet from this point on, both the page and the browser are extremely sluggish: scrolling takes forever, and Safari's controls are nearly unresponsive. It looks and feels as if the browser could crash at any moment.
Based on my research so far, and given fine performance in all other pages on the site, I'm totally in the dark on what causes this.
Edit: Using BrowserStack I did some more tests:
iPhone 4S: no issues
iPhone 5S: no issues
Galaxy SII: no issues
HTC One X: no issues
iPhone 5: same issue as above
So I'm not seeing the issue on any desktop browser, and on no mobile device except for the iPhone 5 (iOS7).
Edit2: adding more findings and explanation based on comments received:
The issue does not seem animation-related. For this I have a number of proof points. A simple proof point is that the page does not do any visual rendering much different from any of the other 100+ pages on the site, which have no performance issues.
The 2nd proof point can be explained by understanding what is going on in this specific page. What happens is this:
The system will detect the current user's time and location. For now, assume that the user actually allows location sharing. Using a simple alert, I've been able to prove that location detection is not the bottleneck.
Based on the user's time and location, the daylight periods are calculated. This is done by using the Suncalc JS library (https://github.com/mourner/suncalc).
The Suncalc library returns an array of daylight periods for the given date and location. I render that array as a table with colored background rows. That is all.
Rendering a table with 12 rows and different background colors is not likely to cause such enormous issues. My theory therefore is that step 2 is the root cause. The Suncalc library has a lot of advanced math in it. I am thinking (without evidence yet) that either my mobile processor is horrible at those kinds of operations, and/or the specific calculation for some reason causes a peak in memory usage (or even a leak).
As an additional proof point: once the page is loaded on mobile, use the right arrow next to the date to navigate to "tomorrow". Again you will see the extremely bad performance. During that step, there is no network activity, no location detection, nothing, just calculations and some very simple rendering. This validates my theory that perhaps the issue lies in the calculation.
Sadly, it looks like native JavaScript profilers on that platform are non-existent. You may also want to try the JavaScript microtime function referenced in this answer. You will need to seed your script with calls at the points where you think the bottleneck might be.
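As a low-tech alternative, you can bracket the two suspect steps with timestamps and surface the result with alert(), since you are already using alerts on the device. A rough sketch; getSunTimesForDate() and renderSunTable() are hypothetical stand-ins for your own calculation and rendering code:

    // Time the Suncalc-based calculation and the table rendering separately.
    // getSunTimesForDate() and renderSunTable() are placeholders for your code.
    var t0 = Date.now();
    var periods = getSunTimesForDate(new Date(), latitude, longitude); // hypothetical
    var t1 = Date.now();
    renderSunTable(periods); // hypothetical
    var t2 = Date.now();
    alert('calculation: ' + (t1 - t0) + 'ms, rendering: ' + (t2 - t1) + 'ms');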
Just ran this through Chrome remote debugger (https://developers.google.com/chrome-developer-tools/docs/remote-debugging) on my S3, and it looks like Modernizr's cancelZoom function (showing up in jd3_0006.js) is getting called recursively too many times or by too broad a selector. I've uploaded the profiles into dropbox: https://www.dropbox.com/s/kubxk44smm6qqkx/jungledragon_debug..zip
You can import them into Chrome's debugger on the "Profiles" tab.
I believe your performance problem centers around the use of navigator.geolocation.getCurrentPosition() in your runMap() function:
if (urlDate != null) {
    urlPos(latitude, longitude);
} else {
    if (navigator.geolocation) {
        $(".img-loading").show(100);
        // Request the current position (10-minute cache, 10-second timeout).
        navigator.geolocation.getCurrentPosition(successPos, errorPos, { maximumAge: 600000, timeout: 10000 });
    } else {
        errorPos('');
    }
}
Consider using watchPosition() instead, with a callback, which will not halt processing of the script thread. You can cancel the watchPosition() updates by using clearWatch().
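A sketch of that approach, reusing your existing successPos/errorPos callbacks (the watchId handling is mine, cancelling the watch after the first fix):

    // Sketch: watchPosition() with the same callbacks, cancelled after the first result.
    var watchId = navigator.geolocation.watchPosition(function (pos) {
        navigator.geolocation.clearWatch(watchId); // stop watching once we have a fix
        successPos(pos);
    }, errorPos, { maximumAge: 600000, timeout: 10000 });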
So I've played with this some more, and ran the "Timeline" feature in Chrome (load this file into your Chrome Timeline tool: https://www.dropbox.com/s/2vpl6z1ntuk3aqj/TimelineRawData-20140328T105820.json), and it looks like this might be your main problem.
Your scripts and libraries (including Google Maps and jQuery) are being evaluated AFTER the HTML is parsed and Google Analytics has run, because they are at the bottom of the body, not in the head. Unless you have a very good reason to do that, I would recommend moving them to the head.
There seems to be a separate problem with scrolling, but perhaps it will be resolved by this change.

Facebook struggles to scrape one domain

I have already checked out this question, and it sounds like he's describing the exact same problem as me, except for a few things:
I'm not running on https
80% of the time I try to debug, I get this message: "Error parsing input URL, no data was scraped."
The scraper works perfectly on a different domain on the same server, with the same theme and almost identical content. Every time I try that domain, it scrapes it perfectly, including the image
During the 20% of the time it actually scrapes my page, I am having the same issue as in the link above. It is reading my thumbnail, yet showing a blank image. The link leads to a working image, but the debugger doesn't want to show anything.
The weird part is it worked completely fine about 10 months ago, when I updated this blog on a daily basis. The only difference is that I've switched servers recently. While that would be a possible explanation, the other domain switched as well and doesn't have this problem.
I am at a loss as to why my links either show no image at all on Facebook or give me:
Domain Link
Domain
(no image, no description)
Very frustrating situation. Does anyone have any suggestions?
Update:
I have 6 domains...
When I moved servers recently, I found the new server wasn't set up to compress the pages, so my blog posts looked crazy. This forced me to turn compression 'off' in WP Super Cache on my main blog. I also did it on my second-highest-traffic blog, figuring I'd get to the other 4 later.
Well, now those first two blogs appear to work fine in the Facebook debugger, but the remaining 4 have trouble. The tricky part is that I completely removed WP Super Cache from one site and still had trouble fetching the data.
So while it seems like it logically should have been a WP Super Cache issue, continuing to get errors despite removing it leads me to believe otherwise. I'm still so baffled.
Update:
OK, I loaded Chrome and IE, and both were able to pull the data with ease. The Google snippet tool also worked great. I am going to try posting a link to my Facebook fan page via Chrome and see if it works correctly.
I did clear my Firefox cache and it didn't change anything, but I am still confused as to why one domain works OK while the other does not. Either way, if posting from Chrome works, I'll stick with that for now.
Any other suggestions?
Caching should not cause any problems. If a browser can see your page, so can the Facebook debugger.
Check whether a 500 error is occurring. Try from a different browser, clear the browser cache, etc. Try the Google rich snippet tool and see whether a custom search engine scrapes it fine.
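One way to see roughly what Facebook's scraper gets back is to request the page yourself with its user agent and check the status code and og:image tag. A sketch, assuming Node 18+ with the built-in fetch; replace the placeholder URL with one of the affected posts:

    // Fetch the page as Facebook's crawler would and report what comes back.
    const url = 'https://example.com/affected-post/'; // placeholder URL
    fetch(url, { headers: { 'User-Agent': 'facebookexternalhit/1.1' } })
      .then((res) => {
        console.log('Status:', res.status);
        return res.text();
      })
      .then((html) => {
        const match = html.match(/<meta[^>]+property="og:image"[^>]*>/i);
        console.log(match ? match[0] : 'No og:image tag found');
      })
      .catch((err) => console.error('Request failed:', err));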
PS: It would be nicer if you posted the URL.