Google Console - Error: Clickable elements too close together - Affected pages: 0 - google-search-console

Google's automatic mobile issue detection is telling me that my site has clickable elements that are too close together, yet that this problem affects zero pages.
I'm assuming it's a Google bug?

You can safely ignore it. I receive the same email warnings. There is absolutely nothing wrong with my page (a specific page is listed in my warning). As in your case, the warnings arrive irregularly: sometimes the site reports 0 errors, sometimes this 1 error, even though the site hasn't changed in a few years. The whole Google webmaster tools offering has been poor quality from day one.

Related

Need Help: Can you identify the Desktop CLS (Cumulative Layout Shift) on my URL?

URL: tinyurl.com/36utmdnc
In Google Search Console, I am seeing thousands of errors for Core Web Vitals, particularly the CLS in Desktop mode. I've made a number of changes, such as removing all ads, tweaking the CSS, HTML and other code. I've been making changes for a year actually, but I've really tried tackling this problem over the past 2-3 months, since "Page Experience" is showing a big reduction in "good URLs" and I believe it's now affecting my traffic volume from Google Search. It's been weeks now and the CLS is not changing in Search Console. I've tried validating for a couple of months now.
The example URL above (a product page) has a CLS score of 0.33 according to Google Search Console / Core Web Vitals. It seems all of the errors are on my product pages like the example above. I've run tests in PageSpeed Insights, which shows a CLS of "0". I understand the reports shown in Google Search Console come from the "Chrome User Experience Report", which is different from PageSpeed Insights' "lab setting".
Here are things I've done:
Opened Chrome Developer Tools and checked the "Layout Shift Regions" checkbox, which flashes regions blue during a layout shift. Also enabled the "Core Web Vitals" overlay, which adds up CLS throughout the entire browsing session.
Opened the Network tab, throttled the speed down to a very slow preset, and carefully watched for layout shifts while the page loaded slowly.
Carefully read the guides on web.dev/cls/ , ran Chrome's LightHouse test, ran tests on various sites like webvitals.dev/cls , defaced.dev/tools/layout-shift-gif-generator/ , webpagetest.org/webvitals, etc.
Manually tested using different screen resolution widths/heights using Chrome Developer Tools (800px width, 1400px width, 8000px height, etc).
Asked tech-savvy friends/users to also check and help identify the CLS.
I can't find anything that could cause a large CLS of 0.33. The number of CLS errors in Search Console stays roughly steady, fluctuating by about 100 URLs every day, but the same example URL above has been stuck there for months. So I was hoping someone with more knowledge could identify the underlying issue.
Thanks
I cannot see a CLS score of 0.33 for that page, and agree that testing it myself shows very little CLS.
You are correct that PageSpeed Insights is lab-based, but it also shows field-based data at the top, and in this case it says it does not have enough field-based data for this URL, so it displays origin-level data aggregated across all the pages of your site.
Google Search Console also shows field-based data, but it groups pages it thinks might be similar and gives them all the same CLS score. This grouping is more granular than the whole origin, so it may, for example, group all your product pages together if it has data for some of them.
This page, for example, has a hover effect that produces a huge CLS when it's used, though I can't quite see why, as I'm finding it difficult to trigger manually in dev tools.
It is possible those pages are the ones with the actual CLS issue and they are being lumped in with your good page under a product-pages group. I would investigate whether you can reduce the CLS on that hover effect.
Separately, I notice you are lazy loading your images but not specifying width and height on them. This can lead to CLS if the images are not loaded by the time that area scrolls into view, for example in this test, which links to a section near the bottom of the page. It is recommended to always include image dimensions on lazy-loaded images (and in fact on all images!) to avoid this. This could be another reason for your high CLS, one that is not as evident in lab-based tools, which typically only load the top of the page without triggering lazy-loaded content.
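If you want to see which elements actually shift during a real browsing session (for example, lazy-loaded images scrolling into view), a PerformanceObserver can report each layout shift together with the elements that moved. A minimal sketch you could paste into the browser console; the level of attribution detail depends on the Chrome version:

new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
        // Each layout-shift entry carries a score and whether it followed user input
        console.log('layout-shift score:', entry.value, 'hadRecentInput:', entry.hadRecentInput);
        for (const source of entry.sources || []) {
            // The DOM node that moved, plus its old and new positions
            console.log('shifted element:', source.node, source.previousRect, source.currentRect);
        }
    }
}).observe({ type: 'layout-shift', buffered: true });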
I also recommend you read Debugging Web Vitals in the Field, and the more advanced Measure and debug performance with Google Analytics 4 and BigQuery, for more information on why your field-based metrics may differ from what you can observe yourself.
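As a rough illustration of what those articles describe, recent versions of the web-vitals JavaScript library can report the CLS experienced by real users along with the element most responsible for the largest shift, which you could send to your own analytics. A minimal sketch, assuming you have such an endpoint (the '/analytics' path here is just a placeholder):

import { onCLS } from 'web-vitals/attribution';

onCLS((metric) => {
    // metric.value is the CLS for this page view;
    // metric.attribution.largestShiftTarget is a selector for the element
    // that contributed the largest single shift.
    navigator.sendBeacon('/analytics', JSON.stringify({
        name: metric.name,
        value: metric.value,
        largestShiftTarget: metric.attribution.largestShiftTarget,
        page: location.pathname
    }));
});

Grouping those beacons by largestShiftTarget usually points straight at the offending template element.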

Mysterious severe performance issue on mobile Safari for just one web page

I have a very large (as in feature-rich) responsive website. It consists of over 150 different UI pages, and so far both rendering and performance on mobile are fine (I'm using an iPhone5 to test, and occasionally other devices).
Except for one page, which I am coding now. Here's the temporary dev URL:
http://www.jungledragon.org/apps/jd3/daylight
On Mobile Safari, this page performs extremely poorly:
- It takes several seconds to load, much slower than all other pages
- Once loaded, a touch scroll can take 5-10 secs to do anything
- Mobile Safari as a whole becomes unresponsive, or close to it
I'm trying to troubleshoot the root cause of the issue, but no luck so far. I cannot reproduce this in any desktop browser using a small viewport, not even in desktop Safari. On the desktop, I've used several web debuggers to check for long-running processes, but found none.
Some explanation on what the page does:
It will try to detect your current location (using alerts, I discovered this takes very little time)
Based on your current location and the current date, it will calculate the sun times for the day. This too is nearly instant
Based on the suntimes, it will dynamically generate a table, and then finally show it on screen
Here's what I am seeing in detail on mobile Safari:
The server response is fine, the page loads quickly and shows the site header soon
Next, the content body is blank and stays blank for several seconds (which I cannot explain)
Finally, the suntimes table renders.
This completes the page, yet from this point on, both the page and the browser are extremely sluggish: scrolling takes forever and Safari's controls are nearly unresponsive. It looks and feels as if the browser could crash at any moment.
Based on my research so far, and given fine performance in all other pages on the site, I'm totally in the dark on what causes this.
Edit: Using BrowserStack I did some more tests:
iPhone 4S: no issues
iPhone 5S: no issues
Galaxy SII: no issues
HTC One X: no issues
iPhone 5: same issue as above
So I'm not seeing the issue on any desktop browser, and on no mobile device except for the iPhone 5 (iOS7).
Edit2: adding more findings and explanation based on comments received:
The issue does not seem animation-related. For this I have a number of proof points. A simple one is that the page does not do any visual rendering that is much different from the other 100+ pages on the site, which have no performance issue.
The 2nd proof point can be explained by understanding what is going on in this specific page. What happens is this:
The system will detect the current user's time and location. For now, assume that the user actually allows location sharing. Using a simple alert, I've been able to prove that location detection is not the bottleneck.
Based on the user's time and location, the daylight periods are calculated. This is done by using the Suncalc JS library (https://github.com/mourner/suncalc).
The Suncalc library returns an array of daylight periods for the given date and location. I render that array as a table with colored background rows. That is all.
Rendering a table with 12 rows and different background colors is not likely to cause such enormous issues. My theory is therefore that step 2 is the root cause. The Suncalc library has a lot of advanced math in it. I am thinking (without evidence yet) that either the mobile processor is bad at those kinds of operations, and/or the specific calculation for some reason causes a spike in memory usage (or even a leak).
As an additional proof point: once the page is loaded on mobile, use the right arrow next to the date to navigate to "tomorrow". Again you will see the extremely bad performance. During that step there is no network activity, no location detection, nothing; just calculations and some very simple rendering. This supports my theory that the issue may lie in the calculation.
Sadly, it looks like native Javascript profilers on that platform are non-existent. You may also want to try the Javascript Microtime function referenced in this answer. You will need to seed your script with calls at points where you think the bottleneck might be.
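As a rough way to do that seeding, you could wrap the suspected steps in simple timestamp calls and surface the result with an alert, since there is no console on the device. A minimal sketch, assuming latitude and longitude are already in scope from your geolocation step, and with renderDaylightTable standing in for however you currently build the table:

var t0 = Date.now();

// Step 2: the SunCalc calculation suspected of being the bottleneck
var times = SunCalc.getTimes(new Date(), latitude, longitude);

var t1 = Date.now();

// Step 3: build and insert the daylight table
renderDaylightTable(times);

var t2 = Date.now();

// Surface the timings on the device itself
alert('SunCalc: ' + (t1 - t0) + ' ms, render: ' + (t2 - t1) + ' ms');

If the SunCalc step comes back in a few milliseconds, the math theory is ruled out and the rendering (or something else on the page) becomes the suspect.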
Just ran this through Chrome remote debugger (https://developers.google.com/chrome-developer-tools/docs/remote-debugging) on my S3, and it looks like Modernizr's cancelZoom function (showing up in jd3_0006.js) is getting called recursively too many times or by too broad a selector. I've uploaded the profiles into dropbox: https://www.dropbox.com/s/kubxk44smm6qqkx/jungledragon_debug..zip
You can import them into Chrome's debugger on the "Profiles" tab.
I believe your performance problem centers around the use of navigator.geolocation.getCurrentPosition() in your runMap() function
if (urlDate != null) {
    urlPos(latitude, longitude);
} else {
    if (navigator.geolocation) {
        $(".img-loading").show(100);
        navigator.geolocation.getCurrentPosition(successPos, errorPos, {maximumAge: 600000, timeout: 10000});
    } else {
        errorPos('');
    }
}
Consider using watchPosition() instead, with a callback that will not halt processing of the script thread. You can cancel the watchPosition() updates by calling clearWatch().
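A rough sketch of that pattern, reusing the successPos and errorPos callbacks from the snippet above and clearing the watch once the first usable fix arrives:

var watchId = navigator.geolocation.watchPosition(function (position) {
    // Stop watching once we have a position, then hand it to the existing handler
    navigator.geolocation.clearWatch(watchId);
    successPos(position);
}, errorPos, {maximumAge: 600000, timeout: 10000});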
So I've played with this some more, and ran the "Timeline" feature on Chrome (load this file into your chrome timeline tool: https://www.dropbox.com/s/2vpl6z1ntuk3aqj/TimelineRawData-20140328T105820.json), and it looks like this might be your main problem.
Your scripts and libs (including loading Google Maps and jQuery) are getting evaluated AFTER parsing the HTML and running Google Analytics because they are at the bottom of the body, not head. Unless you have a very good reason to do that, I would recommend moving those to the head.
There seems to be a separate problem with scrolling, but perhaps it will be resolved by this change.

Facebook like button with button_count layout missing space

Since last week, my Facebook like button with count display + send button (the "button_count" layout, as they call it on their plugin page) has been looking weird, missing the blank space there used to be between the "like" count and the "send" button. At first I thought it had something to do with Wordpress and the plugin used to display it, since I first noticed it on my WP-based site, but after investigating I've come to the conclusion that Facebook changed the styling without notice. It looks this way on their own plugin page, tested with different browsers and operating systems.
This is how it looks now. Notice the lack of space between like count and the send button:
This is how it used to look until one or two weeks ago:
Has anyone else noticed this change? I still have not tried to add the missing spacing by any means because I am not yet sure if the change will be permanent or if it is some sort of "bug" by Facebook. I haven't been able to find any reference using Google about this.
Well, after 9 days with no answers, and with Facebook having changed the style of its "like with count + send" button again during this period, restoring the little gap between the buttons, this question has lost its point. It seems they weren't paying much attention to their "old" button since they were about to change it anyway.
It also seems I have a knack for asking the right questions at exactly the right moment! xD

Facebook struggles to scrape one domain

I have already checked out this question, and it sounds like he's describing the same exact problem as me except for a few things:
I'm not running on https
80% of the time I try to debug, I get this message " Error parsing input URL, no data was scraped."
The scraper works perfectly on a different domain on the same server, with the same theme and almost identical content. Every time I try a URL on that domain, it scrapes it perfectly, including the image.
During the 20% of the time that it actually scrapes my page, I am having the same issue as in the question linked above: it reads my thumbnail yet shows a blank image. The link leads to a working image, but the debugger doesn't want to show anything.
The weird part is that it worked completely fine about 10 months ago, when I updated this blog on a daily basis. The only difference is that I've switched servers recently. While that could explain it, the other domain was moved as well and doesn't have this problem.
I am at a loss why my links either show no image at all on Facebook or give me:
Domain Link
Domain
(no image, no description)
Very frustrating situation. Does anyone have any suggestions?
Update:
I have 6 domains...
When I moved servers recently, I found the new server wasn't prepared to compress the pages, so my blog posts looked crazy. This forced me to turn compression 'off' on WP Super Cache on my main blog. I also did it to my 2nd highest traffic blog figuring I'd get to the other 4 later.
Well, now those first two blogs appear to work fine in the facebook debugger, but the remaining 4 have troubles. The tricky part is, I completely removed WP Super Cache from one site and still had trouble fetching the data.
So while it seems like it logically should have been a WP Super Cache issue, continuing to get errors after removing it makes me doubt that now. I'm still so baffled.
Update:
Ok, I loaded Chrome and IE, and both were able to pull the data with ease. The google snippet tool also worked great. I am going to try posting a link to my facebook fan page via chrome and see if it works correctly.
I did clear my FF cache and it didn't change, but I am still confused why one domain works ok while the other does not. Either way, if adding in Chrome works, I'll stick with that for now.
Any other suggestions?
The cache should not cause any problem; if a browser can see your page, so can the Facebook debugger.
Check whether a 500 error is occurring. Try from a different browser, clear the browser cache, etc. Try the Google rich snippet tool and see whether a custom search engine scrapes it fine.
PS: It would be nicer if you posted the URL.

DropDownList postback never finishes on iPad

I've seen several posts about DropDownLists getting cleared, or events not getting fired, but they don't seem to match this situation.
I've got (well I've reduced the problem to) a very simple asp.net website, a master page with a content page. The content page has a single DropDownList with AutoPostback set to True. The code behind updates a Label with the list's selected value. Not using UpdatePanel or AJAX (though I tried using them and I get exactly the same results). It's an intranet site using Windows authentication.
It works fine on IE and Chrome, but every time I try it on my iPad it just sits and spins. The postback appears to be happening, but either nothing's coming back (or being accepted) from the server, or the client just doesn't know how to finish things up, or I don't know what.
Sorry if this seems vague but I've spent two hours on Google and haven't come up with anything other than the fact that a simple page like this should work fine on an iPad, so I'm a little punchy.
Anybody got any pointers or ideas?
EDIT: Running this page through the remote web access portal my company uses, it works fine. So this may be an authentication problem between the iPad and IIS.
Not sure I have an answer, but do you still have the issue if you remove the DropDownList? If you need to build the list based on data, maybe you could use an asp:Repeater to build an HTML select list.