Why is the number of mobile-friendly pages in the Google Search Console Mobile Usability report less than the number of pages indexed?

The Mobile Usability report for one of the websites I maintain is currently showing 215 Valid (mobile-friendly pages). At the same time, the Coverage report shows that a total of 399 pages are Valid (have been indexed).
I downloaded a list of all the URLs that have been indexed and a list of all URLs that are currently considered mobile-friendly pages. Then I compared the two lists and started checking several of the URLs that are indexed but not shown as mobile-friendly using the URL Inspection tool.
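For anyone who wants to reproduce the comparison, here is a minimal sketch of diffing the two exports. The file names and the "URL" column header are assumptions about how the downloaded CSVs are named and structured, so adjust them to match your own exports:

```python
# Compare the Coverage export (indexed URLs) with the Mobile Usability export
# (mobile-friendly URLs). File names and the "URL" column header are
# placeholders for however your Search Console CSV downloads are laid out.
import csv

def load_urls(path, column="URL"):
    """Read one column of URLs from a CSV export into a set."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip() for row in csv.DictReader(f) if row.get(column)}

indexed = load_urls("coverage_valid.csv")
mobile_friendly = load_urls("mobile_usability_valid.csv")

missing = sorted(indexed - mobile_friendly)
print(f"{len(missing)} indexed URLs are not listed as mobile-friendly:")
for url in missing:
    print(url)
```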
The URL Inspection results for all of the URLs that I have checked show the page as mobile-friendly.
The Mobile Usability report shows 5 URLs with Errors, so between the 215 valid pages and the 5 with errors I have information about 220 of the 399 indexed URLs, leaving 179 with no mobile usability status at all.
I would like to understand what it means that some URLs are currently indexed but are neither listed as mobile-friendly nor reported as having mobile usability issues.
Additional info:
Two months ago (around November 15), the number of mobile-friendly pages had increased to 248, with no pages showing Errors. That number then decreased until it reached the current value, but no corresponding errors were reported.
It is as if some pages were simply removed from the Mobile Usability report for no explicit reason.
The number of indexed pages increased by 1 during that same period of time.
There was a Google Search update on November 25 indicating that some reports will show data primarily from mobile-first indexing. Unfortunately, it is still not clear to me why indexed pages > (mobile-friendly pages + pages with mobile issues).
Is it incorrect to expect errors to show for all indexed URLs that are not considered mobile-friendly?
Thank you for taking the time to review this question.

Same problem here across multiple sites: zero errors, yet an initial rise in pages followed by a decrease to the current level, which represents roughly half of my total indexed pages. I have spent many days researching and trying a variety of tests to see if I could figure it out, all to no avail. I have changed internal links, menus, footers, and sitemaps, removed JavaScript, etc. None of the data showed any changes related to anything I did, nor is there any correlation between websites. The only pattern I can see is that the report seems to prioritize pages roughly according to page value (the number of internal links and the proximity of links to the home page), as menu items and pages linked from the home page are represented more, but I could not trigger any changes by playing with those factors. I am not seeing any mobile SERP issues related to it, so I have temporarily given up.
In the end, I can only conclude that Google is prioritizing moving everyone to mobile-first indexing, so it is only allocating resources (caching, indexing, crawl budget) to what it deems the important pages on each website until it finishes mobile-first indexing all websites.
I have also been searching for others with the same problem, but you are the first I have found who has posted about it.

Google is using mobile crawlers, and it seems that Google's crawlers have difficulty distinguishing mobile from desktop pages; if the pages are not set up properly, Google sees fewer mobile pages than actually exist.


GitHub page can only be found on Google when typing "username" AND "github"

I have created a personal website using the Academic Theme for Hugo. I am hosting the page on GitHub.
The site works, but it is unfortunately very hard to find on Google. Specifically, if I type my username followed by "github", it appears as the first result. However, it doesn't show up at all if I type just my username into Google; I went through the results up to page 8.
Any help would be greatly appreciated. It may be useful to know that Google Search Console has not found any issues. Also, the page shows up as the first result on both Bing and DuckDuckGo when typing just my username.
This is the page: https://michagermann.github.io/
This has to do with Search Engine Optimisation (SEO).
Basically, search results work like this: Google has bots that crawl the accessible pages on the internet and compile keywords for each page they hit; these are then matched against the search phrases people use. So "username" + "github" is an easy one, as that makes up the majority of your URL. Just your username, however, will match many other results from pages that contain your username, some of them multiple times, others only once but having been around for a lot longer. There are a lot of variables in SEO, but there are guides which can help with this.
Google's Starter Guide for SEO: https://support.google.com/webmasters/answer/7451184?hl=en
-- edit --
I would also hazard a guess that Google is pulling back a lot of your publications, which Bing and DuckDuckGo aren't, and as these will likely have been accessed more often, I expect they sit higher in the search algorithm.
--- edit 2 ---
Link Building
This is very important for SEO: it is where external sites link to your site. The easiest way to achieve this is through your personal profiles on Twitter, LinkedIn, GitHub, etc.
Writing blog posts can also get other people to link to your profile and thus improve your link building.
DO NOT PAY FOR LINK BUILDING
Link building for Google is based on high-quality sites - every site has a ranking, and a low-quality site will have a much smaller effect on your SEO score, and thus not result in any noticeable movement. Paid link building usually involves low-quality sites.
Site Maps
If you have a multi-page site (yours isn't), then sitemaps help search bots navigate the important pages more easily and can help increase rankings.
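Hugo should already generate a sitemap.xml for you out of the box, but for context, here is a minimal sketch of what producing one yourself looks like under the sitemaps.org protocol (the URL list is purely illustrative):

```python
# Minimal sitemap.xml generator using the sitemaps.org <urlset>/<url>/<loc>
# elements. The URL list is illustrative; Hugo normally emits this for you.
import xml.etree.ElementTree as ET

def build_sitemap(urls, out_path="sitemap.xml"):
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page_url in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = page_url
    ET.ElementTree(urlset).write(out_path, encoding="utf-8", xml_declaration=True)

build_sitemap([
    "https://michagermann.github.io/",
    "https://michagermann.github.io/publications/",  # hypothetical sub-page
])
```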
Meta Tags
These are extremely important, although some tags matter more than others: title (included via its own element rather than a meta tag), author, and description are some of the more important ones.
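A quick way to see what your page currently exposes is to pull the title and meta tags and check that description and author are present; a minimal sketch using only the standard library (the URL is the one from the question):

```python
# Print the <title> and common <meta> tags of a page (stdlib only).
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.metas = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and "name" in attrs:
            self.metas[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = urlopen("https://michagermann.github.io/").read().decode("utf-8", "replace")
parser = MetaExtractor()
parser.feed(html)
print("title:", parser.title.strip())
for name in ("description", "author"):
    print(f"{name}:", parser.metas.get(name, "(missing)"))
```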
I'm not an SEO specialist and haven't done much SEO for a few years, so this is from old experience and I can't guarantee it is all correct as of writing; however, I expect it hasn't changed that much. SEO is a complex area and search engines have different preferences, but hopefully this helps. A lot of SEO benefit comes with time rather than right away through link building, so that is also something to keep in mind.
First of all, learn how Google Search works. If you simply type your name, it won't necessarily show your website, because there may be multiple more highly prioritized websites with that name ahead of it.
And if you type yourname.github, that is essentially the direct address of your page, which is why it is shown in first place.

Why do some pop-up ads redirect through multiple domains?

I noticed a lot of shady websites use ads with multiple redirects before showing the content of the ad.
I do not want to link to any of these (probably) illegal content distribution sites, but this effect is easily found when browsing through streaming sites for TV series and the like.
Basically, it works like this:
User interaction (mostly click) opens popup
popup shows firstdomain.com without content
redirects to seconddomain.com
redirects to thirddomain.com
...
finally shows the ad, often a legit one, but this varies from sports betting to adult social media
Is there any upside to these multiple redirects? And why are they set up this way?
You're likely being thrown from one TDS to another:
A TDS is a web based gate that is able to redirect users to various content depending on who they are. A TDS is able to make a decision on where to send a user based on criteria such as their geo-location, browser, operating system, and whether or not they have been sent the malicious content already. There are many legitimate uses of TDSes, but there are also specific TDSes (Sutra, BlackOS, NinjaTDS etc.) written for malware actors.
Also from here:
As discussed above, TDS are not malicious elements per se within the Internet ecosystem, as they are very useful for the operation of e-commerce and online marketing, but also constitute a good malware distribution platform.
...
To avoid detection and make it difficult to track these downloads, it is possible to link several TDSs between them.
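For anyone curious, you can observe such a chain yourself by following the HTTP hops without executing any page content. A minimal sketch: it only traces 3xx Location redirects, so JavaScript or meta-refresh hops (which many of these pages use) won't show up, and the starting URL is a placeholder:

```python
# Trace an HTTP redirect chain hop by hop without running any page scripts.
from urllib.error import HTTPError
from urllib.parse import urljoin
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # don't follow automatically; we want to log each hop

def trace(url, max_hops=10):
    opener = urllib.request.build_opener(NoRedirect)
    chain = [url]
    for _ in range(max_hops):
        try:
            resp = opener.open(url, timeout=10)
            print(f"{resp.status} final: {url}")
            break
        except HTTPError as e:
            location = e.headers.get("Location")
            if e.code in (301, 302, 303, 307, 308) and location:
                url = urljoin(url, location)
                chain.append(url)
                print(f"{e.code} -> {url}")
            else:
                print(f"{e.code} at {url}")
                break
    return chain

trace("http://firstdomain.example/popup")  # placeholder for the first ad domain
```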

How to A/B test an entire website design

We're building a new website design, and instead of cutting over to it 100%, we'd like to ease into it so we can test as we go. The goal would be for users who visit http://oursite.com to get either the "old" website or the new one, and we could control the percentage who get the new site: 10%, 50%, etc.
I'm familiar with A/B tests for pages, but not an entire website domain. We're on a LAMP stack so maybe this can be done with Apache VHosts? We have 2 cloud servers running behind a cloud load balancer in production. The new site is entirely contained in an svn branch and the current production site runs out of the svn trunk.
Any recommendations on how I can pull this off?
Thank you!
You absolutely can do this, and it's a great way to quickly identify things that make a big difference in improving conversion rates. It's dependent on a couple of things:
Your site has a common header. You'll be A/B testing CSS files, so this will only work if there's a single CSS call for the entire site (or section of the site).
You're only testing differences in site design. In this scenario all content, forms, calls to action, etc. would be identical between the versions. It is technically possible to run separate tests on other page elements concurrently. I don't recommend this as interpretation of results gets confusing.
The A/B testing platform that you choose supports showing the same version to a visitor throughout their visit. It would be pretty frustrating for visitors to see the site's theme change every time they hit another page. Most A/B testing platforms that I've used have an option for this.
Since you're testing substantial differences between versions, I also recommend that you calculate sample sizes before you begin. This will keep you from showing the losing version to too many people, and it will also give you confidence in the results (it's a mistake to let tests run until they reach statistical significance). There are a couple of online calculators that you can use (VisualWebsiteOptimizer, Evan Miller).
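On the mechanics of the split itself: whichever layer does it (the load balancer, Apache, or the application), the important property is a deterministic, sticky assignment, so a given visitor always lands on the same version and the rollout percentage can be raised without reshuffling existing visitors. A minimal sketch of that idea, assuming a visitor ID stored in a cookie (the function name and cookie value are hypothetical):

```python
# Deterministic, sticky bucketing: the same visitor ID always maps to the
# same variant, and raising the rollout percentage keeps earlier visitors
# on the variant they already saw.
import hashlib

def assign_variant(visitor_id: str, new_site_percentage: float) -> str:
    """Map a visitor to 'new' or 'old' based on a stable hash of their ID."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "new" if bucket < new_site_percentage / 100 else "old"

# Example: start at 10%, later raise to 50% -- visitors in the first 10%
# stay on the new site.
print(assign_variant("visitor-cookie-abc123", 10))
print(assign_variant("visitor-cookie-abc123", 50))
```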

[MoinMoin Wiki] Show only specific pages in RecentChanges page

How can I restrict the RecentChanges page in MoinMoin so that only specific pages are listed?
Since I have a huge wiki site whose pages are organized hierarchically, I would like the RecentChanges page to show only a limited set of pages according to different hierarchical page paths.
That's not possible with moin 1.x.
There is even a reason for it (which might not apply in your case, but applies in the general case):
Wikis live on so-called soft security: people see changes via RecentChanges and will look at them, reverting any bad changes they find (e.g. spam, malicious edits, stuff gone wrong, people playing around in the wrong places, etc.).
If you reduce what they see, soft security gets degraded, as everybody ends up looking only at their own stuff.

Geolocation APIs: SimpleGeo vs CityGrid vs PublicEarth vs Twitter vs Foursquare vs Loopt vs Fwix. How to retrieve venue/location information?

We need to display meta information (e.g, address, name) on our site for various venues like bars, restaurants, and theaters.
Ideally, users would type in the name of a venue, along with zip code, and we present the closest matches.
Which APIs have people used for similar geolocation purposes? What are the pros and cons of each?
Our basic research yielded a few options (listed in title and below). We're curious to hear how others have deployed these APIs and which ones are ultimately in use.
Fwix API: http://developers.fwix.com/
Zumigo
Does Facebook plan on offering a Places API eventually that could accomplish this?
Thanks!
Facebook Places is based on Factual. You can use Factual's API which is pretty good (and still free, I think?)
http://www.factual.com/topic/local
You can also use unauthenticated Foursquare as a straight places database. The data is of uneven quality since it's crowdsourced, but I find it generally good. It's free up to a certain API limit, but I think the paid tier is negotiated.
https://developer.foursquare.com/
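As an illustration of the "venue name + zip code -> closest matches" flow from the question, here is a rough sketch against the Foursquare v2 venues/search endpoint. The endpoint and parameter names (query, near, client_id/client_secret, the v version date) are from the v2 API as I remember it, so verify them against the current docs before relying on this:

```python
# Rough sketch of a "venue name + location -> closest matches" lookup against
# the Foursquare v2 venues/search endpoint. Endpoint and parameter names are
# assumptions based on the old v2 API -- check the current documentation.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def search_venues(query, near, client_id, client_secret, limit=5):
    params = urlencode({
        "query": query,          # e.g. a bar or restaurant name
        "near": near,            # free-text location, e.g. a zip code
        "limit": limit,
        "client_id": client_id,
        "client_secret": client_secret,
        "v": "20120101",         # version date required by the v2 API
    })
    with urlopen(f"https://api.foursquare.com/v2/venues/search?{params}") as resp:
        data = json.load(resp)
    return data.get("response", {}).get("venues", [])

for venue in search_venues("coffee", "10001", "YOUR_CLIENT_ID", "YOUR_CLIENT_SECRET"):
    print(venue.get("name"), "-", venue.get("location", {}).get("address", ""))
```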
I briefly looked at Google Places but didn't like it because of all the restrictions on how you have to display results (Google wants their ad revenue).
It's been a long time since this question was asked, but here is a quick update for other people.
This post, at least right now, will not go into great detail about each service, but merely lists them:
http://wiki.developer.factual.com/w/page/12298852/start
http://developer.yp.com
http://www.yelp.com/developers/documentation
https://developer.foursquare.com/
http://code.google.com/apis/maps/documentation/places/
http://developers.facebook.com/docs/reference/api/
https://simplegeo.com/docs/api-endpoints/simplegeo-context
http://www.citygridmedia.com/developer/
http://fwix.com/developer_tools
http://localeze.com/
They each have their pros and cons (e.g. Google Places only allows 20 results per query; Foursquare and Facebook Places have semi-unreliable results), which are explained in a bit more detail, although not entirely, at the following link: http://www.quora.com/What-are-the-pros-and-cons-of-each-Places-API
For my own project I ended up going with Factual's API, since there are no restrictions on what you do with the data (one of the only ToS documents I've read in its entirety). Factual has a pretty reliable API, and as a user of the API you may update, modify, or flag rows of the data. Facebook Places bases its data on Factual's, just another fact to add some perspective.
Hope I can be of help to any future searchers.
This is not a complete answer, because I haven't compared the given geolocation APIs, but there is also the Google Places API, which solves a similar problem to the other APIs.
One thing about SimpleGeo: the Location API of SimpleGeo mainly supports US (and Canada?) based locations. The last time I checked, my home country, Germany, didn't have many known locations.
A comparison between places data APIs is tough to keep up to date, given the fast pace of the space and acquisitions like SimpleGeo and HyperPublic changing the landscape quickly.
So I'll just throw in CityGrid's perspective as of February 2012. CityGrid provides 18M US places, allowing up to 10M requests per month for developers (publishers) at no charge.
You can search using a wide range of "what" and "where" (Cities, Neighborhoods, Zip Codes, Metro Areas, Addresses, Intersections) searches including latlong. We have rich data for each place including images, videos, reviews, offers, etc.
CityGrid also has a developer revenue-sharing program where we'll pay you to display some places, as well as a large mobile and web advertising network.
You can also query places via the CityGrid API using Factual, Foursquare, and other providers' place and venue IDs. We aggregate data from several places data providers through our system.
Website: http://developer.citygridmedia.com/