Google Webmaster Tools: Total indexed = 0 & HTML improvements not updated - google-search-console

1) Why does Google Webmaster Tools show Total indexed = 0 for my website?
When I run site:ziedireizija.lv in Google, it shows 59 results.
I have added both the www and non-www versions to Webmaster Tools, and I have set the preferred domain to be the one without www.
When I open non-www in webmaster tools it shows:
Total indexed = 0
Ever crawled = 80
Not selected = 52
What does this mean? Why is Total indexed = 0?
This is for the website ziedireizija.lv.
2) My second question concerns the HTML Improvements section in Webmaster Tools.
It shows Duplicate meta descriptions = 12.
I have updated the meta descriptions for those pages. However, it still shows duplicate meta descriptions, and I can see that Google has not updated these pages (neither the meta description nor the page content). Some time has passed since I made these changes. Why?
Webmaster Tools also shows Last updated Dec 24, 2012, yet Duplicate meta descriptions = 12, and I can see that those pages were not updated.
This could somehow be related to question 1.

The URLs on the HTML Improvements page don't update very quickly, often taking months to refresh.
To encourage Google to update this section, use 'Fetch as Google' (under 'Health' in Webmaster Tools) on pages that you know have been fixed, then click the 'Submit to Index' button when it appears.
Using this technique, I usually see URLs drop off the HTML Improvements page within a couple of days.
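On the first question, the 'Not selected' count typically covers URLs Google crawled but chose not to index, most often because it treated them as duplicates of other URLs, and a www/non-www split is a classic cause. Alongside the preferred-domain setting, declaring a canonical URL on each page helps consolidate the two hostnames; a minimal sketch (the path below is illustrative):

```html
<!-- In the <head> of each page, pointing at the preferred non-www hostname -->
<link rel="canonical" href="http://ziedireizija.lv/some-page">
```

With a consistent canonical on every page, Google has an explicit signal for which of the two hostnames to keep in the index.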

Related

Google tells me that http://mydomain.de.whoisbucket.com/ is not indexed

I got a report from Google Search Console (sc-noreply@google.com):
Page indexing issues detected in http://sprechcomputer.de/
To the owner of http://sprechcomputer.de/:
Search Console has identified that your site is affected by 1 Page indexing issue(s). The following issues were found on your site. We recommend that you fix these issues when possible to enable the best experience and coverage in Google Search.
Top issues
Page with redirect
Fix Page indexing issues
When I open Search Console, it says that as of 4 days ago my URL is no longer indexed.
However, I did not change anything.
It offers "INSPECT URL" and "TEST ROBOTS.TXT BLOCKING".
I click "TEST ROBOTS.TXT BLOCKING", and it tells me that I don't have a robots.txt.
I click "INSPECT URL", and it shows this:
Referring page
http://sprechcomputer.de.whoisbucket.com/
http://www.whoisbucket.com/view/sprechcomputer.de
This is the only anomaly that I can see.
These are not my domains. Is this a bug in Search Console?
What is the problem here?
Also, none of my pages are indexed, according to this screenshot:
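Those referring pages are on whoisbucket.com, which appears to be a third-party WHOIS/statistics site linking to your domain, so they are not something you can fix, and by themselves they don't explain the deindexing. One way to convince yourself that a look-alike hostname such as sprechcomputer.de.whoisbucket.com does not belong to your site is to compare hostnames suffix-wise; a minimal sketch (the helper name is my own):

```python
from urllib.parse import urlsplit

def is_own_host(url: str, own_host: str) -> bool:
    """Return True if the URL's hostname is own_host or a subdomain of it."""
    host = (urlsplit(url).hostname or "").lower()
    own_host = own_host.lower()
    return host == own_host or host.endswith("." + own_host)

# "sprechcomputer.de.whoisbucket.com" merely *starts* with your domain name;
# its registrable host is whoisbucket.com, so it is not one of your domains.
print(is_own_host("http://sprechcomputer.de.whoisbucket.com/", "sprechcomputer.de"))  # False
print(is_own_host("http://www.sprechcomputer.de/", "sprechcomputer.de"))              # True
```

So the referring pages are just a scraper linking to you; they are harmless, and the indexing question is better investigated via the URL Inspection report itself.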

Google Search Console: which of my AMP pages are already indexed?

In the Google Search Console there is an option to see how many AMP pages are already indexed and how many contain errors.
When I open a particular error, I can see which pages have problems. Can I do something similar for the AMP-friendly pages that are indexed? Right now I can see, for example, that 20 of my AMP pages are indexed by Google and another 15 are not.
Q: can I see which of my AMP pages are already indexed?
You can check the appearance of your AMP pages in Search Console. Just go to "Search Analytics" and activate the filter "Search Appearance" > "AMP article rich results".
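Note that the Search Analytics filter above shows search appearances of AMP pages rather than a per-URL index listing. Since Google only associates an AMP page with its canonical version through a pair of link tags, it is also worth verifying that both tags are present on every article; a minimal sketch (URLs are illustrative):

```html
<!-- On the canonical (non-AMP) page: -->
<link rel="amphtml" href="https://example.com/article.amp.html">

<!-- On the AMP page: -->
<link rel="canonical" href="https://example.com/article.html">
```

A missing or broken half of this pair is a common reason an AMP page never shows up as indexed.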

Different Google Index Information

About three months ago, I launched my own website. On the first day, I verified the site in Google Webmaster Tools, linked it to my Google Analytics account, and submitted a sitemap index file pointing to five sitemap files.
But I am now seeing conflicting Google index status information:
In Webmaster Tools:
Menu: Crawl -> Sitemaps: 123,861 URLs submitted, 64,313 URLs indexed
Menu: Google Index -> Index Status: 65,375 URLs indexed
When I type “site:www.mysite.de” into google.de, I get about 103,000 results.
When I check my website with push2check.net, it reports 110,000 URLs in Google's index.
What is wrong here? I understand that it's impossible for Google to deliver an exact figure because of distributed processing, and that the result also depends on the location you are searching from, and so on. But the gap between 65,000 and 110,000 is huge. What's the reason?
Thanks in advance!
Toby
When you search google.de for “site:www.mysite.de”, Google lists every page of your site that it has indexed.
push2check.net, on the other hand, only runs a plain search and reports an estimated count of results involving your website's link.
The two tools measure different things, which is why the results differ.
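For reference, the sitemap index file mentioned in the question is just a wrapper that points at the individual sitemaps; the child sitemaps carry the per-URL entries behind the "submitted vs. indexed" counts in Webmaster Tools. A minimal example following the sitemaps.org protocol (file names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.mysite.de/sitemap-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.mysite.de/sitemap-2.xml</loc>
  </sitemap>
</sitemapindex>
```

The "URLs submitted" figure is the sum of the entries in these child sitemaps, which is why it tracks a different quantity than the rough estimate a `site:` search returns.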

Octopress, GitHub Pages, CNAME domain and Google website search

My blog was successfully transferred to Octopress and GitHub Pages. My problem, though, is that the website's search uses Google search, and the search results, as you can see, point to the old (WordPress) links. These links have now changed structure, following the default Octopress structure.
I don't understand why this is happening. Is it possible that Google has stored the old links in its database (my blog was on the first page for some searches, but gathered only about 3,000 hits/month, not much by internet standards) and that this will change with time, or is it something I can change myself?
Thanks.
1. You can wait for Google to crawl and re-index your pages, or you can use the URL Removal Request tool to expedite removal of old pages from the index:
http://www.google.com/support/webmasters/bin/answer.py?answer=61062
According to that page, the removal process "usually takes 3-5 business days."
Also consider submitting (or resubmitting) a sitemap:
http://www.google.com/support/webmasters/bin/answer.py?answer=40318
More information about sitemaps:
http://www.google.com/support/webmasters/bin/answer.py?answer=34575
http://www.google.com/support/webmasters/bin/topic.py?topic=8467
http://www.google.com/support/webmasters/bin/topic.py?topic=8477
https://www.google.com/webmasters/tools/docs/en/protocol.html
2. Perhaps your company might consider the Google Mini? You could set up the Mini to crawl the site every night or even continuously:
http://www.google.com/enterprise/mini/
According to the US pricing page, the Mini currently starts at $1,995 for a 50,000-document license with a year of support.
Here is the Google Mini discussion group:
http://groups.google.com/group/Google-Mini
http://www.google.com/enterprise/hosted_vs_appliance.html
(Click "show all descriptions")
http://www.google.com/support/mini/
(Google Mini detailed FAQ)
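One point the answer above skips: GitHub Pages serves static files, so you cannot configure server-side 301 redirects from the old WordPress permalinks to the new Octopress URLs. A common static-site workaround is the jekyll-redirect-from plugin, which generates a redirect stub at each old path. A sketch, with illustrative paths (older Jekyll versions used the `gems:` key instead of `plugins:`):

```yaml
# _config.yml
plugins:
  - jekyll-redirect-from

# Then, in each moved post's YAML front matter, list its old WordPress path:
#   redirect_from:
#     - /2012/05/my-post/
```

If the plugin is unavailable in your setup, a hand-written HTML page with a meta refresh placed at each old URL achieves the same effect, and either approach lets Google discover the new URLs the next time it recrawls the old ones.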

Removing crawling by search engines on my login page

I have a login page (login.aspx) that currently shows up in Google's index when somebody does a search.
I have created a robots.txt file with the following:
User-agent: *
Disallow: /login.aspx
My question is: how long will it take before my login.aspx page is no longer indexed by Google? Is there anything else necessary to tell Google not to index my login page?
It could take up to 90 days before the page is removed from Google's index, but realistically a week or two. You could also ask Google to remove the page via Webmaster Tools, but that works through the same crawling process.
You might also want to log in to Google Webmaster Tools and use the "Remove URL" feature under Site Configuration > Crawler access, and increase the crawl rate under Site Configuration > Settings. This might help accelerate the removal of the URL.
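One caveat: a robots.txt Disallow only stops crawling; a URL that is already indexed, or that other sites link to, can remain in Google's results. To tell Google explicitly not to index the page, let it be crawled and serve a noindex directive instead:

```html
<!-- In the <head> of login.aspx -->
<meta name="robots" content="noindex">
```

The same signal can be sent as an `X-Robots-Tag: noindex` HTTP response header. Once Googlebot recrawls the page and sees the directive, the URL is dropped from the index; note that the robots.txt block must be lifted while this happens, or Googlebot will never see the tag.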