How can I set special meta tags for tag pages in Blogger?

I really need help!
I have a Blogger tech blog, and I customized the robots.txt file so that crawlers don't crawl my tag pages, in order to keep them out of the index.
Now I want some tag pages of my blog to be indexed, and I want to specify special meta tags (description, H1 title, and keywords) for these tag pages, because right now they use the blog's default meta tags.
Can anyone describe how I can set special meta tags for some tag pages in Blogger?
I would really appreciate it 🙏🏻

You can do this by using the conditional tag below in the head section.
<b:if cond='data:blog.searchLabel == "YOUR-LABEL"'>
  <!-- You can add any meta tags here -->
</b:if>
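For example, to give a hypothetical label called "mobiles" its own description and keywords, the block might look like this (the label name and both text values are placeholders, not anything Blogger provides):
<b:if cond='data:blog.searchLabel == "mobiles"'>
  <meta content='Hand-written description for the mobiles label page.' name='description'/>
  <meta content='mobiles, smartphones, reviews' name='keywords'/>
</b:if>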
To remove the default description tag on tag pages, search your theme for data:view.description and change it to this:
<b:if cond='!data:view.isLabelSearch'>
  <meta expr:content='data:view.description' name='description'/>
</b:if>
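Putting the two pieces together, a sketch of how the description block could look, assuming a hypothetical label called "mobiles" (the description text is a placeholder):
<b:if cond='data:view.isLabelSearch'>
  <b:if cond='data:blog.searchLabel == "mobiles"'>
    <meta content='Custom description for the mobiles label page.' name='description'/>
  </b:if>
<b:else/>
  <meta expr:content='data:view.description' name='description'/>
</b:if>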
If there is any place where you get stuck, you can ask again.

Edit the robots.txt content like this; it will definitely work. I've done it this way before:
User-agent: *
Disallow: /search
Allow: /search/label/mobiles
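If you want more than one label page to be crawlable, you can add one Allow line per label; the label names here ("mobiles", "laptops") are just placeholders for your own labels:
User-agent: *
Disallow: /search
Allow: /search/label/mobiles
Allow: /search/label/laptops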
Also, the change may not show up in the live test right away; when I tried it, Google had updated robots.txt a day later.
Set robots.txt like this and wait a bit (a day at most) and it will work.
If it doesn't work, there is a second method; open a new question for it and I will answer it for you.
You should not ask several different questions in one post.

Related

How to copy the current og:url to data-href for an FB comment

I've built my site with Weebly, but there's a problem when I try to connect my website comments with FB Comments. I was unable to solve it for a long time; I hope an expert can give me some advice. Truly appreciated!
sample blog post: http://www.lifechem.tw/blog/170202
the og:url is:
<meta property="og:url" content="http://www.lifechem.tw/1/post/2017/02/170202.html" />
but the default Weebly FB comment tool uses http://www.lifechem.tw/blog/170202 instead of the current og:url.
I've tried what other posts suggested:
<script>document.write('<div class="fb-comments" data-href="' + document.location.href + '" data-numposts="7"></div>');</script>
but the result was the same as with the default Weebly tool.
I'd like to add a site-wide code into the blog footer that can copy the og:url in different blog posts.
Truly appreciated!!
The location under which this content is shown in the browser is different from your og:url, so only setting that is likely not going to solve it.
But you can easily select the meta element with that property attribute value and get its content using something like this:
document.querySelector('meta[property="og:url"]').content
(If you need support for older browsers that do not support querySelector, you could use a library like jQuery instead, or any other library that supports the attribute selector.)
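Putting that together with the document.write approach from the question, a site-wide footer snippet could look roughly like this; the data-numposts value is simply carried over from the question, and the og:url meta tag is assumed to already be present on the page when the script runs:
<script>
  // Read the canonical URL that Weebly already puts into the og:url meta tag
  var ogUrl = document.querySelector('meta[property="og:url"]').content;
  // Point the FB comments container at that URL instead of the browser location
  document.write('<div class="fb-comments" data-href="' + ogUrl + '" data-numposts="7"></div>');
</script>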

Unable to make different meta descriptions for added Tumblr page

I finally have some time to work on some SEO for my Tumblr blog. On my main page, I have this meta tag:
<meta name="description" content="{MetaDescription}" />
Now that I have created my new pages, "About", "Portfolio", etc., I obviously want to create different descriptions for these pages. But the tag above automatically gets added to my new pages. Is there a workaround for this? Or am I not doing it right?
Edited: I forgot to mention... What I want on my other pages is NOT to take a snippet of the page content. I have specific shorter descriptions that I want to use. But with the tag above, it always automatically takes the snippet.

Two sites, same content: how to redirect?

I have a question for you. I need to maintain two sites (let's name them example.com and yyy.com); they will be something like aliases.
I want visitors to be able to access pages with the same content via both of them.
What's the best way of doing this without getting into trouble with search engines?
I know about the 301 redirect, but I want visitors to stay on example.com or yyy.com, with the same name showing up in the address bar, not to be redirected.
One thing you could do is to use the rel=canonical tag on the pages of the site you consider to be "the copy".
Basically, in the head section of each page's HTML you can tell which page on the "original" site has the same content.
So if (for instance) your sites are called www.yourmainsite.com and www.yoursecondsite.com, you should tag testpage.htm on yoursecondsite.com like this:
<link rel="canonical" href="http://www.yourmainsite.com/testpage.htm"/>
See here for more details.
Otherwise you can simply tell search engines not to index yoursecondsite.com in your robots.txt
User-agent: *
Disallow: /
Warning: I'm not an SEO person. I did have to implement something similar, but take my advice with a grain of salt
From a theoretical point of view, the "Content-Location" HTTP header was invented for this, as defined here and explained here.
However, search engines prefer the "canonical" link tag (as in Paolo's explanation) for the same purpose, because the "Content-Location" header is mostly misused by web designers.
I would probably use both.
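As a sketch of what that would mean for the example above, yoursecondsite.com would serve testpage.htm with a response header pointing at the main site (the URL is the same placeholder used in Paolo's answer):
Content-Location: http://www.yourmainsite.com/testpage.htm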

How to configure robots.txt to allow everything?

My robots.txt in Google Webmaster Tools shows the following values:
User-agent: *
Allow: /
What does it mean? I don't have enough knowledge about it, so I'm looking for your help. I want to allow all robots to crawl my website; is this the right configuration?
That file will allow all crawlers access:
User-agent: *
Allow: /
This basically allows all user agents (the *) access to all parts of the site (the /).
If you want to allow every bot to crawl everything, this is the best way to specify it in your robots.txt:
User-agent: *
Disallow:
Note that the Disallow field has an empty value, which means according to the specification:
Any empty value, indicates that all URLs can be retrieved.
Your way (with Allow: / instead of Disallow:) works, too, but Allow is not part of the original robots.txt specification, so it’s not supported by all bots (many popular ones support it, though, like the Googlebot). That said, unrecognized fields have to be ignored, and for bots that don’t recognize Allow, the result would be the same in this case anyway: if nothing is forbidden to be crawled (with Disallow), everything is allowed to be crawled.
However, formally (per the original spec) it’s an invalid record, because at least one Disallow field is required:
At least one Disallow field needs to be present in a record.
I understand that this is a fairly old question and has some pretty good answers, but here are my two cents for the sake of completeness.
As per the official documentation, there are four ways you can allow robots complete access to your site.
Clean:
Specify a global matcher with an empty disallow segment, as mentioned by @unor. So your /robots.txt looks like this:
User-agent: *
Disallow:
The hack:
Create a /robots.txt file with no content in it, which will default to allowing everything for all types of bots.
The "I don't care" way:
Do not create a /robots.txt at all, which should yield the exact same results as the above two.
The ugly:
From the robots documentation for meta tags, you can use the following meta tag on all the pages of your site to let the bots know that these pages are allowed to be crawled and indexed:
<META NAME="ROBOTS" CONTENT="INDEX, FOLLOW">
In order for this to apply to your entire site, you will have to add this meta tag to every one of your pages, and the tag must be placed inside the HEAD tag of the page. More about this meta tag here.
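As a sketch of where that tag goes on each page (the title and page content below are placeholders):
<html>
  <head>
    <title>Any page on the site</title>
    <!-- the robots meta tag must sit inside the HEAD element -->
    <META NAME="ROBOTS" CONTENT="INDEX, FOLLOW">
  </head>
  <body>
    <!-- page content -->
  </body>
</html>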
It means you allow every (*) user agent/crawler to access everything under the root (/) of your site. You're okay.
I think you are good; you're allowing all pages to be crawled.
User-agent: *
Allow: /

Facebook Post Link Image

When someone posts a link on Facebook, a script usually scans that link for any images and displays a quick thumbnail next to the post. For certain URLs though (including mine), FB doesn't seem to pick up anything, despite there being a number of images on that page.
I read that FB prefers the "image_src" rel tag for the image the user wishes to specify, but this does not generate that thumbnail for my site either.
My URL goes directly to the DNS and is not forwarded, so I don't imagine that could be the problem either.
Does anyone have an idea as to why FB can't generate any thumbnails from my site?
The easiest way is just a link tag:
<link rel="image_src" href="http://stackoverflow.com/images/logo.gif" />
But there are some other things you can add to your site to make it more Social media friendly:
Open Graph Tags
Open Graph tags are tags that you add to the <head> of your website to describe the entity your page represents, whether it is a band, restaurant, blog, or something else.
An Open Graph tag looks like this:
<meta property="og:tag name" content="tag value"/>
If you use Open Graph tags, the following six are required:
og:title - The title of the entity.
og:type - The type of entity. You must select a type from the list of Open Graph types.
og:image - The URL to an image that represents the entity. Images must be at least 50 pixels by 50 pixels. Square images work best, but you are allowed to use images up to three times as wide as they are tall.
og:url - The canonical, permanent URL of the page representing the entity. When you use Open Graph tags, the Like button posts a link to the og:url instead of the URL in the Like button code.
og:site_name - A human-readable name for your site, e.g., "IMDb".
fb:admins or fb:app_id - A comma-separated list of either the Facebook IDs of page administrators or a Facebook Platform application ID. At a minimum, include only your own Facebook ID.
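Taken together, a filled-in sketch for a hypothetical band page might look like this (every URL, name, and ID below is a placeholder, not a real value):
<meta property="og:title" content="Example Band"/>
<meta property="og:type" content="band"/>
<meta property="og:image" content="http://www.example.com/images/band-logo.jpg"/>
<meta property="og:url" content="http://www.example.com/band/"/>
<meta property="og:site_name" content="Example Band Site"/>
<meta property="fb:admins" content="123456789"/>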
More information on Open Graph tags and details on Administering your page can be found on the Open Graph protocol documentation.
http://developers.facebook.com/docs/reference/plugins/like
I know this question is old, but I recently dealt with the exact same problem and went round and round on it for a couple of weeks. Multiple searches on Google turned up a lot of useful information, but most of it was focused on Open Graph tags, which I wasn't interested in using. It turns out my site had multiple issues, but here are some of the basics.
As EightyEight said, make sure your HTML is valid - and the same goes for your javascript and server-side code (PHP, ASP, etc.). I had a small PHP error in a piece of code that was executing as a separate call to the server from the main page. Due to a number of bizarre coincidences, that code was generating a 500 error - but ONLY for IE6 and strict parsing engines like the W3C validator and the Facebook page crawler. The problem didn't appear in modern browsers (Chrome 4, FF 3.5, IE 8, etc) so I didn't see it right away, but older/stricter clients were showing the 500 every time and that was the main reason FB wasn't crawling our page (when everything else seemed to be correct).
Regarding Randy's response, he's correct that Facebook will keep an old cached copy of your page long after you've updated it. FB claims it's only held for 24 hours, but I experienced much longer times than that. FORTUNATELY, FB has released their "URL Linter" tool that will show you a preview of how your page will appear when being shared on FB, and it will force FB to instantly update its cache of your page. This was a lifesaving tool. You can find it at http://developers.facebook.com/tools/lint/
Regarding the URL Linter tool, be aware that each variation of a URL is cached separately on Facebook, so "www.example.com" is not the same as "example.com". Also, unique capitalization is stored as well, so "ExampleOne.com" is not the same as "exampleone.com". (This led to a lot of confusion between my client and myself when it appeared to me that the cache had been updated just fine and the client claimed they weren't seeing the updates. Turns out I was looking at exampleone.com and had used Linter to update the cache, but they were looking at exampleOne.com which I hadn't submitted to Linter. As a result, I ended up submitting quite a few variations of the URL to Linter just to cover the bases.)
WyrdNEXUS's advice to use the image_src link tag is spot-on. This allows you to be sure that FB is scraping the best possible image for your page. There are some varying guidelines out there about what specs the image file should have, but I've successfully used a 128px square image and have seen a 130x97 image make it through as well. Here is Facebook's official documentation from http://developers.facebook.com/docs/reference/plugins/like/:
Images must be at least 50 pixels by 50 pixels. Square images work best, but you are allowed to use images up to three times as wide as they are tall.
Obviously, FB will resize a large image for you, but you'll almost always get better results if you resize it yourself beforehand.
Regarding Mike Cooper's link to the eHow article, avoid using step #1 in that article. It was valid advice when the article was written and when Mike posted the link, but it's now better to use the URL Linter tool for previewing how your page will appear when being shared. By using Linter, you won't cause FB to cache a (potentially) bad copy of the page before you get a chance to tweak it.
Use the Facebook linter available here: http://developers.facebook.com/tools/lint/
This will check your link and re-fetch any images. It also clears any old cache.
Or try this - https://developers.facebook.com/tools/debug
To change the title, description and image, we need to add some meta tags under the head tag.
STEP 1:
Add meta tags under the head tag
<html>
<head>
<meta property="og:url" content="http://www.test.com/" />
<meta property="og:image" content="http://www.test.com/img/fb-logo.png" />
<meta property="og:title" content="Prepaid Phone Cards, low rates for International calls with Lucky Prepay" />
<meta property="og:description" content="Cheap prepaid Phone Cards. Low rates for international calls anywhere in the world." />
NEXT STEP:
Click on the link below:
https://developers.facebook.com/tools/debug
Add the URL where you added the tags (e.g. http://www.test.com/) into the text box and click on the DEBUG button.
It's done.
You can verify it here: https://www.facebook.com/sharer/sharer.php?u=http://www.test.com/
In the above URL, u = your website link.
ENJOY !!!!
Try this: http://www.ehow.com/how_4938148_thumbnail-show-up-facebook-share.html
Is the site's HTML valid? Run it through the W3C validation service.
Actually, if you've already tried linking that page on Facebook BEFORE adding the "image_src" link, Facebook will keep using the old cached copy and not even see your changes. Try modifying the URL by removing or adding the 'www', or duplicate your page to test it.
I've noticed that Facebook does not take thumbnails from websites if they start with https. Is that maybe your case?
I had the same problem and figured out that my head closing tag was in the wrong place.
Old question, but recently I seem to be running into the same issue with thumbnail images from my links not showing in status updates on Facebook. I post for many clients and this is relatively new.
FB doesn't seem to like long URLs anymore; if you use a URL shortener such as goo.gl or bitly.com, the thumbnail from your link/post will appear in your FB update.
Try using something like this:
<link rel="image_src" href="http://yoursite.com/graphics/yourimage.jpg" />
It seems to work just fine in Firefox as long as you use a full path to your image.
The trouble is that it gets vertically offset downward for some reason. The image is 200 x 200, as recommended somewhere I read.
If you use any SEO plugin, first check your SEO plugin's settings. Then look for the noindex settings; if "Enable Media for Noindex" is turned on, disable it.