Is a page redirect to the most current article bad from an SEO perspective?

I have navigation at the top of my site that links to news/
The news is always paginated, one article per page, with the ability to navigate to the next or previous article.
I would like the default article to be the second most recent one. So if there are 10 articles, when the user clicks on news/, they are redirected to news/9 with a 302 redirect code.
From an SEO perspective, is it bad to be constantly redirecting like this? Would it be better to change the link in the top navigation to point directly to news/9, and keep updating it every time there is a new article?

Search engines expect a given piece of content to have a canonical URL. It's OK to have any number of URLs leading to a single page, as long as you declare a canonical URL.
So no matter what redirect you use, add a canonical URL and the search engines will take care of any mess.
302 redirects exist for exactly this case. Use them.
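For illustration, here's roughly what that looks like in PHP (a sketch; latestArticleId() is a hypothetical helper standing in for however you look up your newest article):

<?php
// news/ issues a temporary (302) redirect to the current default article.
// latestArticleId() is a placeholder for your own lookup.
$default = latestArticleId() - 1;   // e.g. 9 when there are 10 articles
header('Location: /news/' . $default, true, 302);
exit;
?>

Each article page should then declare its own address as canonical in its <head>, e.g. <link rel="canonical" href="http://example.com/news/9">, so the redirects never muddy which URL gets indexed.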

Related

How to remove Google search results for a 303 redirect?

I run a dynamic site that may or may not redirect a certain route based on user preferences.
Let's say it's http://clientname.example.com/maybe. Our backend has a response for /maybe, but if the client decides they would rather use their site for the information on that page, we instead use a 303 Redirect to their page on a separate domain.
All of our content pages use the <meta name="robots" content="noindex"> tag, so Google will not index any of our pages. However, when I search Google for "site:our_domain_name.com", I get a bunch of results that all trace back to those dynamic routes that return a 303. When I click on the search results in Google, the 303 is followed as expected and I arrive at the client's site. What I want is for my piece of the puzzle not to show up in results at all.
I was troubleshooting this morning, and I realized that our noindex meta tag was obviously not being seen by the robot, since it was following the redirect, so I added a rule on the server that adds the X-Robots-Tag: noindex header to redirect responses.
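For reference, that rule boils down to something like this PHP sketch ($clientUrl stands in for the client's page):

<?php
// The noindex hint has to travel as a header, since a redirect has no <head>.
header('X-Robots-Tag: noindex');
header('Location: ' . $clientUrl, true, 303);  // $clientUrl is a placeholder
exit;
?>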
Is that enough? If I wait long enough, will those search results be removed?
No, because if an external page links to your site, Google will follow the link to your site, then your 303 (if you return such a code), and will never see the noindex.
Don't return a 303 for Google's bots and you should be fine. It may take a while, because Google needs to reprocess the page and see the noindex before removing it.
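A minimal sketch of that suggestion in PHP (user-agent sniffing is one way to do it; $clientUrl and the detection string are assumptions, not part of the question):

<?php
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (stripos($ua, 'Googlebot') !== false) {
    // The crawler gets a 200 with an explicit noindex instead of the 303.
    header('X-Robots-Tag: noindex');
    echo '<!doctype html><html><head><meta name="robots" content="noindex"></head><body></body></html>';
} else {
    // Human visitors still get the redirect to the client's site.
    header('Location: ' . $clientUrl, true, 303);
}
exit;
?>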

Preserve Google +1, Facebook Like, and Twitter Tweet This button counts after URI change

My question is very simple: is there any way to (programmatically, technically, or manually) make a Google +1, Facebook Like, or Twitter Tweet This button preserve its count after a change of URI?
Programmatically: Doing something with the javascript to make it show the combined counts of two URIs while posting the new action to only the latter.
Technically: Do they detect and follow a 301 redirect from the original resource? Is there any special text I can include in the HTTP header to tell them that they should move all "points" from the old URI to the new one?
Manually: Some form somewhere on their site that I can submit or someone I can email that will be able to copy our points over to the new URI?
(note that I use URI and not URL in this question on purpose. The canonical resource link is changing from something.php?id=idnumber to /mycoolproduct/)
EDIT
Bounty started, but don't answer with "it can't be done"
I believe there is only one solution that fits your request universally: 301/302 the old URIs to the new URIs and then keep using the old URIs with your social buttons. All the major social buttons allow you to specify the URL to like/+1/tweet, so this preserves your existing share counts, and all shared posts still direct to the same page. The choice then becomes whether to 301 or 302 redirect. A 302 may help preserve your current search placement and avoid losing your ranking if it's pretty good. A 301 redirect (moved permanently) will cause search engines to start indexing your new URLs and dropping the old ones, which might cost you some current rankings. It appears that, as of this post, nobody is honoring redirects for social votes of any kind.
So I think the safest route is to 302 redirect and continue to use the old URIs for social votes. You will keep your equity this way, but you must maintain your redirects and become even more invested in the old URI template. How are your redirects implemented? .htaccess? Or in-page? You will need to weigh the cost-benefit for your case.
Otherwise you should probably 301 and start using the new URIs for your social buttons. In this case you may lose your social equity, but you are free to build anew without fear of messing anything up. If the social equity you are replacing can be recouped in, say, six months or less, I wouldn't bother with it and would start fresh.
However, this brings up an interesting point. You mentioned programmatically adding two counts. Yes, you could put together some JavaScript to add a couple of counts together, but I have to ask why: adding them together for display purposes will not actually increase referral traffic or search ranking, so it's just a facade that I don't think helps you. If you're just looking to fool your visitors into thinking you're popular, why not just generate an image server-side that keeps counting up? (Bad joke, don't do it!) Bottom line: you can't actually redirect your social equity; you may be able to pretend you moved it, but you can't actually move it.
Considering your original question asks about several social buttons, it's important to note that even if one or two of these services started honoring redirects when applying social votes, it wouldn't relieve you from making the decisions above. You'll still need the redirects for existing backlinks, and if you support multiple social buttons on your page, the choice of redirect type will need to be made with all of the buttons in mind.
I can't speak for how to do this with Twitter/G+ but for Facebook:
You can't 'move' the likes and comments between URLs, and for new content you should definitely start using the new URLs, but for your existing URLs you can still have the original like counts/comments work if you:
Continue pointing the Like button on the new URL to the old URL (i.e. <fb:like url="http://oldurl"/>)
Add an exception to your redirect code so that when Facebook's crawler (facebookexternalhit/1.* - currently 1.1) accesses it, the original set of open graph meta tags are displayed (this will keep the description, title, thumbnail, etc, working as before)
Other users that land on the 'old' URL will still be redirected to the content in its new location
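As a rough sketch of that exception in PHP ($newUrl and the Open Graph values here are placeholders for your own data):

<?php
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (stripos($ua, 'facebookexternalhit') !== false) {
    // Facebook's crawler sees the original Open Graph tags, so the
    // existing likes, comments, title and thumbnail keep working.
    echo '<html><head>';
    echo '<meta property="og:url" content="http://oldurl"/>';
    echo '<meta property="og:title" content="Original title"/>';
    echo '</head><body></body></html>';
} else {
    // Everyone else is redirected to the content's new location.
    header('Location: ' . $newUrl, true, 301);
}
exit;
?>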
I have a real solution to this problem. It might not be the most conventional, but it does work, using a simple jQuery plugin called sharrre. Keep in mind I'm not the best jQuery coder (if you see improvements, please let me know!), but this works regardless.
Here is how I did it on my site:
Using the sharrre plugin you can add the current share count to any element on your page. I simply fetch the counts for both URLs, add them together, and display the total on the like, tweet, share, etc. buttons.
This example is with Twitter but I'm doing this with Facebook, Google Plus, Pinterest and Linkedin. Here is the code:
<li id="twitter" data-url="CURRENT-URL" class="twitter sharrre"></li>
<li id="twitter-old" data-url="OLD-URL" class="twitter sharrre" style="display: none;"></li>
Then I called sharrre's code:
$(function(){ sharrreItUp(); });
This is how my function looks on my .js file:
function sharrreItUp() {
    // Bind the visible button to the NEW URL; clicks share the new address.
    $('#twitter').sharrre({
        share: {
            twitter: true
        },
        enableHover: false,
        click: function(api, options) {
            api.simulateClick();
            api.openPopup('twitter');
        }
    });
    // The hidden element only fetches the count for the OLD URL.
    $('#twitter-old').sharrre({
        share: {
            twitter: true
        }
    });
    // Once both counts have (hopefully) loaded, display their sum.
    setTimeout(function() {
        var oldTwts = $('#twitter-old .box .count').html();
        var newTwts = $('#twitter .box .count').html();
        $('#twitter .box .count').html(parseInt(oldTwts, 10) + parseInt(newTwts, 10));
    }, 2000);
}
And BAM... your new URL is being sharrred, and the old shares from the different social networks are added in.
Unfortunately, there is no solution for this. We tried all the possible solutions, and you will simply lose your social equity if you do a 301 redirect. We found it not to be worth the hassle of trying to maintain our vote counts, and have instead pointed our buttons to the homepage in the interim of moving to the new URL structure.
demo: http://so.devilmaycode.it/preserve-google-1-facebook-like-and-twitter-tweet-this-button-counts-after-ur/
I don't want to say something wrong, but I think you just need to define the URI inside each share button, so that no matter what URL the vote comes from, the defined URL is used for the count.
If you instead already have two different sources and want to join them, you should follow the iframe src and scrape the count from it. For Google +1 the div that contains the count has id #aggregateCount; for Twitter it is #count. An example could be as below:
<?php
// 'iframe-url-goes-here' is a placeholder for the share widget's iframe src.
libxml_use_internal_errors(true); // widget markup is rarely valid HTML
$doc = new DOMDocument();
$doc->loadHTMLFile('iframe-url-goes-here');
$count = $doc->getElementById('aggregateCount'); // use 'count' for Twitter
echo $count ? $count->nodeValue : '0';
?>
Then, on your page, after the DOM and the widgets have loaded, you can append your own value.
Hope this helps. In any case, I prefer the first way.
Put in the head of the new page
<meta property="og:url" content="old_url_here"/>
This way Facebook attributes likes to the old page. The only downside is that when people share your link, the old rich snippet will be included.

Temporary redirect to avoid duplicate content

Our website has a featured article for one week on the front page, after which it is still accessible via its true URL. Now, we want to use the true URL in promotions and social networks so that the link to the featured article is always accurate.
In other words, our front page always shows the current feature. By typing in our domain you read the currently featured article. See below.
website.com/feature1.html <-- true URL of last week's feature
website.com/feature2.html <-- true URL of current feature
website.com <-- front page shows feature2.html (not a redirect)
I'm trying to figure out how to avoid the duplicate content. Which of these do you think is the best solution for SEO? I'm thinking #1.
1. Temporary redirect from the true URL to the front page ONLY while the feature is current.
2. Temporary redirect from the front page to the true URL. I don't like this because I don't like a redirect on the root of the domain.
3. Use canonical on the front page specifying the true URL. Don't like this because website.com should be indexed.
4. Use canonical on the true URL specifying the front page ONLY while the feature is current.
I can make the content slightly different, however, it would not be significantly different.
In my opinion, you have to use rel="canonical", so option #4.
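A sketch of what option #4 amounts to in the article's template (isCurrentFeature() is a hypothetical check you'd implement yourself):

<?php
// While the article is featured, its true URL declares the front page as
// canonical; once the feature rotates, the page becomes canonical again.
$canonical = isCurrentFeature()
    ? 'http://website.com/'
    : 'http://website.com/feature2.html';
echo '<link rel="canonical" href="' . $canonical . '"/>';
?>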

How best to setup 301 redirects from an old site that has many duplicate entries indexed on Google?

I am currently working with a client to redevelop their website. One of the final things I need to do before launch, is to make sure that their old website's pages are correctly redirected to the new URL structure of the new website.
Unfortunately, when I check Google to see how their current site is indexed, this relatively small website appears to have over 1500 pages indexed.
When I look at the indexed links on Google, many appear to be duplicates of the same page, but because of the terrible URI structure used on the old website, Google treats them differently.
For example, the 'Map' page is indexed at least twice on Google, under the following 2 URLs:
www.website.com/frame_page-map.html?mp_session=iris7k85851j05q55piqci31u3&mp_session=iris7k85851j05q55piqci31u3?page_code=map&mp_session=iris7k85851j05q55piqci31u3&mp_session=iris7k85851j05q55piqci31u3
www.website.com/frame_page-map.html?mp_session=sel6m8j5cu8lulep4dqa32sne7&mp_session=sel6m8j5cu8lulep4dqa32sne7?page_code=map&mp_session=sel6m8j5cu8lulep4dqa32sne7&mp_session=sel6m8j5cu8lulep4dqa32sne7
Only the session name is different in the URL (and I have no idea why it is repeated four times in a single URL, either).
For reference, the replacement URL for this page is:
www.website.com/contact/map
My question is: how do I set up a redirect for these multiple records on Google? Do I simply set up the redirect for the old URL minus all of the URI parameters (i.e. www.website.com/frame_page-map.html), or is there a better method?
Thanks for any help you might be able to offer!
It depends on what your goals are. If you don't care about the querystrings, then set up a 301 (permanent redirect) that points just to the base page, ignoring the parameters. To prevent Google from indexing querystring variants as separate pages, use the canonical tag and have it reference the parent. This isn't guaranteed to work, but Google takes your canonical into consideration when indexing.
If you care about the querystring values, then you will have to set up a redirect for each one. There is also a flag you can append to your redirect rules that tells the server to discard the querystring, so you don't have to write a regex that matches it.
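If the redirects live in your application rather than the server config, the querystring-agnostic version is a one-off check like this PHP sketch (paths taken from the question):

<?php
// Any request for the old map page, whatever session junk is attached,
// gets one permanent redirect to the clean replacement URL.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
if ($path === '/frame_page-map.html') {
    header('Location: http://www.website.com/contact/map', true, 301);
    exit;
}
?>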

Get google to index links from javascript generated content

On my site I have a directory of things which is generated through jQuery AJAX calls, which subsequently create the HTML.
To my knowledge, Google and other bots aren't aware of DOM changes after the page load, and won't index the directory.
What I'd like to achieve, is to serve the search bots a dedicated page which only contains the links to the things.
Would adding a noscript tag to the directory page be a solution? (in the noscript section, I would link to a page which merely serves the links to the things.)
I've looked at both the robots.txt and the meta tag, but neither seem to do what I want.
It looks like you stumbled on the answer to this yourself, but I'll post the answer to this question anyway for posterity:
Implement Google's AJAX crawling specification. If links to your page contain #! (a URL fragment starting with an exclamation point), Googlebot will send everything after the ! to the server in the special query string parameter _escaped_fragment_.
You then look for the _escaped_fragment_ parameter in your server code, and if present, return static HTML.
(I went into a little more detail in this answer.)
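The server side of that scheme is a small branch up front. A sketch in PHP (renderDirectoryHtml() is a hypothetical function that builds the same markup your AJAX calls produce):

<?php
if (isset($_GET['_escaped_fragment_'])) {
    // Googlebot rewrote /page#!state into /page?_escaped_fragment_=state.
    $state = $_GET['_escaped_fragment_'];
    echo renderDirectoryHtml($state);   // static HTML with plain <a> links
    exit;
}
// ...otherwise fall through and serve the normal JavaScript-driven page.
?>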