I am making a website where a person can be redirected to a form page from several different pages within the site, and depending on where they were redirected from, the form would be pre-filled in a certain way to make it quick for them. This is all on mobile, so data usage has to be kept in mind.
That information is usually contained in the HTTP Referer header field.
You can get this data from the headers sent by the browser (the referrer URL); these are usually exposed as "server variables".
However, I would recommend steering clear of this method, as it can introduce a few other problems. I would recommend using sessions or cookies to keep track of the last page the user visited.
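A minimal sketch of the session approach (the page names and form fields here are purely illustrative):

    <?php
    // form.php -- pre-fill based on the page the visitor came from.
    // Each content page would run session_start() and set
    // $_SESSION['last_page'] to its own name before linking here.
    session_start();

    $source = isset($_SESSION['last_page']) ? $_SESSION['last_page'] : 'default';

    // Hypothetical mapping from source page to pre-filled form values.
    $prefill = array('subject' => '');
    if ($source === 'pricing') {
        $prefill['subject'] = 'Pricing enquiry';
    } elseif ($source === 'support') {
        $prefill['subject'] = 'Support request';
    }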
I have a page called club.php. I want to prevent people from being able to see it if they type in www.mysite.com/club.php directly, unless they arrived via a link from another page. I have a page where you need to enter a password to access this.
I'm hoping I can simply drop code into my pages (JavaScript or something?)
There is an HTTP header called Referer, which contains the URI of the page from which the request to your site originated. You can read the content of this header in PHP via $_SERVER["HTTP_REFERER"] (see http://php.net/manual/en/reserved.variables.server.php).
But please be aware that you cannot rely on this header field being set, so testing against its content is not a serious security measure for protecting a particular page from being viewed by unauthorized visitors. The header can be set to any arbitrary value with a browser or add-on.
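For illustration only, a referer check might look like the sketch below, but per the caveat above it is just a convenience, not real protection (the password page name is hypothetical):

    <?php
    // club.php -- the Referer header may be absent or forged, so this is
    // only a soft check; the real gate must be the password page.
    $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

    if (strpos($referer, 'www.mysite.com') === false) {
        // No referer, or it came from outside the site: send the visitor
        // to the password page instead of showing the content.
        header('Location: /password.php');
        exit;
    }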
I have a website where the URLs carry tracking parameters that do not affect the page that is displayed, i.e. the URL is of the form http://mywebsite.com/page1?tracking1=aaa&tracking2=bbb, where 'tracking1' and 'tracking2' are just tracking parameters used for some other purpose and do not determine the page that is displayed. The page that is displayed is always http://mywebsite.com/page1, irrespective of the values of these tracking parameters.
I have included the Facebook Like button on my website's pages, and Facebook treats each of these URLs, including the tracking parameters, as a separate page. I'm not able to get Facebook to ignore the tracking parameters and consider only the URL without them as a page. So I'm storing my own like count against the actual URL (when I get a callback on the like action) and displaying it next to the Like button.
Is displaying my own like count next to the Facebook Like button against their usage policy? Is there a better way to do this?
Is there any particular functional reason you're using GET (i.e. URL) variables to store your tracking?
If you can push them into POST instead, or use cookies or sessions for your tracking, you can simplify your URLs and Facebook should treat it as a single page.
If you have to use GET because, for example, the links come from external websites, you could use a pass-through URL to do your tracking before forwarding to the main page, i.e. someone clicks the link to redirect?tracking1=aaa&tracking2=bbb&page=page1
And redirect, as you may have guessed, does whatever you need to do with your tracking before forwarding the user on to page1.
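A sketch of such a pass-through script (the file name and the session-based storage are illustrative):

    <?php
    // redirect.php -- record the tracking values, then forward to the
    // clean URL so Facebook only ever sees /page1 without parameters.
    session_start();

    $_SESSION['tracking1'] = isset($_GET['tracking1']) ? $_GET['tracking1'] : null;
    $_SESSION['tracking2'] = isset($_GET['tracking2']) ? $_GET['tracking2'] : null;

    // Whitelist target pages so this cannot be abused as an open redirect.
    $allowed = array('page1', 'page2');
    $page = (isset($_GET['page']) && in_array($_GET['page'], $allowed, true))
        ? $_GET['page']
        : 'page1';

    header('Location: /' . $page);
    exit;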
A domain name that I do not own is redirecting to my domain. I don't know who owns it or why it is redirecting to my domain.
This domain, however, is showing up in Google's search results. A whois lookup also returns this message:
"Domain:http://[baddomain].com webserver returns 307 Temporary Redirect"
Since I do not own this domain, I cannot set up a 301 redirect or disable it. When clicking the bad domain in Google, it shows the content of my website, but baddomain.com stays visible in the URL bar.
My question is: how can I stop Google from indexing and showing this bad domain in the search results, and show only my website instead?
Thanks.
Some thoughts:
You cannot directly stop Google from indexing other sites, but what you can do is add the canonical tag to your pages so Google can see that the original content is located on your domain and not on the bad domain.
For example, check out: https://support.google.com/webmasters/answer/139394?hl=en
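A sketch of emitting the tag in PHP, in the <head> of each page (the domain here is a placeholder for your own):

    <?php
    // Build the canonical URL from your own domain plus the request path
    // (query string stripped), and emit the tag in the page <head>.
    $path = strtok($_SERVER['REQUEST_URI'], '?');
    $canonical = 'http://www.mysite.com' . $path;
    echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '">';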
Other actions can be taken SEO-wise if the bad domain is outscoring you in the search rankings, because then it sounds like your site could use some optimizing.
The better your site and domain rank in the SERPs, the less likely it is that people will see the scraped content and 'baddomain'.
You could, however, also look at the referrer of the request, and if it is the bad domain you should be able to redirect to your own domain, change the content, etc., because the code runs on your own server.
But that might be more trouble than it's worth, as you'd need to investigate how the bad domain is doing things and code accordingly (probably an iframe or similar from what you describe, but that can still be circumvented using scripts).
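For what it's worth, the referrer check could be sketched like this ('baddomain.com' stands in for the offending domain, and as noted above it can be circumvented):

    <?php
    // If the request appears to come via the bad domain, 301-redirect the
    // visitor to the same path on your own domain.
    $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

    if (stripos($referer, 'baddomain.com') !== false) {
        header('Location: http://www.mysite.com' . $_SERVER['REQUEST_URI'], true, 301);
        exit;
    }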
Depending on what country you and the bad domain's owner are located in, there are also legal options, such as DMCA complaints. This, however, can be quite a task, and it's often not worth it because a new domain will just pop up.
For example, when making a payment with a credit card we POST to the URL /paymybill-cc, and we want to avoid reposting when the user refreshes the page. In this case, is redirecting to the same URL with the GET method the preferred way?
Usually the POST happens to a URL specifying what you want to create, as in your case, but the GET should happen to a URL like /paymybill-cc/:id to fetch a specific payment.
If I were allowed to GET /paymybill-cc I would expect it to return all payments, maybe with a default limit, but a lot of them.
If the user reloads a page that contains POST data, they will be prompted about resubmitting their data to the server. See How do I reload a page without a POSTDATA warning in Javascript? for a bit more detail on that.
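A minimal Post/Redirect/Get sketch in PHP (savePayment is a hypothetical helper; the URL scheme follows the answer above):

    <?php
    // paymybill-cc handler: process the POST, then redirect so a refresh
    // re-issues a harmless GET instead of resubmitting the payment.
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        $paymentId = savePayment($_POST); // hypothetical: stores the payment

        // 303 See Other explicitly tells the browser to follow up with GET.
        header('Location: /paymybill-cc/' . $paymentId, true, 303);
        exit;
    }

    // A plain GET of /paymybill-cc/{id} can now render the receipt safely.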
I am currently working with a client to redevelop their website. One of the final things I need to do before launch is to make sure that their old website's pages are correctly redirected to the URL structure of the new website.
Unfortunately, when I check Google to see how their current site is indexed, this relatively small website appears to have over 1500 pages indexed.
When I look at the indexed links on Google, many appear to be duplicates of the same page, but because of the terrible URI structure used on the old website, Google treats them differently.
For example, the 'Map' page is indexed at least twice on Google, under the following 2 URLs:
www.website.com/frame_page-map.html?mp_session=iris7k85851j05q55piqci31u3&mp_session=iris7k85851j05q55piqci31u3?page_code=map&mp_session=iris7k85851j05q55piqci31u3&mp_session=iris7k85851j05q55piqci31u3
www.website.com/frame_page-map.html?mp_session=sel6m8j5cu8lulep4dqa32sne7&mp_session=sel6m8j5cu8lulep4dqa32sne7?page_code=map&mp_session=sel6m8j5cu8lulep4dqa32sne7&mp_session=sel6m8j5cu8lulep4dqa32sne7
Only the session name is different in the URL (and I have no idea why it is repeated four times in a single URL, either).
For reference, the replacement URL for this page is:
www.website.com/contact/map
My question is: how do I set up a redirect for these multiple records on Google? Do I simply set up the redirect for the old URL minus all of the URI parameters (i.e. www.website.com/frame_page-map.html), or is there another, better method?
Thanks for any help you might be able to offer!
It depends on what your goals are. If you don't care about the querystrings, then set up a 301 (permanent redirect) that points to just your base map page, without any of the parameters. To prevent Google from indexing querystring parameters as separate pages, use the canonical tag and have it reference the parent URL. This isn't guaranteed to work, but Google takes your canonical into consideration when indexing.
If you care about the querystring values, then you will have to set up a redirect for each one. In Apache's mod_rewrite, for example, ending the rewrite target with a ? (or using the QSD flag) discards the original query string, so you don't have to write a regex that detects it.
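Assuming the old .html URL can be routed through PHP (or handled by your server's rewrite rules), a redirect that ignores the session parameters might look like this sketch, where the page_code-to-URL mapping is illustrative beyond the map page:

    <?php
    // Handler for frame_page-map.html: map old page_code values to the new
    // URL structure and 301-redirect, discarding mp_session entirely.
    $map = array(
        'map' => '/contact/map',
    );

    $code = isset($_GET['page_code']) ? $_GET['page_code'] : 'map';
    $target = isset($map[$code]) ? $map[$code] : '/contact/map';

    header('Location: http://www.website.com' . $target, true, 301);
    exit;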