Can the web host disable meta tag redirection?

I want to have a page redirect to another. Here is the situation:
I don't have access to the server config, and can't use any server redirection (don't know if that's the term)
Only ASP is installed
The index file MUST be main.html (changing it to main.asp results in what seems to be an infinite refresh loop)
I'd like to stay away from JavaScript redirects, so ideally I'd use <meta http-equiv="refresh" content="0";url="site_url">. But this results in, again, an infinite refresh loop.
I'm wondering if I am making a mistake, or if the host has disabled this type of redirection.
Bonus: If meta and server redirection are not options, is there anything left besides JavaScript redirection?

I believe you should have written it this way:
<meta http-equiv="refresh" content="0; url=site_url">
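For example, a minimal main.html using that corrected syntax might look like the sketch below (the target address is just a placeholder; replace it with your real URL):
<!DOCTYPE html>
<html>
<head>
<!-- Redirect immediately (after 0 seconds) to the target URL -->
<meta http-equiv="refresh" content="0; url=http://example.com/">
<title>Redirecting...</title>
</head>
<body>
<!-- Fallback link in case the refresh is ignored -->
<p><a href="http://example.com/">Click here if you are not redirected.</a></p>
</body>
</html>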

Related

How to redirect a Google blog page... unless it is being contained in an iframe?

So I have a Google blog page which I would like to redirect to my own page, which contains an iframe of that Google blog page. When I put the following code in, it redirects to my page containing the iframe:
<head>
<meta http-equiv="refresh" content="0;url=//mysite.com"/>
However, inside the iframe it obviously keeps redirecting and just fills up the page with the repeating header.
I found many ways to test whether a page is being loaded in a browser or in an iframe (such as this solution: How to identify if a webpage is being loaded inside an iframe or directly into the browser window?), however they all use JavaScript or script tags, which Blogger does not seem to support (it refuses to save the changes). Is there a way to do this test using just HTML?
No. But the web server can detect it via the existence of a referrer string. Here is one way to do it in an Apache .htaccess file:
# Flag any request that arrives with a Referer header
SetEnvIf Referer ^http remote
<FilesMatch "\.(html|xml)$">
    # Deny flagged requests (e.g. pages loaded inside an iframe on another
    # site); direct requests, which normally carry no referrer, are allowed.
    Order Allow,Deny
    Allow from all
    Deny from env=remote
</FilesMatch>
References
Testing for SSI Injection (OTG-INPVAL-009) - OWASP

Disadvantages of redirecting to an error page when JavaScript is disabled

I searched the web and didn't find any website using this technique. When JavaScript is disabled or not supported by the browser, all those websites show small error boxes above their main content, while none of them redirect to an error page. I am using the following code on my site to do this:
<noscript>
Javascript is disabled.
<meta HTTP-EQUIV="REFRESH" content="0; url=http://www.wrangle.in/jserror.aspx">
</noscript>
But since my research turned up so little usage of this technique on the web, I want to know: is there any disadvantage to it that keeps these websites from using it?

Browser caching after logout

After logging out of the application, if I press the back button those pages are served from the browser cache.
I placed meta tags in the master pages, but it's not working.
I'm not sure which meta tags you're talking about, but normally these tags would "expire" a page, which you can put in your templates.
<META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="EXPIRES" CONTENT="0">
Hope this helps.
Like @m1ke said, you will be better off controlling the caching by setting the correct HTTP headers rather than trying to set meta tags, because, as you have probably discovered yourself, many browsers ignore the caching directives in the meta tags.
I barely worry about HTTP headers or caching in my web apps though. I simply set the default caching policy in the web server to "access plus 0 days" (ie. don't cache anything) and then put in specific entries for jpg, png and other assets that I do want cached. All you really need to worry about then is clearing the session on logout and you should be OK.
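As a rough sketch of that server-level approach on Apache with mod_expires (the directives are real, but the file types and lifetimes below are only example values, not a recommendation from the original answer):
# Default policy: don't cache anything ("access plus 0 seconds")
ExpiresActive On
ExpiresDefault "access plus 0 seconds"
# Specific entries for static assets that should be cached
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css "access plus 1 week"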
I would highly recommend reading the following article on caching: http://www.mnot.net/cache_docs/

Site not valid - but it is

So, I'm building a website called "dagbok.nu", which is Swedish for "diary now" :)
Anyway, when creating the Facebook application, it claims that the site URL is invalid as well as the app domain. For site url, I used "http://dagbok.nu" and for site domain, I used "dagbok.nu". Please don't reply (as I've seen others do on similar issues) that I should type the site url with the scheme and the domain without - that's exactly what I'm doing.
Right, so according to another question here, one could troubleshoot this functionality using FB's own URL scraper, so I did just that:
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fdagbok.nu
And the reply: Error Parsing URL: Error parsing input URL, no data was scraped
Right, so now I can assume that the reason for it being considered invalid is because of FB not being able to scrape the URL. But why?
According to this question, one of the reasons seems to be that FB has deemed the URL insecure or "spammy". I acquired this domain from a previous owner, so that wasn't entirely impossible. But when doing the same thing as Matthew in that post - i.e. trying to post on my timeline using the domain "http://dagbok.nu" - I didn't get any information. The status box expanded as if to include a thumbnail and information about the link, but it only contained a "(No title)" text and nothing more.
So now I don't know what to do. I've tried to check the DIG and NS records from multiple servers around the web, and every one of them seems to resolve it correctly, and I've had friends double check the URL from the States as well. I can't understand what's wrong and I have no idea how to ask someone at FB how to resolve this. Does anyone here have good advice for this? Thanks in advance! :)
EDIT
When changing the domain to another domain that points to the exact same web server and document_root, it works! So this is definitely a problem with the domain "dagbok.nu" and not with the code on that page.
EDIT
When using the debug function above, I see no activity in the server log whatsoever. Facebook doesn't even contact the server. When using the alternate URL - the one from the last edit - it pops up in the logs as it should.
EDIT
I filed a bug report with Facebook, and their first response was that they were going to follow up. Now, a month later, I got an email that said "We are prioritizing bugs based on impact to the developer community. As this bug report has not received much attention from other developers, we are closing it so as to better focus on the top issues", and then they told me to come here to Stack Overflow to try to solve my issue - but the issue is WITH THEM, and of course no one else has reported that my site doesn't work: it affects only me, and I haven't even opened the site yet due to this bug!
EDIT
I wanted to file a new bug report, but I can't even do that now, since they are blocking bug reports with this URL as well!
I had to edit the URL - here is the new bug report
When Facebook tries to scrape your site for information, it sends a request to your server with a specific user agent called "facebookexternalhit"...
Facebook needs to scrape your page to know how to display it around the site.
Facebook scrapes your page every 24 hours to ensure the properties are up to date. The page is also scraped when an admin for the Open Graph page clicks the Like button and when the URL is entered into the Facebook URL Linter. Facebook observes cache headers on your URLs - it will look at "Expires" and "Cache-Control" in order of preference. However, even if you specify a longer time, Facebook will scrape your page every 24 hours.
The user agent of the scraper is: "facebookexternalhit/1.1(+http://www.facebook.com/externalhit_uatext.php)"
Make sure it is not blocked by your server firewall
Look in your server log if it even tried to access your site
If you think this is a firewall issue look at this link
Your problem appears to be with your character encoding string. Your Apache server is currently sending the unsupported string latin1, while you've defined your meta content-type as iso-8859-1. See the W3C validator.
From what I've seen, the Facebook parser will stop immediately if it encounters either an unrecognized character encoding string or a mismatch in character encoding strings between your header and meta tags.
The problem could be originating from either your httpd.conf or php.ini files. Change these to match your meta and restart Apache. Since the problem seems to be domain-specific, I'd check httpd.conf first.
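As a hedged sketch of what aligning them might look like on the Apache side (the exact directive placement depends on your setup, and the charset value should be whatever your meta tag actually declares):
# In httpd.conf, a virtual host, or .htaccess: make the HTTP header
# advertise the same charset as the page's meta tag
AddDefaultCharset ISO-8859-1
# The rough equivalent setting in php.ini would be:
#   default_charset = "ISO-8859-1"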
Could your domain be blacklisted? Could you try messaging your url to someone, and see if Facebook gives you a "This message contains blocked content..." error?
If you don't provide certain minimum Facebook markup on your page, it will respond with "Error Parsing URL: Error parsing input URL, no data was scraped." I only looked at the homepage, but it appears that dagbok.nu contains no Facebook markup. I'm not sure what must be present at minimum, but in my implementation I assume the fb:app_id meta tag and the JavaScript SDK script must be there. You may want to take a look at http://developers.facebook.com/docs/guides/web/#plugins, particularly the Authentication section.
I discovered your question because I had this same error today for an unknown reason. I found that it was caused because the content of my og:image meta tag used an incorrect URL to the image I was trying to use. So as you add Facebook markup to your page, make sure your values are correct or you may continue to receive this message.
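For reference, a minimal set of Open Graph / Facebook meta tags might look roughly like the following (all values here are hypothetical placeholders, and the exact required set may differ):
<head>
<!-- Hypothetical example values; use your own app ID, URLs and text -->
<meta property="fb:app_id" content="123456789012345">
<meta property="og:title" content="Dagbok">
<meta property="og:type" content="website">
<meta property="og:url" content="http://dagbok.nu/">
<meta property="og:image" content="http://dagbok.nu/images/logo.png">
<meta property="og:description" content="Diary now">
</head>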
This doesn't seem to be a Facebook problem if you take a look at what I've discovered.
Testing it with the W3C Online Validation Tool gives one of two results.
Tested using dagbok.nu, but note that http://dagbok.nu gives no difference in the test results. Remove the last forward slash in between tests.
Test: 1
Results: 72 Errors 0 Warning
Note: Shown here is a fragment of the source Frameset DOCTYPE webpage.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
<NOSCRIPT><IMG SRC="http://svs.bystorm.se/rv?java=off"></NOSCRIPT><SCRIPT SRC="http://svs.bystorm.se/rvj"></SCRIPT>
<HTML STYLE="height:100%;">
<HEAD>
<META HTTP-EQUIV="content-type" CONTENT="text/html;charset=iso-8859-1">
Test: 2
Results: 4 Errors 1 Warning
Note: Shown here is a fragment of the source Transitional DOCTYPE webpage.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html >
<head>
<title>Dagbok: Framsida</title>
<meta http-equiv="content-type" content="text/html; charset=iso-8859-1">
<meta name="author" content="Jonas Eklundh Communication (http://jonas.eklundh.com)">
<meta name="author-email" content="jonas#eklundh.com">
<meta name="copyright" content="Jonas Eklundh Communication #2012">
<meta name="keywords" content="Atlas,Innehållssystem,Jonas Eklundh">
<meta name="description" content="">
<meta name="creation-time" content="0,079s">
<meta name="kort" content="DGB">
Repeating the tests a couple of seconds apart cycles between these results, indicating that a page redirect is occurring.
Security warnings are seen in Firefox and Chrome when visiting your site using these secure URLs:
https://dagbok.nu
https://www.dagbok.nu
The browser indicates the site should not be trusted because it's impersonating another site, using an invalid security certificate from *.loopiasecure.com.
Recommendation: Check your .htaccess file, CMS settings, page redirection, and security settings. Use the above page sources to work out which file locations and file names are being served, and so discover what's set incorrectly.
Once that's done, I think Facebook will be happy to then debug your webpage and provide additional recommendations.
Had the same problem, and I discovered it was an incorrect IPv6 address in the AAAA records for my domain. The IPv4 record was correct, so the site worked in a browser, but FB obviously checks the IPv6 records!
This issue may also happen when Cloudflare is used. This is because Cloudflare protects the page from Facebook, which is then unable to collect the data, which in turn makes Facebook think the page is invalid.
My fix was:
Turn off Cloudflare for the page.
Scrape the page using Facebook's Dev Tools: https://developers.facebook.com/tools/debug/og/object
Click the "Fetch new scrape information" button and let it run.
Re-enable Cloudflare protection for the page.
You should then be able to continue adding the page where you needed it.

How can an ISP append "index.jsp" to a URL I type in my browser

My ISP requires me to log in each day by redirecting me to a login page. Once I've logged in, they present me with an information page with a link to "Go to the Internet". When I click this, it redirects me to my browser's home page (google.com) but it appends "index.jsp" to the URL first. I can remove the index.jsp and press enter, but it keeps redirecting me to google.com/index.jsp until I clear my cache. This has become a daily ritual for me and everyone else in my neighborhood, regardless of browser or operating system. Anyone know how an ISP accomplishes something like this (it seems they are sticking something in the cache somehow)? Is there any chance I can do something to fix this? I've already called the ISP and they told me it's a bug in their system and all I can do is clear the cache every day (lame answer!).
This is the source of the intercept page that appends index.jsp to the URL:
<html>
<meta http-equiv="Refresh" content="0; url=/index.jsp">
If you are not redirected in 3 seconds Click Here to redirect.
<!--
Padding to make file large enough so that IE doesn't use its own default error page.
(( line repeats, removed by stackoverflow poster ))
Padding to make file large enough so that IE doesn't use its own default error page.
-->
</html>
The ISP requires re-authentication each day (system resets at about 4 am) and this (I suspect) is the initial intercept page sent to one's browser in place of the page requested.
After interception, it goes to a page of the form http://((subdomain)).hotsitenet.com/login.jsp?orig=http://((website.initially.requested.this.day)) where one authenticates.
After authenticating regular web browsing works, except if one tries to go to the first page requested that day, http://((website.initially.requested.this.day)), the intercept intervenes.
This results, typically, in a 404 error.
For example, say you went to google.com initially, were intercepted as expected, and authenticated. Now if you go back to http://google.com, you instead get kicked to http://google.com/index.jsp and google sends back an error page.
Verified to occur in Firefox and Safari.
I wonder if it would help if they had the intercept page include some code telling the browser not to cache the page?
E.g.,
<meta http-equiv="expires" value="Thu, 16 Mar 2000 11:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
Found this about possibly putting the head section at the bottom of the page:
http://www.htmlgoodies.com/beyond/reference/article.php/3472881/So-You-Dont-Want-To-Cache-Huh.htm
Alternatively, the line
<meta http-equiv="Refresh" content="0; url=/index.jsp">
from the intercept page could be changed to:
<meta http-equiv="Refresh" content="0; url=/">
And they could have http://((subdomain)).hotsitenet.com/ forward to http://((subdomain)).hotsitenet.com/index.jsp server-side instead of in the client's browser.
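As a hedged sketch, if that host were fronted by Apache (their box presumably runs a JSP container, where a welcome-file entry in web.xml would do the same job), the server-side forward could be as simple as:
# Serve index.jsp internally whenever the bare root URL is requested,
# so no redirect ever reaches - or gets cached by - the client's browser
DirectoryIndex index.jsp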