I am working with Joomla 1.5 and I don't know how to set up user-friendly URLs for the site.
Please give suggestions for keeping user-friendly URLs.
I have changed the SEO configuration, setting both "Use Apache mod_rewrite" and "Search Engine Friendly URLs" to Yes. This changes the URL to
http://localhost/joomla/Joomla_1.5.7/publicationsform but it shows me a 404 error; it works only when I put
http://localhost/joomla/Joomla_1.5.7/index.php/publicationsform
How do I resolve this?
Also, even when I put http://localhost/joomla/Joomla_1.5.7/index.php/publicationsform, my CSS is not being loaded.
You must have the Apache mod_rewrite module enabled:
http://docs.joomla.org/How_to_check_if_mod_rewrite_is_enabled_on_your_server
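If mod_rewrite is enabled, the usual remaining steps for Joomla 1.5 are to rename the htaccess.txt file that ships with Joomla to .htaccess and, because your site lives in a subdirectory, set RewriteBase to match. A minimal sketch (the path is taken from the question; your Apache config must also have AllowOverride set so the file is read):

    # .htaccess in the Joomla root (renamed from the shipped htaccess.txt)
    RewriteEngine On
    # The site is installed in a subdirectory, so point RewriteBase at it
    RewriteBase /joomla/Joomla_1.5.7

As for the missing CSS: with /index.php/ URLs the browser resolves relative stylesheet paths against the fake directory, so templates should build asset links from the site's base URL (in Joomla 1.5 templates, echo $this->baseurl) rather than from relative paths.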
I'm using Umbraco 7.0.1 and want to change the default redirect behavior from returning a 302 to returning a 301.
So I have a page: /het-obam-perspectief/nieuws/ that redirects to /nieuws/.
I have set this up in the Umbraco CMS:
When the request comes back, I get a 302 (as captured in Fiddler).
Is there a way to reconfigure the default redirect behavior? I've looked around the web and have only seen instructions for changing redirect behavior when changing domains (stackoverflow.com/questions/16357712/umbraco-301-redirect-entire-site) or managing each redirect URL manually (http://our.umbraco.org/projects/developer-tools/301-url-tracker), neither of which satisfies what I'm trying to do.
After doing some research: Umbraco's default behavior for the umbracoRedirect property is to issue a 302, and it cannot be changed.
There are two different options here:
1. Configure the redirect in either IIS or web.config for the 301, using URL rewriting (a sample rule is sketched after this list).
2. I found this package (I haven't tested or installed it), but it looks very promising. It seems to basically create a new doc type named umbracoPermanentRedirect, of type Content Picker, which then does a 301: Umbraco Perm Redirect
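For option (1), a minimal web.config sketch using the IIS URL Rewrite module (the paths are taken from the question; the rule name is illustrative):

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="NieuwsPermanentRedirect" stopProcessing="true">
            <match url="^het-obam-perspectief/nieuws/?$" />
            <action type="Redirect" url="/nieuws/" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>

The redirectType="Permanent" attribute is what makes the response a 301 instead of the default 302.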
I'd go with option (2), since it behaves like Umbraco's default property.
Please let me know how this goes, because our company heavily uses SEO best practices and I'd get pinged on this 302 as well if I ever needed redirects.
Appreciated.
I've been trying to get to the bottom of this problem for a few hours, but I can't seem to fix it. I've seen other questions similar to this and tried to use them to implement a fix, but to no avail.
I've built a Facebook contest canvas app which displays fine independently, but when I link it to a Facebook page (as a link to a new contest), Chrome no longer displays it and gives the following warning:
The page at 'https://www.facebook.com/contest/app_xxxxxxxx' was loaded over HTTPS, but ran insecure content from 'http://mydomain.com/': this content should also be loaded over HTTPS.
I've learned, partly by trawling this site, that Chrome's security is fussier; the app loads correctly, without errors, in Firefox and IE, but I can't find any resources that are loaded from a non-https source.
I have been through the Net tab in Firebug and checked that all of the loaded resources use https (the PNG images, the JPG images, the CSS files and the jQuery JS files, which are all hosted on the same server that has the certificate). I have even tried hosting the transitional DTD document myself, but nothing seems to make the warning go away and the app display correctly.
In the other similar questions, it seems that there are either resources loaded from non-https sources, or SSL switches passed to Facebook's JavaScript library before FB.init.
The problem is that I am using only the PHP SDK, not the JS one (although I am using version 1.9 of jQuery, hosted on my server), and I could find no similar SSL-specific settings there.
If someone could give me a tip about how to investigate further, what I might be missing, or if anyone is familiar with this issue, I'd be interested to hear about it.
Thanks a lot.
David
Facebook requires the app to be served over https://. You need an SSL certificate on your server and SSL enabled. Then, in the Facebook app settings, change the Secure URL to your https://mydomain.com URL.
I did have a similar issue recently (though it only caused issues in IE10), and I resolved it by adding a P3P header:
header('P3P: CP="IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT"');
Found the solution!
In the Facebook app settings, if the Page Tab URL points to a specific page, e.g. https://www.mydomain.com/index.php, Chrome doesn't complain with the insecure-content message, but if you reference a directory, the error is propagated. I found this confusing, since the Canvas URLs need to be directories.
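To illustrate (hypothetical values; the pattern is what matters):

    Page Tab URL: https://www.mydomain.com/index.php   (no warning)
    Page Tab URL: https://www.mydomain.com/            (triggers the warning)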
I hope this answer will save someone a few hours! :)
I built a site out of ZF and installed it fine on my server. I have the MVC structure and use custom routing (for SEO purposes), producing URLs as below:
mysite.com/controller.html
mysite.com/controller/action.html
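Such URLs can be produced by a regex route along these lines (a sketch of a ZF1 route; the route name and defaults are illustrative, not from the question):

    // Maps /controller/action.html to the matching controller/action pair
    $router = Zend_Controller_Front::getInstance()->getRouter();
    $router->addRoute('seoAction', new Zend_Controller_Router_Route_Regex(
        '(\w+)/(\w+)\.html',
        array('module' => 'default'),
        array(1 => 'controller', 2 => 'action'),
        '%s/%s.html' // reverse template for URL assembly
    ));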
Generally, everything is working fine; the only problem is that SE crawlers won't find any .html files. If I open the "Activity" window in Safari, I see all the CSS and other files being referenced/read fine, but not the page itself.
So the page renders fine in a browser, but SE crawlers, or any program that makes the request, won't find the page. I'm wondering if it's an Apache issue. My .htaccess file is the same file that shipped with ZF.
I really appreciate any advice/suggestions/comments!
Is it possible that your app is serving all pages with a 404 status code? Browsers and crawlers would then be getting the same thing, but the browser renders the content whereas the crawlers ignore it. I've seen some people use the Error Controller in ZF as a way of doing routing (not a good idea), where the Error Controller 'catches' all requests and then examines the params to determine what to display.
If this isn't your problem, could you please edit your question to include:
How you know that crawlers are getting a 404
Some more info on how you are doing your routing
Also, if you can provide an example URL, we can check the headers that are being returned.
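For example, from the command line (the URL follows the question's pattern and is hypothetical):

    curl -I http://mysite.com/controller/action.html

The first line of the response should read HTTP/1.1 200 OK; a 404 there would confirm the theory above.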
I'm implementing the Facebook Comments plugin on my site. Users get the warning "Show all content" in IE9.
This other publisher uses the same plugin, and it does not bring up the warning.
Can someone please help me with this?
Asking users to turn off the mixed-content warning in their IE9 is not an option.
We were just looking at this today, and our workaround for now was to include the Facebook library over https (even when the page itself is viewed over http). Although not ideal, it gets rid of the mixed-content warnings in IE9 until they have fixed their bug.
That seems to be how it was accomplished at www.vg.no, linked in the original question; the library is linked via https.
From their code:
<script src="https://connect.facebook.net/nb_NO/all.js"></script>
I have the same problem:
I have a page that's 100% http. But the Facebook JavaScript (which I call over http) is returning assets (.js, images) over https, which generates security warnings for IE(9) users.
I have figured out that it's the comment widget from Facebook.
Here's an example of a live page over http with the error:
http://app.gophoto.com/p?id=10173&rkey=CD01891B287792415384&s=1&a=6940
Here's one of the assets that Facebook returns over https:
https://s-static.ak.facebook.com/rsrc.php/v1/y8/r/7Htnnss1mJY.js
(I'm unable to comment on Joel's answer, for some reason. But his suggestion to fetch the initial all.js over https on http sites does not actually work. I've tried it, and it also looks inherently incorrect, since even the initial JS fetch mixes http and https content.)
When Google crawls our site, the resulting URLs all have the jsessionid appended to them.
Is this happening because the app server is detecting a lack of cookie support in Googlebot, forcing the session to be maintained via URL-rewriting? Is there anything I can do about it?
Is the solution simply to never call Component.getSession()? Is there anything like HttpServletRequest.getSession(false)?
Edit: I just found org.apache.wicket.Session.exists()
Found the solution in SEO - Search Engine Optimization on the Apache Wicket Wiki.
In a nutshell:
override WebApplication.newWebResponse()
have it return a BufferedWebResponse that checks whether the user-agent is a crawler (e.g. Googlebot)
if it is a crawler, don't rewrite the URL
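A minimal sketch of that recipe for Wicket 1.4 (class names follow the wiki recipe; the crawler check is simplified to Googlebot only, and HomePage is a hypothetical page class):

    import javax.servlet.http.HttpServletResponse;

    import org.apache.wicket.Page;
    import org.apache.wicket.RequestCycle;
    import org.apache.wicket.protocol.http.BufferedWebResponse;
    import org.apache.wicket.protocol.http.WebApplication;
    import org.apache.wicket.protocol.http.WebRequest;
    import org.apache.wicket.protocol.http.WebResponse;

    public class MyApplication extends WebApplication {

        @Override
        public Class<? extends Page> getHomePage() {
            return HomePage.class; // hypothetical home page
        }

        // Return a response that skips URL re-writing (no ;jsessionid=...)
        // when the user-agent looks like a crawler.
        @Override
        protected WebResponse newWebResponse(final HttpServletResponse servletResponse) {
            return new BufferedWebResponse(servletResponse) {
                @Override
                public CharSequence encodeURL(final CharSequence url) {
                    final WebRequest request = (WebRequest) RequestCycle.get().getRequest();
                    final String userAgent = request.getHttpServletRequest().getHeader("User-Agent");
                    // Simplified check; a real list would cover more bots.
                    if (userAgent != null && userAgent.toLowerCase().contains("googlebot")) {
                        return url; // leave the URL untouched for crawlers
                    }
                    return super.encodeURL(url);
                }
            };
        }
    }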