Favicon only showing when I type www.websitename.com not websitename.com
Why does this happen, and how can I make the favicon show up on websitename.com?
The default favicon is simply downloaded from the current Host of the HTTP request plus /favicon.ico, and this Host is different in your two examples. There is nothing that says the host preceded by a "www." has anything in common with the host without the preceding "www."
There are three cases that may apply to you.
1. The favicon might be served from a literal file named favicon.ico located in your document root. In this case, you need to check that your server's vhost (virtual host) configuration resolves both hosts, "www.websitename.com" and "websitename.com", to the exact same set of files. (Although in general this is not a good idea. See N.B. below.)
2. The favicon might be served from a file (named almost anything and located either inside or outside of your document root) which is set as your favicon by a server configuration. In that case, check the server configuration and make sure that the rule determining the location of the favicon is applied to both hosts, with and without the leading "www." Once again, these are in general completely different hosts, and a server does not normally assume there is anything in common between them.
3. You might be specifying the favicon individually in each file with a link HTML tag. If so, make sure that the same HTML files are being loaded at each Host, as in case 1, and follow the format shown below for your link tag. (Your current rel attribute does not look like it will trigger most browsers into displaying the icon.)
Necessary to add link tag for favicon.ico?
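For reference, a conventional link tag looks something like this (the /favicon.ico path is only an example; point href at wherever your icon actually lives):
<link rel="icon" type="image/x-icon" href="/favicon.ico">
Most current browsers key on rel="icon"; the older rel="shortcut icon" form is treated as a legacy alias.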
N.B. There is a good reason why those hosts are not considered equivalent. Consider the case where half of your visitors link to an article at your site with the "www." and half of them don't, and imagine this happens for every other website as well. Slowly the internet's search engines and bookmarks fill up with multiple links for every resource, everything gets crawled at least twice as often for each distinct link, and resources are wasted indefinitely.
It is a good idea, therefore, to make your main body of content accessible under only one Host (whichever you prefer), and redirect the other host to the home page of the correct one. In the long run this will help your own server as well as the rest of the internet: by allowing only canonical links to work in the short run, you ensure that only canonical links exist in the long run.
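As a rough sketch, if you happen to be on Apache with mod_rewrite enabled and .htaccess overrides allowed, a redirect along these lines would send every request on the bare host to the home page of the "www." host (swap the two hosts if you prefer the bare domain as canonical):

# Redirect the non-canonical host to the canonical home page
RewriteEngine On
RewriteCond %{HTTP_HOST} ^websitename\.com$ [NC]
RewriteRule ^ http://www.websitename.com/ [R=301,L]

The 301 status tells browsers and crawlers that the move is permanent, so over time only the canonical links survive.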
I am writing a program which will look for Mixed Content within a URL. The aim of this script is to extract all links in a page and convert these links to absolute links, and then to see if the content is mixed.
Let's say we have this page: https://www.example.com/xxx1/. I'm assuming that any link referenced within this page will ALWAYS connect through to the HTTPS site, unless the link explicitly says otherwise?
E.g.
/index.html = will be HTTPS
http://www.example.com/img/insecureImage.jpg = Will be HTTP - and therefore insecure?
True?
Thanks,
The situation with mixed content depends on whether the content is active or passive. If you have an HTTPS site, all active mixed content will be blocked. If it is passive, as in the case of the image you provided, it will be displayed by default, but users can choose to block it in their browser settings too.
The example you give is of an image file, so that is passive mixed content and that would not be blocked by default, but could be by the user's settings as mentioned.
The following resources fit into that class:
img
audio
video
object
The guide I link to explains the active/passive mixed content quite well.
MDN Guide on Mixed Content
Yes, regardless of mixed content, a relative link is resolved against the origin of the page it appears on, so in your example /index.html should be interpreted as https://www.example.com/index.html.
If they are absolute links, determining whether they are mixed content is exactly like you suggest: check the URI scheme. To reference mixed content, even from the same server, a page has to use an absolute link, which makes your task fairly easy.
You're on the right track.
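As a minimal sketch of that logic in Python (the function and class names here are purely illustrative, and it assumes you have already fetched the page HTML):

# Collect href/src values, resolve them against the page URL,
# and flag anything that would load over plain HTTP.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def find_mixed_content(page_url, page_html):
    collector = LinkCollector()
    collector.feed(page_html)
    insecure = []
    for link in collector.links:
        absolute = urljoin(page_url, link)  # relative links inherit https://
        if urlparse(absolute).scheme == "http":
            insecure.append(absolute)
    return insecure

html = '<a href="/index.html">home</a><img src="http://www.example.com/img/insecureImage.jpg">'
print(find_mixed_content("https://www.example.com/xxx1/", html))
# prints ['http://www.example.com/img/insecureImage.jpg']

Whether each flagged URL is active or passive content (and therefore blocked or merely displayable) is then a separate classification step, per the list above.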
I have a weird network request on my page, which refers to JavaScript files that I removed from every HTML file earlier. The cache is cleared and there is not a single reference to be found in the source HTML or the JavaScript files. To fix that, and also out of general curiosity, I would like to know if there is a simple way to find out where a request was triggered, preferably using the Chrome DevTools.
Update:
Thanks to jaredwilli I found the Initiator column under the Network tab. However, this only shows "Other". What I would like to know is the (HTML or JavaScript) file where those requests were triggered.
On the Network panel, you can determine what the initiator of a request was by viewing the Initiator column. It gives you the file, line number and type of resource it was, either Script or something else.
I am doing some updates to a site I have developed over the last few years. It has grown rather erratically (I tried to plan ahead, but with this site it has taken some odd turns).
Anyway, the site has a community blog (blog.domain.com, which used to be domainblog.com) and users with personal areas (user1.domain.com, user2.domain.com, etc.).
The personal areas have standard page content that the user can use, or add snippets of text to partially customize. Now the owner wants the users to be able to create their own content.
Everything is done up to the point of integrating a file browser.
I need a browser that will allow me to do the following:
the browser needs to be able to browse the common files at blog.domain.com/files and the user files at user_x.domain.com/files
the browser will also need to be able to differentiate between the two and generate the appropriate image url.
of course, the browser access to the user files will need to be dynamic and only show those files particular to the user (along with the common files)
I also need to be able to set a file size for images
the admin area is in a different directory than either the blog or the user subdomains.
general directory structure
--webdir--
  |--client--
    |--clientsite--
      |--blog (blog.domain.com)
      |--sites--
        |--main site (domain.com)
        |--admin (admin.domain.com)
        |--users--
          |--user1 (user1.domain.com)
          |--user2 (user2.domain.com)
          ...etc.
I have tried several different file browsers, and using symlinks, but the browsers don't seem to be able to follow them. I am also having trouble even setting them to use a directory that isn't the default.
What file browser would you recommend, and what would I need to customize to make it work?
TIA
OK, since I have not had any responses to this question, I guess I will have to do a workaround and then see about writing a custom file browser down the road.
The root of the site http://example.com correctly identifies index.html and renders it. In a similar manner, I want http://example.com/foo to fetch foo.html present in the root of the directory. A site that uses this functionality is www.zachholman.com. I've seen his code on GitHub, but I'm still not able to figure out how it is done. Please help.
This feature is actually available in Jekyll. Just add the following line to your _config.yml:
permalink: pretty
This will enable links to posts and pages without the .html extension, e.g.
/about/ instead of /about.html
/YYYY/MM/DD/my-first-post/ instead of YYYY-MM-DD-my-first-post.html
However, you lose the ability to customize permalinks... and the trailing slash is pretty ugly.
Edit: The trailing slash seems to be there by design
It's actually the server that needs adjusting, not Jekyll. By default, Jekyll is going to produce files with .html extensions. There may be a way around that, but it's unlikely that you really want to go that route. Instead, you need to let your web server know that you want those files served when a URL is called with the file's basename (and no extension).
If your site is served via an Apache web server, you can enable the "MultiViews" option. In most cases, you can do that by creating an .htaccess file at your site root with the following line:
Options +MultiViews
With this option enabled, when Apache receives a request for:
http://example.com/foo
It will serve the file:
/foo.html
Note that the Apache server must be set up to allow the option to be set in the .htaccess file. If not, you would need to do it in the Apache config file itself. If your site is hosted on another web server, you'll need to look for an equivalent setting.
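If MultiViews turns out not to be available on your host, a mod_rewrite rule in the same .htaccess file is a common fallback; a rough sketch (assuming mod_rewrite is enabled):

# Serve /foo.html when /foo is requested and foo.html exists
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html [L]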
Can someone explain to me in simple terms the concept of caching in GWT? I have read about this in many places, but maybe due to my limited knowledge, I'm not able to understand it.
Such as nocache.js and cache.js, or other things such as making the client cache files forever, or how to have files cached by the client so that the client only downloads them again if they have changed on the server.
Generally, there are 3 types of files -
Cache Forever
Cache for some time
Never Cache
Some files can never be cached, and will always fall in the "never cache" bucket. But the biggest performance wins come from systematically converting files in the second bucket into files that can be cached forever. GWT makes it easy to do this in various ways.
The <md5>.cache.js files are safe to cache forever. If they ever change, GWT will rename the file, and so the browser will be forced to download it again.
The .nocache.js file should never be cached. This file is modified even if you change a single line of code and recompile. The nocache.js file contains the links to the <md5>.cache.js files, and therefore it is important that the browser always has the latest version of it.
The third bucket contains images, css and any other static resources that are part of your application. CSS files are always changing, so you cannot tell the browser 'cache forever'. But if you use ClientBundle / CssResource, GWT will manage the file for you. Every time you change the CSS, GWT will rename the file, and therefore the browser will be forced to download it again. This lets you set strong cache headers to get the best performance.
In summary -
For anything that matches .cache., set a far-in-the-future expires header, effectively telling the browser to cache it forever.
For anything that matches .nocache., set cache headers that force the browser to re-validate the resource with the server.
For everything else, you should set a short expires header depending on how often you change those resources (one way to set these headers with Apache is sketched below).
Try to use ClientBundle / CssResource; this automatically renames your resources so they fall into the *.cache.* bucket.
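For example, on an Apache server (assuming mod_expires and mod_headers are enabled), the first two rules might look roughly like this:

# Anything named *.cache.* is immutable: cache it for a long time
<FilesMatch "\.cache\.(js|html)$">
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
</FilesMatch>

# The *.nocache.js bootstrap file must always be re-validated
<FilesMatch "\.nocache\.js$">
    ExpiresActive On
    ExpiresDefault "access plus 0 seconds"
    Header set Cache-Control "no-cache, must-revalidate"
</FilesMatch>

The exact directives depend on your server; the important part is the split between the immutable .cache. artifacts and the always-fresh .nocache.js bootstrap.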
This blog post has a good overview of the GWT bootstrapping process (and many other parts of the GWT system, incidentally), which has a lot to do with what gets cached and why.
Basically, the generated nocache.js file is a relatively small bit of JS whose sole purpose is to decide which generated permutation should be downloaded.
Each individual permutation consists of the implementation of your app specific to the browser, language, etc., of the user. This is a lot more code than the simple bootstrapping code, and thus needs to be cached for your app to respond quickly. These are the cache.html files that get generated by the GWT compiler.
When you recompile and deploy your app, your users will download the nocache.js file as normal, but this will tell their browsers to download a new cache.html file with the app's new features. This will now be cached as well for the next time they load your app.