I have a question regarding disabling browser caching. I have already found a few solutions and just want to know whether there are better or more common approaches. I have a GWT application, and in order to disable caching I can use the following options:
Adding a dummy parameter to the URL
Putting <meta http-equiv="pragma" content="no-cache"> on the HTML page
Setting HTTP headers:
header("Pragma-directive: no-cache");
header("Cache-directive: no-cache");
header("Cache-control: no-cache");
header("Pragma: no-cache");
header("Expires: 0");
The most important are:
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); #Expires sometime in the past
header("Cache-control: no-cache"); #Disables caching
In addition, add a unique parameter to the URL to be sure. If you use the browser back button, sometimes the entire DOM is cached and no new content is fetched unless you fetch it dynamically with JavaScript, appending a unique id to each request.
Normally, you want to set most of these headers in your server configuration, so that you can also serve images and other static content with the right headers.
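The unique-parameter trick can be sketched in JavaScript. This is a minimal sketch: the helper name and the `_` parameter name are arbitrary conventions, not anything a browser requires.

```javascript
// Append a unique timestamp parameter so each request bypasses the
// browser cache. The "_" parameter name is an arbitrary convention.
function cacheBust(url, now = Date.now()) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + '_=' + now;
}

// Usage in a dynamic fetch: fetch(cacheBust('/data.json'))
```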
Related
My website caches REST requests. I am happy with the site's speed and would like to keep the cache. However, I have discovered that the same cached file is returned after I make a POST/PUT/DELETE request, which means the updated database isn't queried. The same happens after refreshing the page. I would like to see updated information from the database after a POST/PUT/DELETE request.
I am wondering whether I can set a conditional rule to cache responses only if the last REST request wasn't a POST/PUT/DELETE. The website is hosted on a shared web hosting platform, and I can amend the .htaccess file. Any ideas are greatly appreciated. I hope my explanation is clear, but please feel free to reach out if it's not. Thank you in advance.
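One common approach here is a sketch along these lines (assuming Apache 2.4+ with mod_headers enabled; the "/api/" prefix is a hypothetical placeholder for the REST endpoints): "no-cache" still lets the browser store responses, but forces it to revalidate them with the server before each reuse, so a resource changed by a POST/PUT/DELETE is picked up on the next GET.

```apache
# Sketch for .htaccess: "no-cache" allows storing the response but
# requires revalidation with the server before each reuse.
<IfModule mod_headers.c>
  # "/api/" is a hypothetical path prefix for the REST endpoints
  <If "%{REQUEST_URI} =~ m#^/api/#">
    Header set Cache-Control "no-cache"
  </If>
</IfModule>
```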
To stop browser caching, we can add the properties below to the HTML page as meta headers.
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
We have to add these headers to the HTML page, which should be one of the parent HTML pages in use:
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">
The code below can be used from a Java servlet; Node.js offers an equivalent setHeader API.
response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
response.setHeader("Expires", "0");
A servlet code sample follows:
public void doGet(HttpServletRequest req, HttpServletResponse response)
        throws ServletException, IOException {
    // The parameter name "requestType" is a placeholder; the original
    // sample only indicated "some field name".
    if ("last_request".equals(req.getParameter("requestType"))) {
        response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
        response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
        response.setHeader("Expires", "0");
    }
}
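Since Node.js was mentioned above as well, here is a minimal sketch of the same headers there. The helper name is ours; any response object exposing setHeader() works, including Node's http.ServerResponse.

```javascript
// Apply the same no-cache trio to a Node.js-style response object.
function setNoCache(res) {
  res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate'); // HTTP 1.1
  res.setHeader('Pragma', 'no-cache');                                   // HTTP 1.0
  res.setHeader('Expires', '0');
  return res;
}

// Usage inside an http.createServer handler: setNoCache(res).end(html);
```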
The link below can serve as a reference for this thread:
How do we control web page caching, across all browsers?
Is there a way I can prevent browser image caching in an HTML email without using Javascript? I have an HTML email with an image that I want to be reloaded every time the email is opened in Gmail webmail. Right now it seems the browser is caching the image.
Unfortunately, since 2013 Gmail has been caching images through its own proxy in its native web interface and mobile apps, although external apps and services retrieving mail from Gmail still download the original images.
This snippet, placed at the top of the PHP script that serves the image, can work around the issue by disabling caching:
header('Content-Type: image/jpeg');
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");
As per this doc on MDN:
After that it is downloaded every 24 hours or so. It may be
downloaded more frequently, but it must be downloaded every 24h to
prevent bad scripts from being annoying for too long.
Is the same true for Firefox and Chrome? Or does the update to the service worker JavaScript only happen when the user navigates to the site?
Note: As of Firefox 57, and Chrome 68, as well as the versions of Safari and Edge that support service workers, the default behavior has changed to account for the updated service worker specification. In those browsers, HTTP cache directives will, by default, be ignored when checking the service worker script for updates. The description below still applies to earlier versions of Chrome and Firefox.
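Under the updated specification behavior described above, the register() call accepts an updateViaCache option controlling how the HTTP cache is consulted for the worker script. A small sketch follows; the wrapper function is ours, added only so the call is easy to exercise outside a browser.

```javascript
// 'none' bypasses the HTTP cache when checking the service worker script
// for updates; 'imports' (the default) bypasses it only for the main
// script; 'all' consults the cache for everything.
function registerWorker(container, scriptUrl) {
  return container.register(scriptUrl, { updateViaCache: 'none' });
}

// In a real page: registerWorker(navigator.serviceWorker, '/service-worker.js');
```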
Every time you navigate to a new page that's under a service worker's scope, Chrome will make a standard HTTP request for the JavaScript resource that was passed in to the navigator.serviceWorker.register() call. Let's assume it's named service-worker.js. This request is only made in conjunction with a navigation or when a service worker is woken up via, e.g., a push event. There is not a background process that refetches each service worker script every 24 hours, or anything automated like that.
This HTTP request will obey standard HTTP cache directives, with one exception (which is covered in the next paragraph). For instance, if your server set appropriate HTTP response headers that indicated the cached response should be used for 1 hour, then within the next hour, the browser's request for service-worker.js will be fulfilled by the browser's cache. Note that we're not talking about the Cache Storage API, which isn't relevant in this situation, but rather standard browser HTTP caching.
The one exception to standard HTTP caching rules, and this is where the 24 hours thing comes in, is that browsers will always go to the network if the age of the service-worker.js entry in the HTTP cache is greater than 24 hours. So, functionally, there's no difference in using a max-age of 1 day or 1 week or 1 year—they'll all be treated as if the max-age was 1 day.
Browser vendors want to ensure that developers don't accidentally roll out a "broken" or buggy service-worker.js that gets served with a max-age of 1 year, leaving users with what might be a persistent, broken web experience for a long period of time. (You can't rely on your users knowing to clear out their site data or to shift-reload the site.)
Some developers prefer to explicitly serve their service-worker.js with response headers causing all HTTP caching to be disabled, meaning that a network request for service-worker.js is made for each and every navigation. Another approach might be to use a very, very short max-age—say a minute—to provide some degree of throttling in case there is a very large number of rapid navigations from a single user. If you really want to minimize requests and are confident you won't be updating your service-worker.js anytime soon, you're free to set a max-age of 24 hours, but I'd recommend going with something shorter on the off chance you unexpectedly need to redeploy.
Some developers prefer to explicitly serve their service-worker.js with response headers causing all HTTP caching to be disabled, meaning that a network request for service-worker.js is made for each and every navigation.
This no-cache strategy can prove useful in a fast-paced «agile» environment.
Here is how
Simply place the following hidden .htaccess file in the server directory containing the service-worker.js:
# DISABLE CACHING
<IfModule mod_headers.c>
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires 0
</IfModule>
<FilesMatch "\.(html|js)$">
<IfModule mod_expires.c>
ExpiresActive Off
</IfModule>
<IfModule mod_headers.c>
FileETag None
Header unset ETag
Header unset Pragma
Header unset Cache-Control
Header unset Last-Modified
Header set Pragma "no-cache"
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Expires "Thu, 01 Jan 1970 00:00:00 GMT"
</IfModule>
</FilesMatch>
This will disable caching for all .js and .html files in this server directory and those below it, which covers more than service-worker.js alone.
Only these two file types were selected because they are the non-static files of my PWA that may affect users who run the app in a browser window without (yet) installing it as a full-fledged, automatically updating PWA.
More details about service worker behaviour are available from Google Web Fundamentals.
I am "renaming" an existing file for a project I am working on. To maintain backwards compatibility, I am leaving a cfm file in place to redirect the users to the new one.
buy.cfm: old
shop.cfm: new
In order to keep everything as clean as possible, I want to send the 301 statuscode response if a user tries to go to buy.cfm.
I know that I can use either cflocation with the statuscode attribute
<cflocation url="shop.cfm" statuscode="301" addtoken="false">
or I can use the cfheader tags.
<cfheader statuscode="301" statustext="Moved permanently">
<cfheader name="Location" value="http://www.mysite.com/shop.cfm">
Are there any reasons to use one method over the other?
I think they do the same thing, with <cflocation> being more readable
I tested this on ColdFusion 9.
There is one major difference, and it is that cflocation stops execution of the page and then redirects to the specified resource.
From the Adobe ColdFusion documentation:
Stops execution of the current page and opens a ColdFusion page or
HTML file.
So you would need to do this:
<cfheader statuscode="301" statustext="Moved permanently">
<cfheader name="Location" value="http://www.example.com/shop.cfm">
<cfabort>
to get the equivalent of this:
<cflocation url="shop.cfm" statuscode="301" addtoken="false">
Otherwise, you risk running into issues if other code runs after the cfheader tag. I came across this when fixing some code where redirects were inserted into an application.cfm file -- using cfheader -- without aborting the rest of the page processing.
I also noticed that cflocation sets the following response headers:
Cache-Control: no-cache
Pragma: no-cache
One might want to add these headers when using the cfheader tag with Location:
<cfheader name="Cache-Control" value="no-cache">
<cfheader name="Pragma" value="no-cache">
To elaborate on the answer by Andy Tyrone: while they MAY do the same thing in certain circumstances, the CFHEADER method gives you more control over the headers sent in the response. This becomes useful, for example, if you want to send cache-control headers to a browser or content delivery network so that they do not keep hitting your server with the same old redirect request. There is no way (to my knowledge) to tell a CFLOCATION to cache the redirect.
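For instance, a permanent redirect that browsers and CDNs are allowed to cache for a day might look like this (a sketch; the max-age value is an arbitrary choice):

```
<cfheader statuscode="301" statustext="Moved Permanently">
<cfheader name="Location" value="http://www.example.com/shop.cfm">
<!--- Allow the redirect itself to be cached for 24 hours (86400 s) --->
<cfheader name="Cache-Control" value="max-age=86400">
<cfabort>
```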
It's a login page. After validating, the user is redirected to the home page:
#header("Content-type: text/html; charset=utf-8");
#header('Location: index.php');
#header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
#header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); // Date in the past
But the page becomes blank in IE6! How? And it happens only the first time; afterwards it works normally!
Why are you suppressing the warnings/errors that might be happening? I'd say get rid of the # first, and then tell us what is really going on.
header() won't work if any output has already been sent. This includes spaces, empty lines, or anything else. Make sure that strictly nothing is output before calling header().
Works:
<?php
header('Location: index.php');
?>
Does not work (note the space before the opening tag; it is sent as output, which commits the headers):
 <?php
header('Location: index.php');
?>
And remove the #: in PHP it starts a comment, so those header() lines never execute at all.
You can also redirect using JavaScript from the client side. Instead of:
#header('Location: index.php');
use a client-side redirect:
echo "<script>document.location.replace('index.php');</script>";
I have been searching for an answer to a similar problem for an hour or so. IE6 seems to have a problem with compressed (gzip/deflate) content. Simply disabling mod_deflate on our server fixed the problem for IE6. Specifically, version 6.0.29 appears to exhibit this bug.
Take a look at http://www.contentwithstyle.co.uk/content/moddeflate-and-ie6-bug