Setting the Expiry Headers for JS/Images - JBoss

Please help me set the Expires header for files like JS/images/CSS.
Server: Linux
App Server: JBoss
I found some examples on the internet that achieve something similar using .htaccess files, but they are not clear.

You can do that within the application by using a custom servlet filter that sets the caching headers.
But you did not explain the actual problem you are trying to solve. Your question sounds pretty unusual, so chances are high that what you really need is something completely different. You mention an .htaccess file, which means you have a web server, likely Apache, in front of JBoss. Static content (files like .css, .js, etc.) should normally be served by that server, not JBoss, so it is not JBoss that should set the HTTP headers for it. Here you can find an explanation of how to do it in Apache.
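For the Apache side, a minimal sketch using mod_expires (the content types and lifetimes here are illustrative, and the module must be enabled):

# httpd.conf or .htaccess -- requires mod_expires to be loaded
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 week"
ExpiresByType application/javascript "access plus 1 week"

Apache then emits matching Expires and Cache-Control: max-age headers for those content types.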

Related

Issues Recreating HTTP Headers from an older application

I am having an issue uploading files to my server in a frontend rewrite I am working on.
I am getting a 500 error response, "Premature end of script headers:". I have recreated my headers almost exactly the same as in the legacy web app I am trying to recreate, with two exceptions.
Backstory: I am recreating a Flash web application in Vue.js. I am keeping the exact same back end intact: Perl CGI scripts. Generally, I am able to use axios to talk to the backend scripts with no problem.
Current Flash Frontend: When I upload a file with the current Flash app, my header looks like this (note the highlighted text at the bottom of the screenshot).
Replacement Vue Frontend: I see a couple of differences in my new header. I don't have the Content-Type: application/octet-stream line directly underneath the filedata part of the header, and the end of my header is "--", which is the convention; I am guessing the old application used some outdated header ending I can't find and doesn't know how to handle mine. Please help; I could not find anything on Google. I also note the new header has a "WebKitFormBoundary"; I'm not sure if that could cause an issue with some old header standards or something.
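For reference, a well-formed multipart/form-data body looks roughly like this (the boundary and filename here are made up); note the Content-Type line directly under the filedata part, and the closing boundary ending in an extra "--":

------WebKitFormBoundaryAbc123
Content-Disposition: form-data; name="filedata"; filename="upload.bin"
Content-Type: application/octet-stream

...raw file bytes...
------WebKitFormBoundaryAbc123--

The "WebKitFormBoundary" prefix is just what Chrome/WebKit generates for the boundary token; any value is legal as long as the request's Content-Type header declares it.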
The issues were related to outdated Perl packages; once they were upgraded, the backend script recognized the header correctly.

Maintaining (version + redirect) in S3

So far in our application, the *.js files were served directly from Apache. For example, this was a script include in a JSP page: /foo/v6565/my_script.js. The v6565 in the path is phony, and an internal Apache redirect rewrites /foo/v6565/my_script.js to /foo/my_script.js.
Whenever my_script.js is updated, the v<xxxx> in the including JSP page is updated (an internal tool does it based on the SVN revision of my_script), thus forcing the browser to fetch my_script.js again rather than a cached version. I hope I have explained my current approach clearly.
[A different approach could have been to use /foo/my_script.js?v=5652. However, there was some caching issue (I can't remember it) because of which the decision was taken to use /foo/v56564/ instead of adding the version to the query param. I will dig into it, though.]
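The internal redirect described above presumably looks something like this mod_rewrite rule (illustrative, not the actual configuration):

# Strip the phony version segment internally, with no external redirect
RewriteEngine On
RewriteRule ^/foo/v[0-9]+/(.+)$ /foo/$1 [PT,L]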
Now that we are moving all of our *.js files to an S3 bucket, I was wondering what would be a good way of doing this.
The path from the S3 bucket would look like mybucket.aws.com/js/my_script.js. How do I insert the version tag + redirection for S3? Are there any other standard approaches used when resources are served from S3?
(I've read about page redirects on S3 resources, but the redirects have to be written directly on the resources, which is not really applicable in my case.)
Thanks.
I think cache busting with ?v=<hash> is pretty much standard now.
It has been advised against; however, that's a pretty old resource (though often cited), and I'm not sure if this is still true. Even your trusted Stack Overflow is using it with a SHA1 hash, so I guess it's good enough for everybody now.
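In practice that just means the script include carries a content-derived token, e.g. (hash value illustrative):

<script src="https://mybucket.aws.com/js/my_script.js?v=a94a8fe5"></script>

S3 ignores the unknown query parameter and serves the same object, but the browser treats each distinct URL as a fresh resource, so regenerating the hash whenever my_script.js changes forces a re-fetch, with no redirect layer needed.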

Best way to implement three hundred 301 redirects on windows server / .NET

I'm going live with a rebuilt website (on a new, Windows Server 2008 R2 Standard server). I have a spreadsheet containing 300 URLs from the old site which are mapped to the new site URLs. The URLs for each are 'clean' in that they don't have file extensions (no .php, .aspx, .htm etc).
I've read about the URL Rewrite extension for IIS here: http://weblogs.asp.net/scottgu/archive/2010/04/20/tip-trick-fix-common-seo-problems-using-the-url-rewrite-extension.aspx but it seems to me this is just a GUI tool for writing rules to the web.config file.
If I have 300 rules in web.config, won't this hamper performance?
It could be tackled with ISAPI_Rewrite too, but I'm not sure what the optimum way to handle this is.
Can anyone give any advice on the best way to implement my 301 redirects in this situation?
Thanks
If you have a large number of exact URLs that you want to redirect to other exact URLs, have a look at the Rewrite Map feature of the URL Rewrite module. It's specifically designed for that purpose, and the performance should be OK.
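A sketch of what that looks like in web.config (the keys and values here are illustrative); the map is one keyed lookup rather than 300 sequential rules:

<rewrite>
  <rewriteMaps>
    <rewriteMap name="Redirects">
      <add key="/old-page" value="/new-section/new-page" />
      <add key="/another-old-page" value="/another-new-page" />
      <!-- ...the remaining mappings from the spreadsheet... -->
    </rewriteMap>
  </rewriteMaps>
  <rules>
    <rule name="301 redirects via map" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{Redirects:{REQUEST_URI}}" pattern="(.+)" />
      </conditions>
      <action type="Redirect" url="{C:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>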
ISAPI_Rewrite 3 or Helicon Ape can help you handle your situation with ease.
They both support plain text map files and Apache-like configuration.
See the example of using mapfiles here.
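In Apache-style syntax, which ISAPI_Rewrite 3 emulates, the mapfile approach looks roughly like this (file names and paths are illustrative):

RewriteEngine On
RewriteMap redirects txt:redirects.txt
RewriteCond ${redirects:$1} !=""
RewriteRule ^/?(.*)$ ${redirects:$1} [R=301,L]

where redirects.txt holds one "old new" pair per line:

old-page /new-section/new-page
another-old-page /another-new-page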

custom .zfproject.xml file

Y'all:
I'm trying to squeeze the Zend Framework into my ISP securely.
My ISP pretty much requires me to put much of the stack in a /private directory in my HTDOCS home.
So it looks like this:
/index.php
/private/application/configs
/private/application/controllers
/private/application/bootstrap.php
...
I tried editing the .zfproject.xml to indicate this, but ZF.bat/ZF.sh seems to ignore it.
Has anyone had any success with this type of configuration?
You should just set up the directory like this:
/private/MyProject/application
DONE! No need to modify the XML file.
Unless you'll be using Zend_Tool, you won't need zfproject.xml. This thread talks more about it.
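For what it's worth, here is a minimal sketch of an index.php that boots a ZF1 application from the /private layout above (the paths and environment value are illustrative):

<?php
// index.php in the HTDOCS root; the framework stack lives under /private
define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/private/application'));
define('APPLICATION_ENV', 'production');

// If the Zend library also lives under /private, add it to the include path
set_include_path(implode(PATH_SEPARATOR, array(
    realpath(dirname(__FILE__) . '/private/library'),
    get_include_path(),
)));

require_once 'Zend/Application.php';

$application = new Zend_Application(
    APPLICATION_ENV,
    APPLICATION_PATH . '/configs/application.ini'
);
$application->bootstrap()->run();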

Counting eclipse plugin installations/downloads

I'm currently hosting an Eclipse plugin update site on sourceforge.net. SF.net does not allow access to server logs, but I'd still like to know how many downloads the plugin gets.
Is there an alternative way of gathering that information?
I'm not going to have any sort of 'call home' feature in the plugin, so please don't suggest that.
I wrote a blog post about how to track downloads of an Eclipse plug-in update site. What you can do is specify a URL on your server, and every time a download is initiated the update site will send an HTTP HEAD request to that URL, which you can then use to count the number of times the plug-in was downloaded. If you want to track some information about who is downloading the plug-in, you can pass information like the package name, version, and OS, and store it in a database.
http://programmingfortherestofus.blogspot.com/2014/08/tracking-downloads-to-your-eclipse.html
I hope it helps!
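If I remember the p2 download-stats mechanism that post describes correctly, it boils down to two properties in the update site's artifacts.xml (the property names are worth double-checking against the post; the URL and artifact id here are illustrative):

<!-- repository-level property: where p2 sends the HEAD requests -->
<property name='p2.statsURI' value='http://stats.example.com/downloads'/>

<!-- per-artifact property: its value is appended to p2.statsURI on download -->
<property name='download.stats' value='com.example.plugin_1.0.0'/>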
It is possible to host the plugin jars in the file release service, and then get your site.xml file to point to them. You need to point at a specific mirror to make it work.
This will tell you how many times people download each file as with a normal file release.
Unfortunately, in practice this is a lot of work to maintain, and tends to be unreliable (I kept getting bug reports saying the update site wasn't working).
You could write a very simple PHP script which just serves up the relevant file and logs the download to a file or DB. Make sure it double-checks that the requested file is a valid one to serve to the user, of course :)
Once that's in place, you can update site.xml to point at the script, or you could probably use URL rewriting to intercept requests to your jar files and pass them through the script. I've never tried that on the SF servers, but it might work.
EDIT:
Even better, just have a PHP script which sends a redirect, like this:
<?php
// $_GET is an array, so use brackets, not parentheses
$file = $_GET['file'];
// Now log the access to $file (e.g. append to a log file or insert into a DB)
// Validate $file against a whitelist of known plugin jars first,
// otherwise this is an open redirect
header('Location: ' . $file);
?>
Just a thought: AFAIK, SourceForge does tell you how much data you served. You know the size of your plugin JARs. Divide the data served by the size of your plugin and you get a rough estimate of how many downloads you had.