Maintaining (version + redirect) in S3 deployment

So far in our application, the *.js files were served directly from Apache. For example, this was a script include in a JSP page: /foo/v6565/my_script.js. The v6565 in the path is phony; an internal Apache redirect rewrites /foo/v6565/my_script.js to /foo/my_script.js.
Whenever my_script.js is updated, the v<xxxx> in the including JSP page is updated as well (an internal tool does this based on the SVN revision of my_script.js), thus forcing the browser to fetch my_script.js again instead of using the cached version. I hope I have explained my current approach clearly.
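For reference, that kind of internal rewrite is a single mod_rewrite rule; a rough sketch (the /foo/ prefix and the version pattern here are illustrative, not my exact configuration):
RewriteEngine On
# Strip the phony version segment: /foo/v6565/my_script.js -> /foo/my_script.js
RewriteRule ^/foo/v[0-9]+/(.+\.js)$ /foo/$1 [L]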
[A different approach could have been to use /foo/my_script.js?v=5652. However, there was some caching issue (I can't remember the details) because of which the decision was taken to use /foo/v56564/ instead of adding the version to the query param. I will dig into it, though.]
Now that we are moving all of our *.js files to an S3 bucket, I was wondering what would be a good way of doing this.
The path from the S3 bucket would look like mybucket.aws.com/js/my_script.js. How do I insert the version tag + redirection for S3? Are there any other standard approaches used when resources are served from S3?
(I've read about page redirects on S3 resources, but those redirects have to be written directly on the resources themselves, which is not really applicable in my case.)
Thanks.

I think cache busting with ?v=<hash> is pretty much standard now.
It has been advised against; however, that's a pretty old resource (though often cited) and I'm not sure whether it is still true. Even your trusted StackOverflow uses it with a SHA1, so I guess it's good enough for everybody now.
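If you end up preferring path-based versions over the query param, the usual S3 equivalent is to bake the version (or a content hash) into the object key at upload time and serve it with a long Cache-Control, since S3 cannot do an internal rewrite the way Apache does. A minimal sketch with the AWS SDK for PHP (the bucket name, region and paths are placeholders, not your actual setup):
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

$source = 'js/my_script.js';
$hash   = substr(md5_file($source), 0, 8);        // or the SVN revision, as you do today
$key    = "js/my_script.$hash.js";                // the "version" lives in the key itself

$s3->putObject([
    'Bucket'       => 'mybucket',                 // placeholder bucket name
    'Key'          => $key,
    'SourceFile'   => $source,
    'ContentType'  => 'application/javascript',
    'CacheControl' => 'public, max-age=31536000', // safe: the key changes whenever the content does
]);
?>
The JSP include is then rewritten to point at the new key, so a changed file gets a new URL and browsers re-fetch it, which is exactly what the phony /v<xxxx>/ segment achieves today.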

Related

Meteor 1.4 - General approach to file system + /public activity

I've done some digging around, and a lot of the threads regarding the file system and how it works with Meteor seem to be pretty outdated, not to mention the packages related to file storage/serving (e.g. CollectionFS). I was wondering if anyone here has deep experience with handling files in light of 1.4 or even 1.3 (I am currently on 1.4.1.1).
My questions are as follows:
Did Meteor 1.3/1.4 come with any changes regarding fs?
What is the general best approach to storing and serving static assets in light of Meteor 1.4?
I've seen many threads that say dynamically storing files to /public triggers a server reload, but I've tested this locally by manually copy/pasting a .png file into /public, and it only triggers a client refresh with the console message Client modified -- refreshing. Would this hold true for files added during runtime, and would it hold true in production?
Currently I am trying to steer clear of S3 or any other third-party CDNs to keep the budget low, and also trying to steer clear of storing files in Mongo.
Thanks for any and all opinions!
What about setting up a shared folder or NFS folder, having your Meteor app handle the file upload and write the file to that location, and configuring Nginx (or whatever you are using as the load balancer) to serve those files? If you are worried about browser refreshes when a file is put into the public folder, you do not need to write files to the public folder at all, right?
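A minimal sketch of that Nginx piece, assuming the uploads land in a shared mount such as /srv/shared/uploads (both the path and the /uploads/ prefix are placeholders):
# Serve uploaded files straight from the shared/NFS folder, bypassing the Meteor app.
location /uploads/ {
    alias /srv/shared/uploads/;   # assumed shared mount point
    expires 7d;                   # adjust the cache lifetime to taste
    add_header Cache-Control "public";
}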

Which is the best method to get a local file URI and save it online?

I'm working on a web project, but the scenario has some restrictions for a specific use case. We have been investigating both a web-only solution and a Dropbox-like native approach to solve this.
The main restriction is that we shouldn't upload local files to the cloud. We can only track local URIs.
The use cases are:
As a developer, I should be able to link the URI of a local file to a webapp. Thus, I can click on a webapp element and the local file should be opened.
As a user, I should be able to add a directory and view the same structure on the webapp (clicking opens the file). The files are not uploaded.
Possible solutions:
We started trying the FileSystem API, but when the specs were fully defined we figured out that a local sandbox was not enough, and we can't access the local URI due to security restrictions.
We are considering a Dropbox-like native app. The Invision Sync App is close to what we want.
The least desirable solution would be a completely native application.
The question:
What is the most efficient way to achieve this? Any ideas on native libraries that would make this faster? Any web-only workaround?
Thanks in advance.

GWT Caching Concept

Can someone explain to me, in simple terms, the concept of caching in GWT? I have read about it in many places, but maybe due to my limited knowledge I'm not able to understand it.
For example, nocache.js and cache.js,
or other things such as making the client cache files forever, or how to have files cached by the client and then re-downloaded only if they change on the server.
Generally, there are 3 types of files -
Cache Forever
Cache for some time
Never Cache
Some files can never be cached and will always fall into the "never cache" bucket. But the biggest performance wins come from systematically converting files in the second bucket into files that can be cached forever. GWT makes it easy to do this in various ways.
The <md5>.cache.js files are safe to cache forever. If they ever change, GWT will rename the file, and so the browser will be forced to download it again.
The .nocache.js file should never be cached. This file is modified even if you change a single line of code and recompile. The nocache.js contains the links to the <md5>.cache.js files, and therefore it is important that the browser always has the latest version of this file.
The third bucket contains images, css and any other static resources that are part of your application. CSS files are always changing, so you cannot tell the browser 'cache forever'. But if you use ClientBundle / CssResource, GWT will manage the file for you. Every time you change the CSS, GWT will rename the file, and therefore the browser will be forced to download it again. This lets you set strong cache headers to get the best performance.
In summary -
For anything that matches .cache., set a far-in-the-future expires header, effectively telling the browser to cache it forever.
For anything that matches .nocache., set cache headers that force the browser to re-validate the resource with the server.
For everything else, you should set a short expires header depending on how often you change resources.
Try to use ClientBundle / CssResource; this automatically renames your resources so that they fall into the cache-forever (*.cache.*) bucket.
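For instance, with Apache the rules above could be expressed roughly like this (a sketch using mod_headers; the match patterns and lifetimes are illustrative and need adapting to your deployment):
# Anything matching .cache. is immutable: cache it for a year.
<FilesMatch "\.cache\.">
    Header set Cache-Control "public, max-age=31536000"
</FilesMatch>
# The .nocache. bootstrap file must always be revalidated with the server.
<FilesMatch "\.nocache\.">
    Header set Cache-Control "no-cache, must-revalidate"
</FilesMatch>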
This blog post has a good overview of the GWT bootstrapping process (and many other parts of the GWT system, incidentally), which has a lot to do with what gets cached and why.
Basically, the generated nocache.js file is a relatively small bit of JS whose sole purpose is to decide which generated permutation should be downloaded.
Each individual permutation consists of the implementation of your app specific to the browser, language, etc., of the user. This is a lot more code than the simple bootstrapping code, and thus needs to be cached for your app to respond quickly. These are the cache.html files that get generated by the GWT compiler.
When you recompile and deploy your app, your users will download the nocache.js file as normal, but this will tell their browsers to download a new cache.html file with the app's new features. This will now be cached as well for the next time they load your app.

Web CMS That Outputs to Flat Static Pages (.html) via FTP to Remote Server?

I have a web app project that I will be starting to work on shortly. One of the features is going to be a content management system where users can add content; that content is then combined with a template and output as a regular .html file. This .html file is then FTPed to their own web host.
As I've always believed in not reinventing the wheel, I figured I'd see if there are any quality, customizable CMSes out there that already do this. For instance, Blogger.com allows you to post all of your content to your account there, but offers the option to use your own hosting. Any time you publish a new article, a new .html page is generated (as well as an updated index page with links to the new article), and then the updated content is FTPed to your own server.
What I would like is something like this that I can modify to more closely suit my needs.
Required Features:
Able to host on my own server
Written in PHP
Users add content through their account, then when posted it is FTPed as .html to their server
Any appropriate pages are also updated to link to the new content (like the index page or whatnot)
Templateable
Customizable
Optional (but very much desired) features:
Written in CodeIgniter or a similar PHP framework
While CodeIgniter isn't strictly required, I would very much prefer it. It speeds up development time and makes things much easier to implement.
So - any suggestions? I've stumbled across a few CMSes that push to remote servers as static pages, but the ones I've found are all hosted on the developers' servers, which means that I cannot modify them at all.
Adobe Contribute might work for your situation. A developer/designer creates a set of templates with Dreamweaver and publishes the templates. Authorized users can then create pages based on the templates and only make changes within the editable regions. It includes systems for drafts and reviews prior to publishing (via many options, including ftp) and incorporates automatic version control. It can work with static html pages or dynamic pages like php.
Sounds like you need a separate application that can do this for you.
For example, you should be able to write something that queries Drupal's menu router, saves the output (with curl) to a directory, and then runs rsync to push your content where you want it to go.
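As a rough illustration of that export-and-push idea (not a finished tool; the page list, output directory, and remote path are placeholders), in PHP it might look something like this:
<?php
// Fetch a list of pages from the CMS and dump them as static .html files.
$pages  = ['/', '/about', '/blog'];        // placeholder list of paths to export
$base   = 'http://cms.example.local';      // placeholder CMS address
$outDir = __DIR__ . '/static';
@mkdir($outDir, 0775, true);

foreach ($pages as $path) {
    $html = file_get_contents($base . $path);           // curl works just as well here
    $name = ($path === '/') ? 'index' : trim($path, '/');
    file_put_contents("$outDir/$name.html", $html);
}

// Push the generated files to the remote host; rsync shown here,
// but ftp_put() would do where only FTP is available.
exec('rsync -az ' . escapeshellarg($outDir . '/') . ' user@remote:/var/www/html/');
?>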
Otherwise your requirements are likely to be outside the scope of a typical CMS. Separating this functionality will give you better options.
You'd need to write a filter for your URLs too. It's a bit of work...
Hope that helps!

Counting eclipse plugin installations/downloads

I'm currently hosting an Eclipse plugin update site on SourceForge.net. SF.net does not allow access to server logs, but I'd still like to know how many downloads the plugin gets.
Is there an alternative way of gathering them?
I'm not going to have any sort of 'call home' feature in the plugin, so please don't suggest that.
I wrote a blog post about how to track downloads of an Eclipse plug-in update site. What you can do is specify a URL to your own server; every time a download is initiated, the update site sends an HTTP HEAD request to that URL, which you can then use to count the number of times the plug-in was downloaded. If you want to track some information about who is downloading the plug-in, you can also pass details like the package name, version, and OS, and store them in a database.
http://programmingfortherestofus.blogspot.com/2014/08/tracking-downloads-to-your-eclipse.html
I hope it helps!
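The counting endpoint on your server can be tiny. A rough sketch in PHP (the log file name and query parameters are placeholders, not the exact code from the post):
<?php
// Count one download per HEAD ping from the update site and record
// any optional details passed in the query string.
$line = date('c') . ' '
      . ($_GET['package'] ?? 'unknown') . ' '
      . ($_GET['version'] ?? 'unknown') . ' '
      . ($_GET['os'] ?? 'unknown') . "\n";
file_put_contents(__DIR__ . '/downloads.log', $line, FILE_APPEND | LOCK_EX);
?>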
It is possible to host the plugin jars in the file release service, and then get your site.xml file to point to them. You need to point at a specific mirror to make it work.
This will tell you how many times people download each file as with a normal file release.
Unfortunately, in practice this is a lot of work to maintain, and tends to be unreliable (I kept getting bug reports saying the update site wasn't working).
You could write a very simple PHP script which just serves up the relevant file and logs the download to a file or DB. Make sure it double-checks that the URL is a valid one to serve to the user, of course :)
Once that's in place, you can update the site.xml to point to the correct thing, or you could probably use URL rewriting to intercept requests to your jar file and pass them through the script. I've never tried that on the SF servers, but it might work.
EDIT:
Even better, just have a php script which sends a redirect like this:
<?php
// Fetch the requested file from the query string ($_GET is an array, not a function).
$file = $_GET['file'];
// Validate $file against the files you actually host, and log the access
// (to a file or DB) here, before redirecting.
header('Location: ' . $file);
?>
Just a thought: AFAIK, SourceForge does tell you how much data you served. You know the size of your plugin JARs, so dividing the data served by the size of your plugin gives a rough estimate of how many downloads you had (e.g. 100 MB of traffic for a 2 MB plugin suggests roughly 50 downloads).