What would be the best way to use Jammit and publish files on Amazon S3?

I'm using Jammit to package the JS and CSS files for a Rails project.
I would now like to upload the files to Amazon S3 and use CloudFront for the delivery.
What would be the best way to deal with new versions?
My ideal solution would be to have a Capistrano recipe to deal with it.
Has anyone already done something like that?

You could simply create a Capistrano task that triggers the copy to S3 after deploying.
You might use s3cmd as the command-line tool for that.
Alternatively, you could create a folder mounted by FuseOverAmazon and configure it as the package_path in your Jammit assets.yml. Make sure to run the rake task for generating the asset packages manually or in your deploy recipe.
http://s3tools.org/s3cmd
http://code.google.com/p/s3fs/wiki/FuseOverAmazon
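As a rough sketch of the Capistrano-task approach (Capistrano 2 style), assuming s3cmd is installed and configured, and using a placeholder bucket name and asset path:

namespace :assets do
  desc "Package assets with Jammit and push them to S3"
  task :publish_to_s3 do
    # Build the packages locally; Jammit writes them to the package_path from assets.yml.
    system "bundle exec jammit"
    # 'my-assets-bucket' is a placeholder; --acl-public makes the files publicly
    # readable so CloudFront can serve them.
    system "s3cmd sync --acl-public public/assets/ s3://my-assets-bucket/assets/"
  end
end

after "deploy:update_code", "assets:publish_to_s3"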

Related

Use Workbox without using the CDN

Does anybody know how to use workbox without getting it from the CDN? I tried this...
add workbox-cli to my dependencies:
"workbox-cli": "^3.6.3"
which gets me all of the following dependencies
$ ls node_modules | grep workbox
workbox-background-sync
workbox-broadcast-cache-update
workbox-build
workbox-cacheable-response
workbox-cache-expiration
workbox-cli
workbox-core
workbox-google-analytics
workbox-navigation-preload
workbox-precaching
workbox-range-requests
workbox-routing
workbox-strategies
workbox-streams
workbox-sw
Then I replaced this line in the examples
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.6.1/workbox-sw.js');
with this
importScripts('workbox-sw.js');
after copying node_modules/workbox-sw/build/workbox-sw.js to the public folder
But now I realise, by looking at the network tab, that that file still gets all the other modules from the CDN.
(I thought it would be built with everything inside it.)
Can anybody tell me if there is an npm package somewhere that already has everything inside it? Or should I copy the modules I need from the npm folder, and somehow tie them all together myself? Or do I have to use the webpack plugin? (which I guess will only bundle the modules that I use)
(Update: Workbox v5 makes the process of using a local copy of the Workbox runtime much simpler, and in most cases, it's the default.)
There's one more step that's required. The "Using Local Workbox Files Instead of CDN" section of the docs has the details:
If you don’t want to use the CDN, it’s easy enough to switch to Workbox files hosted on your own domain.
The simplest approach is to get the files via workbox-cli's copyLibraries command or from a GitHub release, and then tell workbox-sw where to find these files via the modulePathPrefix config option.
If you put the files under /third_party/workbox/, you would use them like so:
importScripts('/third_party/workbox/workbox-sw.js');
workbox.setConfig({modulePathPrefix: '/third_party/workbox/'});
With this, you’ll use only the local Workbox files.
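For reference, the copyLibraries step can be run from the command line. The target directory below is just an example, and note that copyLibraries typically copies the files into a versioned subdirectory (e.g. workbox-v3.6.3), so adjust your importScripts path and modulePathPrefix accordingly:

npx workbox copyLibraries public/third_party/workbox/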

Can I deduplicate the content of zip files on the Artifactory side?

I'm using JFrog Artifactory, which has a deduplication feature - see the documentation. Our deployment procedure is the following:
Create a zip file with libraries: JARs, DLLs, etc. The same applies to a WAR file or a fat JAR.
During deployment: just extract the content of the zip file and run small initialization scripts.
As you can see, most of the content of these files is already on Artifactory:
3rd-party Java dependencies are already on the same Artifactory instance
Previous installations have a lot of the same binaries
So, the question: how can I ask Artifactory to unzip my archives on the server side during upload and then transparently zip them back during download?
This would give me major data deduplication, with the following advantages:
Saving disk space
Decreasing server IO
And I know that there will be the following disadvantages:
The checksum of the zip package may change
CPU load may increase during artifact upload and download
I don't think this is doable transparently from the client side. However, if you are ready to change your clients, I can imagine:
On the upload / release front, use JFrog CLI and its --explode option for uploads.
The rationale for this flag is at https://github.com/JFrogDev/jfrog-cli-go/issues/5 and the feature is quickly described in https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-UploadingFiles
Instead of simply uploading your ZIP / WAR with any client or the JFrog CLI, like
jfrog rt u my.zip repo-release/test/0.5/my-0.5.zip
You would upload it while asking for it to be exploded on the target:
jfrog rt u --explode my.zip repo-release/test/0.5/my-0.5.zip/thisisignored.zip
On the download side, use a User Plugin (only available with Pro instances though) to zip the directory content on the fly. There is an existing implementation at https://github.com/JFrogDev/artifactory-user-plugins/tree/master/download/downloadDirectoryContent that you can install on your Artifactory server.
Once this is set up, you should be able to retrieve your original zip with:
curl -X GET -uadmin:password "http://localhost:8081/artifactory/repo-release/test/0.5/my-0.5.zip;downloadDirectory+=true" > my.zip

Packaging Applications for Azure Batch

I am having trouble packaging applications to get them to run on Azure Batch compute nodes. I am using a user subscription with VM configuration, so I can't use application packages. I have been uploading my executable files and DLLs as resource files. Currently, I have a task that requires a lot of DLLs, but it seems that I can't upload more than 10 resource files through the Azure portal.
What is the best way to package an application and all its required DLLs to have it run on a Batch compute node without using the built-in application package feature? Is there a way other than going through all its DLLs and adding them individually as resource files?
How do I get around the limitation of 10 resource files per task/application?
Thanks!
Application package functionality for Virtual Machine configuration should be available now (documentation may be out of date). With that being said, answers to your questions:
Without using application packages, you can do one of the following:
(1) Create an SFX archive (self-extracting archive) with your archiver of choice. Ensure that it can be installed silently without a GUI pop-up (e.g., 7-Zip can do this) and run the SFX archive command as part of your start task.
(2) Zip up your files. Add the zip file and unzip.exe as your two resource files. Run the unzip command as part of your start task (a sketch follows below).
The service limit is not 10 (although that may be the limit in the portal). You can add resource files up to the service limit, which varies depending on the length of your URLs. For a large number of dependencies, please follow the recommendation from (1) above or use application packages (if possible).
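As a minimal sketch of option (2), assuming you upload app_files.zip and an Info-ZIP unzip.exe as the two resource files (both names are placeholders), the start task command line could look like:

cmd /c "unzip.exe -o app_files.zip -d %AZ_BATCH_NODE_SHARED_DIR%\app"

Your tasks can then reference the extracted executables and DLLs under that shared directory.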

Packaging SF service into a single file

I am working through how to automate the build and deploy of my Service Fabric app. Currently I'm working on the package step, and while it creates files within the pkg subfolder, it always creates a folder hierarchy of files, not a true package in a single file. I would swear I've seen a .SFPKG file (or something similarly named) that has everything in one file (a zip, maybe?). Is there some way to create such a file with msbuild?
Here's the command line I'm using currently:
msbuild myservice.sfproj "/p:Configuration=Dev;Platform=AnyCPU" /t:Package /consoleloggerparameters:verbosity=minimal /maxcpucount
I'm concerned about not having a single file because it seems inefficient in sending a new package up to my clusters, and it's harder for me to manage a bunch of files on a build automation server.
I believe you read about the .sfpkg at
https://azure.microsoft.com/documentation/articles/service-fabric-get-started-with-a-local-cluster
Note that internally we do not yet support provisioning a .sfpkg file. This is a feature that will be coming soon (date TBD). Instead, we upload each file in the application package.
Update (SF 6.1 - April 2018)
Since 6.1 it is possible to create a ZIP file (*.sfpkg) and upload it to an external store. Service Fabric executes a GET operation to download the sfpkg application package. For more info, see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-package-apps#create-an-sfpkg
NOTE: This only works with external provisioning, the Azure image store still doesn't support sfpkg files.
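As a rough PowerShell sketch, assuming the default package output from the msbuild /t:Package step above (the pkg\Dev path and the application name are placeholders):

# Zip the contents of the generated package folder, then rename the archive to .sfpkg
Compress-Archive -Path .\pkg\Dev\* -DestinationPath .\MyServiceApp.zip -Force
Rename-Item -Path .\MyServiceApp.zip -NewName MyServiceApp.sfpkg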

Deploy ClickOnce as a single file?

I am looking to use ClickOnce to deploy an application for internal use. When publishing to the network share, it creates several files and folders (manifest, Application Files, etc.).
Is there a way to bundle this up as a single file? I do not fancy the idea of allowing other users access to the Application Files folder that is created; I would rather just give them the exe and have it take care of everything else.
Does anyone have experience with this, or am I stuck with the Application Files folder, the application manifest, and the setup file all being in the same directory for installation?
There is no way to package the whole application folder and its files into one file, like an MSI, with ClickOnce.
You could code something of your own: a shell app that uses ClickOnce and whose only file is your app, compressed. The shell would download that compressed file to the client's machine, unzip it, and so on (see the sketch below).
You could also use InstallShield Limited Edition, which comes with VS 2012/2013 under Other Projects, Setup and Deployment, but that does not give you the ClickOnce ease-of-deployment features. You could use the InstallShield setup as the compressed file in your shell ClickOnce app and then just use Process.Start to launch the InstallShield setup. It should work.
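A rough C# sketch of that shell-app idea: the ClickOnce-deployed shell downloads the compressed payload, extracts it, and launches the real setup or app. The URL and file names are placeholders, and ZipFile needs a reference to System.IO.Compression.FileSystem on .NET Framework.

using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Net;

class ShellApp
{
    static void Main()
    {
        var zipPath = Path.Combine(Path.GetTempPath(), "app.zip");
        var appDir = Path.Combine(Path.GetTempPath(), "app");

        // Download the compressed payload published alongside the shell (placeholder URL).
        using (var web = new WebClient())
            web.DownloadFile("https://intranet.example.com/deploy/app.zip", zipPath);

        // Replace any previously extracted copy, then launch the real setup or app.
        if (Directory.Exists(appDir)) Directory.Delete(appDir, true);
        ZipFile.ExtractToDirectory(zipPath, appDir);
        Process.Start(Path.Combine(appDir, "setup.exe"));
    }
}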