Can I deduplicate the contents of zip files on the Artifactory side?

I'm using JFrog Artifactory, which has a deduplication feature (see the documentation). Our deployment procedure is the following:
Create a zip file with libraries: jars, dlls, etc. The same applies to a war file or a fat jar.
During deployment: just extract the contents of the zip file and run small initialization scripts.
As you can see, most of the content of these files is already in Artifactory:
3rd-party Java dependencies are already in the same Artifactory instance
Previous installations contain many of the same binaries
So, the question: how can I ask Artifactory to unzip my archives on the server side during upload and then transparently zip them back during download?
This would give me significant data deduplication, with the following advantages:
Saving disk space
Decreasing server IO
And I know there will be the following disadvantages:
The checksum of the zip package can change
CPU load can increase during artifact upload and download

I don't think this is doable transparently from the client side. However, if you are ready to change your clients, I can imagine the following:
On the upload / release front, use the JFrog CLI and its --explode option for uploads.
The rationale for this flag is at https://github.com/JFrogDev/jfrog-cli-go/issues/5 and the feature is briefly described at https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-UploadingFiles
Instead of simply uploading your ZIP / WAR with any client or the JFrog CLI, like
jfrog rt u my.zip repo-release/test/0.5/my-0.5.zip
You would upload it while asking for it to be exploded on the target
jfrog rt u --explode my.zip repo-release/test/0.5/my-0.5.zip/thisisignored.zip
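For illustration, assuming my.zip contains the hypothetical entries lib/app.jar and bin/init.bat, the exploded upload would leave the individual files in the repository, where Artifactory's checksum-based storage can deduplicate each one against identical files already stored:

repo-release/test/0.5/my-0.5.zip/lib/app.jar
repo-release/test/0.5/my-0.5.zip/bin/init.bat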
On the download side, use a User Plugin (only available with Pro instances though) to zip the directory content on the fly. There is an existing implementation at https://github.com/JFrogDev/artifactory-user-plugins/tree/master/download/downloadDirectoryContent that you can install on your Artifactory server.
Once this is set up, you should be able to retrieve your original zip with
curl -X GET -uadmin:password "http://localhost:8081/artifactory/repo-release/test/0.5/my-0.5.zip;downloadDirectory+=true" > my.zip

Packaging Applications for Azure Batch

I am having trouble packaging applications to get them to run in Azure Batch compute nodes. I am using a user subscription with VM configuration, so I can't use application packages. I have been uploading my executable files and dlls as resource files. Currently, I have a task that requires a lot of dlls, but it seems that I can't upload more than 10 resource files through the Azure portal.
What is the best way to package an application and all its required dlls to have it run on a batch compute node without using the built-in application package? Is there a way other than going through all its dlls and adding them individually as resource files?
How do I work around the limit of 10 resource files per task?
Thanks!
Application package functionality for Virtual Machine configuration should be available now (documentation may be out of date). With that being said, answers to your questions:
Without using application packages, you can do one of the following:
(1) Create an SFX archive (self-extracting archive) with your archiver of choice. Ensure that it can be installed silently without a GUI pop-up (e.g., 7-Zip can do this) and run the SFX archive command as part of your start task.
(2) Zip up your files, add the zip file and unzip.exe as your two resource files, and run the unzip command as part of your start task (see the sketch after this answer).
The service limit is not 10 (although that may be the limit in the portal). You can add resource files up to the service limit, which varies depending on the length of your URLs. For a large number of dependencies, please follow the recommendation from option (1) or use Application Packages (if possible).
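For option (2), a minimal sketch of the start task command line, assuming the two resource files are named app.zip and unzip.exe (both names are illustrative), and using the standard AZ_BATCH_NODE_SHARED_DIR environment variable as the extraction target:

cmd /c "unzip.exe app.zip -d %AZ_BATCH_NODE_SHARED_DIR%\app"

Resource files are downloaded into the task's working directory before the command runs, so both files can be referenced by name.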

Packaging SF service into a single file

I am working through how to automate the build and deploy of my Service Fabric app. Currently I'm working on the package step, and while it creates files within the pkg subfolder, it always produces a folder hierarchy of files, not a true package in a single file. I would swear I've seen a .SFPKG file (or something similarly named) that has everything in one file (a zip, maybe?). Is there some way to create such a file with msbuild?
Here's the command line I'm using currently:
msbuild myservice.sfproj "/p:Configuration=Dev;Platform=AnyCPU" /t:Package /consoleloggerparameters:verbosity=minimal /maxcpucount
I'm concerned about not having a single file because it seems inefficient to send a new package up to my clusters, and it's harder for me to manage a bunch of files on a build automation server.
I believe you read about the .sfpkg at
https://azure.microsoft.com/documentation/articles/service-fabric-get-started-with-a-local-cluster
Note that internally we do not yet support provisioning a .sfpkg file. This is a feature that will be coming soon (date TBD). Instead, we upload each file in the application package.
Update (SF 6.1 - April 2018)
Since 6.1 it is possible to create a ZIP file (*.sfpkg) and upload it to an external store. Service Fabric executes a GET operation to download the sfpkg application package. For more info, see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-package-apps#create-an-sfpkg
NOTE: This only works with external provisioning, the Azure image store still doesn't support sfpkg files.
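For the external-store route, a minimal PowerShell sketch, assuming hypothetical file and type names and a placeholder download URI (you need an open cluster connection via Connect-ServiceFabricCluster first):

# Zip the packaged output and give it the .sfpkg extension
Compress-Archive -Path .\pkg\Dev\* -DestinationPath .\myservice.zip
Rename-Item .\myservice.zip myservice.sfpkg

# Provision from the external store (SF 6.1+)
Register-ServiceFabricApplicationType -ApplicationPackageDownloadUri "https://mystore.example.com/myservice.sfpkg" -ApplicationTypeName "MyServiceType" -ApplicationTypeVersion "1.0.0"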

Changes in conf/server.xml do not seem to have any effect at runtime

Here's what I know:
When uploading files given by users, we should put them in a folder outside the deployment folder. Let me call it D:\uploads.
We should (somehow) add that folder (D:\uploads) as a web app context.
Here's what I did:
I upload my files to the folder D:\uploads.
I tried adding the web app context as mentioned here, by adding the following row to TOMCAT_DIR/conf/server.xml:
<Context docBase="D:\uploads" path="/uploads"/>
But that doesn't have any effect. When visiting http://localhost:8080/uploads/file.png or http://localhost:8080/uploads I get an HTTP Status 404 error.
So what I want to know:
What did I do wrong? How can I add my upload folder to Tomcat?
Is there any better approach when it comes to uploading files? I'm wondering what I should change if I want to deploy my application to another server where there's no D:\uploads.
Change the docBase attribute. Use D:/uploads (with slash) instead of D:\uploads (with backslash).
When dealing with files in Java, you can safely use / (slash, not backslash) on all platforms.
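So the corrected line in TOMCAT_DIR/conf/server.xml (inside the <Host> element) would be:

<Context docBase="D:/uploads" path="/uploads"/>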
Regarding the differences you mentioned in the comments between starting Tomcat from the IDE and from bin/startup.bat: it's very likely that when you start Tomcat from the IDE, it is not using the same context.xml your standalone Tomcat is using. Just review the Tomcat settings in the IDE.
How to store uploaded files is a common topic at Stack Overflow. Just look around and you'll be surprised at how popular this topic is.
If you aren't happy storing your files in D:/uploads, or other servers will need to access the files, you could consider storing them at some location on your network. Depending on your requirements, you can have one dedicated server to store your files, or just share the folder that contains the files on your current server. The right decision will always depend on your requirements.

Building .car ColdFusion Archive from command-line

The current way I'm packaging my application is to deploy it on a running ColdFusion server and export it as a .car (ColdFusion ARchive) through the admin console, but this manual process is prone to errors.
That's why I'm looking for a tool (or any solution) to build this final .car from the command line (so without requiring a ColdFusion server up and running).
Note: because of the complexity and the size of this legacy application, I cannot work with .war files, and I'm not aware of any other packaging format than .war or .car for ColdFusion applications.
Unfortunately, ColdFusion is not built to be used this way, but you can use the admin API or the admin console.
For more details, I invite you to read this page: http://help.adobe.com/en_US/ColdFusion/10.0/Admin/WSc3ff6d0ea77859461172e0811cbf3638e6-7fc5.html
As of ColdFusion 9, CAR files are zip files. You can unzip them, edit the files, zip them back up, and deploy them on ColdFusion 10 servers. The zip contains a folder called "{WorkingDir}", with curly brackets.
The structure looks like:
{WorkingDir}
server_settings.xml
archive_settings.xml
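A minimal command-line sketch of that round trip, with illustrative file names:

# Unpack the archive, edit its contents, then zip it back up
unzip my-app.car -d car-contents
# ... edit files under car-contents ...
cd car-contents
zip -r ../my-app-edited.car .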

Deploying a version control system for our company: how to use it with binary files

I am tasked with setting up a Mercurial version control system for our small team of developers (2-3 people). There was no version control system before, just shared folders and multiple copies. I don't have much experience in setting up version control systems except for personal projects; I just happen to be the most experienced person in our team in terms of version control. The code repository is in a shared folder on a central server; the top-level directory is the client name, and one level down is the project name for that client.
The problem is I haven't figured out how to deal with binary files in our code repository. From what I've read, binary files shouldn't be version-tracked. But as the code repository is centralized on the server, shouldn't the binaries be there as well? Otherwise, for things like image files and third-party dll files, the project wouldn't build or run properly when cloned from the central server. Also, there is a nice feature of the Mercurial web interface where you can download the whole source package as a ZIP or BZ2 compressed file; without the necessary binary files, the downloaded project wouldn't run or compile.
I guess the solution is to include everything in the version control system except temporary files and files for debug purposes; other than that, most binary files should be included? Due to the limitations of version control systems, I don't think there is a way to track changesets for binary files, so I guess we have to live with that.
Edit: After more research on how to set up a version control repository, the recommended way of using version control is to "store everything which is created manually, and nothing else" (quoting Eric Sink).
You want to version control anything that you can't generate from other stuff in version control. That would be your source files, and your instances of third-party libraries, tools, etc. that your package relies on.
The binaries built from your project are something else entirely, and should be treated as different sorts of artifacts. If you want an easy-to-test downloadable archive, adapt your build process to provide that as a target: it should build the code, and then compress the source and built binary into the desired single file.
Binary files that are related to or required by the project must be included in version control; they can be tracked. The only thing version control can't do with binary files is compare and merge them.
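As a concrete illustration of keeping generated output out while tracking everything else, here is a minimal .hgignore sketch; the patterns are assumptions for a typical Windows build tree and should be adjusted to whatever your builds actually generate:

syntax: glob
*.log
*.tmp
*.pdb
bin/
obj/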