Best Practice: Erlang Application Deployment on Windows

When deploying a ready-to-use Erlang application, I don't want the user to:
find the right Erlang release on the internet
install the Erlang VM
unzip and decide on a location for the beam files (and the application)
read a readme
modify anything that even looks like a config file
I have a couple of ideas about possible approaches, but I would like to get some input.

SAE (stand-alone Erlang) used to be a pretty good solution for situations like the one you describe, but it no longer seems to be maintained.
Although I've never used it myself, CEAN seems like it might come close to what you want: it offers a self-extracting installer (though not for Windows at present) and the option to deliver a customized minimal Erlang framework.

There is also Erlware. From its description:
"At our core we host public repositories containing reliable Erlang OTP-compliant applications. Our repositories enable developers to use software written by the Erlang community and to publish and distribute their own software."
It's more backend-oriented though, so not a complete solution.

The reltool application first released with Erlang R13B02 is aimed at solving this issue. Note that it is currently a beta release (version 0.5).

Related

Installer for Software? PaaS?

Currently I'm looking for an open source project that gives me the opportunity to install software easily. I prefer direct calls or access via a REST interface.
I thought that CloudFoundry would fit my needs, but it doesn't.
AppFog (https://www.appfog.com/product/) comes much closer to my goal. It allows me to install Drupal, Wordpress, PhpMyAdmin, NodeJS Apps and so on.
The conclusion is that I'm looking for a project that...
is open source
gives the possibility to install, configure and uninstall software
is extendable when a specific software is not available
is accessible with an interface like REST
is "hostable" on my own Linux server
I would be happy for all kinds of hints and tips :)
Cheers Tobias
Docker seems to be the next big thing in the PaaS world. There are dozens of new projects that build on top of Docker or support it; for example, OpenShift and Apache Stratos support Docker. So if you look at Docker-based solutions, you can find one for your needs.
Right now I'm using Docker for hosting a couple of Drupal websites, with simple bash scripts to manage them. Nginx is used for web traffic routing.
Docker is open source
Gives you the ability to prepare and install apps
You can build what you need on top of it
It has a REST interface (see the sketch below)
It runs on nearly all major Linux distros
It's relatively easy to learn and use
Has a great community
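For a taste of that REST interface, here is a minimal sketch using the docker-py Python SDK, which wraps the same API. The image, container name, and ports are just examples, not anything from my setup:

```python
# Minimal sketch using the docker-py SDK (pip install docker).
# It talks to the Docker REST API over /var/run/docker.sock.
import docker

client = docker.from_env()

# Start an nginx container for web traffic routing (name/ports are examples)
container = client.containers.run(
    "nginx:latest",
    name="web-router",       # hypothetical container name
    ports={"80/tcp": 8080},  # map host port 8080 to container port 80
    detach=True,
)

# List running containers, roughly what GET /containers/json returns
for c in client.containers.list():
    print(c.name, c.status)
```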
Tobias,
Suggest you look at Apache Stratos:
100% open source
Easy to Get Up and Running
Highly extensible, flexible, expandable
Uses REST APIs
Runs on Linux (Ubuntu or SUSE)
Mature (version 4)
See:
Intro article -- "Why Apache Stratos is the Preferred Choice in the PaaS Space"
http://wso2.com/library/articles/2014/05/why-apache-stratos-is-the-preferred-choice-in-the-paas-space/
Apache Stratos Project site -- which notes that "Stratos PaaS is easy to get it up and running in quick time. A developer will be able to run and test PaaS framework on a single machine to try out."
http://stratos.apache.org/
Cheers,
Michael
OpenShift is what you're looking for:
it is open source and free for 3 gears forever.
it gives the possibility to install, configure and uninstall software on openshift.redhat.com or with the rhc client tools.
it is extendable when a specific software is not available, through DIY (Do It Yourself) applications.
it is accessible with a REST interface.
it is "hostable" on Fedora or CentOS.
It is really easy to set up through Eclipse.

How can I automate Node.js deployments?

I'm looking for something analogous to Capistrano for Rails - https://github.com/capistrano/capistrano/wiki/
I'd like to be able to run a single command from my workstation that will update the code on my server(s) from a GitHub project and handle all necessary process restarting for the application. I need to be able to control specifically when this happens, not use a hook in GitHub's checkin event.
Are Node.js developers also using Capistrano, or is there a tool that works better for Node.js?
You could use Fabric; it's a Python lib. Node.js already uses Python for some build operations for extensions, so there's no reason you couldn't also use Python to do what you're asking.
http://docs.fabfile.org/en/1.2.2/index.html
I don't know of a JavaScript lib that does this, not to say there isn't one though. Fabric sounds very much like what Capistrano is, but maybe a tiny bit different in some aspects.
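For example, here is a minimal fabfile sketch for the workflow in the question, assuming Fabric 1.x (the version linked above), a git checkout on the server, and forever as the process manager; the host and paths are hypothetical:

```python
# fabfile.py -- deploy a Node.js app from GitHub with Fabric 1.x
from fabric.api import cd, env, run

env.hosts = ["deploy@myserver.example.com"]  # hypothetical server

APP_DIR = "/srv/myapp"  # hypothetical install location

def deploy():
    """Pull the latest code and restart the app."""
    with cd(APP_DIR):
        run("git pull origin master")
        run("npm install --production")
        run("forever restart app.js")  # or however you manage the process
```

Then a single "fab deploy" from your workstation updates the server(s) on demand, with no GitHub hook involved.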
Capistrano seems to be the most popular choice.

Could you use Nuget / OpenWrap to manage remote deployments?

I haven't thought this through to completion, but it seems that if nuget is a tool for managing the inclusion of packages in a known location, could it not be used as a deployment tool for web servers (a website being just a very large package itself)?
A service running on the web server would ping a nuget server for updates, and install them when available. There would have to be some additional management (recycling app pools, making sure that all your webservers don't update at the same time etc.), but I think it could work?
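To make that concrete, here is a rough sketch of the service I'm imagining, as a Python polling loop; the package id, feed URL, paths, and app pool name are all made up, and I'm assuming the stock nuget.exe and IIS appcmd.exe command lines:

```python
# Hypothetical update service: poll a NuGet feed, fetch new versions,
# and recycle the app pool when something changed. All names are made up.
import os
import subprocess
import time

PACKAGE = "MyCompany.MyWebsite"          # hypothetical package id
FEED = "https://nuget.example.com/feed"  # hypothetical internal feed
PKG_DIR = r"C:\deploy\packages"
POLL_SECONDS = 300

def poll_once():
    before = set(os.listdir(PKG_DIR)) if os.path.isdir(PKG_DIR) else set()
    # 'nuget install' drops the latest version into a versioned folder
    subprocess.run(
        ["nuget.exe", "install", PACKAGE, "-Source", FEED,
         "-OutputDirectory", PKG_DIR],
        check=True,
    )
    if set(os.listdir(PKG_DIR)) - before:
        # A new version folder appeared: deploy it, then recycle the pool
        # (the file copy into the site directory is omitted for brevity)
        subprocess.run(
            ["appcmd.exe", "recycle", "apppool", "/apppool.name:MySitePool"],
            check=True,
        )

while True:
    poll_once()
    time.sleep(POLL_SECONDS)
```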
Any thoughts?
Yes, that's definitely on the roadmap for OpenRasta/OpenWrap, so it's not a crazy idea. Some people have already done some of that work themselves.
This sort of thing is usually known as a Continuous Integration (CI for short) setup. You could probably cobble something together with Nuget but there are already some pretty good tools out there. Cruise and TFS to name a couple.
If you're looking for a mad scientist project though, carry on and let the community know what you come up with!

Best version control for a one-man web app?

I'm just learning how to do things, and want to start using some sort of version control for a web app.
What's most appropriate for deploying a python or php web app on my own? I'm using linux and have a linux server.
Thanks!
SVN, but you need to be able to easily deploy your webapp with SVN.
Since that is not always a simple task, I'll just point out this article, which may be of interest for your project.
General principle:
Configure Apache on your development server so that it picks up your checked out working copies as separate subdomains. Using this, you can simply make a checkout of your project and it will automagically be up and running. No need to touch the Apache configuration. You need a DNS wildcard entry so that all subdomains of dev.example.org go to your development server.
The only problem with using the above Apache configuration locally is the DNS wildcard. Unless your desktop is assigned a hostname by your network's DNS server and you can set the wildcard there, you will have to make do with your localhost address. You can install dnsmasq to act as a local caching DNS server and put the wildcard on your own machine.
Use dnsmasq so you can achieve the same effect on your own development machine. That way you can develop your web applications locally and you won't need a central development server. In my examples I will be assuming you use subversion for your version control, but it works virtually the same with other version control packages, such as git or bazaar.
Note: (Humor)
This other question on Subversion allowed me to point to this article about publishing its (source-controlled) data into production, with probably the ugliest diagram I ever saw on the topic in it ;-)
If I had not bumped into git, I would've doubtless gone with SVN. Having said that, I would recommend git.
Nowadays, I would certainly go with a distributed version control system. Setup is faster since you don't need to set up a version control server; all you usually need to do is initialize a directory within your development box for version control and you're good to go. They also seem like the way to go these days. If it were 2001, I would recommend a centralized system like Subversion. But it's 2008: everyone is moving to distributed systems, and the user interfaces and supporting tools keep getting better.
Here are some suggestions for you:
Darcs: Easy to learn and has all the features you will usually need
Mercurial
Git: Powerful. May take some time to understand but evolves rapidly
All three of them should be readily available in your Linux-based OS through the usual package management solutions.
SVN is great.
Nowadays the hype is around DVCS.
I prefer Bazaar.
Because of its name, the support, the feature set, and because it works well on my window$ machine too.
I'm using unfuddle.com and I love it. It's free for a one-person web app.
The answer really depends on your way of thinking. I personally had problems switching to Subversion from SourceSafe. If you come from a Microsoft shop, I'd suggest using SourceGear Vault; it is free for <=2 users. If you come from a non-Microsoft area, then Subversion would be preferable. Also, please consider git if working on Linux.
HTH, Valve.
Personally I use monotone, learning a DVCS is definitely the way forward.
For a one-man job, pretty much any revision control system will do the job. It's when you get into multiple people, and past that into multiple repositories, that the differences start to matter.
Given that, I'd go with whatever Free Software system your development environment supports best. I see Subversion and Git mentioned and both are fine choices.
SVN would be my first choice. If I had to pick a second, I would go with CVS.
One of the most popular options out there today is Subversion. It's generally easy to set up & configure and is able to handle multiple platforms.
SVN. If one does not need concurrent access (which is your case), it is VERY easy to set up, as no server is required at all. Definitely your weapon of choice.
I wholeheartedly agree with SVN. Command-line SVN is quite easy too.
While I like svn a lot, I've found mercurial handy for having the whole repository locally. (the same goes for git, but its interface is a little less polished in my opinion.)
I'm not able to answer the question as asked, because I don't develop on a Linux server.
But maybe this experience has a counterpart in Linux world.
I use a local-on-my-LAN-only IIS server (actually on an old laptop that no longer travels but works as a little server). I have VSS installed on that server too. There is an integration between the IIS Server, the FrontPage extensions on that server, and the VSS.
The upshot is that I can use FrontPage to build and edit my site and build a development image that is always backed up in VSS, and I can check out, check in, and do all of that from within FrontPage.
Now, the way I publish is I take advantage of the sharing capability of VSS, so I have a deployment image that shares with the project that is actually an IIS web site. I have a deployment-image directory that I can transfer the latest checked-in material to (material that has not changed is not updated). I then deploy the deployment image to the hosted, public web site using FTP (again, only transferring new and updated files).
I present all of these details to suggest what might be the use-case of interest, even though a different solution approach is needed with Linux.
If I wasn't using a tool that integrated with the web server and also the source control at the server, I could do something similar by checking the VSS material in and out of a local directory and then pushing the updated VSS project to the IIS server's web-pages directory hierarchy. The workflow is a little more clumsy. In this case, I would not edit pages directly on the development web server unless I could lock checked-in pages as read-only or something.
Does this suggest anything that might be appealing in the Linux server case?
Definitely Mercurial is a good choice: quick, easy to use, perfect for working alone or with multiple other developers, perfectly multiplatform, handles merges, branches, etc. very simply, plugin-based, and there are great tools out there such as nice IDE plugins (notably NetBeans and Eclipse).
Robust, it works just as you expect such a tool to work, not like SVN (and I have years of day-to-day experience with it)...
Sun, Xen and Mozilla all host their repos on Mercurial. We're currently moving from SVN to Mercurial after a 6-month daily test, without any regret.
I once used Perforce and was impressed with it. There are GUI and command-line versions, and it supports Windows, Linux, Mac and Unix for both the server and client. It integrates with Eclipse and has APIs for writing your own client applications (C/C++, Ruby, Perl, Python). It only supports two users and five workspaces before you need to buy licenses, though (but that is within the scope of this question).
Subversion is a good choice. For the client, there's TortoiseSVN (http://tortoisesvn.tigris.org/), which integrates with the shell and lets you do things with a right click on a folder. For integration with Visual Studio (I'll assume that's your environment) there's VisualSVN (http://www.visualsvn.com/) and AnkhSVN (http://ankhsvn.open.collab.net/). For the server there's a one-click installer you can find here (http://svn1clicksetup.tigris.org/) that does the setup in a snap. VisualSVN also has a (free) server that you can use, which provides its own web access and security (rather than using Apache) and has an MMC snap-in for managing/creating repositories and users.
CVS - No, I'm not joking. Not that it is better (it is not) or the simplest (it isn't), but it really doesn't matter at the end of the day. The important thing is to get started with ANY version control system even if it is a one-developer shop, even if it is CVS.

How to version control the build tools and libraries?

What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything that has shared COM components. If you don't have a shared-components issue, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control, stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that has a list of the ISO files that will be needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that that set will evolve with each new version of your project. UCM and its 'composite baseline' might give the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries through the network (like through a dynamic view), because the compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries on your disk, meaning a snapshot view, meaning downloading those files... and this is where you might appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a continuous integration (CI) tool like NAnt to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
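As a sketch of that idea (the paths and file names here are hypothetical, not a real project layout), a small Python driver that builds with only checked-in binaries might look like this:

```python
# Sketch of a repeatable build using only tools/libraries from source
# control; nothing comes from the machine's global installation.
import subprocess

CSC = r"tools\csc\csc.exe"     # compiler binary checked in to source control
LIBS = [r"libs\mscorlib.dll",  # checked-in copies of every referenced lib
        r"libs\System.dll"]

args = [CSC, "/nostdlib", "/noconfig"]  # ignore machine-wide defaults
args += ["/reference:" + lib for lib in LIBS]
args += [r"/out:build\app.exe", r"src\Program.cs"]

subprocess.run(args, check=True)
```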
Following up on my own question, I came across this posting referenced in the answer to another question. Although it is more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
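A stripped-down sketch of a slave's polling loop might look like the following; the taskmaster URL and the JSON shape are hypothetical (ours was custom-built anyway):

```python
# Hypothetical build slave: poll the taskmaster for a suitable request.
import subprocess
import time

import requests  # pip install requests

TASKMASTER = "http://taskmaster.example.com/queue"  # hypothetical URL
TOOLCHAIN = {"os": "linux", "compiler": "gcc-4.2"}  # what this slave has

while True:
    # Report what we have installed; the taskmaster returns a matching
    # build request, or an empty body if nothing suitable is queued.
    resp = requests.get(TASKMASTER, params=TOOLCHAIN)
    job = resp.json() if resp.ok and resp.content else None
    if job:
        # Check the named tags out of SVN, then run the build script
        # that came with one of those tags.
        for tag in job["tags"]:
            subprocess.run(["svn", "checkout", tag], check=True)
        subprocess.run([job["script"]], check=True)
    else:
        time.sleep(30)  # nothing to do; poll again shortly
```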
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system at any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.