Deployment strategies for Go services?

I'm writing some new web services in Go.
What are some deployment strategies I can use, regardless of the target platform? For example, I'm developing on a Mac, but the staging/production servers will be running Linux.
Are there some existing deployment tools I can use that support Go? If not, what are some things I can do to streamline the process?
I use LiteIDE for development. Is there any way to hook LiteIDE into the deployment process?

Unfortunately, since Go is such a young language, not many deployment tools exist yet, or at least they have been hard to find. I would also be interested in the development of such tools for Go.
What I have found is that some people have been doing it themselves, or they've adapted other tools, such as Capistrano, to do it for them.
Most likely it's something you'll have to do yourself. And you don't have to limit yourself to shell scripts - do it in Go! In fact, many of the Go tools are written in Go. You should avoid compiling on the target system, as it's usually bad practice to have build tools on your production system. Go makes it really easy to cross-compile binaries. For example, this is how you compile for ARM & Linux:
GOARCH=arm GOOS=linux go build myapp
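For instance, here is a minimal sketch of a build-and-push script along those lines; the host name, paths, and service name are illustrative assumptions, not part of any standard tool:

#!/bin/sh
set -e
# Cross-compile for the platforms we deploy to (run from the package directory).
GOOS=linux GOARCH=amd64 go build -o build/myapp-linux-amd64 .
GOOS=linux GOARCH=arm GOARM=7 go build -o build/myapp-linux-arm .
# Push the binary to a (hypothetical) staging host and restart the service.
scp build/myapp-linux-amd64 deploy@staging.example.com:/usr/local/bin/myapp
ssh deploy@staging.example.com 'sudo service myapp restart'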
One thing you could do is hop on the #go-nuts freenode IRC channel or join the Go mailing list and ask other Gophers what they're doing.

Capistrano sounds like a good idea for deployment alone. You can also do cross-compilation as Luke suggested. Both will work just fine.
More generally though... I'm also kind of torn between OS X (development) and Linux (deployment), and in fact I ended up just developing in a virtual machine via VirtualBox and Vagrant. I'm using TextMate 2 for text editing, but installing many development tools on a Mac is just a major PITA, and I'm just more comfortable with having Debian or the like running somewhere in the background. The bonus is that this virtual environment can mirror the deployment environment, so I can avoid surprises when I deploy my code, whatever the language.
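If you go the Vagrant route, the day-to-day workflow is just a few commands; a minimal sketch, where the box name is an illustrative assumption:

vagrant init debian/bookworm64   # writes a Vagrantfile for the chosen base box
vagrant up                       # boots the VM under VirtualBox
vagrant ssh                      # gives you a shell inside the Linux environment
vagrant destroy                  # throws the VM away when you're done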

I haven't tried it myself, but it appears you can cross-compile Go (either with goxc or Dave Cheney's golang-crosscompile), albeit with some caveats.
But if you need to match the environment with production, which you probably should most of the time, it's safest to go as Marcin suggested.
You can find some prebuilt VirtualBox images on http://virtualboxes.org/images/, although creating one yourself is pretty easy.

what are some things I can do to streamline the process?
The cross-compilation idea should be even more appealing with Go 1.5 (Q3 2015), as Dave Cheney details in "Cross compilation just got a whole lot better in Go 1.5":
Before:
For successful cross compilation you would need
compilers for the target platform, if they differed from your host platform, i.e. you're on darwin/amd64 (6g) and you want to compile for linux/arm (5g).
a standard library for the target platform, which included some files generated at the point your Go distribution was built.
After (Go 1.5+):
With the plan to translate the Go compiler into Go coming to fruition in the 1.5 release, the first issue is now resolved.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Printf("Hello %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
Build for darwin/386:
% env GOOS=darwin GOARCH=386 go build hello.go
# scp to darwin host
$ ./hello
Hello darwin/386
Or build for linux/arm:
% env GOOS=linux GOARCH=arm GOARM=7 go build hello.go
# scp to linux host
$ ./hello
Hello linux/arm
I'm developing on a Mac, but the staging/production servers will be running Linux.
Considering the compiler for Go is now written in Go, the process of producing a Linux executable from your Mac should become straightforward.

Related

What is the recommended workflow and environment for working on the FreeBSD code base?

I want to develop a new feature or change an existing program of the FreeBSD distribution, specifically the user space¹. To do so, I need to make changes to the FreeBSD code base and then compile and test them.²
Doing so on the tree in /usr/src and installing the result on the system seems like a bad idea, given that it requires you to run your development machine on CURRENT and to develop with root privileges, and it hoses your system if you make a mistake. I suppose there must be a better way and possibly a standard setup FreeBSD developers use.³
What is the recommended workflow to develop the FreeBSD code base?
¹ so considerations specific to kernel development aren't terribly important
² I'm familiar with the process to submit changes after I have developed them
³ I have previously read both the development handbook and the FreeBSD handbook chapter on building the source but neither seem to recommend a specific process.
I am a src committer.
I often start with the lowest release that I intend to back-port to (e.g., RELENG_11_3).
I would then do (before or after making changes):
make buildworld
then deploy to a jail directory:
make DESTDIR=/usr/jails/test installworld
This jail directory, as the first responder hinted, can be used with bhyve, but I find it easier to configure a jail or even just use chroot.
I like to configure my jails in /etc/rc.conf instead of /etc/jail.conf:
Example /etc/rc.conf contents:
jail_enable="YES"
jail_list="test"
jail_test_rootdir="/usr/jails/test"
jail_test_hostname="test"
jail_test_devfs_enable="YES"
I can provide more in-depth examples, ones where the jail has a private networking stack so you can SSH into it, for example, but I don't get the sense that a networking stack is important to your testing from the posted question.
You can see the running jail with "jls", and you can enter the running jail with "jexec test bash".
Inside the jail you can test your changes.
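As a minimal sketch, the whole cycle looks something like this (using the jail name from the rc.conf above; a shell other than /bin/sh, such as bash, has to be installed inside the jail first):

service jail start test    # start the jail defined in /etc/rc.conf
jls                        # confirm it is running
jexec test /bin/sh         # get a shell inside the jail and run your tests
service jail stop test     # tear it down afterwards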
When doing this kind of sandboxing, jails work so long as the /usr/src that you built/installed to the jail is from a release that is:
Older than the host OS, or
In the same STABLE branch as the host OS, or
At the very least binary-compatible with the host OS
Situations 1 and 2 are pretty safe, while situation 3 (e.g., running a newer /usr/src than the host OS) can get dodgy. For example, trying to run /usr/src head (13.0-CURRENT) on a 12.0-RELEASE-pX host, where the KBI, KPI, and API can all differ between kernel and userland (with jails, each jail runs under the host's kernel).
If you find that you have to run the newest sources against an older host OS, then bhyve is definitely the solution. You would take that jail directory and, instead of running a jail with that root directory, run a bhyve instance with the jail directory as its root. I don't use bhyve that often, so I can't recall whether you first have to deposit the contents inside a disk image and point bhyve at that image -- others and/or Google would know the answer to that.
I'm a ports committer, not a src one, but AFAIK running CURRENT is a common practice amongst developers.
Another way to work is to set up a CURRENT VM, share it over NFS, mount it from the host, and install into it by running make install DESTDIR=/mnt/current. You can use bhyve for virtualizing, by the way.
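A minimal sketch of that flow, with the VM hostname and mount point as illustrative assumptions (and using installworld as the usual target for a full world):

# on the host, after the CURRENT VM exports its filesystem via /etc/exports:
mount -t nfs current-vm:/ /mnt/current
# install the world you built into the mounted tree:
cd /usr/src && make installworld DESTDIR=/mnt/current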

A meta-build solution managing multiple projects

I'm looking for advice about software that has not caught my eye during a few days of searching.
Is there any software solution for deploying multiple projects at once that operates at the level of a source-building package manager (like ports, Portage, or Nix), but can reside locally?
As for details, we have a few loosely connected software projects with the following traits:
Projects are written mostly in C/C++ and Python, but other languages (Haskell, Rust, Perl) are represented in smaller amounts
Projects are grouped together within git superprojects with build/deployment presets carefully tuned for particular environments (and adapted for particular purposes by a set of boolean-like options)
We already have well-elaborated CMake build scripts for the C++ projects that support options, build configurations, exported targets, and so on. It would be expensive to switch away from them now.
We are forced to deal with various Linux distributions (from Gentoo and Ubuntu to Debian and CentOS).
We need a unifying build-and-deployment tool for various environments. CMake does not integrate well with non-compiled languages (e.g., it does not natively support local Python installations via virtualenv).
Instead of changing the things we have already developed, I would like to use them the way an OS package manager does. In my vision, it should be something pretty similar to a so-called meta-build tool. In fact, Gentoo Portage is pretty close:
Easy customization with simple boolean options (useflags, profiles)
Delegation of the building procedures to the reliable and customizable tools designed and well-tested especially for this purpose (CMake, autotools, Bazel, etc.)
Offers the ability to change the target compiler installation and to specify the build process in a clear, declarative way with a standardized instruction set.
Portage cannot be run locally, though (and has other, unrelated flaws).
I would have to become very confident before switching the whole build system to something like Meson or Bazel or whatever else I might find.
Update
To be more specific, I can describe what we have done so far. One of the superprojects we maintain deals with a particular scientific experiment:
All the sub-projects are listed as git submodules
A simple Bash script maintains the entire build-install-deploy lifecycle
The presets holding the settings for a concrete environment are represented as Bash source files and are included by this maintenance script.
As a few more days of searching have brought no result, I am coming to the intention of writing my own solution, based on the experience we have already gained from this shell-scripting activity.
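For what it's worth, the core of such a script can stay very small. Here is a minimal sketch of the preset mechanism described above; all file and variable names are hypothetical:

#!/bin/bash
# deploy.sh <preset> -- build and install the superproject under a named preset.
set -e
preset="$1"
source "presets/${preset}.sh"   # defines e.g. BUILD_TYPE, INSTALL_PREFIX, EXTRA_OPTS
cmake -S . -B "build-${preset}" \
      -DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
      -DCMAKE_INSTALL_PREFIX="${INSTALL_PREFIX}" ${EXTRA_OPTS}
cmake --build "build-${preset}" --target install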
For a similar task, Bob was developed some time ago. If you still need a build system, it might be worth a look. There are some small tutorials available, as well as a complex example ("basement") that builds a Linux from scratch.
Basically, it is an environment for the controlled execution of Bash scripts, with dependency and variant handling, CI integration, IDE support, and so on.
Take a look at Sparrow, a Linux scripting platform. It supports project hierarchies and tasks - configurable scripts whose configuration is written declaratively in YAML or JSON. This might address many, if not all, of the needs you've mentioned here.
Disclosure - I am the tool author.

Should we deploy scrrun.dll (Windows Scripting Runtime)?

Our VB6 application relies heavily on the use of scrrun.dll (Windows Scripting Host). Until a year ago we used to deploy this DLL with our installer. Since the Windows Scripting Host is supposed to be part of Windows, we removed the DLL from the installation package. However, now and then customers surface who have a non-functional scrrun.dll on their system, and we have to help them reinstall or re-register it.
So, should we put the scrrun.dll back in the installation package? Should we perform some check on installation? Or should we just live with the fact that we have to provide hands on support to some of our customers to set their systems right?
Don't try to deploy these libraries as part of a normal setup.
Microsoft Scripting Runtime must be installed through the use of a self-extracting .exe file. For versions of Scripting Runtime mentioned at the beginning of this article, the only way to distribute it is to use the complete self-extracting .exe file located at the following locations...
It is possible that some users employ older anti-malware suites, many of which tried to disable scripting. It is more likely, though, that some users have managed to break their Windows installation, either themselves or by using applications improperly packaged to include these libraries - which then blindly remove them from the system on uninstall (cough, cough - Inno).
The libraries involved have been tailored to each Windows version for some time now. This is why the ancient .CAB file was "recalled" long ago. There is no single copy of them intended to run on any random version of Windows, and there are no redist packs for any modern version of Windows. The correct fix is a system restore or a repair install.
While this can't be blamed directly on InnoSetup, because it is the result of poorly authored scripts, it is frustrating enough and common enough that I won't cry when its signature is added to anti-malware suites. There are just too many poorly written examples loose in the wild, copy/pasted by too many people.
I spend plenty of time undoing the damage caused by uninstalls of these applications and have grown quite weary of it. Where possible I use isolated assemblies now in self-defense, which helps a lot. Windows File Protection is getting better about preventing abusive action for system files too.
But in general you are much better off avoiding any dependency on scripting tools in an application. There isn't very much that they can do as well as straight code anyway, though it may take some time to write alternative logic.

What's the best system for installing a Perl web app?

It seems that most of the installers for Perl are centered around installing Perl modules, not applications. Things like ExtUtils::MakeMaker and Module::Build are very well suited for modules, but require some additional work for Web Apps.
Ideally it would be nice to be able to do the following after checking out the source from the repository:
Have missing dependencies detected
Download and install dependencies from CPAN
Run a command to "Build" the source into a final state (perform any source parsing or configuration necessary for the local environment).
Run a command to install the built files into the appropriate locations: not only the Perl modules, but also things like template (.tt) files, CGI scripts, and JS and image files that should be web-accessible.
Make sure proper permissions are set on installed files (and SELinux context if necessary).
Right now we have a system based on Module::Build that does most of this. The work was done by my co-worker, who was learning to use Module::Build at the time, and we'd like some advice on generalizing our solution, since it's fairly app-specific right now. In particular, our system requires us to install dependencies by hand (although it does detect them).
Is there any particular system you've used that's been particularly successful? Do you have to write an installer based on Module::Build or ExtUtils::MakeMaker that's particular to your application, or is something more general available?
EDIT: To answer brian's questions below:
We can log into the machines
We do not have root access to the machines
The machines are all (ostensibly) identical builds of RHEL5 with SELinux enabled
Currently, the people installing the machines are only programmers from our group, and our source is not available to the general public. However, it's conceivable our source could eventually be installed on someone else's machines in our organization, to be installed by their programmers or systems people.
We install by checking out from the repository, though we'd like to have the option of using a distributed archive (see above).
The answer suggesting RPM is definitely a good one. Using your system's package manager can make your life much easier. However, it might mean you also need to package up a bunch of other Perl modules.
You might also take a look at Shipwright. This is a Perl-based tool for packaging up an app and all its Perl module dependencies. It's early days yet, but it looks promising.
As far as installing dependencies, it wouldn't be hard to simply package up a bunch of tarballs and then have your Module::Build-based solution install them. You should take a look at pip, which makes installing a module from a tarball quite trivial. You could package it with your code base and simply call it from your own installer to handle the deps.
I question whether relying on CPAN is a good idea. The CPAN shell always fetches the latest version of a distro, rather than a specific version. If you're interested in ensuring repeatable installs, it's not the right tool.
What are your limitations for installing web apps? Can you log into the machine? Are all of the machines running the same thing? Are the people installing the web apps co-workers or random people from the general public? Are the people installing this sysadmins, programmers, web managers, or something else? Do you install by distributing an archive or by checking out from source control?
For most of my stuff, which involves sysadmins familiar with Perl installing in controlled environments, I just use MakeMaker. It's easy to get it to do all the things you listed if you know a little about MakeMaker. If you want to know more about that, ask another question. ;) Module::Build is just as easy, though, and the way to go if you don't already like using MakeMaker.
Module::Build would be a good way to go to handle lots of different situations if the people are moderately clueful about the command line and installing software. You'll have a lot of flexibility with Module::Build, but also a bit more work. And, the cpan tool (which comes with Perl), can install from the current directory and handle dependencies for you. Just tell it to install the current directory:
$ cpan .
If you only have to install on a single platform, you'll probably have an easier time making a package in the native format. You could even have Module::Build make that package for you, so the developers have the flexibility of Module::Build, but the installers have the ease of the native process. Sticking with Module::Build also means that you could create different packages for different platforms from a single build tool.
If the people installing the web application really have no idea about command lines, CPAN, and other things, you'll probably want to use a packager and installer that doesn't scare them or make them think about what is going on, and can accurately report problems to you automatically.
As Dave points out, using a real CPAN mirror always gets you the latest version of a module, but you can also make your own "fake" CPAN mirror with exactly the distributions you want and have the normal CPAN tools install from that. For our customers, we make "CPAN on a CD" (although thumb drives are good now too). With a simple "run me" script everything gets installed in exactly the versions they need. See, for instance, my Making my own CPAN talk if you're interested in that. Again, consider the audience when you think about that. It's not something you'd hand to the general public.
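As a minimal sketch of that approach, assuming CPAN::Mini is installed (the paths are illustrative, and the -M mirror option is only available in newer cpan clients):

minicpan -l /srv/minicpan -r http://www.cpan.org   # build/update a local CPAN mirror
cpan -M file:///srv/minicpan .                     # install the current dir and its deps from that mirror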
Good luck, :)
I'd recommend seriously considering a package system such as RPM to do this. Even if you're running on Windows I'd consider RPM and cygwin to do the installation. You could even set up a yum or apt repository to deliver the packages to remote systems.
If you're looking for a general installer for customers running any number of OSes and distros, then the problem becomes much harder.
Take a look at PAR.
Jonathan Rockway has a small section on using this with Catalyst in his book.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error-prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
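For concreteness, the script the asker describes could look something like this minimal sketch; every path, host name, and file layout here is an illustrative assumption:

#!/bin/sh
# stage.sh <host> -- assemble the deployment tree for one machine and push it.
set -e
host="$1"
stage="stage/${host}"
rm -rf "${stage}" && mkdir -p "${stage}/bin" "${stage}/etc"
cp build/bin/* "${stage}/bin/"            # binaries (filter per host as needed)
cp "conf/${host}/"*.conf "${stage}/etc/"  # per-host config files
rsync -az --delete "${stage}/" "deploy@${host}:/opt/batch/"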
There are several categories of tools here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rolling back to a previous version. They'll check out your source to a releases/<timestamp> directory and create a symbolic link from "current" to that directory if all goes well. If there's a problem, you can revert to the previous version by running a command that removes "current" and links it to the previous releases/<timestamp> directory.
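Stripped of the tooling, the pattern itself is only a few shell commands; a minimal sketch, with the repository URL and paths as illustrative assumptions:

ts=$(date +%Y%m%d%H%M%S)
git clone --depth 1 https://example.com/app.git "releases/${ts}"   # fetch this release
ln -sfn "releases/${ts}" current    # point "current" at the new release
# rollback: re-point "current" at the previous release directory
prev=$(ls -1d releases/* | tail -n 2 | head -n 1)
ln -sfn "${prev}" current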
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box (see the sketch below).
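A minimal sketch of the last two options, with hosts and paths as illustrative assumptions:

# option: tarball + scp (compression can save time/bandwidth, e.g., for EC2)
tar czf app.tar.gz -C build .
scp app.tar.gz deploy@prod.example.com:/srv/app/
# option: rsync the built tree directly
rsync -az --delete build/ deploy@prod.example.com:/srv/app/current/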
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has the other packages you need as dependencies. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, a Windows MSI, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create (see the sketch below). That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
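A minimal sketch of building such a meta package with the equivs tool; the package names are hypothetical:

cat > batch-meta.ctl <<'EOF'
Package: mycompany-batch
Depends: mycompany-scripts, mycompany-binaries, mycompany-config
Description: Meta package pulling in all batch-processing components
EOF
equivs-build batch-meta.ctl    # builds a .deb you can put in your repository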
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade and they get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster, and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine - you create a model of the desired deployment, and Puppet figures out how to get the environment into that state.