Deploying Go App - deployment

I have a REST API endpoint written in Go and I am wondering what the best way to deploy it is. I know that using Google App Engine would probably make my life easier in terms of deployment. But suppose that I want to deploy this on AWS: what options/processes/procedures do I have? What are some of the best practices out there? Do I need to write my own task to build, SCP, and run it?
One option that I am interested in trying is using Fabric to create deployment tasks.

Just got back from Mountain West DevOps today where we talked about this, a lot. (Not for Go specifically, but in general.)
To be concise, all I can say is: it depends.
For a simple application that doesn't receive high use, you might just manually spin up an instance, plop the binary onto it, run it, and then you're done. (You can cross-compile your Go binary if you're not developing on the production platform.)
For slightly more automation, you might write a Python script that uploads the latest binary to an EC2 instance and runs it for you (using boto/ssh).
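For illustration, a bare-bones version of such a script might look like the following; the hostname, key, and paths are placeholders, and it uses paramiko for the SSH/SFTP part (boto would only come into play if you also want to look up or launch the instance):

```python
# upload_and_run.py -- rough sketch only; host, key, and paths are placeholders.
import os
import paramiko

HOST = "ec2-203-0-113-10.compute-1.amazonaws.com"   # hypothetical instance
KEY = os.path.expanduser("~/.ssh/mykey.pem")
LOCAL_BIN = "./myapp"            # cross-compiled with GOOS=linux GOARCH=amd64
REMOTE_BIN = "/home/ubuntu/myapp"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username="ubuntu", key_filename=KEY)

# Upload the freshly built binary and make it executable.
sftp = client.open_sftp()
sftp.put(LOCAL_BIN, REMOTE_BIN)
sftp.chmod(REMOTE_BIN, 0o755)
sftp.close()

# Stop any running copy and start the new one in the background.
client.exec_command("pkill -f myapp || true")
client.exec_command("nohup %s > myapp.log 2>&1 < /dev/null &" % REMOTE_BIN)
client.close()
```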
Even though Go programs are usually pretty safe (especially if you test), for more reliability you might daemonize the binary (make it a service) so that it is restarted if it crashes for some reason.
For even more autonomy, use a CI utility like Jenkins or Travis. These can be configured to run deployment scripts automatically when you commit code to a certain branch or apply tags.
For more powerful automation, you can take it up another notch and use tools like Packer or Chef. I'd start with Packer unless your needs are really intense; the Packer developer talked about it today, and it looks simple and powerful. Chef serves plenty of enterprises, but it might be overkill for you.
Here's the short of it: the basic idea with Go programs is that you just need to copy the binary onto the production server and run it. It's that simple. How you automate that or do it reliably is up to you, depending on your needs and preferred workflow.
Further reading: http://www.reddit.com/r/golang/comments/1r0q79/pushing_and_building_code_to_production/ and specifically: https://medium.com/p/528af8ee1a58

Related

Create a local custom testing environment through Minikube

I'm currently creating a Minikube cluster for the developers; they will each have their own Minikube cluster on their local machine for testing. Assuming the developers don't know anything about Kubernetes, is creating a bash script to handle all the installation and the setup of the pod the recommended way? Is it possible to do it through Terraform instead? Or is there another, easier way to do this? Thanks!
Depending on what your requirements are, choosing Minikube may or may not be the best way to go.
Just to give you some other options, you might want to take a look at the following tools when it comes to local environments for developers (depending on their needs):
kind
k8s-vagrant-multi-node
Since you do not seem to care about Windows or other users (at least they weren't mentioned), a bash script may be the simplest way to go. However, that's usually where tools like Ansible come into play. They help you automate things in a clear fashion and allow for proper testing. Some tools (like Ansible) even have support for certain Windows features that may be useful.
TL;DR
A Bash script is not the recommended way, as it comes with lots of pain points; however, it may be the fastest approach depending on your skill set.
If you want to do it properly, use tools like Ansible, Chef, Puppet, etc.
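For reference, here is a minimal sketch of what the scripted approach could look like; it assumes minikube and kubectl are already installed on the developer's machine and that a manifests/ directory with the pod definitions sits next to the script (both assumptions, adjust to your setup):

```python
#!/usr/bin/env python
# setup_dev_cluster.py -- minimal sketch of a scripted local-cluster setup.
# Assumes minikube and kubectl are already on PATH and that ./manifests/
# contains the pod/deployment definitions the developers need.
import subprocess
import sys

def sh(cmd):
    """Print and run a command, aborting on the first failure."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

def main():
    try:
        sh(["minikube", "start", "--memory", "4096", "--cpus", "2"])
        sh(["kubectl", "apply", "-f", "manifests/"])
        sh(["kubectl", "get", "pods"])
    except subprocess.CalledProcessError as err:
        sys.exit("setup failed: %s" % err)

if __name__ == "__main__":
    main()
```

Anything much beyond this (installing the tools themselves, handling different operating systems, re-running safely) is exactly where the pain points start and where tools like Ansible earn their keep.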

PhpStorm - debug on local, deploy on remote server

I'm working on a website using PhpStorm. For a long time I developed it locally, but then I got hosting and a remote FTP server.
I created a new project in PhpStorm with the settings for the remote host, and I found that deploying code takes a long time (over a minute) before I can see the result, which is quite uncomfortable when debugging.
Is there any way to work with the code on a local server and, when I think the project is ready to deploy, just send it to the remote server?
I understand that I can just work in two different projects and deploy the "ready" version to the server via FTP, but maybe there is a more comfortable way?
There are several answers to this question, and most of them are opinion-based, but I will try to keep it objective.
Case 1
A big corporation gives every developer a sandbox to test their code from, and requires every developer to keep their code on that sandbox.
Using mounted drives could be extremely slow, especially when PhpStorm is indexing.
Case 2
An easy way to keep an automatic backup of your code is to use the built-in (S)FTP(S) upload/deploy.
Solution
In both cases you could use the automatic deployment feature that saves every change to the server; that way the deploy doesn't take over a minute but is usually already there before you know it.
I cannot recommend using this deployment for production, as it will not pass through your version control, SAT, security setup, etc. For that case I would suggest something like Rocketeer.
EDIT:
As for two projects: you can define two different deployment servers, use the default one for your testing (with auto upload or something), and then select the other one from the deployment menu.

Build RPM to deploy

I would like to know the best method to deploy applications like Django, Flask, etc.: by building RPM files, or by using a tool like Fabric, which more or less does the same thing? I'm trying to figure out the best approach to handle deployment and automation.
After considering the requirements, I believe that in my situation Fabric will work best for basic deployments to multiple servers. While RPMs can do similar things, a new RPM must be built every time the source changes, which will not work for my environment since the source code changes frequently. Any input is welcome, but I feel that, at least for me, this will work best in the current situation.
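For reference, a bare-bones Fabric (1.x-style) fabfile along these lines might look like the following; the hosts, paths, and service name are placeholders for whatever your environment actually uses:

```python
# fabfile.py -- rough sketch of a Fabric 1.x deployment task.
# Hosts, user, APP_DIR, and the service name are all placeholders.
from fabric.api import cd, env, run, sudo

env.hosts = ["web1.example.com", "web2.example.com"]
env.user = "deploy"

APP_DIR = "/srv/myapp"

def deploy():
    """Pull the latest code, install requirements, and restart the app."""
    with cd(APP_DIR):
        run("git pull origin master")
        run("./venv/bin/pip install -r requirements.txt")
    sudo("service myapp restart")
```

Running fab deploy then pushes the latest code to every host listed in env.hosts, which sidesteps the rebuild-an-RPM-per-change problem entirely.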

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on this. When we first considered it, I intended to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that. You can code machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. On the other hand, the Lab Management workflow does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and stuff like that, all by hand.
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect that better integration between RM and MTM/LM will be a future feature. In the interim, you could investigate using Desired State Configuration to handle having a single script that configures environments for you.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for DSC to both Azure and on-prem VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2, it just requires a bit more work.

Introduction to Erlang/OTP production applications deployment

I would like to develop and deploy an Erlang/OTP application into production on a VPS.
I am pretty familiar with developing Erlang code on a local machine and my question is about deployment.
Basically, I would like to know what steps I should take in order to move Erlang code from a local machine to a production server and make it run, i.e. be available for users.
Note: I have read some documentation about Erlang and command line, Erlang code module, Erlang releases, but I am still not sure how to pursue the required task.
However, I guess that deploying Erlang-based software on a server is a bit trickier than running sudo tasksel for a LAMP stack.
I plan to have an Erlang/OTP application which has Mochiweb, CouchDB (couchbeam) and boss_db as dependencies.
So, my newbie questions about deploying all that stuff on a production server are the following:
I plan to use Ubuntu Server 12.04; is there any better choice for a Linux distro to use for Erlang/OTP in production?
How should all the code be organized? Should I put my application into a /home/myapp/ dir and then put all the dependencies into /home/myapp/deps? Or should I put all dependencies into /usr/local/lib/erlang/lib (as returned by code:get_path())? Should I somehow update the dependencies regularly, or should I freeze them?
How do I make the whole application start once the server starts? Should it be some kind of bash script, or something else?
I know that Erlang allows hot code upgrades, but how should I organize that? On Rails I could update the code with git; does anything similar exist in the Erlang world?
There are two types of dependencies: internal and external. If you want to do it the right way(tm), it takes a bit of time to get working:
External dependencies:
Taking the latter first: an external dependency is some other thing that has to run before your application can run, for instance a PostgreSQL database or a Riak cluster. For those, you usually just use the standard Ubuntu mechanisms to make them start up properly. I've had good experience with using monit for these tasks:
http://mmonit.com/monit/
Internal dependencies:
For internal dependencies, you need to arrange your program into applications inside the Erlang VM. These have dependencies on each other, like the external dependencies do. Your main application may need a logger running before it starts, for instance. Then you create a release. A release copies the Erlang binaries and the necessary libraries/beams/applications into a release directory, forming a self-contained Erlang system. It contains a boot script which tells the VM how to start the applications in the right order and keeps them running. So you can tarball up this release, copy it to the server, and then start it. There are some basics covered here:
http://learnyousomeerlang.com/release-is-the-word
but do also read the chapters before it on applications. You can also get rebar to call reltool for you to build a release. This is what I usually do.
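Once reltool/rebar has produced a release tarball, shipping it can be as simple as this rough sketch; the host, directory, and release name are placeholders, and it assumes the bin/myapp control script that the release build generates:

```python
# ship_release.py -- rough sketch; host, directory, and release name are placeholders.
import subprocess

HOST = "deploy@myserver.example.com"
TARBALL = "myapp-1.0.0.tar.gz"
REMOTE_DIR = "/opt/myapp"

# Copy the self-contained release to the server...
subprocess.check_call(["scp", TARBALL, "%s:%s/" % (HOST, REMOTE_DIR)])

# ...then unpack it and start it with the boot script the release provides.
subprocess.check_call([
    "ssh", HOST,
    "cd %s && tar xzf %s && ./bin/myapp start" % (REMOTE_DIR, TARBALL),
])
```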
Hot upgrades:
Handling hot upgrades in production can be done in a couple of ways. You can copy the beam to the machine, attach a shell, and call l(Module) to load it into the running system. This works for smaller fixes. For large, systematic upgrades you can do a release upgrade, which will upgrade the running system on the fly without stopping service. But if your system is mostly shared-nothing, it is usually not worth it. Instead, you can have multiple machines and upgrade them in sequence.
For instance, you can upgrade a machine and then use a system like HAProxy to send 2% of all requests to the new system. Then systematically turn up the request load weight.
While #I GIVE CRAP ANSWERS gave a pretty thorough summary, I feel compelled to throw in the use of sync, which helps to automate the hot-recompiling and reloading of modules.
The simple way is to specify sync as a rebar dependency; then, when you're getting ready to deploy an upgrade, you can run sync:go() on the Erlang node. This starts the sync engine, which watches for filesystem changes. Then you can use git to push to your server. Sync will notice the files change, recompile them, and load the new beams automatically.
Then you can run sync:stop() right away to tell the system to stop watching for filesystem changes (it's generally not recommended to keep sync running on a live server, to prevent accidental recompiling if, for whatever reason, a source file changes unintentionally).