Docker as a replacement for a standalone multi-host webserver

I'm thinking about a build system for my web server based on Docker, probably on CoreOS.
Right now I have two Apache webservers, a primary and a fallback, which are kept in sync.
I want to move my current webserver into a container and build a new one with a newer PHP and nginx for new projects.
I want to store all application data and code outside the container, in a folder mounted as a volume, but I'm not sure whether that's a good approach. This is mostly because I need to keep the current Jenkins-based deployment, and all code is also in a Git repository.
Is there a best practice for dealing with this?

You're getting into the "experimental" side of DevOps. There's nothing wrong with that, but it is a bit like the Wild West. If you use Docker, Vagrant, etc., make sure you're sticking to best practices, and just pick a sensible approach.
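For what it's worth, the volume-mount approach described in the question might look roughly like this as a docker-compose.yml; the images, paths, and service names are assumptions, not something from the question:

    # docker-compose.yml sketch (hypothetical images and paths): code and data stay on
    # the host and are bind-mounted into the containers, so the existing Jenkins job can
    # keep deploying into the same folders it uses today.
    version: "3.8"
    services:
      web:
        image: nginx:1.25
        ports:
          - "80:80"
        volumes:
          - /srv/www/myproject:/var/www/myproject:ro        # application code from Git/Jenkins
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # site config pointing at php-fpm
      php:
        image: php:8.2-fpm
        volumes:
          - /srv/www/myproject:/var/www/myproject
          - /srv/data/myproject:/var/lib/myproject          # application data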

Related

Is there a best practice for deploying a multi-service stack to only one server instance?

I have been working on a MEAN stack project and intend to deploy it. To save cost, I'm going to deploy it all on one server with a docker-compose.yml (push my project to the server and run docker-compose up -d), but I'm not sure that this is a best practice in my case. Could anyone please give me suggestions or other best practices that would work for my case?
I don't see any issues with this. On a single server, Compose alone should be fine as long as your code is production-ready and as long as you have mechanisms in place for load balancing in case you need to scale the services.
I think you only need something like Docker Swarm or Kubernetes if you are deploying to multiple servers.
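As a rough sketch of a single-server MEAN setup in Compose (image names, build contexts, and the volume name below are assumptions):

    # docker-compose.yml sketch for a single-server MEAN deployment.
    version: "3.8"
    services:
      mongo:
        image: mongo:6
        restart: unless-stopped
        volumes:
          - mongo-data:/data/db                 # keep data outside the container lifecycle
      api:
        build: ./server                         # Express/Node API
        restart: unless-stopped
        environment:
          - MONGO_URL=mongodb://mongo:27017/app
        depends_on:
          - mongo
      web:
        build: ./client                         # Angular front end behind nginx
        restart: unless-stopped
        ports:
          - "80:80"
        depends_on:
          - api
    volumes:
      mongo-data:

The restart policies give you basic "keep it running" behaviour without any orchestrator, which is usually enough on a single box.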

Material on building a REST API from within a Docker container

I'm looking to build an API for an application that is going to run in its own Docker container. It needs to work with some other applications via its REST APIs. I'm new to development and don't understand the process very well. Can you share the broad steps necessary to build and release the APIs so that my application runs safely within Docker, while whatever communication needs to happen externally still works out well?
For context: I'm going to be working on a Google Compute Engine VM instance, and the application I'm building is a Hyperledger Fabric program written in Go.
Links to reference material and code would also be appreciated.
Implementing a REST API is very easy in Go. You can use the built-in net/http package. Here's a tutorial that will help you understand its usage: https://tutorialedge.net/golang/creating-restful-api-with-golang/
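To make that concrete, here is a minimal net/http sketch; the /items route, the item type, and port 8080 are made up for illustration:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // item is a hypothetical resource served by the API.
    type item struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
    }

    func main() {
        // Register a handler for a simple read-only endpoint.
        http.HandleFunc("/items", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodGet {
                http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode([]item{{ID: 1, Name: "example"}})
        })

        // Listen on all interfaces so the port can be published from a container.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }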
Note: if you are planning on building a production server, the default HTTP client is not recommended; it has no request timeout, so a heavy volume of calls to slow or unresponsive endpoints can eventually knock the server down. In that case, use a custom HTTP client as described here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
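The fix amounts to constructing your own client with an explicit timeout instead of relying on http.DefaultClient; a small sketch (the URL is a placeholder):

    package main

    import (
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Unlike http.DefaultClient, this client gives up after 10 seconds.
        client := &http.Client{Timeout: 10 * time.Second}

        resp, err := client.Get("https://example.com/") // placeholder URL
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("fetched %d bytes", len(body))
    }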
For learning Docker I would recommend the Docker docs; they're very good and cover a lot of ground. Docker Swarm and orchestration are useful things to learn, but most people aren't using Docker Swarm anymore and use things like Kubernetes instead. Same principles, different tech. I would definitely go through this website: https://docs.docker.com/ and try things out on your own computer. Then just practice by looking at other people's Dockerfiles and building your own. A good understanding of Linux will definitely help with installing packages and so on.
I haven't used Go myself, but I suspect it shouldn't be too hard to deploy into a Docker container.
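For what it's worth, a common pattern for Go services is a multi-stage build that compiles a static binary and copies only that binary into a small runtime image; the module layout, binary name, and port below are assumptions:

    # Dockerfile sketch for a Go service (base images are real; paths are hypothetical).
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/app

    # Second stage: a minimal runtime image containing only the compiled binary.
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /bin/app /app
    EXPOSE 8080
    ENTRYPOINT ["/app"]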
The last step of deployment to production will be similar whatever you're using, Docker or no Docker. The VM will need a webserver like Apache or nginx to expose the ports you wish to make public; then you run the Docker container or the Go server independently, and you'll have your system!
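As a sketch of that front-end piece, an nginx server block that proxies public traffic to a container or binary listening on a local port (the server name and port are placeholders):

    # nginx sketch on the VM: only the webserver is public; the application listens locally.
    server {
        listen 80;
        server_name api.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }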
Hope this helps!

Docker deployment options

I'm wondering which options there are for Docker container deployment in production, given that I have separate app and DB server containers, plus data-only containers, one holding deployables and another holding database files.
I have just one server for now, which I would like to "Docker-enable", but what is the best way to deploy there (remotely would be the best option)?
I just want to hit a button and have some tool take care of stopping, starting, and exchanging all the needed Docker containers.
There is a myriad of tools (Fleet, Flocker, Docker Compose, etc.) and I'm overwhelmed by the choices.
The only thing I'm clear on is that I don't want to build images with code from the Git repo; I would like to have Docker images as wrappers for my releases. Have I grasped the Docker idea from the wrong end?
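For reference, if plain Docker Compose turns out to be enough, the whole "button" can be a two-line script, assuming the release images are built elsewhere and pushed to a registry rather than built from the Git repo:

    # redeploy.sh sketch: pull the new release images, then recreate only the
    # containers whose image (or configuration) changed.
    docker-compose pull
    docker-compose up -d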
My team recently built a Docker continuous deployment system and I thought I'd share it here since you seem to have the same questions we had. It pretty much does what you asked:
"hit a button and some tool will take care of stopping, starting, exchanging all needed docker containers"
We had the challenge that our Docker deployment scripts were getting too complex. Our containers depend on each other in various ways to make the full system so when we deployed, we'd often have dependency issues crop up.
We built a system called "Skopos" to resolve these issues. Skopos detects the current state of your running system and any changes being made, then automatically plans and deploys the update into production. It creates deployment plans dynamically for each deployment, based on a comparison of current state and desired state.
It can help you continuously deploy your application or service to production using tags in your repository to automatically roll out the right version to the right platform while removing the need for manual procedures or scripts.
It's free, check it out: http://datagridsys.com/getstarted/
You can import your system in 3 ways:
1. If you have a Docker Compose file, we can suck that in and start working with it.
2. If your app is running, we can scan it and then start working with it.
3. If you have neither, you can create a quick descriptor file in YAML and then we can understand your current state.
I think most people start their container journey using tools from the Docker Toolbox. Those tools provide a good start and work as promised, but you'll end up wanting more. With these tools you are missing, for example, integrated overlay networking, DNS, load balancing, aggregated logging, VPN access, and a private image repository, which are crucial for most container workloads.
To solve these problems we started to develop Kontena - Docker Container Orchestration Platform. While Kontena works great for all types of businesses and may be used to run containerized workloads at any scale, it's best suited for start-ups and small to medium-sized businesses that require a worry-free and simple-to-use platform for running containerized workloads.
Kontena is an open source project and you can view it on GitHub.

Should the Docker image be bundled with the code?

We are building a SaaS application. I don't have (for now - for this app) high demands on availability. It's mostly going to be used in a specific time zone and for business purposes only, so scheduled restarting at 3 in the morning shouldn't be a problem at all.
It is an ASP.NET application running on Mono with the FastCGI server. Each customer will have, for security reasons, their own application instance deployed. This is going to be done using Docker containers, with an nginx server in front to distribute the requests based on URL. As I see it, the possible ways to deploy it are:
Create a docker image with the fcgi server only and run the code from a mount point
Create a docker image with the fcgi server and the code
Pros for 1 would seem to be:
It's easier to update the code, since the docker containers can keep running
Configuration can be bundled with the code
I could easily (if I ever wanted to) add minor changes for specific clients
Pros for 2 would seem to be:
everything is in an image, no need to mess around with additional files, just pull it and run it
Cons for 1:
a lot of folders for a lot of customers, in addition to the running containers
Cons for 2:
configuration can't be in the image (or can it? Should I create specific images per customer with their configuration?), so there are still additional files for each customer
updating a container is harder since I need to restart it, but that's not a big deal, as stated at the beginning
For now, during the first year, the number of customers will be low, and while demand is low any solution is good enough. I'm looking rather at what is going to work with >100 customers.
Also, for the future I want to set up CI for this project, so we wouldn't need to update all customers' instances manually. Docker images can have automated builds, but I'm not sure that will be enough.
My concern is basically: which solution is less messy and perhaps easier to automate?
I couldn't find any best practices with docker which cover a similar scenario.
It's likely that your application's dependencies are going to be tied to the code, so you'll still have to rebuild the images and restart the containers from time to time (whenever you add a new dependency).
This means you would have two upgrade workflows:
One where you update just the code (when there are no dependency changes)
One where you update the images too, and restart the containers (when there are dependency changes)
This is most likely undesirable, because it's difficult to automate.
So, I would recommend bundling the code on the image.
You should definitely make sure that your application's configuration can be stored somewhere else, though (e.g. on a volume, or accessed through environment variables).
Ultimately, Docker is a platform to package, deploy and run applications, so packaging the application (i.e. bundling the code on the image) seems to be the better way to use it.
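As a sketch of what option 2 could look like at run time: one image per release with the code baked in, while each customer's container differs only in the configuration supplied to it (the registry, image tag, paths, and customer names are hypothetical):

    # docker-compose.yml sketch: the same release image for every customer; only the
    # configuration mounted or injected at run time differs.
    version: "3.8"
    services:
      app-customer-a:
        image: registry.example.com/saasapp:1.4.2
        environment:
          - CUSTOMER=customer-a
        volumes:
          - /srv/customers/customer-a/config:/app/config:ro   # config stays out of the image
      app-customer-b:
        image: registry.example.com/saasapp:1.4.2
        environment:
          - CUSTOMER=customer-b
        volumes:
          - /srv/customers/customer-b/config:/app/config:ro

Updating all customers then becomes "build and push one new tag, bump it here, run docker-compose up -d", which is straightforward to drive from CI.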

Deploying Go App

I have a REST API endpoint written in Go and I am wondering what the best way to deploy it is. I know that using Google App Engine would probably make my life easier in terms of deployment. But suppose I want to deploy this on AWS. What options/processes/procedures do I have? What are some of the best practices out there? Do I need to write my own task to build, SCP, and run it?
One option that I am interested in trying is using Fabric to create deployment tasks.
Just got back from Mountain West DevOps today where we talked about this, a lot. (Not for Go specifically, but in general.)
To be concise, all I can say is: it depends.
For a simple application that doesn't receive high use, you might just manually spin up an instance, plop the binary onto it, run it, and then you're done. (You can cross-compile your Go binary if you're not developing on the production platform.)
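For example, cross-compiling on a dev machine and pushing the result to an instance might look like this; the host name, paths, and binary name are placeholders:

    # Build a Linux binary from any development platform:
    GOOS=linux GOARCH=amd64 go build -o myapi .

    # Copy it to the instance and start it (nohup keeps it alive after the SSH session ends):
    scp myapi ec2-user@your-ec2-host:/opt/myapi/
    ssh ec2-user@your-ec2-host 'nohup /opt/myapi/myapi > myapi.log 2>&1 &'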
For slightly more automation, you might write a Python script that uploads the latest binary to an EC2 instance and runs it for you (using boto/ssh).
Even though Go programs are usually pretty safe (especially if you test), for more reliability, you might daemonize the binary (make it a service) so that it will run again if it crashes for some reason.
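One common way to do that on a modern Linux host is a small systemd unit; the unit name, binary path, and user below are assumptions:

    # /etc/systemd/system/myapi.service (sketch): restarts the binary if it crashes
    # and starts it on boot.
    [Unit]
    Description=My Go API
    After=network.target

    [Service]
    ExecStart=/opt/myapi/myapi
    Restart=on-failure
    User=ec2-user

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now myapi and systemd takes care of keeping it running.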
For even more autonomy, use a CI utility like Jenkins or Travis. These can be configured to run deployment scripts automatically when you commit code to a certain branch or apply tags.
For more powerful automation, you can take it up another notch and use tools like Packer or Chef. I'd start with Packer unless your needs are really intense. The Packer developer talked about it today and it looks simple and powerful. Chef serves several enterprises, but might be overkill for you.
Here's the short of it: the basic idea with Go programs is that you just need to copy the binary onto the production server and run it. It's that simple. How you automate that or do it reliably is up to you, depending on your needs and preferred workflow.
Further reading: http://www.reddit.com/r/golang/comments/1r0q79/pushing_and_building_code_to_production/ and specifically: https://medium.com/p/528af8ee1a58