For multiple projects using Docker, should I use multiple VMs or a single host with multiple containers?

Suppose I had three apps that are currently hosted at DigitalOcean or AWS. Each of them uses at least one VM for the database and one or more VMs for the web app.
Now let's say that I wanted to get one dedicated server at OVH with 64GB of RAM and use Docker to deploy these apps. Each project would have its own docker-compose file. I'm thinking of two ways of doing this:
1. Install VMware ESXi on the server, create one VM for each project, and deploy Docker containers for the web app and database inside it.
2. Just install Ubuntu as the host OS and manage containers for all apps, using separate network entry points (IPs) for each project.
Would I be wasting too much of the server resources going for the first choice?
Would I be overcomplicating my infrastructure by going for the second?
I understand both are valid choices, but what would be the better/suggested way?
Thanks for the help!

With the ESXi option you will need a full VM for each app, with all the memory-sharing and I/O overhead that implies. You can use memory ballooning with VirtualBox (ESXi should have such a feature too, maybe under a different name). And within every VM you'll have the Docker stack included.
If you use a native OS you'll need the docker stack only once.
What OS do you use within your Docker images? A 200MB Ubuntu? Or a 5MB Alpine? If you choose Alpine as your host OS and/or your image OSes, you'll be able to keep your "container overhead" much smaller.
It depends what system services your apps need (cron, upstart, ...) and how many resources each app needs. Is it a JVM-based app that needs its own JVM in every container? Etc.
As a first approach I would plan an Alpine host with Alpine Docker images. If there are apps that really need Ubuntu images, you can just use Ubuntu in the image for that specific app.
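To make the single-host option concrete, here is a minimal docker-compose sketch for one project; the project name, image names, and host IP are made up, and each project would get its own copy of such a file bound to its own entry IP:

    # docker-compose.yml for "project-a" (all names and addresses are hypothetical)
    services:
      web:
        image: project-a-web:latest          # your app image, Alpine-based where possible
        ports:
          - "203.0.113.10:80:8080"           # dedicated host IP:port -> container port
        depends_on:
          - db
      db:
        image: postgres:16-alpine            # Alpine variant keeps the image small
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata: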
Also have a look at Docker vs. VM.

Should XAMPP be installed on an actual physical Server?

Is XAMPP just meant for testing and setting up virtual servers? (Because that's what the wiki says.)
Can it be installed on an actual physical server? Do developers actually do that?
I'm a little confused, because if that were true, why would anyone install a virtual server on a physical server? It's like trying to run Excel on VirtualBox.
XAMPP simulates a typical web-development stack on a local machine. If you have access to an actual physical server, you would typically install things like the web server (such as Apache) and MySQL on the server itself. The developers of XAMPP consider it more of a development tool, because certain security features are deliberately disabled to make development easier.
Virtualisation is used on servers because the physical machines are very powerful and would otherwise idle a large amount of the time. Putting those resources to use by creating two or more virtual servers on top of the host reduces cost and increases operational throughput.
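As a rough sketch of the "install it on the server itself" route, assuming an Ubuntu/Debian server (package names differ on other distros):

    # Install the LAMP pieces natively instead of using XAMPP.
    sudo apt update
    sudo apt install -y apache2 mysql-server php libapache2-mod-php php-mysql
    sudo mysql_secure_installation          # lock down the default MySQL install
    sudo systemctl enable --now apache2 mysql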
Virtual servers and Docker can also be used to test different environments at the same time, or to test beta software for future releases. On machines that have 6 or 8 cores, each executing billions of instructions per second, there are plenty of resources to run more than one machine, virtual or as a Docker container, so that you can use, for example, different databases without them interfering with each other.
Besides, physical hardware costs money to buy and to maintain.
Lastly, virtual machines and Docker containers are just files that you can simply copy to make a backup; backing up a real machine is a little more work.
But don't use XAMPP on a real machine that is exposed to the world. There are far too many security risks in the standard configuration.

Material on building a REST API from within a Docker container

I'm looking to build an API on an application that is going to run in its own Docker container. It needs to work with some applications via its REST APIs. I'm new to development and don't understand the process very well. Can you share the broad steps necessary to build and release the APIs, so that my application runs safely within Docker but whatever communication needs to happen externally works out well?
For context: I'm going to be working on a Google Compute Engine VM instance, and the application I'm building is a Hyperledger Fabric program written in Go.
Links to reference material and code would also be appreciated.
REST API implementation is very easy in Go: you can use the built-in net/http package. Here's a tutorial which will help you understand its usage: https://tutorialedge.net/golang/creating-restful-api-with-golang/
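As a minimal sketch of that approach, here is a standard-library-only JSON endpoint; the route, port, and Status type are made up for illustration:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Status is a made-up response type for illustration.
    type Status struct {
        Service string `json:"service"`
        Healthy bool   `json:"healthy"`
    }

    func statusHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(Status{Service: "fabric-app", Healthy: true})
    }

    func main() {
        http.HandleFunc("/api/v1/status", statusHandler)
        // Listen on all interfaces so the port can be published from the container.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }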
Note: if you are planning on developing a production server, the default HTTP client is not recommended for outgoing calls; it has no timeout, so hung connections under a heavy call frequency can knock the server down. In that case, you have to use a custom HTTP client as described here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
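The gist of that advice is to give the client an explicit timeout (the value here is arbitrary); a fragment, assuming the net/http and time imports:

    // Use a client with an explicit timeout instead of http.DefaultClient,
    // which never times out on its own.
    var client = &http.Client{Timeout: 10 * time.Second}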
For learning Docker I would recommend the Docker docs; they're very good and cover a lot of ground. Docker Swarm and orchestration are useful things to learn, but most people aren't using Docker Swarm anymore and use things like Kubernetes instead. Same principles, but different tech. I would definitely go through this website: https://docs.docker.com/ and implement the examples on your own computer. Then just practice by looking at other people's Dockerfiles and building your own. A good understanding of Linux will definitely help with installing packages and so on.
I haven't used Go myself, but I suspect it shouldn't be too hard to deploy it into a Docker container.
The last step of production deployment will be similar whether or not you're using Docker: the VM will need a web server like Apache or nginx in front, to expose to the public only the ports you wish to use; behind it you run the Docker container or the Go server independently, and then you'll have your system!
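For example, a minimal nginx reverse-proxy sketch, assuming the hypothetical port 8080 from above:

    # /etc/nginx/conf.d/fabric-app.conf (hypothetical file and paths)
    server {
        listen 80;
        location /api/ {
            proxy_pass http://127.0.0.1:8080;   # the Go server or published container port
        }
    }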
Hope this helps!

Learning to use Kubernetes on a single computer

I'm in need of learning how to use Kubernetes. I've read the first sentences of a couple of introductory tutorials, and have never found one which explains to me, step by step, how to build a simulated real-world example on a single computer.
Is Kubernetes by nature so distributed that even the 101-level tutorials can only be performed on clusters?
Or can I learn the important stuff (and execute meaningful examples) using just my laptop, without needing a stack of Raspberry Pis, AWS, or GCP?
The easiest might be minikube.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
For a resource that explains how to use this, try this getting started guide. It runs through an entire example application using a local development environment.
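The whole local loop fits in a few commands; a sketch, assuming minikube and kubectl are installed (the echoserver image is the stock example from the classic hello-minikube guide):

    minikube start                                   # boot the local single-node cluster
    kubectl create deployment hello --image=k8s.gcr.io/echoserver:1.4
    kubectl expose deployment hello --type=NodePort --port=8080
    minikube service hello                           # open the service in your browser
    minikube delete                                  # tear the cluster down again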
If you are okay with using Google Cloud Platform (I think one gets free credits initially), there is hello-node.
If you want to run the latest and greatest (not necessarily stable) and you're using Linux, it is also possible to spin up a local cluster from a cloned copy of the Kubernetes sources, using hack/local-up-cluster.sh.

Docker deployment options

I'm wondering which options there are for Docker container deployment in production, given that I have separate app and DB server containers, plus data-only containers, one holding deployables and the other holding database files.
I just have one server for now, which I would like to "Docker-enable", but what is the best way to deploy there (remotely would be the best option)?
I just want to hit a button and have some tool take care of stopping, starting, and exchanging all the needed Docker containers.
There is a myriad of tools (Fleet, Flocker, Docker Compose, etc.), and I'm overwhelmed by the choices.
The only thing I'm clear about is that I don't want to build images from code in a git repo; I would like to have Docker images as wrappers for my releases. Have I grasped the Docker idea from the wrong end?
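For illustration, something like this is what I have in mind, where the image just wraps a prebuilt release artifact (all names and paths are made up):

    # Dockerfile: wrap an already-built release binary rather than building from source.
    FROM alpine:3.20                                # hypothetical minimal base image
    COPY release/myapp-1.2.3 /usr/local/bin/myapp
    EXPOSE 8080
    ENTRYPOINT ["/usr/local/bin/myapp"]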
My team recently built a Docker continuous deployment system and I thought I'd share it here since you seem to have the same questions we had. It pretty much does what you asked:
"hit a button and some tool will take care of stopping, starting, exchanging all needed docker containers"
We had the challenge that our Docker deployment scripts were getting too complex. Our containers depend on each other in various ways to make the full system so when we deployed, we'd often have dependency issues crop up.
We built a system called "Skopos" to resolve these issues. Skopos inspects the current state of your running system, detects any changes being made, and then automatically plans out and deploys the update into production. It creates deployment plans dynamically for each deployment, based on a comparison of the current state and the desired state.
It can help you continuously deploy your application or service to production using tags in your repository to automatically roll out the right version to the right platform while removing the need for manual procedures or scripts.
It's free, check it out: http://datagridsys.com/getstarted/
You can import your system in 3 ways:
1. If you have a Docker Compose file, we can suck that in and start working with it.
2. If your app is running, we can scan it and then start working with it.
3. If you have neither, you can create a quick descriptor file in YAML and then we can understand your current state.
I think most people start their container journey using tools from Docker Toolbox. Those tools provide a good start and work as promised, but you'll end up wanting more. With these tools you are missing, for example, integrated overlay networking, DNS, load balancing, aggregated logging, VPN access, and a private image repository, which are crucial for most container workloads.
To solve these problems we started to develop Kontena, a Docker container orchestration platform. While Kontena works great for all types of businesses and may be used to run containerized workloads at any scale, it's best suited for start-ups and small to medium-sized businesses that require a worry-free and simple-to-use platform for running containerized workloads.
Kontena is an open source project and you can view it on GitHub.

Simulating a computer cluster on a simple desktop to test parallel algorithms

I want to try and learn MPI as well as parallel programming.
Can a sandbox be created on my desktop PC?
How can this be done?
Linux and Windows solutions are welcome.
If you want to learn MPI, you can definitely do it on a single PC (most modern MPI implementations use shared-memory communication locally, so you don't need any additional configuration). So install a popular MPI implementation (MPICH / Open MPI) on a Linux box and get going! If your programs are going to be CPU-bound, I'd suggest only running job sizes that equal the number of processor cores on your machine.
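The single-PC workflow is just a couple of commands; a sketch, assuming MPICH or Open MPI is installed and hello.c is whatever MPI program you're testing:

    mpicc hello.c -o hello        # compile with the MPI wrapper compiler
    mpirun -np 4 ./hello          # launch 4 ranks on the local machine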
Edit: since you tagged this as a virtualization question, I wanted to add that you could also run MPI on multiple VMs (on VMware Player or VirtualBox, for example) and run your tests there. This would need inter-VM networking to be configured (which differs based on your virtualization software).
Whatever you choose (single PC vs VMs) it won't change the way you write your MPI programs. Since this is for learning MPI, I'd suggest going with the first approach (run multiple MPI programs on a single PC).
You don't need to have VMs running to launch multiple copies of your application that communicate using MPI.
MPI can give you a virtual cluster on a single node by launching multiple copies of your application.
One benefit, though, of having it run in a VM is that (as you already mentioned) it provides sandboxing: any issues your application creates will remain limited to the VM running that copy of the app.