What is the difference between AUFS and devicemapper in docker?

When Docker was introduced, a lot of hype was made about its use of AUFS, which allows two different containers to share the same underlying layers and thus reduces some of the overhead. Docker now seems to prefer devicemapper (e.g. the default in Ubuntu 14.04). Does devicemapper provide the same functionality, or did people figure out that the advantages of AUFS are not that big?

This article details the differences between the storage backends available to Docker. Devicemapper support was implemented because AUFS is not included in the mainline kernel and was thus only available on systems (such as Ubuntu) that provided it. Because of this, AUFS is generally not recommended in production environments.

No, devicemapper does not provide the same functionality -- it's much, much slower; since it operates at the block-device layer, it needs to deal with mounting, unmounting, fsck'ing, etc.
The reason it's widely used is that many distributions' kernels do not support AUFS. However, if you can use AUFS, you probably should.
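If you want to see which backend your daemon is actually using, or force one explicitly, here is a quick sketch (assuming your kernel supports the driver you pick; the daemon binary and flag spelling vary a bit across Docker versions):

docker info | grep -i 'storage driver'    # shows e.g. "Storage Driver: devicemapper"
dockerd --storage-driver=aufs             # newer releases; older ones used "docker -d --storage-driver=aufs"
# or persistently, in /etc/docker/daemon.json:
#   { "storage-driver": "aufs" }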

Related

Running two podman/docker containers of PostgreSQL on a single host

I have two applications, each of which use several databases. Before the days of Docker, I would have just put all the databases on one host (due to resource consumption associated with running multiple physical hosts/VMs).
Logically, it seems to me that separating these into groups (1 group of DBs per application) is the right thing to do and with containers the overhead is low and this seems possible. However, I have not seen this use case. I've seen multiple instances of containerized Postgres running so as to maintain multiple versions (hence different images).
Is there a good technical reason why people do not do this (two or more containers of PostgreSQL instances using the same image for purposes of isolating groups of DBs)?
When I tried to do this, I ran into errors having to do with the second instance trying to configure the postgres user. I had to pass in an option to ignore migration errors. I'm wondering if there is a good reason not to do this.
Well, I'm not used to working with PostgreSQL, but I do work with MySQL, SQLite and MS SQL - and Docker, of course.
When I got into Docker I read a lot about microservices, how to develop them and, of course, the DevOps ideas behind Docker and microservices.
In that world I would absolutely prefer to have two containers based on the same image, with a multi-stage build and/or different env files, to run your infrastructure. Docker not only allows this, it encourages it.
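For what it's worth, a minimal sketch of that with the official postgres image (container names, ports, volume names and passwords here are all made up): two independent instances from the same image, each with its own data volume and published port.

docker run -d --name app1-db -e POSTGRES_PASSWORD=secret1 \
  -v app1-pgdata:/var/lib/postgresql/data -p 5433:5432 postgres:15
docker run -d --name app2-db -e POSTGRES_PASSWORD=secret2 \
  -v app2-pgdata:/var/lib/postgresql/data -p 5434:5432 postgres:15

Each instance runs its initialization (including the postgres user setup) against its own empty volume on first start, so the two containers do not step on each other.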

NixOS within NixOS?

I'm starting to play around with NixOS deployments. To that end, I have a repo with some packages defined, and a configuration.nix for the server.
It seems like I should then be able to test this configuration locally (I'm also running NixOS). I imagine it's a bad idea to change my global configuration.nix to point to the deployment server's configuration.nix (who knows what that will break); but is there a safe and convenient way to "try out" the server locally - i.e. build it and either boot into it or, better, start it as a separate process?
I can see docker being one way, of course; maybe there's nothing else. But I have this vague sense Nix could be capable of doing it alone.
There is a fairly standard way of doing this that is built into the default system: nixos-rebuild build-vm. This takes your current configuration file (by default /etc/nixos/configuration.nix), builds it, and creates a script that lets you boot the configuration in a virtual machine.
Once the build has finished, it will leave a result symlink in the current directory. You can then boot by running ./result/bin/run-$HOSTNAME-vm, which starts your virtual machine for you to play around with.
TL;DR:
nixos-rebuild build-vm
./result/bin/run-$HOSTNAME-vm
nixos-rebuild build-vm is the easiest way to do this; however, you could also import the configuration into a NixOS container (see Chapter 47, Container Management, in the NixOS manual and the nixos-container command).
This would be done with something like:
containers.mydeploy = {
  privateNetwork = true;
  config = import ../mydeploy-configuration.nix;
};
Note that you would not want to specify the network configuration in mydeploy-configuration.nix if it's static as that could cause conflicts with the network subnet created for the container.
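With that snippet in your configuration.nix, a rough way to build and get into the container is something like this (commands from the nixos-container tool; details may differ between NixOS releases):

sudo nixos-rebuild switch                  # realizes the declarative container
sudo nixos-container status mydeploy
sudo nixos-container root-login mydeploy   # root shell inside the container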
As you may already know, system configurations can coexist without any problems in the Nix store. The problem here is running more than one system at once. For this, you need isolation or virtualization tools like Docker, VirtualBox, etc.
NixOS Containers
NixOS provides an efficient implementation of the container concept, backed by systemd-nspawn instead of an image-based container runtime.
These can be specified declaratively in configuration.nix or imperatively with the nixos-container command if you need more flexibility.
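The imperative route looks roughly like this; the container name is arbitrary and the exact flags vary between releases, so treat it as an outline:

sudo nixos-container create mytest       # creates a container with a minimal config
sudo nixos-container start mytest
sudo nixos-container root-login mytest   # poke around inside
sudo nixos-container destroy mytest      # throw it away when done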
Docker
Docker was not designed to run an entire operating system inside a container, so it may not be the best fit for testing NixOS-based deployments, which expect and provide systemd and some services inside their units of deployment. While you won't get a good NixOS experience with Docker, Nix and Docker are a good fit.
UPDATE: Both 'raw' Nix packages and NixOS run in Docker. For example, Arion supports images from plain Nix, NixOS modules and 'normal' Docker images.
NixOps
To deploy NixOS inside NixOS it is best to use a technology that is designed to run a full Linux system inside another.
It helps to have a program that manages the integration for you. In the Nix ecosystem, NixOps is the first candidate for this. You can use NixOps with its multiple backends, such as QEMU/KVM, VirtualBox, the (currently experimental) NixOS container backend, or you can use the none backend to deploy to machines that you have created using another tool.
Here's a complete example of using NixOps with QEMU/KVM.
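As a very rough outline of the NixOps workflow (network.nix is a hypothetical deployment expression that would, among other things, select the QEMU/KVM/libvirtd backend; the deployment and machine names are made up):

nixops create ./network.nix -d nixos-in-nixos   # register the deployment
nixops deploy -d nixos-in-nixos                 # build and start the machine(s)
nixops ssh -d nixos-in-nixos machine            # log into a deployed machine
nixops destroy -d nixos-in-nixos                # tear everything down again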
Tests
If your goal is to run automated integration tests, you can make use of the NixOS VM testing framework. This uses Linux KVM virtualization (exposing /dev/kvm in the sandbox) to run integration tests on networks of virtual machines, and it runs them as a derivation. It is quite efficient because it does not have to create virtual machine images; instead it mounts the Nix store in the VM. These tests are "built" like any other derivation, making them easy to run.
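For example, the tests that ship with nixpkgs can be built like any other attribute; the attribute name below is just an example and the exact path depends on your nixpkgs version:

nix-build -A nixosTests.postgresql   # run from a nixpkgs checkout; needs /dev/kvm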
Nix store optimization
A unique feature of Nix is that you can often reuse the host Nix store, so being able to mount a host filesystem in the container/VM is a nice feature to have in your solution. If you are creating your own solution, depending on your needs, you may want to postpone this optimization, because it becomes a bit more involved if you want the container/VM to be able to modify the store. NixOS tests solve this with an overlay file system in the VM. Another approach may be to bind mount the Nix store and forward the Nix daemon socket.
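As a read-only starting point, bind mounting the host store into a Docker container looks roughly like this (the image name is a placeholder; forwarding the daemon socket only helps if the software inside actually talks to the Nix daemon):

docker run --rm -it \
  -v /nix/store:/nix/store:ro \
  -v /nix/var/nix/daemon-socket:/nix/var/nix/daemon-socket \
  some-image sh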

Postgres docker container vs. hosted solution

I'm used to building web apps the 'traditional' way but am trying to wrap my head around using docker. If I run postgres in a container along with my python web app, is this the same as spinning up a digital ocean server and installing postgres from scratch? How do I handle backups, fault tolerance, etc with a postgres database that is in docker?
As an alternative, I normally use hosted postgres on Heroku or AWS. Doesn't that solve a lot of the issues I would run into when hosting postgres myself in docker? Do developers really run postgres in docker or do they typically prefer to use an external hosted service?
It's wise, for the moment, to keep only stateless services or one-off jobs in Docker, and not to put any stateful service, like a database, in it.
This recent article from Mesosphere has more details about why this isn't a good idea yet.
One issue would be that orchestration technologies aren't yet up to snuff for the high requirements of stateful services. To quote:
The first challenge is resource isolation. Many container orchestration solutions in the market provide a best effort approach to resource allocation, including memory, CPU and Storage. While this may be ok for stateless apps, it may be catastrophic for stateful services, where loss of performance may result in loss of customer transactions or data.
Another is that stateful databases have been built with different assumptions than those employed by containers, and are heavily optimized for them. Again, quoting:
Most of today’s stateful database technologies were originally designed for a non-containerized world. The operational instructions are very specific to the technology and can sometimes be version specific. Trying to map generic primitives of a container orchestration platform to stateful services is usually a time consuming and error prone operation.
You could totally run your postgres instance inside Docker. But it will require some work to handle backups, fault tolerance and such.
At my company we've made the choice to not put databases inside Docker, for now at least.
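If you do run it in Docker, the backup part at least is straightforward to script with the normal PostgreSQL tools; a sketch, assuming a container named mydb based on the official image and a database called mydatabase (both names are placeholders):

docker exec mydb pg_dump -U postgres mydatabase > mydatabase.sql    # backup, e.g. from cron
docker exec -i mydb psql -U postgres mydatabase < mydatabase.sql    # restore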

About docker swarm mode with filters, strategy, affinity, etc

I've started using docker swarm mode, and I couldn't find reliable information about a lot of the things covered in traditional swarm. Does anyone know about the following?
What kinds of filters are available? Traditional swarm used to have constraint, health, and containerslots, but I'm not sure how to set, change or use those filters when creating services. I got the constraint label working by passing "--constraint node.labels.FOO==BAR" to docker service create, but I'm not sure about the other filters.
How do you set affinity, dependency, or port? Passing "-e" doesn't seem to be working.
Any way to set a strategy?
Not specific to swarm, but is there any way to check how much CPU or memory is reserved by containers? I couldn't find relevant information in docker info.
This question is also not specific to swarm. Is there any way to limit disk and network bandwidth?
I'm referring to this => https://docs.docker.com/swarm/scheduler/filter/ but I can't find an equivalent for swarm mode.
They seriously should be working on improving the swarm mode documentation...
Questions 1, 2 and 3 can be answered by the following link, I believe:
https://docs.docker.com/engine/swarm/manage-nodes/
For the 4th question:
You can do docker inspect on the containers to see the CPU and memory reserved. By default Docker doesn't assign limits for memory and CPU; a container will try to consume whatever is available on the host. If you have set limits, you can see them through docker inspect.
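For concreteness, a sketch of setting reservations/limits on a service and then reading them back (service and image names are placeholders):

docker service create --name web \
  --constraint 'node.labels.FOO == BAR' \
  --reserve-cpu 0.5 --reserve-memory 256M \
  --limit-cpu 1 --limit-memory 512M \
  nginx:alpine
docker service inspect web --format '{{json .Spec.TaskTemplate.Resources}}'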

Which way to run PostgreSQL in Docker?

Which of these methods is correct?
One db container for each app
One db container for all apps
Install db without docker
I tried to find information, but found nothing. Or did I search badly?
It is immature, but that doesn't seem to be stopping a lot of people from using Docker for persistence.
The official Postgres image has 4.5 million pulls - OK, this doesn't mean that all those images/containers are being used, but it does suggest that it is a popular solution.
If you have already decided that you would like to use Docker, because of what containers can offer your architecture, then I don't think you will have trouble using it for persistence - assuming you are happy learning Docker.
I'm using Postgres and MySQL in several projects quite successfully on Docker.
In choosing option 1 or 2, I would say that unless your apps are related to the same problem domain/company/project I would go with option 1. Of course, running costs will possibly factor in as well.
I generally go with option 1.
All 3 options can be valid, but it depends on what you need to do.
On my server I have one container for each major PostgreSQL release I currently use.
I run all of them on different ports (don't use random port numbers, but ones that are easy to remember, because one problem with Docker is remembering all the port numbers and other details for every container):
pg84 (port 8432), pg93 (port 9332), pg94 (port 9432)
I link the pgXX container I need to the application container, and that's perfect for me.
So, from my experience, I prefer option 2.
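In Docker terms that setup looks roughly like the following; the version tags and host ports follow the convention above, and the password is a placeholder:

docker run -d --name pg93 -e POSTGRES_PASSWORD=secret -p 9332:5432 postgres:9.3
docker run -d --name pg94 -e POSTGRES_PASSWORD=secret -p 9432:5432 postgres:9.4
# then point each application at the host port it needs, or attach a container
# to the right instance with the legacy --link pg94:db option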