Docker: Running multiple applications vs. running multiple containers - PostgreSQL

I am trying to run Wildfly, Jenkins and Postgresql in Docker container(s).
As far as I could understand from articles I've read, the Docker way is to have each application run in a different container.
Is my assumption correct or is it better to have only one container containing these three applications?

AFAIK the basic philosophy behind Docker is to run one service per container. You can run a whole application inside a single container, but I don't think that fits well with the way Docker works. Running different services in different containers gives you more flexibility and better modularity for your app.
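As a rough sketch of one way to lay that out with Compose (the image tags, ports and volume names below are placeholders, not something taken from the question), each application gets its own container and its own persistent volume:

# hypothetical docker-compose.yml: one service per container
version: "2"
services:
  wildfly:
    image: jboss/wildfly            # application server
    ports:
      - "8080:8080"
  jenkins:
    image: jenkins/jenkins:lts      # CI server
    ports:
      - "8081:8080"                 # avoid clashing with WildFly on the host
    volumes:
      - jenkins_home:/var/jenkins_home
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example    # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  jenkins_home:
  pgdata:

Each service can then be upgraded, restarted or scaled independently of the others with docker-compose.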

Related

Running two podman/docker containers of PostgreSQL on a single host

I have two applications, each of which use several databases. Before the days of Docker, I would have just put all the databases on one host (due to resource consumption associated with running multiple physical hosts/VMs).
Logically, it seems to me that separating these into groups (one group of DBs per application) is the right thing to do, and with containers the overhead is low enough that this seems feasible. However, I have not seen this use case; I've only seen multiple instances of containerized Postgres run in order to maintain multiple versions (hence different images).
Is there a good technical reason why people do not do this (two or more containers of PostgreSQL instances using the same image for purposes of isolating groups of DBs)?
When I tried to do this, I ran into errors having to do with the second instance trying to configure the postgres user. I had to pass in an option to ignore migration errors. I'm wondering if there is a good reason not to do this.
Well, I am not used to working with PostgreSQL but rather with MySQL, SQLite and MS SQL - and Docker, of course.
When I got into Docker I read a lot about microservices, how to develop them and, of course, the DevOps ideas behind Docker and microservices.
In this world I would absolutely prefer to have two containers from the same base image, with a multi-stage build and/or different env files, to run your infrastructure. Docker not only allows this, it encourages it.
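As a sketch of that approach (the service names, ports, env files and volume names here are made up for illustration), two independent Postgres instances from the same image run side by side as long as each one has its own data volume and host port; errors about the second instance trying to configure the postgres user usually point to two instances sharing one data directory or host port, which separate volumes and port mappings avoid:

# hypothetical docker-compose.yml: one Postgres container per application's group of DBs
version: "2"
services:
  app1-db:
    image: postgres:9.6
    env_file: app1-db.env            # e.g. POSTGRES_PASSWORD, POSTGRES_DB
    ports:
      - "5433:5432"
    volumes:
      - app1_pgdata:/var/lib/postgresql/data
  app2-db:
    image: postgres:9.6              # same image, second independent instance
    env_file: app2-db.env
    ports:
      - "5434:5432"
    volumes:
      - app2_pgdata:/var/lib/postgresql/data
volumes:
  app1_pgdata:
  app2_pgdata: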

Designing system architecture with Docker containers

I am new to Docker. I want some opinions from experts about container design. I have set up a database in the MongoDB cloud (Atlas). I have a Windows app in a Docker container, which includes Windows OS and application-based components. I want to use RavenDB, and this database is very new to me. A component of my Windows container will communicate with both MongoDB and RavenDB.
My question is:
should I create a separate Docker container for RavenDB, or should I install RavenDB in my existing Windows container?
It is a design decision problem. I am new to RavenDB and Docker, so the pros and cons are not clear to me yet. Kindly help me.
I had a similar application, with a PostgreSQL database and a Node.js web app.
The web application and the database ran in separate Docker containers, so the two containers were independent of each other. The PostgreSQL container had a volume mounted to persist the data.
This replicates the actual production scenario, where your service and database run separately.
It is recommended to run a single process per container:
Better modularity of the services; separation of concerns.
Scaling containers horizontally is much easier when a container is isolated to a single function.
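A minimal sketch of that kind of setup (the build context, port and volume name are placeholders); the same split applies whether the second container runs PostgreSQL, RavenDB or anything else:

# hypothetical docker-compose.yml: app and database in separate containers
version: "2"
services:
  web:
    build: .                          # the web application
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example      # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data   # persists data across container restarts
volumes:
  pgdata: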

Communicating between swarm container and host machine

This is a total beginner question with regards to Docker.
I have a basic swarm running on a single host as a testing environment. There are 11 different containers running, all communicating through the host (the literal machine I am now typing this on). Only 1 physical machine, 11 containers.
On my physical machine's localhost I have a MongoDB server running. I want to be able to communicate with this MongoDB server from within the containers in my swarm.
What do I have to configure to get this working? There is a lot of information about Docker networking out there. I normally use:
docker run --net="host" --rm -ti <name_of_image>
and everything works fine. But as soon as I run a swarm (not a single container) I can't seem to figure out how to connect everything together so I can talk to my MongoDB server.
I realise this is probably a very basic question. I also appreciate that I probably need to read some more of the swarm networking docs to understand this, but I don't know which documentation to look at. There seem to be multiple different ways to network my containers and physical machine together.
Any information would be much appreciated, even if it's just a link to some docs you think would be enlightening.
Cheers.

Managing resources (database, elasticsearch, redis, etc) for tests using Docker and Jenkins

We need to use Jenkins to test some web apps that each need:
a database (postgres in our case)
a search service (ElasticSearch in our case, but only sometimes)
a cache server, such as redis
So far, we've just had these services running on the Jenkins master, but this causes problems when we want to upgrade Postgres, ES or Redis versions. Not all apps can move in lock step, and we want to run the tests against new versions before committing to the move in production.
What we'd like to do is have these services provided on a per-job-run basis, each one running in its own container.
What's the best way to orchestrate these containers?
How do you start up these ancillary containers and tear them down, regardless of whether the job succeeds or not?
How do you prevent port collisions between, say, the database in a run of a job for one web app and the database in the job for another web app?
Check out docker-compose and write a docker-compose file for your tests.
The newer networking features of Docker (private networks) will help you isolate builds running in parallel.
However, start by learning docker-compose as if you only had one build at a time. Once you are confident with that, look at the more advanced Docker documentation around networking.
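A minimal sketch of such a per-job file (the image tags here are placeholders; pin whichever versions each app needs):

# hypothetical docker-compose.test.yml: ancillary services for one test run
version: "2"
services:
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: test
    ports:
      - "5432"                 # container port only: Docker picks a free host port
  search:
    image: elasticsearch:2.4   # include only for the apps that need it
    ports:
      - "9200"
  cache:
    image: redis:3.2
    ports:
      - "6379"

Running it with a unique project name per build, e.g. docker-compose -p "$BUILD_TAG" -f docker-compose.test.yml up -d, keeps parallel jobs' containers and networks apart (Jenkins' BUILD_TAG is just one convenient per-run value). docker-compose port db 5432 reports which host port was assigned, and docker-compose -p "$BUILD_TAG" -f docker-compose.test.yml down -v in a post-build step tears everything down whether the job succeeded or not.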

Build multiple images with Docker Compose?

I have a repository which builds three different images:
powerpy-base
powerpy-web
powerpy-worker
Both powerpy-web and powerpy-worker inherit from powerpy-base using the FROM keyword in their Dockerfile.
I'm using Docker Compose in the project to run a Redis and RabbitMQ container. Is there a way for me to tell Docker Compose that I'd like to build the base image first and then the web and worker images?
You can use depends_on to enforce an order; however, that order will also be applied at runtime (docker-compose up), which may not be what you want.
If you're only using Compose to build images, it should be fine.
You could also split it into two compose files: a docker-compose.build.yml which has depends_on for the build, and a separate one for running the images as services.
There is a related issue: https://github.com/docker/compose/issues/295
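A rough sketch of what that build-only file could look like (the ./base, ./web and ./worker build contexts are assumptions about the repository layout, not taken from the question):

# hypothetical docker-compose.build.yml, used only with: docker-compose -f docker-compose.build.yml build
version: "2"
services:
  base:
    image: powerpy-base
    build: ./base
  web:
    image: powerpy-web
    build: ./web                 # its Dockerfile starts with FROM powerpy-base
    depends_on:
      - base
  worker:
    image: powerpy-worker
    build: ./worker              # its Dockerfile starts with FROM powerpy-base
    depends_on:
      - base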
About running the containers:
There was a bug before, but it has been fixed since Docker 1.10.
https://blog.docker.com/2016/02/docker-1-10/
Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
About building:
You need to build the base image first.
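Concretely, a minimal sketch (the ./base path is an assumption about the repository layout) is to build the shared image yourself, tagged with the name the other Dockerfiles reference in their FROM line, and only then let Compose build the rest:
docker build -t powerpy-base ./base
docker-compose build
docker-compose up -d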