Docker multi postgres containers with one mount point - postgresql

I have two Postgres databases and I want to sync data between them.
So far I have these two containers, identical except for their names and host ports:
docker container run --name='p1' -d -p 5435:5432 -v /tmp/dbs/test/:/var/lib/postgresql/data postgres
docker container run --name='p2' -d -p 5436:5432 -v /tmp/dbs/test/:/var/lib/postgresql/data postgres
The problem appears when something changes.
If I change something in p1, like inserting a row, I can't see it in p2.
But if I kill both containers and run them again, I can see the inserted data in both of them.
Why is this happening?
Is there a way to sync data between them?

Running two postmaster processes on the same data files is a sure road to data corruption. Don't do that.
You cannot get multi-master replication with standard PostgreSQL, but you can set up a read-only standby server with streaming replication.
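To illustrate the fix with the commands from the question: give each container its own data directory, so each postmaster owns its files exclusively. A minimal sketch (paths are adapted from the question; newer postgres images also require -e POSTGRES_PASSWORD):

```shell
# Each container gets a private data directory; never share one
# data directory between two running postmasters.
docker container run --name='p1' -d -p 5435:5432 \
  -v /tmp/dbs/p1:/var/lib/postgresql/data postgres

docker container run --name='p2' -d -p 5436:5432 \
  -v /tmp/dbs/p2:/var/lib/postgresql/data postgres
```

If p2 only needs to follow p1, point it at p1 as a streaming-replication standby instead of sharing files; that requires a replication user and a pg_basebackup of p1, not a shared mount.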

Related

What if two postgres containers map the same host volume?

I'm currently learning Docker, experimenting with postgres containers, and asking myself the following questions.
I launch a first postgres container like this:
docker run -e POSTGRES_PASSWORD=secret -p 5464:5432 -v postgres-data:/var/lib/postgresql/data -d postgres
and then a second container with this command, which consequently uses EXACTLY THE SAME VOLUME:
docker run -p 5465:5432 -v postgres-data:/var/lib/postgresql/data -d postgres
Is this a problem?
And my most essential question is: should I consider that I have two postgres servers sharing the same configuration files, or that I have two postgres containers sharing the same postgres server?
It's not really clear to me.
Thanks in advance.
Yes, that's a problem. I think PostgreSQL is clever enough that one of the databases just won't start up. In the worst case, this is a recipe for data corruption. This isn't specific to Docker; just in general, you can't run two databases against the same physical storage.
A typical container-oriented setup is to have two separate databases with two separate volumes, one for each service that requires a database.
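A sketch of that layout with named volumes (volume and container names are illustrative):

```shell
docker volume create pg-app-a
docker volume create pg-app-b

docker run -d --name pg-a -e POSTGRES_PASSWORD=secret \
  -p 5464:5432 -v pg-app-a:/var/lib/postgresql/data postgres

docker run -d --name pg-b -e POSTGRES_PASSWORD=secret \
  -p 5465:5432 -v pg-app-b:/var/lib/postgresql/data postgres
```

Each server now has exclusive ownership of its own data directory, and either one can be stopped, removed, or upgraded without touching the other.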

Docker postgres persistence and container lifetime

I'm new to docker. You can take a look at my previous questions here and see that I've been asking along this line. I read the docs carefully, and also read several articles on the web (which is pretty difficult given the rapid versioning in docker), but I still can't get a clear picture of how I am supposed to use containers and their impact on persistence.
The official postgres image creates a volume in its Dockerfile using this command
VOLUME /var/lib/postgresql/data
And the readme.md file shows only one example of how to run the image
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
When I try that, I can see (with "docker inspect some-postgres") that the volume created lives in a random directory in my host, and it seems to "belong" to that particular container.
So here are some questions that may help my understanding:
It looks (from the official postgres image docs) like the expected usage is to use "docker run" to create the container, and "docker start" afterwards (this last bit I inferred from the fact that -d and --name are used). This makes sense to me, but conflicts with a lot of information I've seen suggesting that containers should be ephemeral. If I spin up a new container every time, then the default VOLUME config in the Dockerfile doesn't work for persistence. What's the right way of doing things?
Given the above is correct (that I can run once and start many times), the only reason I see for the VOLUME command in the Dockerfile is I/O performance because of the CoW filesystem bypass. Is this right?
Could you please clearly explain what's wrong with using this approach over the (I think unofficially) recommended way of using a data container? I'd like to know the pros/cons to my specific situation, which is a node js intranet application.
Thanks,
Awer
You're correct that you can create the container using 'docker run' and start it again in the future using 'docker start', assuming you haven't removed the container. You're also correct that docker containers are supposed to be ephemeral, and you shouldn't be in bad shape if the container disappears. What you can do is mount a host volume into the docker container at the database's storage location. The official image writes its data to /var/lib/postgresql/data, so:
docker run -v /postgres/storage:/var/lib/postgresql/data --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
With the data directory mounted this way, even if you remove the postgres container, all your data will persist when you start a new one. You may need to mount some other areas that control configuration as well, unless you modify and commit the container.
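A quick way to convince yourself (host path, names, and table are just examples):

```shell
# create the container and add a row
docker run -v /postgres/storage:/var/lib/postgresql/data \
  --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
docker exec -it some-postgres psql -U postgres \
  -c "CREATE TABLE t (id int); INSERT INTO t VALUES (1);"

# destroy the container entirely, then recreate it on the same host directory
docker rm -f some-postgres
docker run -v /postgres/storage:/var/lib/postgresql/data \
  --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

# the row is still there: the data lives on the host, not in the container layer
docker exec -it some-postgres psql -U postgres -c "SELECT * FROM t;"
```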

How am I supposed to use a Postgresql docker image/container?

I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official docker postgres 9.5 image as a base to build my own (my Dockerfile only adds plpython on top of it, and installs a custom python module for use within plpython stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?
Persistence: I created a database and populated it on the running container. I did this using pgadmin3 to connect. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence is working. Could you please briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Is there anything else I should pay attention to?
Thanks
EDIT: adding to my confusion, I just ran a new container from the official debian image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once installed, I ran psql (still from within the container), created a table in the default postgres database, and populated it with 1 record. Then I exited the shell and the container stopped automatically since the shell wasn't running anymore. I started the container again using docker start debtest, then attached to it and finally ran psql again. I found everything is persisted since the first run. Postgresql is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again
1.
"docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres"
"After I stop the container I cannot run it again using the above command, because the container already exists."
Correct. You named it (--name some-postgres), hence before starting a new one, the old one has to be deleted, e.g. docker rm -f some-postgres
"So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?"
No, it is by no means normal for docker. Docker containers are normally supposed to be ephemeral, that is, easily thrown away and started anew.
"Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ..."
That's because you are reusing the same container. Remove the container and the data is gone.
"Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one."
Yes, this is a good way to go: separate containers for separate concerns. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (which is in particular where the data container starts to play its role).
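A sketch of that upgrade path with a data container (names and tags are illustrative; note that a PostgreSQL major-version jump would additionally need pg_upgrade or a dump/restore, so the volume trick alone only covers image updates within the same major version):

```shell
# data-only container holding the volume; it never actually runs
docker create -v /var/lib/postgresql/data --name pgdata postgres:9.5 /bin/true

# run the database on the data container's volume
docker run -d --name pg --volumes-from pgdata \
  -e POSTGRES_PASSWORD=secret postgres:9.5

# swap in an updated server image without touching the data
docker rm -f pg
docker run -d --name pg --volumes-from pgdata \
  -e POSTGRES_PASSWORD=secret postgres:9.5.4
```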
"Is there anything else I should pay attention to?"
Once you are acquainted with the docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.
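A minimal compose file for a node app plus database might look like this (service names, image tag, and password are placeholders):

```
version: "2"
services:
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data
  app:
    build: .
    depends_on:
      - db
volumes:
  pgdata:
```

docker-compose up then starts both containers, and the named volume pgdata survives docker-compose down (unless you pass -v).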
Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation along with some extras which can be configured through environment variables. With docker run you create a container; the container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile, in the parent image's Dockerfile, or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm. Everything, including volumes, is lost with docker rm -v.
It's easier to show than to explain. See this readme with Docker commands on Github, and read how I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. A data container is also a cheap trick for being able to remove, rebuild and renew a container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those unwieldy docker run ... commands and to be able to manage multi-containers and data-containers.
Note: you need a blocking foreground process in order to keep a container running! Either this command must be explicitly set inside the Dockerfile (see CMD), or given at the end of the docker run -d ... /usr/bin/myexamplecommand command. If your command is non-blocking (e.g. /bin/bash with no terminal attached), the container will stop immediately after executing it.
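To see the difference (container names are arbitrary):

```shell
# exits at once: bash has no terminal attached, so it returns immediately
docker run -d --name dies debian /bin/bash

# keeps running: tail blocks forever
docker run -d --name lives debian tail -f /dev/null

# 'docker ps' will show only 'lives' still up
docker ps
```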

How to alter the official mongo docker for authentication and data separation?

I want to make two minor improvements on the official MongoDB docker so that it starts with the --auth enabled and uses a separate data container to store the data. What's the best way to do this?
If all are set, how should I start the shell? Will it be possible for someone without a username and password to access any of the databases available? Which directory should I backup?
EDIT
Apparently, this is not enough:
docker run --name mymongoname1 -v /my/local/dir:/data/db -d -P mongo:latest
OK, so partial answer, because I haven't messed around with docker auth.
Containerised storage is done with a storage (data) container: a container created from an image that never actually runs, with some volumes assigned to it.
So for elasticsearch (which I know isn't mongo, but it is at least a NoSQL db) I've been using:
docker create -v /es_data:/es_data --name elasticsearch_data es-base /bin/true
Then:
docker run -d -p 9200:9200 --volumes-from elasticsearch_data elasticsearch-2.1.0
This gives my es container access to the data container's volume. In this example it passes through a host volume, but you don't actually need to do that any more, because the data container can hold the data in the docker filesystem. (And then I think you can push the data container around too, but I've not got that far!)
If you run docker ps -a you will see the data container in Created state. Just take care, if you're running a cleanup script, that you don't delete it, because unlike running containers it can be freely deleted...
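Translating the same pattern to mongo (an untested sketch; names are placeholders) and adding the --auth flag the question asks about, since arguments after the image name are passed on to mongod:

```shell
# data-only container for mongo's data directory
docker create -v /data/db --name mongo_data mongo:latest /bin/true

# run mongod with authentication enabled, data held by the data container
docker run -d -P --name mymongoname1 --volumes-from mongo_data mongo:latest --auth
```

With --auth enabled, clients must authenticate before accessing any database (apart from the initial localhost exception used to create the first user), and /data/db is the directory to back up.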

docker with postgres and bash

Today I was researching and trying docker, and I was impressed with most of it. I still have some questions about docker, though.
Can anyone more experienced with Docker than me tell me the best way to log in to a postgres container (run bash) in order to view postgres configuration files, view postgres logs, open a postgres shell, execute pg_dump, and so on, all while the postgres process is running?
I see that people usually run one process per container, and with this approach I am not sure of the best way to perform the mentioned actions on a container that runs postgres.
Any advice?
Thanks!
You can usually get a shell like this:
docker exec -it some-node bash
The canonical docker way, though, would be not to log in to the running db container, but instead to use docker logs or to link other containers to do maintenance tasks (e.g. docker run -it --rm --link <my-pg-container>:pg <my-pg-image> psql --host pg etc.).
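A few concrete maintenance commands via docker exec (container name, user, and database are examples):

```shell
# interactive shell inside the running container
docker exec -it some-postgres bash

# open psql directly, no shell needed
docker exec -it some-postgres psql -U postgres

# dump a database to a file on the host
docker exec some-postgres pg_dump -U postgres mydb > mydb.sql

# follow the server log output
docker logs -f some-postgres
```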