How am I supposed to use a Postgresql docker image/container? - postgresql

I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official docker postgres 9.5 image as base to build my own (my Dockerfile only adds plpython on top of it, and installs a custom python module for use within plpython stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every other time?
Persistence: I created a database and populated it on the running container. I did this using pgadmin3 to connect. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence works. Could you please briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Is there anything else I should pay attention to?
Thanks
EDIT: adding to my confusion, I just ran a new container from the debian official image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once installed I ran psql (still from within the container), created a table in the default postgres database, and populated it with 1 record. Then I exited the shell and the container stopped automatically since the shell wasn't running anymore. I started the container again using docker start debtest, then attached to it and finally ran psql again. I found everything has persisted since the first run. Postgresql is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again

1.
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists.
Correct. You named it (--name some-postgres) hence before starting a new one, the old one has to be deleted, e.g. docker rm -f some-postgres
So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every other time?
No, this is by no means the normal Docker way of doing things. Docker containers are normally meant to be ephemeral, that is, easily thrown away and started anew.
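For illustration, a minimal sketch of that ephemeral workflow, assuming a named volume (here called pgdata; any name works, and named volumes require a reasonably recent Docker) so the data survives removing the container:
docker rm -f some-postgres
docker run --name some-postgres \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres:9.5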
Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ...
That's because you are reusing the same container. Remove the container and the data is gone.
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Yes, separating concerns into separate containers is the good way to go. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (that's in particular where the data container starts to play its role).
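For illustration, a hedged sketch of the data-container pattern (container names are placeholders):
docker create --name pgdata -v /var/lib/postgresql/data postgres:9.5 /bin/true
docker run --name some-postgres --volumes-from pgdata \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres:9.5
# the server container can now be removed and recreated freely;
# the data stays in the volume owned by pgdata
docker rm -f some-postgres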
Is there anything else I should pay attention to?
Once you are acquainted with the Docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.

Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation, along with some extras that can be configured through environment variables. With docker run you create a container. The container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of doing things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile, in a parent Dockerfile, or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm. Everything including volumes is lost with docker rm -v.
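A small sketch to make that concrete (the directory and names are just examples; the inspect output depends on your Docker version):
docker run -d --name voltest -e POSTGRES_PASSWORD=secret -v /yourdirectoryA postgres:9.5
docker inspect -f '{{ .Mounts }}' voltest   # shows where the volume data lives on the host
docker rm -f voltest                        # removes the container but keeps its volumes
# docker rm -fv voltest                     # would remove the container and its volumes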
It's easier to show than to explain. See this readme with Docker commands on GitHub, which describes how I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. A data container is also a cheap trick for being able to remove, rebuild, and renew the application container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those unwieldy docker run ... commands and to manage multi-container setups and data containers.
Note: You need a blocking process in order to keep a container running! Either this command must be explicitly set inside the Dockerfile (see CMD), or given at the end of docker run -d ... /usr/bin/myexamplecommand. If the command exits right away instead of blocking (e.g. /bin/bash without an attached terminal), then the container will stop immediately after executing it.
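For example (image and container names are arbitrary):
docker run -d --name exits-at-once debian /bin/true      # command returns immediately, so the container stops
docker run -d --name stays-up debian sleep infinity      # command blocks, so the container keeps running
docker ps -a                                             # shows one Exited and one Up container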

Related

How to Start Cron / Crond inside the Official Postgres Container

crond is not running by default in the official postgres alpine image. How can I define my Dockerfile to make sure that the daemon runs in the background? I want it to run by default, if possible even when the container gets restarted.
I tried to add CMD ["/usr/sbin/crond"] to my Dockerfile but I didn't succeed. Any thoughts on how to run this in combination with postgres?
Update
I have added the answer of tianon:
[...]
If you must run crond inside a container, I'd recommend instead using a separate container which runs nothing but crond (and thus Docker can both track its lifecycle, and restart it when/if it fails, the machine restarts, etc). You should be able to connect to the PostgreSQL instance from a second container, but if absolutely necessary, one could use things like --network container:some-postgres in order to join the network namespace of the database container directly.
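For illustration, a minimal sketch of that separate-container approach, assuming a crontab file in the current directory (container names, the Alpine tag, and the crontab path are just examples; busybox crond runs in the foreground with -f):
docker run -d --name pg-cron \
  --network container:some-postgres \
  -v "$PWD/crontab":/etc/crontabs/root:ro \
  alpine:3 crond -f -l 2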
pg_cron must be added to shared_preload_libraries. Per the docs:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
and you must then restart PostgreSQL.
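A hedged alternative that avoids editing files inside the container, assuming the pg_cron extension is already installed in your image: the official image forwards extra arguments to the postgres server process, so the setting can be passed on the command line:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d \
  postgres -c shared_preload_libraries=pg_cron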

Restart Postgres in a docker

I tried to restart Postgres in Docker using the 'docker restart' command. It stopped, but I'm not able to start it again. When I run 'docker ps -a' it shows the status as 'Exited'. Is there any way to start it again? I don't want to lose any data in that database.
The container had one active connection during the restart. Could that be causing the problem?
If the container crashed due to a bug or something, you may not be able to restart it. However, you should still be able to recover at least part of your data by making a new image out of the container that you want to recover. Here's how you do it:
First, list all the containers that have run on your machine:
docker ps -a
Find out which one is the container that ran with all the data you want to recover. You should be able to figure it out from the CREATED field (you know when you started it).
Grab the hash (CONTAINER_ID) of the container, and execute the following command:
docker commit <hash> <a_new_name:tag>
This will save the container as an image that you can execute.
Run the container with a sh or bash session, depending on what your base image offers (substitute bash below if the image has it):
docker run -it --entrypoint sh <a_new_name:tag>
This will give you access to the state of the container at the time of exiting, which will allow you to inspect its conditions, find bugs, and possibly recover some data. Good luck!
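One caveat: docker commit does not capture data stored in volumes, and the official postgres image keeps its data in the declared volume /var/lib/postgresql/data. A hedged alternative for that case is to copy the volume contents straight out of the stopped container (container and file names are examples):
docker inspect -f '{{ range .Mounts }}{{ .Name }} {{ .Source }}{{ println }}{{ end }}' some-postgres
docker run --rm --volumes-from some-postgres -v "$PWD":/backup alpine \
  tar czf /backup/pgdata.tar.gz -C /var/lib/postgresql/data .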

Docker postgres persistence and container lifetime

I'm new to docker. You can take a look at my last questions here and see that I've been asking questions along this line. I read the docs carefully, and also read several articles on the web (which is pretty difficult given the rapid versioning in docker), but I still can't get a clear picture of how I am supposed to use containers and their impact on persistence.
The official postgres image creates a volume in its Dockerfile using this command
VOLUME /var/lib/postgresql/data
And the readme.md file shows only one example of how to run the image
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
When I try that, I can see (with "docker inspect some-postgres") that the created volume lives in a random directory on my host, and it seems to "belong" to that particular container.
So here are some questions that may help my understanding:
It looks (from the official postgres image docs) like the expected usage is to use "docker run" to create the container, and "docker start" afterwards (this last bit I inferred from the fact that -d and --name are used). This makes sense to me, but conflicts with a lot of information I've seen regarding containers being ephemeral. If I spin up a new container every time, then the default VOLUME config in the Dockerfile doesn't work for persistence. What's the right way of doing things?
Given the above is correct (that I can run once and start many times), the only reason I see for the VOLUME command in the Dockerfile is I/O performance because of the CoW filesystem bypass. Is this right?
Could you please clearly explain what's wrong with using this approach compared to the (I think unofficially) recommended way of using a data container? I'd like to know the pros/cons for my specific situation, which is a Node.js intranet application.
Thanks,
Awer
You're correct that you can start the container using 'docker run' and start it again in the future using 'docker start', assuming you haven't removed the container. You're also correct that docker containers are supposed to be ephemeral and that you shouldn't be in bad shape if a container disappears. What you can do is mount a host directory into the docker container at the storage location of the database.
docker run -v /postgres/storage:/container/postgres --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
If you know the location where the database writes to inside the container, you can mount it correctly, and then even if you remove the postgres container, all your data will still be there when you start back up. You may need to mount some other areas that control configuration as well, unless you modify and save the container.
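For the official image that location is the one declared in its Dockerfile, /var/lib/postgresql/data, so a concrete (hedged) version of the command above would be:
docker run --name some-postgres \
  -v /postgres/storage:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres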

Wrap application deployables in Docker Image

I'd like to make deployment to production as easy as it can be, but I'm struggling with how to do it.
If I have docker for production, it would be nice to have a docker image with my application deployables, but I'm not sure whether it is a good approach.
I have several concerns:
wouldn't the layer system bloat when I replace the file in every new version of the image?
Is it a good idea to make DB scripts and the migration tool part of this image?
The last concern is how to run it conveniently. I don't want to have to stop the Tomcat container and start it again using the volume from the new application image (as the new app container cannot have the same name).
I have seen ways to do that, but I don't like them very much, i.e. deploying to a Tomcat docker image, creating a Tomcat image with the application already bundled, or using a host system volume. I'd like to have something like an install "CD". I'd like to evaluate my idea against other approaches; the proper tool to run it is maybe a topic for another question.
wouldn't the layer system bloat when I replace the file in every new version of the image?
No, because you can clean up dangling images:
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
Is it a good idea to make DB scripts and the migration tool part of this image?
Yes, if your startup script knows how to detect whether it needs to apply them.
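As a purely hypothetical sketch of such a startup script (migration-tool and its subcommands are placeholders, not a real CLI; catalina.sh run is the usual way to start Tomcat in the foreground):
#!/bin/sh
set -e
# hypothetical: apply pending DB migrations only when the schema is out of date
if ! ./migration-tool status | grep -q 'up to date'; then
  ./migration-tool migrate
fi
# hand off to Tomcat as the container's main (blocking) process
exec catalina.sh run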
I don't like them very much, i.e. deploying to a Tomcat docker image, creating a Tomcat image with the application already bundled, or using a host system volume.
If your data volume container is separate from the app, that shouldn't be an issue.
From the discussion, the OP adds:
using docker create --name <container_name> <image_name> with a different image name, can I retain the container name and run the Tomcat container with the same --volumes-from?
docker run -it --rm -p 8888:8080 --volumes-from <container_name> <image_name>
That is the idea, but it won't work if there is already a created data container with that name.
If there is no persistent data in it, one can docker rm that data container, and recreate it with the same name.
If there is persistent data, then it is best to copy the new, updated data through an intermediate (docker run) container which temporarily mounts the data container.
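A hedged sketch of that intermediate-container copy (the container name, host path, and target directory are made up for illustration):
# app-data is the existing data container; /opt/app/data stands in for its volume path
docker run --rm --volumes-from app-data -v "$PWD/new-release":/src alpine \
  cp -a /src/. /opt/app/data/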

Why doesn't postgres official docker repo start db service at build time?

In the context of https://github.com/docker-library/postgres (the GitHub repo) and https://registry.hub.docker.com/_/postgres/ (Docker Hub):
It can be seen that the database is started by the ENTRYPOINT and CMD via the bash script
/docker-entrypoint.sh
with
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
Another script hook provided to customize the database is
/docker-entrypoint-initdb.d
which means the database starts (and can be reached with psql) only at runtime, when the docker run command is issued.
This causes a problem: we cannot customize the database before it runs, at build time, for example to add extensions and populate the db with data.
Of course, it could be done at run time, but then the operation is repeated every time the image is run.
So, what is the logic behind this design from the docker or postgres perspective? And how could I add extensions and populate data at build time?
If you were to customize (create, populate data) a database at build time, that would imply that the database data is written into the docker image filesystem itself (as one cannot mount a volume at build time).
The issue with that is that the docker image filesystem is a special one (AUFS or btrfs, etc.) which doesn't deliver good I/O performance for data-intensive applications such as a database server.
As a consequence, you want to have your data written to a volume instead of to the docker container filesystem. As you don't know at build time which volume will be used at run time, and as there is no way anyway to mount volumes at build time, no one should create a database at build time.
Furthermore, if you take a close look at the Dockerfile of the official PostgreSQL image, you will see that there is a VOLUME instruction that makes the path at which the data is written a volume. That means that the image is designed so that the data will never hit the docker container filesystem.
If you take a look at the Dockerfiles of other databases or data-intensive applications, you will notice that they all operate in this manner. Another reason for this is that it is accepted as good practice to make your docker containers immutable.
If you want to install additional modules in your image, that is fine as long as they do not depend on data that would be written to a volume, and as long as you make sure to declare a volume for any path they would write data to.
tl;dr
Application code/binary → docker image filesystem
Application data → docker volume
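You can see that split directly on the official image (the tag is an example; the output shown is only indicative):
docker inspect -f '{{ .Config.Volumes }}' postgres:9.5
# prints something like: map[/var/lib/postgresql/data:{}]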
This is right from the docker page for the postgres image (library/postgres):
If you would like to do additional initialization in an image derived from this one, add a *.sql or *.sh script under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files and source any *.sh script found in that directory to do further initialization before starting the service.
You can also extend the image with a simple Dockerfile to set the locale. The following example will set the default locale to de_DE.utf8:
FROM postgres:9.4
RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
ENV LANG de_DE.utf8
Since database initialization only happens on container startup, this allows us to set the language before it is created.
You have the ability to extend an image just as the example from the docs pasted above shows. You can also use the exec command to execute virtually anything within the container right from your host machine. It took me a little while to get used to it, and I continue to discover things as I play with it more and more.
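As a hedged example of the /docker-entrypoint-initdb.d hook quoted above, assuming the required extension packages are already installed in your image (as in your plpython build; my-postgres-plpython is a placeholder image name), you could mount an init script instead of baking data into the image:
mkdir -p initdb
echo "CREATE EXTENSION IF NOT EXISTS plpythonu;" > initdb/10-extensions.sql
# the init scripts only run when the data directory is initialized for the first time
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$PWD/initdb":/docker-entrypoint-initdb.d -d my-postgres-plpython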
UPDATE:
sudo docker run --name some-postgres -v ~/PATH/TO/some-postgres/data:/var/lib/postgresql/data -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=test -d postgres