I am moving a very large project that runs PostgreSQL in Docker into production. Sometimes I need to restart PostgreSQL manually, and I have tried three approaches to shut it down and restart it later.
The first approach: I go to the screen session running the PostgreSQL container (I use screen to manage my orchestration) and press Ctrl-C several times to shut it down. This approach seems the best, and restarting afterwards is smooth as well. The shutdown usually completes in a minute or two, but I have to be there manually.
The second approach is:
DOCKER_CONTAINER_NAME="timescaledb"
docker stop $DOCKER_CONTAINER_NAME
However, it never seems to complete.
The third approach:
docker kill $DOCKER_CONTAINER_NAME
However, the restart is then quite long, with a large recovery process.
What's the best way to mimic method 1, where I keep pressing Ctrl-C to terminate it, so that I can still restart smoothly later?
This solved my problem! Thanks!
docker stop -t 120 $DOCKER_CONTAINER_NAME
docker kill $DOCKER_CONTAINER_NAME
screen -S i2 -X stuff 'docker run -ti --user 1000:1000 -p 5432:5432 --name timescaledb --volume=/home/ubuntu/pgdata3:/home/postgresql/pgdata --rm -e POSTGRES_PASSWORD=sahisahikqhwwkejkqwjehjhwqjh -e PGDATA=/home/postgresql/pgdata timescale/timescaledb-ha:pg13-latest;\n'
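For reference, pressing Ctrl-C in the foreground sends SIGINT, which Postgres treats as a "fast" shutdown, while docker stop sends SIGTERM, which Postgres treats as a "smart" shutdown that waits for all clients to disconnect (and can therefore hang). A minimal sketch that mimics the Ctrl-C behaviour; the function name and the 120-second timeout are my own choices:

```shell
# Hypothetical helper: mimic Ctrl-C by sending SIGINT (Postgres "fast" shutdown),
# then let docker wait up to 120s before it escalates to SIGKILL.
graceful_restart() {
  container="$1"
  docker kill --signal=SIGINT "$container"  # fast shutdown, like Ctrl-C
  docker stop -t 120 "$container"           # waits, then SIGKILL as a last resort
}
```

Usage would be `graceful_restart timescaledb`, followed by the usual docker run.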
Related
I am using Postgres with repmgr. One of the small problems I am having is that sometimes repmgr has to stop and start the Postgres service, and that just kills the container. I tried some of the solutions found online in the Dockerfile, but none seems to work. Is there something I can add to the docker-compose file to prevent Docker from exiting immediately? I don't want it to stay alive forever, but maybe a couple of minutes?
Remember that docker-compose is mostly a development tool. For production there are other options, such as Kubernetes.
The only solution I know is to run your own .sh script as the main process, with an infinite loop containing the necessary checks.
That way you control how the check is done (for example, ps aux piped through grep for what you need) and can exit the main process with your own logic when required.
The sh script would look something like:

while sleep 180; do
  # look for the postgres process, ignoring the grep process itself
  ps aux | grep postgres_service_name | grep -v grep
  POSTGRES_RUNNING=$?
  if [ $POSTGRES_RUNNING -ne 0 ]; then
    # do what you need before exiting the whole container
    exit 1
  fi
done
Make sure you replace postgres_service_name with the real name of the Postgres service on Linux.
Use that script as the startup command in docker-compose, or in whatever you use in production.
If you really need two minutes before the container goes down, I would implement logic that measures the time elapsed since the process was first found missing.
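That grace-period idea could be sketched like this; the function and variable names are mine and the 120-second window is an assumption:

```shell
# Returns 0 (keep the container alive) while postgres is running, or has been
# gone for less than GRACE seconds; returns 1 once the grace period expires.
GRACE=120
MISSING_SINCE=""
should_keep_running() {
  now="$1"      # current time in epoch seconds, e.g. $(date +%s)
  running="$2"  # 1 if the postgres process was found, 0 otherwise
  if [ "$running" -eq 1 ]; then
    MISSING_SINCE=""   # process is back; reset the timer
    return 0
  fi
  [ -z "$MISSING_SINCE" ] && MISSING_SINCE="$now"
  [ $((now - MISSING_SINCE)) -lt "$GRACE" ]
}
```

The main loop would then call something like `should_keep_running "$(date +%s)" "$found" || exit 1` every few seconds.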
The way Docker is designed, it starts a new container by running the command specified as its entrypoint/command, and when that process terminates Docker kills all remaining processes in the container and shuts it down.
So, to keep the container running while the Postgres process is restarted, you need another command running as the main (PID 1) process in the container.
You can achieve this by writing a simple shell script as a wrapper that only exits once no Postgres process is running anymore, or by using a dedicated init tool such as supervisord.
So, I pulled the postgres image from Docker. I followed a tutorial that explained what is going on with the command below and the whole docker pull. I can log in to the instance fine, but when I restart my computer or shut down Docker, I end up going through similar setup steps and am no longer able to access the postgres instance. Can someone explain what's going on here?
Run this command
docker run --rm --name pg-docker -e POSTGRES_PASSWORD=docker -d postgres -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
Log in via pgAdmin.
Nothing; the instance is not available.
So, I feel like I am missing a step. At one point I had executed a command like this:
docker exec -it c5b8bdd0820b35a01ea153a44e82458a6285cf484b701b2b2d6d4210266fb4f8 bash
which gave me access to a shell in the container. After doing that I was able to use pgAdmin. However, I feel like that may have been a coincidence, as this does not work currently.
So, what am I doing wrong? What's an easier way to do this?
The --rm causes Docker to automatically remove the container when it exits. Remove it.
You can also add --restart always and your container will come back up after a restart.
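Note also that in the original command the -p and -v flags come after the first image name, so Docker passes them as arguments to the container instead of treating them as docker run options. A corrected sketch (password and path taken from the question; adjust to taste):

```shell
docker run --name pg-docker \
  --restart always \
  -e POSTGRES_PASSWORD=docker \
  -p 5432:5432 \
  -v "$HOME/docker/volumes/postgres":/var/lib/postgresql/data \
  -d postgres
```

All options now precede the image name, and --rm is dropped so the container (and its ability to restart) survives.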
I am working on setting up Postgres 9.5 AS in Docker and got everything installed. The issue, however, is that when I start the Docker container, Postgres appears to start at first, but then the container stops right away (it does not show up in docker container ls). When I override the container startup with --entrypoint sh and start Postgres manually, it all works fine.
I also checked with docker logs <container-id>, but that does not give me any info at all.
The setup is like this :
Dockerfile :
ENTRYPOINT ["/opt/edb/9.5AS/bin/init.sh"]
init.sh :
su enterprisedb -c '/opt/edb/9.5AS/bin/pg_ctl start -D /opt/edb/9.5AS/data'
From my command prompt I run :
docker run -it -v pgdata:/opt/edb/9.5AS/data <image_name>
It almost looks like it does start, but as soon as the start process is done, the shell stops, and as a result the Container stops as well.
So how to get it so the Container starts, Postgres starts and everything stays running, preferable in detached mode of course?
After researching some more, I found the answer, in part also by finding clues on Stack Overflow.
Anyway, I modified my init.sh script to look like this :
/opt/edb/9.5AS/bin/pg_ctl start -D /opt/edb/9.5AS/data
exec "$@"
And the Dockerfile now ends like below :
USER enterprisedb
ENTRYPOINT ["/opt/edb/9.5AS/bin/init.sh"]
CMD ["/bin/bash"]
The core of the solution is the last line of the init.sh script combined with the last line of the Dockerfile. Together they make it so that once the database has started, a new shell (/bin/bash) is started via exec. This shell runs in the foreground, thus keeping the container alive. By starting the container in detached mode, it now does exactly what we need it to do.
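The exec "$@" pattern deserves a small demo: the wrapper does its setup, then replaces itself with whatever command Docker passed in (the CMD), so that command runs in the foreground as the main process. A toy version outside Docker (the file path and messages are arbitrary):

```shell
# Toy wrapper demonstrating the exec "$@" pattern from init.sh:
# after setup, the script *replaces itself* with the passed command.
cat > /tmp/wrapper.sh <<'EOF'
#!/bin/sh
echo "setup done (db would be started here)"
exec "$@"
EOF
chmod +x /tmp/wrapper.sh
/tmp/wrapper.sh echo "hello from CMD"
```

This prints the setup line followed by the echoed message; in the Dockerfile above, CMD ["/bin/bash"] plays the role the echo plays here.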
I tried to restart Postgres in Docker using the 'docker restart' command. It got stopped, but I'm not able to start it again. When I run 'docker ps -a' it shows the status as 'Exited'. Is there any way to start it again? I don't want to lose any data in that database.
The container had one active connection during the restart. Could that be causing the problem?
If the container crashed due to a bug or something, you may not be able to restart it. However, you should still be able to recover at least part of your data by making a new image out of the container that you want to recover. Here's how you do it:
First, list all the containers that have run in your machine:
docker ps -a
Find the container that ran with all the data you want to recover. You should be able to tell from the CREATED field (you know when you started it).
Grab the hash (CONTAINER_ID) of the container, and execute the following command:
docker commit <hash> <a_new_name:tag>
This will save the container as an image that you can execute.
Execute the container with a bash or sh session, depending on what your base image offers:

docker run --entrypoint sh -it <a_new_name:tag>

(use bash instead of sh if the image has it)
This will give you access to the state of the container at the time of exiting, which will allow you to inspect its conditions, find bugs, and possibly recover some data. Good luck!
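The steps above can be wrapped in a small helper; the function name is mine and sh is assumed to exist in the image:

```shell
# Hypothetical helper wrapping the recovery recipe: snapshot a dead container
# as an image, then open a shell in it to inspect and copy out data.
recover_container() {
  hash="$1"    # CONTAINER ID from docker ps -a
  image="$2"   # name:tag for the rescue image, e.g. rescue:latest
  docker commit "$hash" "$image"
  docker run --entrypoint sh -it "$image"
}
```

For example, `recover_container c5b8bdd0820b rescue:latest` drops you into a shell over the container's last filesystem state.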
I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official docker postgres 9.5 image as a base to build my own (my Dockerfile only adds plpython on top of it and installs a custom python module for use within plpython stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every other time?
Persistence: I created a database and populated it on the running container. I did this using pgadmin3 to connect. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence is working. Could you please briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Is there anything else I should pay attention to?
Thanks
EDIT: Adding to my confusion, I just ran a new container from the official debian image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once installed, I ran psql (still from within the container), created a table in the default postgres database, and populated it with one record. Then I exited the shell and the container stopped automatically, since the shell wasn't running anymore. I started the container again using docker start debtest, then attached to it and finally ran psql again. I found everything was persisted since the first run: postgresql is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again
1.

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

After I stop the container I cannot run it again using the above command, because the container already exists.

Correct. You named it (--name some-postgres), hence before starting a new one the old one has to be deleted, e.g. docker rm -f some-postgres.

So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?
No, it is by no means normal for Docker. Containers are normally supposed to be ephemeral, that is, easily thrown away and started anew.
Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ...
That's because you are reusing the same container. Remove the container and the data is gone.
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Yes, this is the right way to go: separate containers for separate concerns. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (that's in particular where the data container starts to play its role).
Is there anything else I should pay attention to?
Once acquainted with the Docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.
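As a starting point, a minimal docker-compose sketch for the kind of setup discussed here; the service and volume names are my own, and the named volume is what lets the data outlive docker rm of the container:

```yaml
version: "3"
services:
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data  # named volume: data survives the container
volumes:
  pgdata:
```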
Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation along with some extras that can be configured through environment variables. With docker run you create a container; the container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile, in the parent Dockerfile, or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm; everything including volumes is lost with docker rm -v.
It's easier to show than to explain. See this README with Docker commands on GitHub, and read how I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. A data container is also a cheap trick for being able to remove, rebuild and renew the application container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those unwieldy docker run ... commands and to handle multi-container setups and data containers.
Note: You need a blocking (foreground) process in order to keep a container running! Either this command must be explicitly set inside the Dockerfile (see CMD), or given at the end of the docker run -d ... /usr/bin/myexamplecommand command. If your command is non-blocking, e.g. /bin/bash without an attached TTY, then the container will stop immediately after executing it.