How to run pg_rewind in a PostgreSQL Docker container

I have the same question. I'm running PostgreSQL replication clusters in Docker containers with the official postgres image, and I'm now trying to work out an approach to failover.
When I run pg_rewind against the previous primary container without stopping the PostgreSQL service, it fails with:
pg_rewind: fatal: target server must be shut down cleanly
But if I run:
docker exec <container-name> pg_ctl stop -D <datadir>
The container is restarted because of its unless-stopped restart policy.
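For reference, the restart policy can be confirmed and, if needed, temporarily changed from the host, something like:

docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <container-name>   # prints "unless-stopped"
docker update --restart=no <container-name>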

I found the answer myself.
Just stop the existing container and run the command in a new container that uses the same image and volume mounts, something like:
docker run -it --rm --name pg_rewind --network=host \
--env="PGDATA=/var/lib/postgresql/data/pgdata" \
--volume=/var/lib/postgresql/data:/var/lib/postgresql/data:rw \
--volume=/var/lib/postgresql:/var/lib/postgresql:rw \
--volume=/var/run/postgresql:/var/run/postgresql:rw \
postgres:12.4 \
pg_rewind --target-pgdata=/var/lib/postgresql/data/pgdata --source-server='host=10.0.0.55 port=5432 dbname=postgres user=replicator password=password'
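After the rewind finishes, the old primary is normally reconfigured as a standby before it is started again. A rough sketch in the same style (standby.signal and primary_conninfo are standard PostgreSQL 12 mechanisms; <container-name> is the stopped container from the question):

docker run -it --rm --user postgres \
--volume=/var/lib/postgresql/data:/var/lib/postgresql/data:rw \
postgres:12.4 \
touch /var/lib/postgresql/data/pgdata/standby.signal

# add a primary_conninfo line pointing at the new primary (host=10.0.0.55 above) to
# /var/lib/postgresql/data/pgdata/postgresql.auto.conf, then start the old container again:
docker start <container-name>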

Related

Create a postgres DB container in local Docker on Mac

I am new to Docker, and I'm confused about creating a postgres container.
What is -v /data:/var/lib/postgresql/data in the command line below, which creates the container? Is it for setting the volume? Can I change the path? I cannot find postgresql under /var/lib on my machine, so I cannot find the path I should add under File Sharing in the Docker settings.
sudo docker run -d --name mybd --network mydb-network -p 5432:5432 -v /data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres
When I tried to run the container in Docker, it showed an error. I then went to the Docker settings to add the /data path under File Sharing, but I could not find the /data path.
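For what it's worth, -v HOST_PATH:CONTAINER_PATH mounts a host directory into the container, and the host side can be any directory you control. A sketch assuming Docker Desktop on macOS, where paths under your home directory are shared by default (~/pgdata is a made-up location):

mkdir -p ~/pgdata
sudo docker run -d --name mydb --network mydb-network -p 5432:5432 \
-v ~/pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=mydb \
-e PGDATA=/var/lib/postgresql/data/pgdata \
postgres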

I cannot run more than 2 containers in Docker

I'm running Docker for a project on GitHub (https://github.com/mydexchain/mydexchain).
I run the command below, which pulls the image and creates a container:
docker run -d --rm -p 2020:2020 -v mydexchain:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain mydexchain/mydexchain:latest
I set my tracker address on port 2020.
I attach to the container with docker attach mydexchain.
I run the command below and create a second container:
docker run -d --rm -p 2021:2020 -v mydexchain1:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain1 mydexchain/mydexchain:latest
I set my tracker address on port 2021.
I attach to the container with docker attach mydexchain1.
So far, everything is normal.
I run the command below and create a third container:
docker run -d --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
I check the containers with the docker ps command.
As soon as I try to do anything with this third container, such as attach or set the tracker, the container disappears. The logs don't tell me what went wrong.
When I did these steps for the first time, I did not encounter any errors.
I would be glad if you could help. I have been working on this for a week and could not find a solution.
Have you ensured that all the volumes are clean and set up the same way? Usually the path mounted into the container is /var/lib/postgresql/data.
It might also be related to your pausing or closing the previous attach command, which kills that container, since you are missing the -i and -t flags when launching it. Those should be used to prevent it from closing; see the documentation of the attach command and the sketch below.
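A minimal sketch of that suggestion (the /var/lib/postgresql/data mount path is an assumption taken from the comment above; adjust it to whatever the mydexchain image actually uses):

docker run -itd --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/data \
--privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest

docker attach mydexchain2
# detach with Ctrl-P then Ctrl-Q instead of Ctrl-C, so the container keeps running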

Monitor Container is not running

I pulled the latest ceph/daemon image from Docker Hub. I run the container like this:
docker run -d --net=host \
-v ~/ceph-container1/etc/ceph:/etc/ceph \
-v ~/ceph-container1/var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.0.20 \
-e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
ceph/daemon mon
The container exits immediately after it is created. I cannot use ceph -v or ceph -s to check whether I deployed it correctly. The same thing happens with OSD and MDS; only the MGR container keeps running after it is created.
My system is Arch Linux. Did I miss anything needed to keep it running? Thanks.
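A first step when a container exits immediately is to read why it stopped; a generic Docker sketch (<container-id> is a placeholder for the exited container):

docker ps -a --filter "ancestor=ceph/daemon"   # list containers from this image, including exited ones
docker logs <container-id>                     # the entrypoint usually prints the reason it stopped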

PostgreSQL docker container not writing data to disk

I am having some difficulty with Docker and the postgres image from Docker Hub. I am developing an app and using the postgres container to store my development data. I use the following command to start my container:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser -e PGDATA="/pgdata" --mount source=mydata,target=/home/myuser/pgdata -p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run docker container prune in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to PostgreSQL version 11. Now, when I start my container and create data in it, the data is gone the next time I use it. I've been reading about volumes in the Docker documentation but cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The \ are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container. If you want the files in /pgdata to survive container deletion, that location must be a Docker volume. To make that location a Docker volume, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
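To confirm that the named volume really does survive docker container prune, something like this should work (a sketch reusing the same volume name, mydata):

docker volume inspect mydata   # shows the volume and its mountpoint on the host
docker container prune         # removes stopped containers but leaves named volumes intact
docker volume ls               # mydata should still be listed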

Docker container not starting when connecting to external PostgreSQL

I have a Docker container with Redmine in it and PostgreSQL 9.5 running on my host machine, and I want to connect my Redmine container to that PostgreSQL instance. I followed the steps at https://github.com/sameersbn/docker-redmine to connect my container to an external PostgreSQL.
Assuming my host machine has the IP 192.168.100.6, I ran this command:
docker run --name redmine -it -d \
--publish=10083:80 \
--env='REDMINE_PORT=10083' \
--env='DB_ADAPTER=postgresql' \
--env='DB_HOST=192.168.100.6' \
--env='DB_NAME=redmine_production' \
--env='DB_USER=redmine' --env='DB_PASS=password' \
redmine-docker
The container runs for about a minute but then stops, even before it starts nginx and Redmine inside it. I need help with this configuration. Thank you.
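Two things worth checking in this situation are the container's own logs and whether the host's PostgreSQL accepts connections from Docker's bridge network. A rough sketch (172.17.0.0/16 is Docker's default bridge subnet; file locations and the service name depend on your distribution):

docker logs redmine                       # look here for database connection errors at startup

# on the host, allow connections from the containers' subnet:
#   postgresql.conf:  listen_addresses = '*'
#   pg_hba.conf:      host  redmine_production  redmine  172.17.0.0/16  md5
sudo systemctl restart postgresql-9.5     # service name varies by distribution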