Changes in container not persisted after docker commit [duplicate] - postgresql

This question already has answers here:
docker postgres with initial data is not persisted over commits
(5 answers)
Closed 1 year ago.
I just created a container like this:
docker run --name mypostgres \
--publish 5432:5432 \
--net=apinetwork \
-e POSTGRES_PASSWORD=pwd \
-e POSTGRES_DB=db_uso \
-d postgres
I connected to the database and created some tables and other objects...
After this I committed:
docker commit mypostgres user/mypostgres
But when I create another container from this image:
docker run --name mypostgres2 \
--publish 5432:5432 \
--net=apinetwork \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_DB=correntista \
-d user/mypostgres
I don't see the objects that I created inside it!
Please help me.

It's most likely you are not seeing the changes because the volume created with the original container is not being used by the new container.
See the Persist the DB Docker tutorial.
You need to explicitly create a volume, and make sure it is being used in the new container.
$ docker volume create my-pg-db
$ docker run --name mypostgres \
--publish 5432:5432 \
--net=apinetwork \
-v my-pg-db:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=pwd \
-e POSTGRES_DB=db_uso \
-d postgres
This means it may not even be necessary to commit a copy of your container if all the changes you are making are in the DB: the volume that contains the DB is what you need to preserve.
Make sure you mount it whenever you start the container.
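For example, once the original mypostgres container is stopped and removed, a new container that mounts the same named volume will see all the tables created earlier. A minimal sketch (POSTGRES_PASSWORD and POSTGRES_DB are omitted because the image's init scripts only run on an empty data directory):
$ docker run --name mypostgres2 \
--publish 5432:5432 \
--net=apinetwork \
-v my-pg-db:/var/lib/postgresql/data \
-d postgres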

The reason data is not persisted is explained in this other SO question:
docker postgres with initial data is not persisted over commits
The problem is that the postgres Dockerfile declares "/var/lib/postgresql/data" as a volume. This is just a normal directory that lives outside of the Union File System used by images. Volumes live until no containers link to them and they are explicitly deleted.
which is why you need externally managed volumes, as @smac90 wrote.
Source:
https://github.com/docker-library/postgres/blob/5c0e796bb660f0ae42ae8bf084470f13417b8d63/Dockerfile-debian.template#L186
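You can see this for yourself: Docker creates an anonymous volume for that path in every plain postgres container. A quick check (a sketch, using the mypostgres container from the question):
$ docker inspect mypostgres --format '{{ json .Mounts }}'
$ docker volume ls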

Related

Spun up a Postgres Docker container without exposing ports. How can I expose without losing data? [duplicate]

This question already has answers here:
How do I assign a port mapping to an existing Docker container?
(15 answers)
Closed 7 months ago.
To preface, I'm not a Docker novice (or expert), just unsure how to complete this without blowing up my current container.
I'm playing around with Postgres and spun up a container using this command, without realizing it won't expose Postgres outside of Docker:
docker run --name postgres -e POSTGRES_PASSWORD=<password> -d postgres
I want to be able to have this data accessible to other systems. I do not want to remove the container since I have quite a bit of configuration already done.
I've tried editing the container hostconfig.json file in /var/lib/docker/containers after stopping the instance, but the file gets overwritten after starting the container back up.
What's the best practice to expose a port outside of the container?
Any change made inside the container will be lost once the container is removed, so any data you need to persist must be stored in a Docker volume. You therefore need to first persist the data, then recreate the container with its ports exposed.
To do this you may run:
$ docker run -d \
--name some-postgres \
-p 5432:5432 \
-e POSTGRES_PASSWORD=mysecretpassword \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v "$PWD/postgresql":/var/lib/postgresql/data \
postgres
More details here: https://hub.docker.com/_/postgres
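If you also need to carry over the data already inside the existing, port-less container, one possible path is to copy the data directory out with docker cp and bind-mount it into the replacement container. A sketch, assuming the old container is named postgres and using a hypothetical ./postgresql_data directory:
$ docker stop postgres
$ docker cp postgres:/var/lib/postgresql/data ./postgresql_data
$ docker rm postgres
$ docker run -d \
--name some-postgres \
-p 5432:5432 \
-e POSTGRES_PASSWORD=mysecretpassword \
-v "$PWD/postgresql_data":/var/lib/postgresql/data \
postgres
Note that the copied files are owned by the postgres user from the image (typically uid 999), so keep that ownership intact when moving them around.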

Install pgadmin III for a linked database container in docker

There are two running docker containers. One container containing a web application and the second is a linked postgres database.
Where should the Pgadmin III tool be installed?
pgAdmin can be deployed in a container using the image at https://hub.docker.com/r/dpage/pgadmin4/
E.g., to run a TLS-secured container using a shared config/storage directory in /private/var/lib/pgadmin on the host, and servers pre-loaded from /tmp/servers.json on the host:
docker pull dpage/pgadmin4
docker run -p 443:443 \
-v /private/var/lib/pgadmin:/var/lib/pgadmin \
-v /path/to/certificate.cert:/certs/server.cert \
-v /path/to/certificate.key:/certs/server.key \
-v /tmp/servers.json:/pgadmin4/servers.json \
-e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
-e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
-e 'PGADMIN_ENABLE_TLS=True' \
-d dpage/pgadmin4
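If TLS is not needed, a simpler variant is to run pgAdmin on the same user-defined network as the database container and reach it by container name. A sketch, where the network name pgnet, the host port 8080, and the database container name postgresql are all assumptions:
$ docker network create pgnet
$ docker network connect pgnet postgresql
$ docker run --name pgadmin \
--network pgnet \
-p 8080:80 \
-e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
-e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
-d dpage/pgadmin4
Then open http://localhost:8080 and register a server using postgresql as the host name and 5432 as the port.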

Is it possible to bundle data-file into a docker image?

I have performed the following steps,
docker run -d --name demo-mongo -p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=mongoadmin \
-e MONGO_INITDB_ROOT_PASSWORD=secret \
-e MONGO_INITDB_DATABASE=testdb \
mongo # create a new mongodb container
Create a database inside the running container by connecting to it using a mongo client.
docker commit demo-mongo demo-mongo-updated # create an image from the running container
However, docker does not by default (which seems obvious) retain the data of the newly created database (stored in /data/db) in the newly created image.
Is there any way to preserve the state of a container while creating an image from it?
You can create a volume that points to a host directory and mount it at the container's data directory (mongo stores its database in /data/db):
docker run -v <PATH_TO_THE_HOST_DIR>:/data/db -d --name demo-mongo -p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=mongoadmin \
-e MONGO_INITDB_ROOT_PASSWORD=secret \
-e MONGO_INITDB_DATABASE=testdb \
mongo
The next time you create a mongo container pointing at the same host path, the data will be available in the container.
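For example (a sketch with a hypothetical host path; the MONGO_INITDB_* variables only take effect on an empty data directory):
$ docker run -v /srv/mongo-data:/data/db -d --name demo-mongo -p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=mongoadmin \
-e MONGO_INITDB_ROOT_PASSWORD=secret \
-e MONGO_INITDB_DATABASE=testdb \
mongo
$ docker rm -f demo-mongo
$ docker run -v /srv/mongo-data:/data/db -d --name demo-mongo2 -p 27017:27017 mongo
The second container starts with the database files written by the first.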

PostgreSQL docker container not writing data to disk

I am having some difficulty with docker and the postgres image from the Docker Hub. I am developing an app and using the postgres docker to store my development data. I am using the following command to start my container:
sudo docker run --name some-postgresql \
-e POSTGRES_DB=AppDB \
-e POSTGRES_PASSWORD=App123! \
-e POSTGRES_USER=appuser \
-e PGDATA="/pgdata" \
--mount source=mydata,target=/home/myuser/pgdata \
-p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run "docker container prune" in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to run version 11 of PostgreSQL. Now, when I start my container and create data in it, the data is gone the next time I use it. I've been reading about volumes in the docker documentation, but I cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The \ are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container. If you want the files in /pgdata to survive container deletion, that location must be a docker volume. To make it one, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql \
-e POSTGRES_DB=AppDB \
-e POSTGRES_PASSWORD=App123! \
-e POSTGRES_USER=appuser \
--mount source=mydata,target=/var/lib/postgresql/data \
-p 5432:5432/tcp postgres
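To confirm the data is landing in the named volume, you can ask Docker where it keeps it (a sketch):
$ docker volume inspect mydata --format '{{ .Mountpoint }}'
This typically prints a path like /var/lib/docker/volumes/mydata/_data, which should fill up with PostgreSQL's files once the container starts.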

I can't access mounted volume of docker-postgres from host

I created my container like this:
docker run --name postgresql -itd --restart always \
--publish 5432:5432 \
--volume /my/local/host:/var/lib/postgresql \
sameersbn/postgresql:9.4-11
But when I run ls on the root directory, I see something like this:
drwxrwxr-x 3 messagebus messagebus 4,0K Ιαν 10 00:44 host/
In other words, I cannot access the /my/local/host directory. I have no idea about the messagebus user. Is that normal? And if so, how could I move the database from one machine to another in the future?
Try using a data container to hold your DB data. The pattern is described in the docs and is designed to promote clean separation between run-time and data.
$ docker create -v /var/lib/postgresql --name dbdata sameersbn/postgresql:9.4-11
$ docker run --name postgresql1 -itd --restart always \
--publish 5432:5432 \
--volumes-from dbdata \
sameersbn/postgresql:9.4-11
A separate data container makes backup and recovery simpler and more obvious:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/postgresql
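The matching restore (a sketch, assuming backup.tar was produced by the command above) untars into the volume of a fresh data container:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/backup.tar"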
I think the following post gives a good explanation of data containers:
https://medium.com/@ramangupta/why-docker-data-containers-are-good-589b3c6c749e#.zarkr5sxc