I'm running a PostgreSQL database via the official Docker postgres image.
I have populated the database with lots of data and would like to share it with others.
Is there a way to 'save' this database with all the data as a new image and publish it to a Docker registry, so it can be easily pulled and used?
You can use docker container commit (https://docs.docker.com/engine/reference/commandline/commit/) to create an image from a container, then publish that image to a Docker registry for use by others.
One caveat: docker commit does not capture data stored in volumes, and the official postgres image declares a VOLUME for /var/lib/postgresql/data, so for the data to be baked into the committed image it has to live at a non-volume path (for example by pointing PGDATA somewhere else).
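As a quick sketch of the commit-and-push flow (the container name `my-postgres` and the repository `myuser/postgres-seeded` are placeholders):

```shell
# Commit the running container to a new local image.
docker container commit my-postgres myuser/postgres-seeded:1.0

# Log in and push the image to a registry (Docker Hub here).
docker login
docker push myuser/postgres-seeded:1.0

# Others can then pull and run it.
docker pull myuser/postgres-seeded:1.0
docker run -d -p 5432:5432 myuser/postgres-seeded:1.0
```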
I have very strange trouble with Docker container communication. Here is what I did:
1) I configured a Docker bridge network.
2) I created a PostgreSQL image and started it in the bridge network (same as step 1).
3) I connected to the PostgreSQL container and filled it with data using DBeaver from the host.
4) I created and started a Spring Boot app container inside the bridge network. The Spring Boot app container connects correctly to the PostgreSQL container.
Here's the problem: I can retrieve all the data I want from the PostgreSQL container using DBeaver on the same host, or using my non-containerized Spring Boot app. But I can't retrieve data from the same PostgreSQL container using the Spring Boot app container (in the same bridge network described in step 1).
It looks like the Spring Boot app container can access only the tables' structure, but it can't access the data.
- I tried to modify postgresql.conf to allow access from all hosts.
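For reference, the steps above roughly correspond to the following commands (network, container names, and credentials are placeholders):

```shell
# 1) Create the bridge network.
docker network create my-bridge

# 2) Start PostgreSQL on that network, publishing 5432 so DBeaver
#    on the host can reach it.
docker run -d --name pg --network my-bridge \
  -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:12

# 4) Start the Spring Boot app on the same network. Inside it, the
#    datasource URL must use the container name, not localhost:
#    jdbc:postgresql://pg:5432/postgres
docker run -d --name app --network my-bridge my-springboot-app
```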
I solved the problem on my own, and here is the solution:
Even though I started the Docker postgres:12 image, the server I was actually connecting to was a PostgreSQL v15 installed on the host (listening on the default port). I noticed this by doing "Test Connection" from DBeaver. This is why I could see only the tables' structure but could not access the data. I solved the problem by uninstalling PostgreSQL server v15 (I was not using it), so the default version became v12.
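An alternative fix, if you want to keep the host install: publish the container on a different host port so the two servers don't collide on 5432 (port number and names below are illustrative):

```shell
# The host's PostgreSQL 15 already listens on 5432, so map the
# container's 5432 to a different host port, e.g. 5433.
docker run -d --name pg12 -e POSTGRES_PASSWORD=secret \
  -p 5433:5432 postgres:12

# DBeaver then connects to localhost:5433 to reach the container,
# while localhost:5432 still reaches the host's v15 server.
```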
I have MongoDB running as a Docker container. Now I want to create one more MongoDB container, but read-only, reading data from the existing one.
What should I do? I don't use Docker Swarm mode!
I want to have two MongoDB containers running: the existing one keeps running, and the new one is read-only and reads data from the existing container.
Thanks for reading!
Almost the same as Import osm data in Docker postgresql, BUT I want to load the OSM data into Postgres via osm2pgsql during the Docker build phase.
The reasons for this are:
I only want to load a fixed OSM file into my Postgres; this data will not change.
I want to reuse this docker image as many times as possible.
It is not possible to mount any volume with my current environment.
I know that this will make the docker image big but that is something I already took into consideration.
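A minimal sketch of such a build-time load, assuming a fixed data.osm.pbf next to the Dockerfile; the base image, paths, and flags are all illustrative, and PGDATA is moved off the declared volume path so the loaded cluster is baked into the image layers rather than discarded:

```dockerfile
FROM postgis/postgis:15-3.4

# osm2pgsql is not in the base image; install it at build time.
RUN apt-get update && apt-get install -y osm2pgsql \
    && rm -rf /var/lib/apt/lists/*

# Keep the cluster outside the declared VOLUME (/var/lib/postgresql/data)
# so the data persists in the image itself.
ENV PGDATA=/var/lib/postgresql/baked-data
ENV POSTGRES_HOST_AUTH_METHOD=trust

COPY data.osm.pbf /tmp/data.osm.pbf

# Start a temporary server during the build, load the OSM data, stop it.
USER postgres
RUN set -eux; \
    docker-entrypoint.sh postgres & \
    until pg_isready; do sleep 1; done; \
    osm2pgsql -d postgres -U postgres /tmp/data.osm.pbf; \
    pg_ctl stop -D "$PGDATA"
```

As the question anticipates, this produces a large image, since every layer containing the loaded cluster ships with it.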
I have a mongo-seed image and a mongo image set up.
How can I run the Mongo database with the seeded data in Docker?
Use docker link in order to link the two dependent containers (note that container links are a legacy Docker feature).
It is explained here; it might be helpful:
https://rominirani.com/docker-tutorial-series-part-8-linking-containers-69a4e5bf50fb
If you're using docker-compose, then here is the solution you're looking for:
https://gist.github.com/jschwarty/6f9907e2871d1ece5bde53d259c18d5f
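If you go the docker-compose route, a minimal sketch looks like this (the seed image name is a placeholder for whatever your mongo-seed image is called):

```yaml
version: "3"
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
  mongo-seed:
    image: my-mongo-seed   # your seed image; runs mongoimport/mongorestore
    depends_on:
      - mongodb
```

On the default compose network the seed container reaches the database by its service name (mongodb:27017), which replaces the legacy --link mechanism.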
A quick question about how Docker and Mongo coexist.
When I deploy my app to Docker Hub, does it include DB records?
When does Docker remove Mongo records: when I stop the container, or only when I remove it?
The answer is: it depends...
You could create an image with your records, but that would increase your image size, and if someone mounted a volume to the path /data/db they would lose your database. So I do not recommend uploading an image with a loaded database; instead, use a custom entrypoint script to initialize your database.
As for when the records are destroyed: that happens when you remove the container, but only if you did not mount a volume to the /data/db folder in the container; if you did, the database is persisted even if you remove the container.
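A quick sketch of that persistence behavior (volume and container names are placeholders):

```shell
# Run mongo with a named volume mounted at /data/db.
docker run -d --name mongo1 -v mongodata:/data/db mongo

# Stopping and restarting the container keeps the records either way.
docker stop mongo1 && docker start mongo1

# Removing the container deletes its writable layer, but the records
# live in the named volume and survive:
docker rm -f mongo1
docker run -d --name mongo2 -v mongodata:/data/db mongo  # same data
```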
You can see more info about how to use the image at: https://hub.docker.com/_/mongo/