Create a read-only MongoDB container that reads data from an existing MongoDB container

I have MongoDB running as a Docker container. Now I want to create one more MongoDB container, but read-only, reading its data from the existing one.
What should I do? I don't use Docker Swarm mode!
I want to have two MongoDB containers running: the existing one keeps running as it is, and the new one is read-only and reads data from the existing container.
Thanks for reading!

Related

Persisting a single, static, large Postgres database beyond removal of the db cluster?

I have an application which, for local development, has multiple Docker containers (organized under Docker Compose). One of those containers is a Postgres 10 instance, based on the official postgres:10 image. That instance has its data directory mounted as a Docker volume, which persists data across container runs. All fine so far.
As part of testing the creation and initialization of the postgres cluster, it is frequently the case that I need to remove the Docker volume that holds the data. (The official postgres image runs cluster init if-and-only-if the data directory is found to be empty at container start.) This is also fine.
However! I now have a situation where in order to test and use a third party Postgres extension, I need to load around 6GB of (entirely static) geocoding lookup data into a database on the cluster, from Postgres backup dump files. It's certainly possible to load the data from a local mount point at container start, and the resulting (very large) tables would persist across container restarts in the volume that holds the entire cluster.
Unfortunately, they won't survive the removal of the docker volume which, again, needs to happen with some frequency. I am looking for a way to speed up or avoid the rebuilding of the single database which holds the geocoding data.
Approaches I have been or currently am considering:
Using a separate Docker volume on the same container to create persistent storage for a separate Postgres tablespace that holds only the geocoder database. This appears to be unworkable because while I can definitely set it up, the official PG docs say that tablespaces and clusters are inextricably linked such that the loss of the rest of the cluster would render the additional tablespace unusable. I would love to be wrong about this, since it seems like the simplest solution.
Creating an entirely separate container running Postgres, which mounts a volume to hold a separate cluster containing only the geocoding data. Presumably I would then need to do something kludgy with foreign data wrappers (or some more arcane postgres admin trickery that I don't know of at this point) to make the data seamlessly accessible from the application code.
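Roughly what I have in mind for the FDW route (untested; the server, container, and credential names below are all made up):

    # wire the main cluster to the hypothetical "geocoder-db" container via postgres_fdw
    psql -h localhost -U app -d appdb <<'SQL'
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER geocoder_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'geocoder-db', port '5432', dbname 'geocoder');
    CREATE USER MAPPING FOR app
        SERVER geocoder_server
        OPTIONS (user 'geocoder', password 'secret');
    -- expose the remote tables under a local schema
    CREATE SCHEMA geocoder;
    IMPORT FOREIGN SCHEMA public
        FROM SERVER geocoder_server INTO geocoder;
    SQL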
So, my question: Does anyone know of a way to persist a single database from a dockerized Postgres cluster, without resorting to a dump and reload strategy?
If you want to speed it up, you could convert your database dump into a data directory: import your dump into a clean Postgres container, stop it, create a tarball of the data directory, and upload it somewhere. Then, whenever you need a new Postgres container, use an init script that stops the database, downloads and unpacks your tarball into the data directory, and starts the database again. This way you skip the whole restore process.
Note: the data tarball has to match the Postgres major version so the container has no problem starting from it.
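A rough sketch of that flow with the official postgres:10 image (the container, volume, and file names below are hypothetical):

    # 1) build the tarball once: restore into a throwaway container, then archive its data dir
    docker run -d --name pg-seed -e POSTGRES_PASSWORD=pw \
        -v pg-seed-data:/var/lib/postgresql/data postgres:10
    # wait until the server accepts connections, then load the dump
    docker exec pg-seed createdb -U postgres geocoder
    docker exec -i pg-seed pg_restore -U postgres -d geocoder < geocoder.dump
    docker stop pg-seed
    docker run --rm -v pg-seed-data:/var/lib/postgresql/data alpine \
        tar czf - -C /var/lib/postgresql/data . > pg10-geocoder.tgz

    # 2) later: seed a fresh volume from the tarball instead of restoring the dump
    docker volume create pg-data
    docker run --rm -v pg-data:/var/lib/postgresql/data -v "$PWD":/host alpine \
        tar xzf /host/pg10-geocoder.tgz -C /var/lib/postgresql/data
    docker run -d --name pg -v pg-data:/var/lib/postgresql/data postgres:10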
If you want to speed things up even more, create a custom Postgres image with the tarball and init script bundled, so that every time it starts it wipes the empty cluster and copies in your own.
You could even change the entrypoint to use your custom script to load the database data and then call docker-entrypoint.sh, so there is no need to delete a possibly-empty cluster first.
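A minimal sketch of such an image (the file names are made up; docker-entrypoint.sh is the stock script shipped in the official image):

    # Dockerfile
    FROM postgres:10
    COPY pg10-geocoder.tgz /seed/data.tgz
    COPY restore-entrypoint.sh /usr/local/bin/restore-entrypoint.sh
    RUN chmod +x /usr/local/bin/restore-entrypoint.sh
    ENTRYPOINT ["restore-entrypoint.sh"]
    CMD ["postgres"]

    # restore-entrypoint.sh
    #!/bin/bash
    set -e
    # unpack the bundled cluster only when the data directory is empty, so the
    # stock entrypoint skips initdb and simply starts on top of the seeded data
    if [ -z "$(ls -A "$PGDATA" 2>/dev/null)" ]; then
        tar xzf /seed/data.tgz -C "$PGDATA"
        chown -R postgres:postgres "$PGDATA"
    fi
    exec docker-entrypoint.sh "$@"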
This will only work if you are OK with replacing the whole cluster every time you want to run your tests; otherwise you are stuck with importing the database dump.

Share MongoDB across 2 Docker containers having different MongoDB versions

A fully-fledged Mongo v3.2 instance with data is already running in a container.
I need to create a mongo v3.6 container instance with the same data as v3.2.
I do not have space to clone the data on the server.
I tried a lot of stuff.
Can I point my v3.6 instance at the data of the v3.2 instance so that it is shared and I save space?
You can try this; I don't know if it would work (because of the different MongoDB versions):
You can create a sharded cluster and add your old DB as a shard.
I got the space cleared, took a dump, and made a new instance of Mongo from it.
It works like a charm now.
Sharing the volume was corrupting the data.
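For reference, that dump-and-restore route looks roughly like this (the container and volume names are made up):

    # stream a dump out of the old container and restore it into a fresh v3.6 one
    docker exec mongo32 mongodump --archive > mongo32.archive
    docker run -d --name mongo36 -v mongo36-data:/data/db mongo:3.6
    docker exec -i mongo36 mongorestore --archive < mongo32.archive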

Docker postgres: how to share a database

I'm running a PostgreSQL db via docker postgres.
I have populated the db with lots of data and would like to share it with others.
Is there a way to 'save' this database with all the data as a new image and publish it to a Docker registry so it can be easily pulled and used?
You can use docker container commit (https://docs.docker.com/engine/reference/commandline/commit/) to create an image from a container.
Then you can publish that image to a docker registry for use by others.
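For example (the container, image, and registry names are made up). One caveat: docker commit captures the container's filesystem but not data stored in volumes, and the official postgres image keeps its data directory on a volume, so you may need to move the data to a non-volume path (or bake it into an image with a Dockerfile) for the commit to pick it up:

    # snapshot the container as an image, then publish it
    docker container commit my-postgres myuser/my-postgres-data:1.0
    docker push myuser/my-postgres-data:1.0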

How to add seeded data into MongoDB using Docker?

I have a mongo-seed image and a mongo image set up.
How can I run the Mongo database with the seeded data in Docker?
Use docker link in order to link the two dependent containers.
It is explained here; it might be helpful:
https://rominirani.com/docker-tutorial-series-part-8-linking-containers-69a4e5bf50fb
If you're using docker-compose, then here is the solution you're looking for:
https://gist.github.com/jschwarty/6f9907e2871d1ece5bde53d259c18d5f
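Note that legacy container links are deprecated in favor of user-defined networks. A minimal sketch of seeding over a shared network (the database, collection, and file names are made up):

    # put both containers on one network, then run the seed job against the db by name
    docker network create mongo-net
    docker run -d --name mongodb --network mongo-net mongo
    docker run --rm --network mongo-net -v "$PWD/seed":/seed mongo \
        mongoimport --host mongodb --db app --collection users \
        --file /seed/users.json --jsonArray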

How MongoDB works with Docker

A quick question about how Docker and Mongo coexist.
When I deploy my app to Docker Hub, does it include DB records?
When does Docker remove Mongo records: when I stop the container, or only when I remove it?
The answer is: it depends...
You could create an image with your records, but that would increase your image size, and if someone mounted a volume at the path /data/db they would lose your database. So I do not recommend uploading an image with a loaded database; instead, use a custom entrypoint script to init your database.
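A minimal sketch of that approach: the official mongo image runs any *.js and *.sh files found in /docker-entrypoint-initdb.d the first time it starts with an empty data directory (the seed file here is hypothetical):

    # seed.js could contain e.g.: db.getSiblingDB("app").users.insertOne({name: "demo"});
    docker run -d --name mongo \
        -v "$PWD/seed.js":/docker-entrypoint-initdb.d/seed.js:ro \
        mongo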
As for when the records are destroyed: it happens when you remove the container, but only if you did not mount a volume at /data/db in the container; with such a volume, the database is persisted even if you remove the container.
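For example, with a named volume the records survive removal of the container (the names are illustrative):

    docker run -d --name mongo -v mongo-data:/data/db mongo
    # ...insert some records, then throw the container away...
    docker rm -f mongo
    # a new container on the same volume sees the same data
    docker run -d --name mongo -v mongo-data:/data/db mongo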
You can see more info about how to use the image at: https://hub.docker.com/_/mongo/