Map data container volume to volume - mongodb

I am using the (not official, as mentioned by Usman) mongodb image (https://registry.hub.docker.com/u/dockerfile/mongodb/) which creates a volume at "/data/db"
Build the mongodb image:
docker build -t="dockerfile/mongodb" github.com/dockerfile/mongodb
Run the data container:
docker run -v /data/db --name databox ubuntu:latest true
Run the mongodb container with the data container attached (Mongo writes its data into the data container):
docker run -d -p 27017:27017 --volumes-from databox --name mongodb_shared_persistence dockerfile/mongodb
I tested it with:
docker run --volumes-from=databox busybox ls /data/db
...db files are created. So far so good.
But what if the data container has a volume at /mongodb/data and I want to map that to the /data/db volume of the mongodb container?
...like this:
docker run -d -p 27017:27017 -v <?data_container_volume?>:/data/db --name mongodb dockerfile/mongodb
is that even possible?

If you read the comments by shykes on Issue 111:
Volumes don't have top-level names. At no point does the user provide
a name, or is a name given to him. Volumes are identified by the path
at which they are mounted inside their container.
So I don't think there is any way to achieve this.
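Worth noting: that comment reflects Docker at the time. Later Docker releases added named volumes, which are essentially what is being asked for here (a volume with a top-level name that can be mounted at any path in any container). A sketch, assuming a reasonably recent Docker:

```shell
# create a named volume and mount it at mongo's data path
docker volume create mongodata
docker run -d -p 27017:27017 -v mongodata:/data/db --name mongodb dockerfile/mongodb

# the same named volume can be mounted at any path in another container
docker run --rm -v mongodata:/inspect busybox ls /inspect
```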

Related

Unable to name a Mongo database in Docker compose

Can anyone tell me where I am going wrong? All I am trying to do is name a Mongo database using Docker Compose.
I have a docker compose file that looks like this:
version: "3"
services:
  mongo-db:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
      - MONGO_INITDB_DATABASE=mydbname
    ports:
      - 27017:27017
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
I run docker-compose -f docker-compose.yml up -d --build and it runs. I then open Robo 3T and connect to my container, but every time I do, the database is called test and not mydbname. Any ideas? TIA
The environment variables are only used to create a new database if no database already exists. You map a volume to /data/db and that volume probably contains an existing database named 'test'.
Find the volume using docker volume ls. It's called something like <directory name>_mongo-db. Then delete it using docker volume rm <volume name>.
Now Docker will create a new, empty volume and Mongo will create a new database when you start the container. And it'll use the values from the environment variables.
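The full reset sequence might look like this (the volume name below is a placeholder; check docker volume ls for the real one):

```shell
docker-compose down                  # stop the stack
docker volume ls                     # find the volume, e.g. myproject_mongo-db
docker volume rm myproject_mongo-db  # delete the old data
docker-compose up -d                 # Mongo re-initializes using the env vars
```

Note that removing the volume deletes all existing data in it, so back up anything you need first.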

Create Postgres Docker Image with Database

How to Create Postgres Docker Image with Data?
I have this folder/file structure:
- initdb
- 01-createSchema.sql
- 02-createData.sql
- Dockerfile
The Dockerfile:
FROM postgres:13.5-bullseye
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD PASSWORD
ENV POSTGRES_DB mydatabase
COPY ./initdb/*.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
I can build my-database image:
docker build . -t me/my-database
Then start a container based on the image:
docker run --name my-db -p 5432:5432 -d me/my-database
When I connect to the database, I can find my tables with my data.
So far so good.
But this is not exactly what I want, because the database is built the first time I start the container (with the docker run command).
What I want is an image with the database already built, so that when I start the container, no further database creation (which takes a few minutes in my case) is needed.
Anything like this 'Dockerfile':
FROM postgres:13.5-bullseye
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD PASSWORD
ENV POSTGRES_DB mydatabase
COPY ./initdb/*.sql /docker-entrypoint-initdb.d/
## The tricky part, I could not figure out how to do:
BUILD DATABASE
REMOVE /docker-entrypoint-initdb.d/*.sql
##
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
How can I build my pre-build-database-image?
The proper way to persist data in Docker is to create an empty volume that stores the database files created by the container.
docker volume create pgdata
When running your image, mount the volume at the path where the container keeps its database data. By default, the official Postgres image uses /var/lib/postgresql/data.
docker run -d \
  --name db \
  -v pgdata:/var/lib/postgresql/data \
  me/my-database
When the pgdata volume is empty, the container's entrypoint script will initialize the database. The next time you run the image, the entrypoint script will see that the directory already contains data from the volume, and it won't attempt to re-initialize the database or run any of the scripts in /docker-entrypoint-initdb.d, so you do not need to remove that directory.
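To convince yourself that initialization only happens once, destroy the container and start a fresh one on the same volume; the data survives and the init scripts are skipped (container and volume names as above):

```shell
docker volume create pgdata
docker run -d --name db -v pgdata:/var/lib/postgresql/data me/my-database
# first run executes the /docker-entrypoint-initdb.d scripts

docker rm -f db
docker run -d --name db -v pgdata:/var/lib/postgresql/data me/my-database
# second run finds existing data in the volume and skips initialization
```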

How can I connect Odoo not running on Docker to a Postgres container running on Docker?

I am trying to connect Odoo to a Postgres database instance which is running in Docker, but having trouble figuring out how to connect them. I created my instance like so:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name mydb postgres:10
Only Postgres is running in Docker, not Odoo. How would I connect the Postgres running inside Docker to the outside Odoo?
Shortly:
You have to publish the port of your Docker container:
-p 5432:5432
Example:
docker run -d -p 5432:5432 -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name mydb postgres:10
Description
When you run a container with Docker, its ports are not published to the host by default, so Postgres is not accessible from outside the container. To make it accessible, you can:
Publish a specific port: docker run -d -p 5432:5432 ...
Use the host network: docker run --network=host ...
Bonus:
If you wish to run Odoo in a container in the future, you might need to create a Docker network (docker network create odooNetwork) and use it for your Postgres and Odoo instances:
docker run -d --network=odooNetwork ...
More details about docker network in the documentation
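On the Odoo side, a minimal sketch of the connection settings in odoo.conf, assuming Odoo runs directly on the host and the port mapping above (the values mirror the POSTGRES_* variables passed to the container):

```
[options]
db_host = localhost
db_port = 5432
db_user = odoo
db_password = odoo
```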

equivalent docker run command for working docker-compose (postgres)

I have a docker-compose file for Postgres that works as expected, and I'm able to access it from R; see the relevant content below. However, I also need an equivalent docker run command, but for some reason I cannot get this to work. As far as I can tell, the commands/setup are equivalent. Any suggestions?
postgres:
  image: postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    PGDATA: /var/lib/postgresql/data
  ports:
    - 5432:5432
  restart: always
  volumes:
    - ~/postgresql/data:/var/lib/postgresql/data
The docker run command I'm using is:
docker run -p 5432:5432 \
  --name postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e PGDATA=/var/lib/postgresql/data \
  -v ~/postgresql/data:/var/lib/postgresql/data \
  -d postgres
EDIT 1: In both setups I'm trying to connect from another Docker container/service. In the docker-compose setup, the different services are described in one and the same yml file.
EDIT 2: David's answer provided all the information I needed. Create a docker network and reference that network in each docker run call. For those interested in a shell script that uses this setup to connect postgres, pgadmin4, and a data science container with R and Python see the link below:
https://github.com/radiant-rstats/docker/blob/master/launch-rsm-msba-pg.sh
Docker Compose automatically creates a Docker network for you (one per Compose file). Inter-container DNS does not work on the default bridge network, but any named network will work. So you need to add that bit of setup:
docker network create some-name # default options are fine
docker run --net some-name --name postgres ...
# will be accessible as "postgres" from other containers on
# the "some-name" network
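A quick way to verify the setup is to run a throwaway client container on the same network and connect by container name (psql ships in the postgres image; the password here matches the compose file above):

```shell
docker network create some-name
docker run --net some-name --name postgres \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -d postgres

# resolve "postgres" by name from a second container on the network
docker run --net some-name --rm -e PGPASSWORD=postgres postgres \
  psql -h postgres -U postgres -c 'SELECT 1'
```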

Correct syntax to do mongodump of mongoDb docker instance?

I'm running an Ubuntu 16.04 LTS server with some Docker containers. One of these containers is a MongoDB container, where my data is stored.
Now I'm trying to make a backup with mongodump.
The problem is that MongoDB runs as a Docker container, while the backup should be stored outside of the container.
I think the syntax for this is something like this:
docker run \
  --rm \
  -it \
  --link DOCKER_CONTAINER_NAME:mongo_alias \
  -v /backup:/backup \
  mongo mongodump \
  --host mongo_alias \
  --out /backup/
But I'm not sure for the parameters I have to use...
This is what I get for my mongoDb container via docker ps:
7bee41bfa08a mongo:3.4 "docker-entrypoint..." 4 months ago Up 2 months 27017/tcp mongo_db
And this is my docker-compose file:
version: "3"
services:
  mongo_db:
    container_name: mongo_db
    image: 'mongo:3.4'
    restart: 'always'
    volumes:
      - '/opt/mongo/project/live:/data/db'
So it should look like this?
docker run \
  --rm \
  -it \
  --link mongo_db:mongo_alias \
  -v /backup:/backup \
  mongo mongodump \
  --host mongo_alias \
  --out /backup/
(Can mongo_alias be chosen freely? I don't understand -v /backup:/backup. And does --out /backup/ end up in the root of the server?)
Define the backup to run via compose as well. This will create the new container on the same network as the main mongo container. If you have any compose network definitions you will need to duplicate them in each compose file.
Create a second compose file for the backup command: docker-compose-backup.yml
version: "3"
services:
  mongo_db_backup:
    image: 'mongo:3.4'
    volumes:
      - '/opt/mongo/project/live_backup:/backup'
    command: |
      mongodump --host mongo_db --out /backup/
Then run the backup
docker-compose -f docker-compose-backup.yml run mongo_db_backup
You can also do this without docker-compose, directly from your host.
Backup all databases (mongodump runs inside the database container itself and connects to localhost, so no --host flag is needed):
docker exec -t your-db-container mongodump --out /backup/
Restore all databases
Move your backup folder to your host volume folder
docker exec -t your-db-container mongorestore /backup/
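Note that with docker exec the dump lands inside the container's filesystem; unless /backup is a mounted volume, copy it out with docker cp (container name and host path below are placeholders):

```shell
# dump inside the running container, then pull the files to the host
docker exec -t mongo_db mongodump --out /backup/
docker cp mongo_db:/backup /opt/mongo/backup
```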