Where does Docker store my MongoDB data? - mongodb

I have created the following docker-compose.yml file:
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
      ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/
Then started it with
sudo docker-compose up
Then I connected to mongo and created a few documents. Then I restarted my compose setup. Surprisingly, the data persists. As far as I remember, Docker used to forget data if no volumes were configured. Has this changed?
Where is it keeping my data in this situation?

By default, MongoDB stores its data in /data/db inside the container, so as long as the container is not completely removed, you can resume the container and the data is still there. In this case, if you restart MongoDB with docker-compose stop/start, Compose does not remove your containers; it just stops them and starts them again, so your data persists. If instead you use docker-compose down/up, the containers are completely removed and recreated, and your data will be wiped.
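A quick way to convince yourself is to compare container IDs across the two workflows; a minimal sketch against the Compose file above:
# stop/start keeps the same containers, so the writable layer (and /data/db) survives
docker-compose stop
docker-compose start
docker ps   # same container IDs as before
# down/up removes and recreates the containers, discarding their writable layers
docker-compose down
docker-compose up -d
docker ps   # new container IDs, fresh state (unless a named volume or bind mount holds the data)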

Take a look at the directory /var/lib/docker; on Linux that is where Docker keeps its container layers and volumes on the host.
https://betterprogramming.pub/persistent-databases-using-dockers-volumes-and-mongodb-9ac284c25b39
You can list your Docker containers with
docker ps
and then get the mounted volumes with
docker inspect -f '{{ .Mounts }}' containerid
https://stackoverflow.com/a/30133768/5857581
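For context: the official mongo image declares VOLUME /data/db, so Docker creates an anonymous volume for it under /var/lib/docker/volumes even when you configure none. A sketch of how to see that (the container ID is a placeholder):
# find the container ID or name
docker ps
# print its mounts; anonymous volumes show up as long hashes whose data
# lives under /var/lib/docker/volumes/<hash>/_data on the host
docker inspect -f '{{ json .Mounts }}' <container_id>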

Related

Docker postgres persisting data without volumes (should not persist)

I am using a Docker container to run postgres for testing purposes; it should NOT persist data between different runs.
This is the Dockerfile:
FROM postgres:alpine
ENV POSTGRES_PASSWORD=1234
EXPOSE 5432
And this is my compose file:
version: "3.9"
services:
  web:
    build:
      context: ../../.
      dockerfile: ./services/web/Dockerfile
    ports:
      - "3000:3000"
  db:
    build: ../db
    ports:
      - "5438:5432"
  graphql:
    build:
      context: ../../.
      dockerfile: ./services/graphql/Dockerfile
    ports:
      - "4000:4000"
  indexer:
    build:
      context: ../../.
      dockerfile: ./services/indexer-ts/Dockerfile
    volumes:
      - ~/.aws/:/root/.aws:ro
However, I find that all data is persisted between sessions and I have no clue why. This is totally messing up my tests and is not expected to happen.
Even after running docker system prune, all data still persists, meaning that the container is probably using a volume somehow.
Does anyone know why this is happening and how to not persist the data?
When you stop your docker-compose environment by typing CTRL-C or similar, the next time you run docker-compose up it will restart the same container if the configuration hasn't changed. So even absent volumes, any data that was there previously will continue to be there.
To ensure you're starting with fresh containers, always run:
docker-compose down
If you have explicit volumes defined in your configuration, adding -v will also delete those volumes:
docker-compose down -v
(That's not necessary in this situation.)
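For a test suite that must always start empty, one possible pattern (a sketch; the script name is hypothetical) is to wrap each run in down/up:
# run-tests.sh (hypothetical): recreate the database from scratch for every run
docker-compose down -v    # remove containers and any named volumes
docker-compose up -d db   # bring the db service back up fresh
# ... run the tests against localhost:5438 ...
docker-compose down -v    # clean up afterwards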
Unrelated to your question, but why are you building a custom postgres image? You could just set things up in your docker-compose.yaml file:
db:
  image: postgres:alpine
  environment:
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
  ports:
    - "5438:5432"
(And then set POSTGRES_PASSWORD in your .env file.)
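For example, the .env file next to the compose file could contain just the value from your Dockerfile:
POSTGRES_PASSWORD=1234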
You are correct, it is using a volume: the postgres image itself declares an anonymous volume for its data directory, so one is created even though your compose file defines none.
You can use the -v switch to clean up:
docker-compose rm -v db
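If you want to see the volume in question: the official postgres image declares VOLUME /var/lib/postgresql/data, so an anonymous volume is created even without any volumes: configuration. A quick check (the container ID is a placeholder):
# print the container's mounts; the anonymous volume appears with a random hash name
docker inspect -f '{{ json .Mounts }}' <db_container_id>
# anonymous volumes left behind by removed containers show up as dangling
docker volume ls --filter dangling=true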

Docker with postgresql in flask web application (part 2)

I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the Flask application, I'm using this to connect SQLAlchemy to PostgreSQL:
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point. Note that -p cannot be combined with
# container: network mode, because the two containers share one network stack.
docker run --network container:<ID_or_name> employee-manager
This attaches the new container to the same network stack as your database container, so it can reach the database at the hostname db.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
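A sketch of that manual-network variant (the network name my-net is an assumption):
docker network create my-net
and in docker-compose.yml (file format 3.5+), point the default network at it:
networks:
  default:
    external: true
    name: my-net
After docker-compose up, other containers can join the same network with docker run --network my-net -p 5000:5000 employee-manager.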
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
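For instance, a sketch of that image:-plus-build: pattern (the registry name is an assumption):
backend:
  build:
    context: .
    dockerfile: Dockerfile
  image: registry.example.com/employee-manager:latest
  ports:
    - 8000:8000
With that in place, docker-compose build tags the result with the given name, and docker-compose push can upload it to the registry.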

Docker-Compose MariaDB creates my volume but still uses container id volume

I can't seem to get Docker/MariaDB to use my named Docker volume. The host Docker volumes directory is empty. But there is a new container-ID-style volume right next to my named volume that looks like it has all of the MariaDB parts in it. The question is why?
My docker compose file:
version: "3.7"
#
# [Volumes]
#
volumes:
  data-mysql:
#
# [Services]
#
services:
  mariadb:
    volumes:
      - data-mysql:/var/lib/mysql
    image: linuxserver/mariadb
    container_name: mariadb
    environment:
      - PUID=1000
      - GUID=1000
      - MYSQL_ROOT_PASSWORD=<snipped>
      - TZ=Etc/UTC
    ports:
      - 3306:3306
    restart: unless-stopped
I've tried moving the volumes section before and after the services section with no difference. When I do a docker-compose up, it does say it's creating the volume mariadb_data-mysql, but when I shut down Docker, there is nothing in the folder.
Thanks for any insight!
Nick
The data folder for the MariaDB image you are using (linuxserver/mariadb) is /config/databases/, not /var/lib/mysql. Replace this in your docker-compose.yml and it will work.
Also, the order of the top-level sections in your docker-compose.yml does not matter: docker-compose parses the whole file before processing it, so volumes: can come before or after services:.
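So, assuming the usual linuxserver convention of keeping all container state under a single /config volume, the fix is just the mount target; a sketch of the changed lines:
services:
  mariadb:
    volumes:
      - data-mysql:/config
Mapping the named volume at /config captures /config/databases along with the rest of the image's state.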

Docker DB Migration/Deployment to DigitalOcean

Warning: I am fairly new to Docker and cloud hosting; this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db, and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy; I don't want every file from every library residing in my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine other than the fact that my db image does not reference my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy but I am having difficulties figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=db
    restart: always
    ports:
      - 80:80
    volumes:
      - /sessions
    depends_on:
      - db
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: mypass
      MYSQL_DATABASE: mydb
    ports:
      - "3306:3306"
    volumes:
      - ./data:/docker-entrypoint-initdb.d
    restart: always
  web:
    depends_on:
      - db
    build: .
    ports:
      - "8080:8080"
    restart: always
volumes:
  data:
UPDATE #2:
Transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=db
    restart: always
    ports:
      - 80:80
    volumes:
      - /sessions
    depends_on:
      - db
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: mypass
      MYSQL_DATABASE: mydb
    ports:
      - "3306:3306"
    volumes:
      - /docker-entrypoint-initdb.d
    restart: always
  web:
    depends_on:
      - db
    build: .
    ports:
      - "8080:8080"
    restart: always
Your strategy is sound.
Actually, you can take it a step further by automating the Droplet provisioning, e.g. to use a container-oriented OS and fetch your Compose file. But that's not this question ;-)
I think the fact that you're using Windows is not relevant and probably makes little difference; it may require some tweaks to the answer but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways that the DB state could be persisted: in-container (not ideal), using volume mounts (good), or otherwise.
Each is "moveable", but it would help if you could add your Compose file to your question so that we may see which approach is being used.
In full disclosure, I'm not familiar with the approach that you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory onto this directory. Presumably that directory contains >=1 file that performs your intended initialization.
NB The section volumes: data: at the end of the Compose file appears redundant. You're actually using a host-mounted directory ./data, not this volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp, and there are 2 alternatives.
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too: volumes: - /data:/docker-entrypoint-initdb.d
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping. You may remove: volumes: - ./data:/docker-entrypoint-initdb.d
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: mypass
      MYSQL_DATABASE: mydb
    ports:
      - "3306:3306"
    volumes:
      - ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
    restart: always
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
  TreatID INT NOT NULL AUTO_INCREMENT,
  TreatName VARCHAR(255) NOT NULL,
  PRIMARY KEY (TreatID)
);
insert into treats (TreatName)
values
  ("Dried Salmon"),
  ("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the Adminer UI (:8080), log in (root|mypass), and browse the database frederik.
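You can verify the same thing from the command line instead of the Adminer UI; a sketch using the service name from the Compose file above:
# run the mysql client inside the db service's container
docker-compose exec db mysql -uroot -pmypass -e 'show databases; select * from frederik.treats;'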

Data does not persist to host volume with docker-compose.yml for MongoDB

I have Docker installed on my Mac along with the MongoDB image. When I start the container with the volume specified directly in the docker run command, the data gets persisted even after I stop the container. But the same does not work when I do it via docker-compose.yml. Can someone suggest what's wrong with the docker-compose.yml file?
When I do this, the data gets persisted:
docker run --name mongodb -v /Users/shrap7/projects/mongodb/data:/data/db --rm -d mongo
But when I try the same with docker-compose.yml - it does not persist data.
version: '3.4'
services:
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    volumes:
      - /Users/shrap7/projects/mongodb/data:/data/db
    ports:
      - 27017:27017
I tried with double quotes and tried a relative path, i.e. ./data:/data/db, but no luck. Thank you.
I'm using Windows and I had the same issue today. I found a gist that shows a MongoDB configuration for docker-compose with support for named volumes. I tried using named volumes and it worked for me.
So in your case, you could try the following configuration (with named volumes):
services:
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
    ports:
      - 27017:27017
volumes:
  mongodb:
  mongodb_config:
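One note on named volumes: the data no longer sits at a host path you choose; Docker manages it, and on Docker Desktop for Mac it lives inside the Linux VM rather than directly on the macOS filesystem. You can still find and inspect the volumes (the exact names depend on your Compose project name):
# Compose prefixes volume names with the project (directory) name
docker volume ls
# show the driver and mountpoint for a volume (name assumed)
docker volume inspect <project>_mongodb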