Docker Compose MariaDB creates my named volume but still uses a container-id volume

I can't seem to get Docker/MariaDB to use my named Docker volume. My named volume in the host's Docker volumes directory is empty, but there is a new volume with a container-id-style name right next to it that looks like it has all of the MariaDB data in it. The question is why?
My docker compose file:
version: "3.7"
#
# [Volumes]
#
volumes:
data-mysql:
#
# [Services]
#
services:
mariadb:
volumes:
- data-mysql:/var/lib/mysql
image: linuxserver/mariadb
container_name: mariadb
environment:
- PUID=1000
- GUID=1000
- MYSQL_ROOT_PASSWORD=<snipped>
- TZ=Etc/UTC
ports:
- 3306:3306
restart: unless-stopped
I've tried moving the volumes block before and after the services block with no difference. When I run docker-compose up, it does say it's creating the volume mariadb_data-mysql, but when I shut Docker down, there is nothing in the folder.
Thanks for any insight!
Nick

The data folder for the MariaDB image you are using (linuxserver/mariadb) is /config/databases/ and not /var/lib/mysql. Replace this in your docker-compose.yml and it will work.
Also, the order in your docker-compose.yml does not matter: docker-compose parses the whole file and orders everything alphabetically anyway before processing.
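For example, a minimal corrected service definition might look like this (a sketch based on the answer above; only the container-side path of the volume mapping changes, the rest of the original file stays the same):
version: "3.7"
volumes:
  data-mysql:
services:
  mariadb:
    image: linuxserver/mariadb
    volumes:
      # mount the named volume where the linuxserver image keeps its databases
      - data-mysql:/config/databases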

Related

Volumes in Docker are no longer created

I'm working with a simple docker-compose file (Node Alpine). I have three anonymous volumes; this already worked in the past, but now they are no longer created.
I deleted the folders on the host side (Windows) to see whether Docker would recreate them and put the files inside, but nothing happens.
version: "3.3"
services:
api:
#restart: always
build:
context: .
image: foo-foo-platform:1.1.0.0
#container_name: foo-foo-platform
env_file: docker-compose-debug.env
labels:
- "traefik.enable=false"
- "traefik.http.routers.api-gw.rule=PathPrefix(`/`)"
- "traefik.http.services.api-gw.loadbalancer.server.port=8090"
networks:
- internal
volumes:
- /mnt/logs:/mnt/logs
- /mnt/cc:/mnt/cc
ports:
- "8084:8084"
networks:
internal:
I have tried to prune volumes with docker volume prune; in any case, none of the volumes listed belongs to this project.
I also tried "docker-compose -f docker-compose-debug.yml up --build --force-recreate --renew-anon-volumes".
Note: the "/mnt/logs:/mnt/logs" notation does work on Windows.

Volume mount in docker for postgres data

I am trying to insert data into postgres using docker.
I have a folder in my project named data that contains the insert commands in a single file named init.sql.
I want to load the data from init.sql in the data folder into the tables inside the Docker container.
version: '3.1'
services:
  postgres:
    image: postgres:11.6-alpine
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PORT: 5432
    volumes:
      - ./tables:/docker-entrypoint-initdb.d/
      - ./data:/var/lib/postgresql/data
volumes:
  data: {}
I am trying this but I get the error:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
I think I am not using the correct approach; I am new to Docker Compose.
Is there any way to satisfy my use case?
This is caused by an improper usage of the volumes syntax for your named volume.
In order to mount a named volume you have to just use its name like this:
volumes:
  - data:/var/lib/postgresql/data
If your syntax begins with a . then it will be a bind mount from your host.
volumes:
  - ./data:/var/lib/postgresql/data
The above code mounts the host folder data relative to where your docker-compose.yml is located.
This docker-compose.yml should do what you expect.
version: '3.1'
services:
  postgres:
    image: postgres:11.6-alpine
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PORT: 5432
    volumes:
      - ./tables:/docker-entrypoint-initdb.d/
      - data:/var/lib/postgresql/data
volumes:
  data:
If for some reason your volume has already been created, with an empty database or no database at all, your first step should be running:
docker-compose down --volumes
From the documentation:
  -v, --volumes    Remove named volumes declared in the `volumes`
                   section of the Compose file and anonymous volumes
                   attached to containers.
From: https://docs.docker.com/compose/reference/down/
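A typical sequence when you need the init scripts to run again looks like this (a sketch, run from the directory containing the Compose file):
# remove the containers plus the named and anonymous volumes
docker-compose down --volumes

# recreate everything; the scripts in docker-entrypoint-initdb.d run again
# because the data volume starts out empty
docker-compose up -d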

Docker DB Migration/Deployment to DigitalOcean

Warning: I am fairly new to docker and cloud hosting, this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db, and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy, with every file from every library residing on my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine, other than the fact that my db image does not reference my database and is simply an empty db. I tried to migrate the db using a command I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy but I am having difficulties figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
Transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
Your strategy is sound.
Actually, you can take it a step further by automating the Droplet provisioning to, e.g., use a container-oriented OS and access your Compose file. But that's not this question ;-)
I think the fact that you're using Windows is not really relevant and probably makes little difference; it may require some answer tweaks but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways that the DB state could be persisted: in-container (not ideal), using volume mounts (good), or otherwise.
Each is "moveable", but it would help if you could add your Compose file to your question so that we can see which approach is being used.
In full disclosure, I'm not familiar with the approach that you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
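As an aside on the db:create error (an assumption on my part, since the post doesn't say which framework the web image uses): if the tutorial was Rails-based, those tasks usually need a task runner in front of them rather than being executables on $PATH, e.g.:
# hypothetical: only applies if the web image is a Rails app with rake available
docker-compose run --rm web rake db:create db:migrate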
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory onto this directory. Presumably that directory contains >=1 file that performs your intended initialization.
NB The volumes: data: section at the end of the Compose file appears redundant; you're actually using a host-mounted directory ./data, not this volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp and this provides 2 alternatives:
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too:
volumes:
  - /data:/docker-entrypoint-initdb.d
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping. You may remove:
volumes:
  - ./data:/docker-entrypoint-initdb.d
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;

create table treats (
  TreatID INT NOT NULL AUTO_INCREMENT,
  TreatName VARCHAR(255) NOT NULL,
  PRIMARY KEY (TreatID)
);

insert into treats (TreatName)
values
  ("Dried Salmon"),
  ("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the Adminer UI (:8080), log in (root|mypass), and browse the database frederik.

Using a docker volume with postgresql to verify it is saving on the hosts filesystem

I am trying to get docker volumes working.
I have defined a volume in my docker-compose.yml as follows:
version: "3"
services:
redis:
image: redis:alpine
ports:
- "6379:6379"
db:
image: postgres:9.4
#container_name: db
volumes:
- "db-data:/var/lib/postgresql/data"
volumes:
db-data:
Now my question is: when I do a docker-compose up, I don't see my data persisted on my local laptop (or server).
I just want to test/verify that my data is saved to the host's filesystem, so that if I stop and restart Docker it reads from the database files on the host.
That construct saves data in a named volume; if you look under /var/lib/docker/volumes as root you should be able to see it there (though mucking around in /var/lib/docker generally isn't advisable; also, Docker Compose prefixes the volume name with the project name to make it unique).
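For example, a quick way to verify this without digging through /var/lib/docker (a sketch; the volume name myapp_db-data assumes the project directory is called myapp):
# list the volumes Compose created
docker volume ls

# print where the named volume lives on the host (volume name is an assumption)
docker volume inspect --format '{{ .Mountpoint }}' myapp_db-data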
If you want the data to be saved in a host directory, change the volume mount to use an explicit relative or absolute path. A bind mount does not need a top-level volume declaration, so you can remove the volumes: block entirely. That would leave you with a docker-compose.yml that looks like:
version: "3"
services:
db:
image: postgres:9.4
volumes:
- "./db-data:/var/lib/postgresql/data"

Persisting database using docker volumes

I'm trying to persist postgres data in a docker container so that after a docker-compose down followed by docker-compose up -d you don't lose data from the previous session. I haven't been able to make it do much of anything: bringing the containers down and back up again routinely deletes the data.
Here's my current docker-compose.yml:
version: '2'
services:
  api:
    build: .
    ports:
      - '8245:8245'
    volumes:
      - .:/home/app/api
      - /home/app/api/node_modules
      - /home/app/api/public/src/bower_components
    links:
      - db
  db:
    build: ./database
    env_file: .env
    ports:
      - '8246:5432'
    volumes_from:
      - dbdata
  dbdata:
    image: "postgres:9.5.2"
    volumes:
      - /var/lib/postgresql/data
Help?
According to the Docker Compose documentation, when you write something like:
volumes:
  - /var/lib/postgresql/data
It creates a new anonymous Docker volume and maps it to /var/lib/postgresql/data inside the container.
Therefore, each time you run docker-compose down and docker-compose up, it creates a new volume. You can confirm the behavior with docker volume ls.
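One way to see this for yourself (a diagnostic sketch, not from the original answer): list the volumes before and after recreating the containers and watch a new anonymous volume, with a long hex name, appear each time.
docker volume ls
docker-compose down && docker-compose up -d
docker volume ls   # a new hex-named anonymous volume has appeared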
To avoid it, you have two options:
(A) Map host directory into container
You can map a directory of the host into the container using <HOST_PATH>:<CONTAINER_PATH>.
volumes:
  - /path/to/your/host/directory:/var/lib/postgresql/data
The postgresql data will be saved into /path/to/your/host/directory on the container host.
(B) Use an external volume
docker-compose has an option for external volumes.
When external is set to true, docker-compose won't create the volume itself.
Here's an example.
version: '2'
services:
  dbdata:
    image: postgres:9.5.2
    volumes:
      - mypostgresdb:/var/lib/postgresql/data
volumes:
  mypostgresdb:
    external: true
With external: true, docker-compose won't create the mypostgresdb volume, so you have to create it yourself using the following command:
docker volume create --name=mypostgresdb
The postgresql data will be saved into the Docker volume named mypostgresdb. Read the reference for more detail.
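To check persistence end to end (a sketch using the names above): create the external volume once, then cycle the stack and confirm the data survives, since docker-compose never removes an external volume.
# one-time setup: create the external volume
docker volume create --name=mypostgresdb

# data written by postgres survives this cycle
docker-compose up -d
docker-compose down
docker-compose up -d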