Docker DB Migration/Deployment to DigitalOcean - docker-compose

Warning: I am fairly new to docker and cloud hosting, so this is likely a dumb question.
I have a local web app with three images associated with it: the app itself, the db, and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy; I don't want every file from every library residing in my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine, except that my db image does not reference my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy but I am having difficulties figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
I transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always

Your strategy is sound.
Actually, you can take it a step further by automating the Droplet provisioning, e.g. to use a container-oriented OS and access your Compose file. But that's not this question ;-)
I think the fact that you're using Windows is probably not relevant: the error just means the container tried to execute db:create as a program in its $PATH (that command style looks like a Rails rake idiom, so it only works if your web image ships it). It may require some answer tweaks, but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways the DB state could be persisted: in-container (not ideal), using volume mounts (good), or others.
Each is "moveable", but it would help if you could add your Compose file to your question so that we may see which approach is being used.
In full disclosure, I'm not familiar with the approach you referenced, but that doesn't mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory into this file. Presumably that directory contains >=1 file that performs your intended initialization.
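If you don't already have such a file, one way to produce it is to dump the running local db into ./data so that a fresh container replays it on first start. A sketch, with the service name, credentials, and db name taken from the Compose file above:
# -T avoids a pseudo-TTY mangling the dumped SQL
docker-compose exec -T db mysqldump -uroot -pmypass mydb > ./data/mydb.sql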
NB The volumes: data: section at the end of the Compose file appears redundant: you're actually using the host-mounted directory ./data, not this named volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp, and there are two alternatives:
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too: volumes: - /data:/docker-entrypoint-initdb.d
Or move the files directly to a /docker-entrypoint-initdb.d directory on the Droplet:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB The ./data mapping may then be removed, but the container still needs a bind mount from the new host path: volumes: - /docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d (a bare - /docker-entrypoint-initdb.d, as in your second Compose file, creates an empty anonymous volume and won't see the host's files).
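Either way, on first start of the db service you can check that the init files were picked up; the official mysql entrypoint logs each file it runs (exact wording may vary by image version):
docker-compose logs db | grep initdb.d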
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
  TreatID INT NOT NULL AUTO_INCREMENT,
  TreatName VARCHAR(255) NOT NULL,
  PRIMARY KEY (TreatID));
insert into treats (TreatName)
values
  ("Dried Salmon"),
  ("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the adminer UI (:8080), log in (root|mypass), and browse the database frederik.
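A CLI check works too, using the same credentials as the Compose file:
# query the table seeded by freddie.sql
docker-compose exec db mysql -uroot -pmypass -e "select * from frederik.treats;"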

Related

Problem with create database in docker-compose / postgres

This is my docker-compose.yml
version: '3.8'
services:
  db:
    container_name: "postgres"
    image: postgres:14
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB:database
    ports:
      - "127.0.0.1:5434:5432"
    volumes:
      - ./db:/var/lib/postgresql/data
Docker doesn't create the database for this container; what is wrong?
Assuming it is not just the typo here (- POSTGRES_DB:database instead of - POSTGRES_DB=database):
Did you ever start the Postgres container without specifying the user and database?
This can happen when the docker volume has already stored data from previous runs in which you didn't set the user and DB; later runs with different env variables will not change the users, since that is only done on the first initialization.
So you need to run
docker volume prune
There is already a closed issue on GitHub about that, see: https://github.com/docker-library/postgres/issues/453
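Note that if the data lives in a bind-mounted host directory, as in the Compose file above (./db), pruning volumes won't touch it; in that case clearing the directory itself is what forces re-initialization (destructive, so only on a disposable dev database):
docker-compose down
sudo rm -rf ./db    # remove the old cluster so the entrypoint re-runs initdb
docker-compose up -d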

Moved docker-compose.yml creates a new postgres database

I have set up a Postgres database with docker on ubuntu, with a docker-compose.yml just for that database in the folder ~/postgres, and I'd run docker-compose up -d from within the ~/postgres folder to start it.
Here is my docker-compose.yml:
version: '3'
services:
  database:
    image: "postgres"
    ports:
      - "0.0.0.0:5432:5432"
    env_file:
      - database.env
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
This database is set up and working perfectly, so I decided to set up my web application too and, because the docker-compose.yml file was inside the ~/postgres folder, I moved it out to ~/ so I could use it for the web app as well.
This is what the docker-compose.yml in ~/ looks like:
version: "3"
services:
database:
image: "postgres"
ports:
- "0.0.0.0:5432:5432"
env_file:
- postgres/database.env
volumes:
- database-data:/var/lib/postgresql/data/
webapp:
image: webapp/site
build:
context: ./retro-search-engine
dockerfile: Dockerfile
args:
buildno: 1
links:
- "database:db"
ports:
- "0.0.0.0:8000:80"
volumes:
- webapp:/var/www
environment:
db_host: db
db_username: xxxx
db_password: xxxx
db_database: xxxx
db_port: 5432
volumes:
database-data:
webapp:
As you can see, the database docker configuration is basically the same, the only thing that changes is the path to the database.env file since it's still in the previous folder.
So, the problem here is that when I run docker-compose up -d from ~/, everything starts normally but when I access the database, all of my tables are gone.
If I go back to ~/postgres and do docker-compose up -d in that folder (with the previous docker-compose.yml) and connect to the db, I can access my tables.
So what I think is happening is that it's either creating a new container, or the folder where the data is stored is somehow relative to the docker-compose.yml file and it's creating a new database because it can't find the old files.
I have no idea how to solve this issue. I have googled around and couldn't find anything, so I decided to ask here before I dump my whole db and restore it into a new container, which I don't want to do because it's a 16 GB database and it would take forever.
Does anyone have any idea how I can use my new docker-compose.yml with the data from my database?
Thanks in advance.
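For what it's worth, Compose prefixes named volumes with the project name, which defaults to the name of the directory holding the compose file, so running the same file from ~/ points the database service at a brand-new database-data volume. A quick sketch of how to see and work around this (volume names below are illustrative):
# one volume per Compose project, e.g. postgres_database-data vs <homedir>_database-data
docker volume ls
# run from ~/ but keep the original project name so the old volume is reused
docker-compose -p postgres up -d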
First:
Replace postgres/database.env with ./postgres/database.env.
Use docker compose up --build:
it will rebuild the image (useful if you made some changes to your Dockerfile). Try to avoid using -d when developing; you'll avoid having tons of containers running.
Second:
I suggest you follow the recommendation below. It will resolve your problem, and it will be cleaner if you want a CI/CD pipeline and to create more "autonomous" images and containers on demand.
rootfolder
|- docker-compose.yaml
|- postgres/
|   |-- All_other_files_for_the_postgres_docker_image
|- webapp/
|   |-- Dockerfile
|   |-- All_other_files_for_the_webapp_docker_image
Below you will find my "correction":
version: "3"
services:
database:
image: "postgres"
container_name: "my_postgres_container"
ports:
- "0.0.0.0:5432:5432"
env_file:
- ./postgres/database.env
volumes:
- database-data:/var/lib/postgresql/data/
webapp:
image: webapp/site
container_name: "my_webapp_container"
build:
context: ./retro-search-engine
dockerfile: Dockerfile
links:
- "database:db"
ports:
- "0.0.0.0:8000:80"
volumes:
- webapp:/var/www
environment:
db_host: db
db_username: xxxx
db_password: xxxx
db_database: xxxx
db_port: 5432
volumes:
database-data:
webapp:
If you want to use an existing postgres image that is already present (to see whether an image already exists, run: docker images | grep postgres), then you can reference it directly in your docker-compose:
image: "<your_image_name>"

Docker-compose postgresql integration

I'm new to docker and am trying to compose an application consisting of nginx and postgresql services. I'm following the tutorial here: http://www.patricksoftwareblog.com/how-to-use-docker-and-docker-compose-to-create-a-flask-application/
I have been successful up to adding postgresql, which is where I'm having difficulties and questions.
My docker-compose.yml:
version: '2'
services:
  web:
    restart: always
    build: ./home/admin/
    expose:
      - "8000"
  nginx:
    restart: always
    build: ./etc/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./var/lib/postgresql
    volumes_from:
      - data
    ports:
      - "5432:5432"
I have included his docker generator script under /var/lib/postgresql but keep facing ERROR: Dockerfile parse error line 1: unknown instruction: IMPORT when I run 'docker-compose build'.
If I leave in the 'data' section and remove the postgres section in my docker-compose.yml file, my containers seemingly run fine, but I'm unsure whether postgresql is properly running at all. I'm able to GET using curl, but I'm still unsure how to confirm the postgres specifics of a proper environment, and I would appreciate examples on this topic in particular.
I was also wondering whether running my docker-compose containers and then simply running a separate postgresql container could also work, provided the correct ports.
Thank you!
Check the content of your docker-compose.yml for:
- YAML format (see for instance codebeautify.org/yaml-validator)
- EOL or encoding issues
- multi-line instructions
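As for confirming that postgres is actually up, a couple of quick checks; the service name postgres comes from the Compose file above, and the default superuser is assumed:
# watch for "database system is ready to accept connections"
docker-compose logs postgres
# open psql inside the container and list the databases
docker-compose exec postgres psql -U postgres -c '\l'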

How to make persistent storage with docker-compose up-down-up?

I have a multi-container application that uses the postgres image in its docker-compose.yml file. The postgres container has a volume on the host machine for persistent storage.
When I run docker-compose up the first time, all is fine: postgres creates its db files in my host folder.
After that I need to shut the application down temporarily with docker-compose down whenever I change the code of the web container.
When I run docker-compose up a second time, postgres overwrites all the db files, but I need that data to remain unchanged. How can I solve this issue?
My docker-compose.yml
version: '2'
services:
  web:
    build: ./web
    command: python3 main.py
    volumes:
      - ./web:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
    links:
      - db:db
      - redis:redis
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD:0000
    volumes:
      - ./pgdb:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - ./redisdb:/data
I solved this problem. It probably occurred because I changed permissions on the pgdb directory with the host root user. By default I couldn't open pgdb on the host machine because its owner is the postgres user. I could be wrong, but after I stopped changing the permissions the problem was gone.
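A quick way to check the ownership the container expects, with ./pgdb being the bind mount from the Compose file above:
# the postgres user inside the official image is uid/gid 999; after first
# init the data directory on the host should show that numeric owner
ls -ldn ./pgdb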

Docker / Postgres: Mounting an existing database within a dockerized Postgresql

So I'm having a problem mounting an existing set of data for Docker Postgres that I cannot figure out for the life of me. Here's my docker compose file.
version: '2'
services:
  postgresql:
    image: postgres:9.5
    environment:
      - PGDATA=/data
    ports:
      - '5432:5432'
    volumes:
      - ~/.postgresql:/data
  web:
    build: .
    command: sbt/sbt run
    volumes:
      - .:/app
    ports:
      - '9001:9001'
    depends_on:
      - postgresql
Here's the error I see
postgresql_1 | FATAL: data directory "/data" has wrong ownership
postgresql_1 | HINT: The server must be started by the user that owns the data directory.
Does anyone have any clue how to fix it? Thank you
PS I am using Docker Machine through OSX if that makes a difference in this problem.
The error message is pretty clear. I think the container runs postgres with the user postgres, which has a uid/gid of 999 (see line 5 of https://github.com/docker-library/postgres/blob/3f8e9784438c8fe54f831c301a45f4d55f6fa453/9.5/Dockerfile). You need to chown your host data folder to a user with the same uid.
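For example, a one-liner under that assumption (path from the Compose file above, uid 999 per that Dockerfile):
# give the bind-mounted data directory to the uid the container runs as
sudo chown -R 999:999 ~/.postgresql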