Postgres Database not being Created by Docker-Compose.yml file - postgresql

This error is ONLY occurring on one of my 4 devices, and I am trying to debug it. The device is a MacBook Pro with an Intel processor.
The database container (db service) spins up but doesn't create the database.
version: "3.7"
services:
db:
networks:
new:
aliases:
- database
restart: always
container_name: db
image: postgres:latest
ports:
- 5433:5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=user
- POSTGRES_DB=core
# - PGDATA=/tmp
volumes:
- ./pgdata:/var/lib/postgresql/data
migrate:
image: migrate/migrate
depends_on:
- db
networks:
- new
volumes:
- ./db/migrations:/migrations
command: ["-path", "/migrations", "-database", "postgres://user:password#database:5432/core?sslmode=disable", "up"]
links:
- db
web:
networks:
- new
build: .
ports:
- "8080:8080"
volumes:
- .:/server
links:
- db
depends_on:
- db
- redis
environment:
PORT: 8080
CONNECTION_STRING_DEV: db://user:password#db:5433/db
DSN: "db://user:password#db:5433/core"
redis:
networks:
- new
image: "redis"
ports:
- "6379:6379"
networks:
new:
The container stops at 2022-01-19 15:37:02.916 UTC [49] LOG: database system is ready to accept connections and never actually executes "CREATE DATABASE".
Because the database isn't created, my connected Go API isn't functioning properly. The docker-compose should create the database "core", spin up the redis instance, and then spin up the web service. Afterwards, I typically run the migrate container, which applies my database migrations. All of my other devices (macOS, Windows, and Linux) work properly and bring up the database when docker-compose up web is run.

Here is the warning from the postgres Docker image page:
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup. One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue on with your scripts.
So one likely reason is that the host you are using already has something in ./pgdata, in which case initialization (including creating the "core" database) is skipped entirely.
They also have fairly detailed documentation on how you can extend the image or run something on startup - you could, for example, clean everything up before the first startup.
https://hub.docker.com/_/postgres
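A quick way to check this on the affected MacBook is to look inside the host directory mounted at /var/lib/postgresql/data and, only if that local data is disposable, wipe it so the entrypoint runs initdb again. This is a rough sketch assuming the compose file above; rm -rf deletes any existing data:

docker-compose down
ls -la ./pgdata          # a PG_VERSION file here means the cluster was already initialized
rm -rf ./pgdata          # destructive: only if the local data is disposable
docker-compose up -d db
docker-compose exec db psql -U user -l    # "core" should now appear in the list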

Related

Docker: Cannot run multiple services by docker-compose

I set up Docker Compose for my project with 2 services: Spring Boot and PostgreSQL. I created the Dockerfile and docker-compose.yml as below:
Dockerfile :
FROM openjdk:8-jdk-alpine
MAINTAINER linhan.com
COPY target/LinhAn-0.0.1-SNAPSHOT.jar linhan-server-1.0.0.jar
ENTRYPOINT ["java","-jar","/linhan-server-1.0.0.jar"]
docker-compose.yml:
version: '2'
services:
  spring_boot:
    image: 'linhan'
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      - SPRING_DATASOURCE_URL=jdbc:jdbc:postgresql://localhost:5432/test_db
      - SPRING_DATASOURCE_USERNAME=user
      - SPRING_DATASOURCE_PASSWORD=123456
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  postgres:
    image: 'postgres:13.1-alpine'
    container_name: db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
Then, when I run docker-compose up in the terminal, only postgres starts; the Spring Boot service still does not.
I searched Google for a solution but had no luck. Please help me, thanks a lot!
I think you need to change the SPRING_DATASOURCE_URL to reference your service name instead of localhost (and drop the duplicated jdbc: prefix). The service name is resolved automatically to your service, since all services are attached to the same default network in docker-compose.
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/test_db
Also, for clarity I would suggest you add the port to your docker-compose postgres service, so it is clear which port is being used, even if it is the default:
postgres:
  image: 'postgres:13.1-alpine'
  container_name: db
  ports:
    - "5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=123456
Another suggestion would be to use a healthcheck to wait until your database service is actually available, instead of a plain depends_on. The short form marks the dependency as fulfilled as soon as the container is running, regardless of whether the database is ready.
Either that, or you can add application logic to retry database connection in case of failure.
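A minimal sketch of the healthcheck option mentioned above. Note that the long-form depends_on with condition: service_healthy needs compose file format 2.1+ (and was dropped again in the 3.x formats), and the pg_isready arguments here are illustrative:

version: '2.1'
services:
  spring_boot:
    image: 'linhan'
    build: .
    depends_on:
      postgres:
        condition: service_healthy
    # ... rest as before
  postgres:
    image: 'postgres:13.1-alpine'
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      timeout: 5s
      retries: 5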

Access Postgres database remotely that's hosted on Azure in docker container with webapi

I am new to Azure cloud services so excuse me if this is a dumb question.
I have a docker-compose file with a .Net core webapi and postgres database. I have it running on Azure as a web-app and its working (I can see when I query the API that there's data in the database). However I would like to get access to the database remotely so that I can inspect and see the data in the database via pgAdmin or something similar.
I did bind a port to my pgAdmin site in my docker-compose, but it does not seem like that port is open. I've read somewhere that only ports 80 and 443 can be exposed from Azure web apps when using multi-image containers. (This docker-compose works 100% locally and I can access the pgAdmin site and see the database with all its tables.)
So my question is, how do I run my web-api with my postgres database on azure and have visibility to my database?
Docker-compose file:
version: '3.8'
services:
  web:
    container_name: 'bootcampapi'
    image: 'myimage'
    build:
      context: .
      dockerfile: backend.dockerfile
    restart: always
    ports:
      - 8080:80
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - bootcampbackend-network
  postgres:
    container_name: 'postgres'
    restart: always
    image: 'postgres:latest'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
      - POSTGRES_DB=database-name
      - PGDATA=database-data
    networks:
      - bootcampbackend-network
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - bootcampbackend-network
    volumes:
      - database-other:/var/lib/pgadmin/
networks:
  bootcampbackend-network:
    driver: bridge
As you have found, App Service only listens on one port. One way around that is to use a reverse proxy like Nginx to route the traffic to both of your containers.
BTW, build, depends_on and networks are unsupported in App Service multi-container setups. See the docs.
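As a rough sketch of the reverse-proxy idea (not an Azure-specific recipe), the compose file could put an Nginx container in front of the others. The nginx.conf referenced here is assumed to exist and to proxy / to the API and /pgadmin/ to pgAdmin; the names and routes are illustrative:

  nginx:
    image: nginx:alpine
    restart: always
    ports:
      - 80:80                                    # the single published port
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro    # assumed file: proxies / -> web:80 and /pgadmin/ -> pgadmin:80
  web:
    # as before, but without publishing 8080:80 - Nginx reaches it on the compose network
  pgadmin:
    # as before, but without publishing 15433:80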

Docker DB Migration/Deployment to DigitalOcean

Warning: I am fairly new to docker and cloud hosting, this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db, and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy, with every file from every library residing on my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine, other than the fact that my db image does not contain my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But got the following error, I assume this is because my dev machine is running Windows 10 not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy but I am having difficulties figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
Transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
Your strategy is sound.
Actually, you can take it a step further by automating the Droplet provisioning to e.g. use a container-oriented OS and access your Compose file. But that's not this question ;-)
I don't think it's relevant that you're using Windows; it may require some tweaks to the answer but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways the DB state could be persisted: in-container (not ideal), using volume mounts (good), or otherwise.
Each is "moveable", but it would help if you could add your Compose file to your question so that we may see which approach is being used.
In full disclosure, I'm not familiar with the approach that you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory at this path. Presumably that directory contains one or more files that perform your intended initialization.
NB The section volumes: data: at the end of the Compose file appears redundant. You're actually using a host-mounted directory ./data not this volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp and this provides 2 alternatives:
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too: volumes: - /data:/docker-entrypoint-initdb.d
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping. You may remove: volumes: - ./data:/docker-entrypoint-initdb.d
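One caveat either way (a sketch, assuming the Compose file above): the MySQL entrypoint only runs the init scripts against an empty data directory, so after copying the files the db container needs to be recreated with a fresh /var/lib/mysql:

docker-compose down -v     # -v also removes the image's anonymous /var/lib/mysql volume
docker-compose up -d db
docker-compose logs db     # should show the /docker-entrypoint-initdb.d scripts being run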
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
TreatID INT NOT NULL AUTO_INCREMENT,
TreatName VARCHAR(255) NOT NULL,
PRIMARY KEY (TreatId));
insert into treats (TreatName)
values
("Dried Salmon"),
("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the Adminer UI (:8080), log in (root|mypass) and browse the database frederik.
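If you'd rather verify from the command line than through Adminer, something like this (assuming the service name and credentials above) should show the initialized data:

docker-compose exec db mysql -uroot -pmypass -e "show databases; select * from frederik.treats;"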

How can I link MongoDB with other services in docker-compose?

I've got a problem.
I made a docker-compose file that runs Mongo and Node.
The problem is that I can't reach Mongo from the container, so I cannot start my Node server.
Here is my docker-compose:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
But when I start Mongo outside the container, my Node app can reach it; I don't know why...
Any idea?
Thanks!
Don't define ports in the DB service, but note that afterwards only the application will be able to access the DB. Most probably it will work then. If you still want to access it from your PC, then you should define a network. Try this:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
And for creating a network:
version: '3'
networks:
  back-tier:
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database
All services in docker-compose are attached to the network that docker-compose creates, and can be addressed by their service names from other services. In your case the service names are database and backend, so for instance the database can be reached from the backend with something like tcp://database:27017. You don't need to link them anymore.
https://runnable.com/docker/docker-compose-networking
Be aware depends_on only waits until the process has been started and does not wait for the process to be ready to accept connections.
https://docs.docker.com/compose/compose-file/#depends_on
https://docs.docker.com/compose/startup-order
The port mappings are only necessary if you want to make a service accessible from the local machine. In your example the backend service is accessible via localhost:8080.
If you want an external container to access a docker-compose service, then localhost:8080 won't work, because localhost inside a container isn't the same localhost as on the machine where the Docker containers are running. You can manually create a Docker network and connect both the external container and the docker-compose services to it. See the docker-compose networking link above and take a look at the section on pre-existing networks.
Does that help you?
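For the pre-existing network case, a minimal sketch (the network name, extra container name, and image here are made up for illustration):

docker network create shared-net                        # a network created outside of compose
docker network connect shared-net dashboard_backend     # attach the compose-managed container to it
docker run -d --name other-app --network shared-net some-image
# inside other-app, the backend is now reachable as http://dashboard_backend:8080

Alternatively, declare the network in the compose file with external: true so compose attaches the services to it automatically.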

How to make persistent storage with docker-compose up-down-up?

I have a multi-container application that uses the postgres image in its docker-compose.yml file. The postgres container has a volume on the host machine for persistent storage.
When I run docker-compose up the first time, all is fine: postgres creates the db files in my host folder.
After that I need to shut down the application temporarily with docker-compose down whenever I change the code of the web container.
When I run docker-compose up a second time, postgres overwrites all the db files, but I need that data to remain unchanged. How can I solve this issue?
My docker-compose.yml
version: '2'
services:
  web:
    build: ./web
    command: python3 main.py
    volumes:
      - ./web:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
    links:
      - db:db
      - redis:redis
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD:0000
    volumes:
      - ./pgdb:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - ./redisdb:/data
I solved this problem. It probably occurred because I changed the permissions of the pgdb directory with the host root user. By default I couldn't open pgdb on the host machine because its owner is the postgres user. I could be wrong, but after I stopped changing the permissions the problem went away.
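In case someone else hits the same symptom, a quick check (a sketch; in the official postgres image the data files are typically owned by uid 999, the in-container postgres user):

ls -ldn ./pgdb          # shows the numeric owner of the data directory on the host
sudo ls ./pgdb          # read the files as root instead of changing their ownership
# only if ownership was already changed and postgres now refuses to start (uid 999 assumed):
sudo chown -R 999:999 ./pgdb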