psycopg2.OperationalError: FATAL: the database system is starting up (Docker + Odoo + PostgreSQL)

I have this docker-compose file:
version: '3.7'
services:
  db:
    image: "postgres:9.6"
    container_name: postgres-container
    ports: ["6543:5432"]
    environment:
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=odoo
    restart: always
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  odoo:
    #build: ./odoo-container
    image: odoo-image
    container_name: odoo-container
    ports: ["8069:8069"]
    tty: true
    command: opt/odoo/odoo-bin -c opt/odoo.conf -d teste
    depends_on:
      - db
The problem is that when I start docker-compose, the db service comes up, but when Docker starts the odoo service I get this error:
psycopg2.OperationalError: FATAL: the database system is starting up
When I restart the odoo container, it works.

I added a restart policy, and that works:
  odoo:
    #build: ./odoo-container
    image: odoo-image
    container_name: odoo-container
    ports: ["8069:8069"]
    command: opt/odoo/odoo-bin -c opt/odoo.conf -d teste
    depends_on:
      - db
    restart: always
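restart: always works here, but it papers over a race: depends_on only waits for the db container to be created and started, not for Postgres to finish its startup/recovery phase. A cleaner fix is to gate odoo on a health check. A minimal sketch, assuming a Compose version that supports depends_on conditions (the Compose Specification; some 3.x file formats dropped them) and using pg_isready, which ships in the postgres image:

services:
  db:
    image: "postgres:9.6"
    # ... rest as above ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U odoo"]
      interval: 5s
      timeout: 5s
      retries: 10
  odoo:
    image: odoo-image
    # ... rest as above ...
    depends_on:
      db:
        condition: service_healthy

With this, Compose only starts odoo once Postgres actually accepts connections, so no restart loop is needed.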

Related

Unable to access postgres in docker from web app in another container

I have a sample app I'm using docker-compose to run locally on my machine. The web app is in one container and the db (postgres) in another.
I am having a connection issue that I can't work through.
docker-compose
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    restart: always
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5433'
      DB_HOST: 'db'
    ports:
      - '8080:8080'
    depends_on:
      - 'db'
volumes:
  postgres-db:
Dockerfile
FROM golang:latest AS build
WORKDIR /scratch
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/frontend ./...

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /go/bin/
COPY --from=build /bin/frontend /go/bin/frontend
ENTRYPOINT ["/go/bin/frontend"]
Both containers are running, and I'm able to log into the running postgres container and postgres is running.
When I try to run an update from the UI I get a 500 error, and it does not seem like the app container can communicate with the db container. I'm not sure what I'm missing.
client side error when trying to make a call to update date:
encountered err: failed to begin transaction: failed to connect to `host=db user=postgres database=postgres`: dial error (dial tcp 172.29.0.2:5433: connect: connection refused)
docker ps yields:
$ docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS          PORTS                                       NAMES
19eeed869434   sample_app   "/go/bin/frontend"       48 minutes ago   Up 48 minutes   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   sample_app_1
84804f00c751   postgres     "docker-entrypoint.s…"   48 minutes ago   Up 48 minutes   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   sample_app_db_1
As stated in https://docs.docker.com/network/bridge/, you need to put both services into a user-defined bridge network for them to refer to each other by their container names. Here is how to do it inside docker-compose.yml:
Define a custom bridge network:
networks:
  some-name:
    driver: bridge
Put both services into that network:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - some-name
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    restart: always
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5433'
      DB_HOST: 'db'
    ports:
      - '8080:8080'
    depends_on:
      - 'db'
    networks:
      - some-name
Force specific container names, especially for the service the other one refers to; otherwise docker-compose derives the container name from the service name by adding a prefix and suffix, like sample_app_db_1:
services:
  db:
    container_name: db
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - some-name
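Separately, the error message itself hints at a port mismatch: dial tcp 172.29.0.2:5433 is the db container's internal address, where Postgres listens on 5432; the '5432:5432' mapping in ports only publishes to the host. If that reading is right, correcting the environment variable may be all that's needed. A sketch showing just the changed service:

  app:
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5432' # the container port; host-published ports don't apply inside the compose network
      DB_HOST: 'db'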

Docker file wait for postgres container

Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND ${DB_HOST}
I think the problem is that my postgres is not yet started when my project starts, and I can't work out how to start my container only after postgres is ready. I read something about dockerize, but I can't see how to apply it.
My Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
My docker-compose:
version: '3.7'
services:
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - ci-postgres
    networks:
      - ci-network
  ci-postgres:
    image: postgres:12
    container_name: ci-postgres
    ports:
      - '${DB_PORT}:5432'
    environment:
      - ALLOW_EMPTY_PASSWORD=no
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - ci-postgres-data:/data
    networks:
      - ci-network
volumes:
  ci-postgres-data:
networks:
  ci-network:
    driver: bridge
And this is my .env:
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
You can reference the docker-compose.yml below, in which depends_on, healthcheck and links are added, since the web service depends on the db service.
Reference:
Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
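If your Compose version doesn't support depends_on conditions, another common pattern is to wrap the app command in a wait loop. A minimal sketch against the ci-api service above, assuming busybox nc is available in node:lts-alpine:

  ci-api:
    build: .
    container_name: ci-api
    # poll until ci-postgres accepts TCP connections, then start the app
    command: sh -c "until nc -z ci-postgres 5432; do echo waiting for postgres; sleep 1; done; yarn dev"
    depends_on:
      - ci-postgres

Note also that your error prints the literal string ${DB_HOST}, which suggests the variable was never substituted; it's worth double-checking that the app actually reads DB_HOST from the environment.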

I cannot connect from adminer to postgresql

I'm trying to bring up postgresql and adminer via docker containers, but from adminer I cannot log in to postgresql with the user and password I wrote.
SQLSTATE[08006] [7] FATAL: password authentication failed for user "root"
I have tried everything.
version: '3'
services:
  web:
    build: .
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ./blog:/var/www/html/
    ports:
      - 8080:80
    working_dir: /var/www/html/
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: kisphp
      POSTGRES_USER: root
      POSTGRES_DB: kisphp
    ports:
      - "5432:5432"
    volumes:
      - ./postgres:/var/lib/postgresql/data
  adminer:
    image: adminer
    restart: always
    ports:
      - "6080:8080"
This docker-compose configuration works well. Keep in mind that POSTGRES_USER and POSTGRES_PASSWORD are only applied when the data directory is initialized for the first time; an existing ./postgres folder created with different credentials keeps the old ones. Try recreating it from scratch:
Delete the ./postgres folder
docker-compose stop
docker-compose down
docker-compose up -d
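To verify the credentials took effect, you can connect from inside the container first (a quick check, assuming the compose file above):

docker-compose exec db psql -U root -d kisphp

Then, in Adminer, choose PostgreSQL as the system and use db (the service name) as the server; both containers sit on the same compose network, so the service name resolves.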

Connecting pgadmin to postgres in docker

I have a docker-compose file with services for python, nginx, postgres and pgadmin:
services:
  postgres:
    image: postgres:9.6
    env_file: .env
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5431:5431"
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: pwdpwd
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "5050:80"
  backend:
    build:
      context: ./foobar # This refs a Dockerfile with Python and Django requirements
    command: ["/wait-for-it.sh", "postgres:5431", "--", "/gunicorn.sh"]
    volumes:
      - staticfiles_root:/foobar/static
    depends_on:
      - postgres
  nginx:
    build:
      context: ./foobar/docker/nginx
    volumes:
      - staticfiles_root:/foobar/static
    depends_on:
      - backend
    ports:
      - "0.0.0.0:80:80"
volumes:
  postgres_data:
  staticfiles_root:
  pgadmin:
When I run docker-compose up and visit localhost:5050, I see the pgadmin interface. When I try to create a new server there, with localhost or 0.0.0.0 as host name and 5431 as port, I get an error "Could not connect to server". If I remove these and instead enter postgres in the "Service" field, I get the error "definition of service "postgres" not found". How can I connect to the database with pgadmin?
The docker container name changes when you run docker-compose: the folder name is prefixed to keep container names unique. You could force the name of the container with the container_name property:
version: "3"
services:
# postgres database
postgres:
image: postgres:12.3
container_name: postgres
environment:
- POSTGRES_DB=admin
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=admin
- POSTGRES_HOST_AUTH_METHOD=trust # allow all connections without a password. This is *not* recommended for prod
volumes:
- database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
ports:
- "5432:5432"
# pgadmin for managing postgis db (runs at localhost:5050)
# To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
pgadmin:
image: dpage/pgadmin4
container_name: pgadmin
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
- PGADMIN_LISTEN_PORT=5050
ports:
- "5050:5050"
volumes:
database-data:
Another option is to connect the postgres container to localhost with
network_mode: host
but you lose the nice network isolation from docker that way.
Be careful: the default postgres port is 5432, not 5431. You should update the port mapping for the postgres service in your compose file; the wrong port might be the reason for the issues you reported. Change the port mapping and then try to connect to postgres:5432. localhost:5432 will not work.
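If you do want to keep 5431 on the host side, only the host half of the mapping should change; the container side must stay 5432, since that's where Postgres listens. A sketch of just the affected lines:

  postgres:
    image: postgres:9.6
    ports:
      - "5431:5432" # host port 5431 -> container port 5432

pgadmin would then connect to postgres:5432 (service name and container port), while tools on the host use localhost:5431.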

How to connect to localhost postgres database from docker container?

I have configured my project for docker. I have a database that was used in the non-docker period, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the docker container simply doesn't see the database). If I'm doing something nonsensical, please let me know. Maybe I should migrate my server db into a container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use the Postgres in docker as your server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
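A related option worth knowing about, assuming Docker 20.10 or newer: the special hostname host.docker.internal can point a container at the host, using the host-gateway alias in extra_hosts. A sketch of just the relevant service:

  web:
    build: .
    extra_hosts:
      - "host.docker.internal:host-gateway"

The app would then connect to host.docker.internal:5432. Either way, make sure the host's Postgres accepts remote connections (listen_addresses in postgresql.conf and an entry for the docker subnet in pg_hba.conf), or the container will be refused.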