DATABASE_URL environment variable not found - docker-compose

I have the following in my docker-compose.yml file, but my Node app with Prisma can't access the DATABASE_URL environment variable.
I'm new to Docker. Do I have to set the URL manually, and if so, what should it be?
postgres:
  image: postgres:10-alpine
  environment:
    - POSTGRES_DB=local
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=password
  ports:
    - "5438:5432"

Related

Keycloak 18.0 with Postgres 10.21

I am trying to run Keycloak 18 with Postgres 10.21.
Here is my docker-compose file:
version: "3.5"
services:
keycloaksvc:
image: quay.io/keycloak/keycloak:18.0
user: '1000:1000'
container_name: "testkc"
environment:
- DB_VENDOR=postgres
- DB_ADDR=postgressvc
- DB_DATABASE=keycloak
- DB_PORT=5432
- DB_SCHEMA=public
- DB_USER=KcUser
- DB_PASSWORD=KcPass
- KC_HOSTNAME=localhost
- ROOT_LOGLEVEL=DEBUG
- PROXY_ADDRESS_FORWARDING=true
- REDIRECT_SOCKET=proxy-https
- KEYCLOAK_LOGLEVEL=DEBUG
- KEYCLOAK_ADMIN=admin
- KEYCLOAK_ADMIN_PASSWORD=testing
volumes:
- ./ssldir:/etc/x509/https
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/passwd:/etc/passwd:ro"
- ./kcthemes:/opt/keycloak/themes
entrypoint: /opt/keycloak/bin/kc.sh start --auto-build --hostname-strict-https=false --http-relative-path=/auth --features=token-exchange --https-certificate-file=/etc/x509/https/tls.crt --https-certificate-key-file=/etc/x509/https/tls.key
network_mode: "host"
depends_on:
- postgressvc
postgressvc:
image: postgres:10.21-alpine
user: '1000:1000'
container_name: "kc_postgres"
environment:
- POSTGRES_DB=keycloak
- POSTGRES_USER=KcUser
- POSTGRES_PASSWORD=KcPass
volumes:
- ./pgdta:/var/lib/postgresql/data
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/passwd:/etc/passwd:ro"
network_mode: "host"
It runs fine and I can get to the admin console at https://localhost:8443/auth/admin. I can also add new realms and users. However, I do not see any data in Postgres, and if I make a change in the docker-compose file and restart, all the realms and users are lost.
The exact same Postgres setup works fine with image: jboss/keycloak:16.1.1.
What setup am I missing for Keycloak 18?
I was also facing the same issue with Keycloak v19.0.0: it was storing data in memory instead of Postgres. With the configuration below it stores data in Postgres:
keycloak:
  container_name: keycloak
  environment:
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://localhost:5432/keycloak
    KC_DB_USERNAME: postgres
    KC_DB_PASSWORD: user
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
    KC_HOSTNAME_STRICT: false
    KC_EDGE: proxy
  ports:
    - 8080:8080
  image: quay.io/keycloak/keycloak:19.0.0
  network_mode: host
  depends_on:
    - postgres
  command:
    - start-dev --auto-build
Keycloak has had major changes since version 17 (it is now based on Quarkus), and the configuration has changed as well. So don't reuse the config that works with Keycloak 16; check the current Keycloak docs instead, e.g. https://www.keycloak.org/server/containers
You will find that the DB env variables are now:
KC_DB_URL, KC_DB_USERNAME, KC_DB_PASSWORD, ...
Other env variables have changed too, so it is not only about the DB env variables.
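For your compose file that would translate to something like the following for the keycloaksvc service (an untested sketch; since both services use network_mode: host, the database is reached on localhost:5432):
keycloaksvc:
  image: quay.io/keycloak/keycloak:18.0
  environment:
    # Quarkus-based configuration; replaces DB_VENDOR/DB_ADDR/DB_DATABASE/etc.
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://localhost:5432/keycloak
    - KC_DB_USERNAME=KcUser
    - KC_DB_PASSWORD=KcPass
    - KC_HOSTNAME=localhost
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=testing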

Docker compose: Error: role "hleb" does not exist

I kindly ask for your help with Docker and Postgres.
I have a local Postgres database and a NestJS project.
I have already killed the process that was occupying port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
Running sudo docker-compose build completes with no errors.
Running sudo docker-compose up --force-recreate fails with:
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo - unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used by Docker Compose does not contain the Docker-specific environment variables, so amend/extend it to include:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. If the data directory already exists, you will have to delete the volume and re-create it for this to take effect.
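Alternatively, instead of duplicating the values, you could map your existing DB_* variables onto the Docker-specific ones directly in the compose file. This is just a sketch; it relies on Compose reading the project-level .env for ${...} substitution, and the same note about re-creating the volume applies:
postgres:
  image: postgres:12
  env_file:
    - .env
  environment:
    # Initialize the (empty) data directory with the same role, database and
    # password the NestJS app connects with; values are substituted from .env.
    POSTGRES_USER: ${DB_USERNAME}
    POSTGRES_DB: ${DB_NAME}
    POSTGRES_PASSWORD: ${DB_PASSWORD}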

Docker swarm stack ignores postgres password env variable

I'm trying to deploy Postgres and pgAdmin as a swarm stack via docker stack deploy with this compose file:
version: '3.7'
services:
  postgres:
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=87654321
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - "5433:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=developer@happycode.io
      - PGADMIN_DEFAULT_PASSWORD=12345678
    depends_on:
      - postgres
volumes:
  postgres-data:
With docker stack deploy, POSTGRES_PASSWORD is never applied to Postgres. I can echo the environment variable inside the container and it contains the correct value (87654321), but Postgres still uses the default one. However, if I use the same compose file with docker-compose, everything works fine.
I think the volume postgres-data already contains all the data required for Postgres, so the password variable is ignored on startup.
Try to delete it first and re-deploy the stack:
docker-compose down --remove-orphans --volumes
or stop the stack and run:
docker volume rm postgres-data

Starting Tryton server with docker-compose file

I am trying to link an external Postgres database to the tryton/tryton image from Docker Hub.
docker-compose.yaml
version: '3.7'
services:
  tryton-postgres:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=tryton
    restart: always
  gnuserver:
    image: tryton/tryton:4.6
    links:
      - tryton-postgres:postgres
    ports:
      - 8000:8000
    depends_on:
      - tryton-postgres
    entrypoint: /entrypoint.sh trytond
When I ssh into the container and run trytond-admin --all -d tryton, it seems to be looking for an SQLite file instead of the connected Postgres database. Are there some env variables I must set? What am I missing in my docker-compose file?
Instead of changing the configuration file, with Docker it is simpler to set environment variables like:
DB_USER=
DB_PASSWORD=
DB_HOSTNAME=tryton-postgres
DB_PORT=5432
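In the compose file above that could look something like this; the user and password values are placeholders implied by the tryton-postgres service (default superuser postgres, password password), so adjust them to your setup:
gnuserver:
  image: tryton/tryton:4.6
  environment:
    # DB_HOSTNAME points at the tryton-postgres service; credentials below are placeholders.
    - DB_HOSTNAME=tryton-postgres
    - DB_PORT=5432
    - DB_USER=postgres
    - DB_PASSWORD=password
  depends_on:
    - tryton-postgres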
You need to edit /etc/tryton/trytond.conf to point at PostgreSQL:
uri = postgresql://USERNAME:PASSWORD@tryton-postgres:5432/
See the docs.

Docker is not getting Postgres environment variables

Docker is using the variables from my .env file, and I keep getting the error:
Unhandled rejection SequelizeConnectionError: role "eli" does not exist
I would like Postgres to get the variables from the environment section set in docker-compose.yml.
.env
POSTGRES_PORT=5432
POSTGRES_DB=elitest4
POSTGRES_USER=eli
POSTGRES_PASSWORD=
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
    env_file:
      - .env
  database:
    image: postgres:9.6.8-alpine
    environment: # postgres should be getting these variables, not the variables set in the env file (that's for localhost)
      POSTGRES_PASSWORD: password
      POSTGRES_USER: user
      POSTGRES_DB: db
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
    env_file:
      - .env
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env
volumes:
  pgdata:
TL/DR
Try updating the database service's environment section in docker-compose.yml as follows:
environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
  POSTGRES_USER: ${POSTGRES_USER:-user}
  POSTGRES_DB: ${POSTGRES_DB:-db}
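With the .env from your question sitting next to docker-compose.yml (Compose also reads that file for ${...} substitution), those lines should resolve roughly to the values below; the empty POSTGRES_PASSWORD= falls back to the inline default because the :- form treats an empty value like an unset one:
environment:
  # Sketch of the resolved values, not literal tool output.
  POSTGRES_PASSWORD: password
  POSTGRES_USER: eli
  POSTGRES_DB: elitest4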
Also notice that if you would like to see how each bound variable ultimately evaluates in Compose, you can run the following command to see the "effective" compose file:
$ docker-compose config
This command prints your compose file with every variable substitution replaced by its evaluated value.
See the Environment variables in Compose and the Variable substitution sections in the documentation.
Pay close attention to this section:
When you set the same environment variable in multiple files, here’s
the priority used by Compose to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
In the example below, we set the same environment variable on an Environment file, and the Compose file:
$ cat ./Docker/api/api.env
NODE_ENV=test
$ cat docker-compose.yml
version: '3'
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - ./Docker/api/api.env
    environment:
      - NODE_ENV=production
When you run the container, the environment variable defined in the Compose file takes precedence.
$ docker-compose exec api node
> process.env.NODE_ENV
'production'
Having any ARG or ENV setting in a Dockerfile evaluates only if there is no Docker Compose entry for environment or env_file.
Specifics for NodeJS containers
If your package.json has a scripts.start entry like NODE_ENV=test node server.js, this overrules any setting in your docker-compose.yml file.