I want to set up a Docker network that contains Keycloak, Postgres, and webapp instances.
Is there a way to have network communication between the containers while also handling OIDC client redirects? I am having an issue where the containers can talk to each other just fine if I set up OIDC with the container names on the Docker network, but then the client on the host machine, outside the Docker network, cannot connect to those same URLs.
Can anyone point me to the right Docker documentation for possible solutions involving DNS or host-to-container communication?
---- EDIT ----
To clarify: the containers can talk to each other just fine under their container names, but the client (i.e., Chrome) has to use localhost to talk to everything. In the OIDC configuration for my UI web application I have to use either container names or localhost. How do I get my client to understand container names so it makes the right requests?
version: '2'
services:
  ui:
    container_name: 'ui'
    image: 'bdparrish/ui:0.1'
    build:
      context: .
      dockerfile: ./ui/Dockerfile
    ports:
      - "8085:80"
    depends_on:
      - "postgres"
      - "keycloak"
    networks:
      - auth-network
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker
  postgres:
    container_name: postgres
    image: 'postgres'
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    networks:
      - auth-network
  keycloak:
    container_name: keycloak
    image: jboss/keycloak
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      DB_VENDOR: "POSTGRES"
      DB_ADDR: postgres
      DB_PORT: 5432
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
    restart: always
    networks:
      - auth-network
networks:
  auth-network:
    driver: bridge
You don't have to modify the /etc/hosts file.
Keycloak has an environment variable named KEYCLOAK_FRONTEND_URL specifically for this purpose.
Edit your docker-compose file to look like this:
version: '2'
services:
  ui:
    container_name: 'ui'
    image: 'bdparrish/ui:0.1'
    build:
      context: .
      dockerfile: ./ui/Dockerfile
    ports:
      - "8085:80"
    depends_on:
      - "postgres"
      - "keycloak"
    networks:
      - auth-network
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker
  postgres:
    container_name: postgres
    image: 'postgres'
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    networks:
      - auth-network
  keycloak:
    container_name: keycloak
    image: jboss/keycloak
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      DB_VENDOR: "POSTGRES"
      DB_ADDR: postgres
      DB_PORT: 5432
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_FRONTEND_URL: http://localhost:8080/auth
    restart: always
    networks:
      - auth-network
networks:
  auth-network:
    driver: bridge
Then the login should be redirected to that URL.
All you need to do is add an entry to your hosts file:
Windows: C:\Windows\System32\drivers\etc\hosts
Linux: /etc/hosts
Append this to the end of the file:
127.0.0.1 keycloak
Then use keycloak:8080 from your UI to talk to your Keycloak server instead of localhost:8080. You can still use localhost:8085 to visit the UI in the browser.
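With that entry in place, the same hostname works on both sides: Docker's internal DNS resolves keycloak inside the network, and the hosts entry resolves it in the browser. A minimal sketch of wiring this into the UI service (the OIDC_AUTHORITY variable name is hypothetical; use whatever setting your app actually reads):
ui:
  environment:
    - ASPNETCORE_ENVIRONMENT=Docker
    # hypothetical app setting: one authority URL that resolves in-network
    # (Docker DNS) and in the browser (hosts file entry)
    - OIDC_AUTHORITY=http://keycloak:8080/auth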
Related
I'm trying to run DataHub (https://datahub.io/) and/or OpenMetadata (https://open-metadata.org/) locally for testing. Both are run via docker-compose files.
For OpenMetadata I used:
version: "3.9"
volumes:
ingestion-volume-dag-airflow:
ingestion-volume-dags:
ingestion-volume-tmp:
services:
mysql:
container_name: openmetadata_mysql
image: openmetadata/db:0.13.0
restart: always
environment:
MYSQL_ROOT_PASSWORD: password
expose:
- 3306
volumes:
- ./docker-volume/db-data:/var/lib/mysql
networks:
- app_net
healthcheck:
test: mysql --user=root --password=$$MYSQL_ROOT_PASSWORD --silent --execute "use openmetadata_db"
interval: 15s
timeout: 10s
retries: 10
elasticsearch:
container_name: openmetadata_elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
environment:
- discovery.type=single-node
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
networks:
- app_net
ports:
- "9200:9200"
- "9300:9300"
openmetadata-server:
container_name: openmetadata_server
restart: always
image: openmetadata/server:0.13.0
environment:
# OpenMetadata Server Authentication Configuration
AUTHORIZER_CLASS_NAME: ${AUTHORIZER_CLASS_NAME:-org.openmetadata.service.security.DefaultAuthorizer}
AUTHORIZER_REQUEST_FILTER: ${AUTHORIZER_REQUEST_FILTER:-org.openmetadata.service.security.JwtFilter}
AUTHORIZER_ADMIN_PRINCIPALS: ${AUTHORIZER_ADMIN_PRINCIPALS:-[admin]}
AUTHORIZER_ALLOWED_REGISTRATION_DOMAIN: ${AUTHORIZER_ALLOWED_REGISTRATION_DOMAIN:-["all"]}
AUTHORIZER_INGESTION_PRINCIPALS: ${AUTHORIZER_INGESTION_PRINCIPALS:-[ingestion-bot]}
AUTHORIZER_PRINCIPAL_DOMAIN: ${AUTHORIZER_PRINCIPAL_DOMAIN:-"openmetadata.org"}
AUTHORIZER_ENFORCE_PRINCIPAL_DOMAIN: ${AUTHORIZER_ENFORCE_PRINCIPAL_DOMAIN:-false}
AUTHORIZER_ENABLE_SECURE_SOCKET: ${AUTHORIZER_ENABLE_SECURE_SOCKET:-false}
AUTHENTICATION_PROVIDER: ${AUTHENTICATION_PROVIDER:-basic}
CUSTOM_OIDC_AUTHENTICATION_PROVIDER_NAME: ${CUSTOM_OIDC_AUTHENTICATION_PROVIDER_NAME:-""}
AUTHENTICATION_PUBLIC_KEYS: ${AUTHENTICATION_PUBLIC_KEYS:-[http://localhost:8585/api/v1/config/jwks]}
AUTHENTICATION_AUTHORITY: ${AUTHENTICATION_AUTHORITY:-https://accounts.google.com}
AUTHENTICATION_CLIENT_ID: ${AUTHENTICATION_CLIENT_ID:-""}
AUTHENTICATION_CALLBACK_URL: ${AUTHENTICATION_CALLBACK_URL:-""}
AUTHENTICATION_JWT_PRINCIPAL_CLAIMS: ${AUTHENTICATION_JWT_PRINCIPAL_CLAIMS:-[email,preferred_username,sub]}
AUTHENTICATION_ENABLE_SELF_SIGNUP: ${AUTHENTICATION_ENABLE_SELF_SIGNUP:-true}
# JWT Configuration
RSA_PUBLIC_KEY_FILE_PATH: ${RSA_PUBLIC_KEY_FILE_PATH:-"./conf/public_key.der"}
RSA_PRIVATE_KEY_FILE_PATH: ${RSA_PRIVATE_KEY_FILE_PATH:-"./conf/private_key.der"}
JWT_ISSUER: ${JWT_ISSUER:-"open-metadata.org"}
JWT_KEY_ID: ${JWT_KEY_ID:-"Gb389a-9f76-gdjs-a92j-0242bk94356"}
# OpenMetadata Server Airflow Configuration
AIRFLOW_HOST: ${AIRFLOW_HOST:-http://ingestion:8080}
SERVER_HOST_API_URL: ${SERVER_HOST_API_URL:-http://openmetadata-server:8585/api}
AIRFLOW_AUTH_PROVIDER: ${AIRFLOW_AUTH_PROVIDER:-no-auth}
# OpenMetadata Airflow Azure SSO Configuration
OM_AUTH_AIRFLOW_AZURE_CLIENT_SECRET: ${OM_AUTH_AIRFLOW_AZURE_CLIENT_SECRET:-""}
OM_AUTH_AIRFLOW_AZURE_AUTHORITY_URL: ${OM_AUTH_AIRFLOW_AZURE_AUTHORITY_URL:-""}
OM_AUTH_AIRFLOW_AZURE_SCOPES: ${OM_AUTH_AIRFLOW_AZURE_SCOPES:-[]}
OM_AUTH_AIRFLOW_AZURE_CLIENT_ID: ${OM_AUTH_AIRFLOW_AZURE_CLIENT_ID:-""}
# OpenMetadata Airflow Google SSO Configuration
OM_AUTH_AIRFLOW_GOOGLE_SECRET_KEY_PATH: ${OM_AUTH_AIRFLOW_GOOGLE_SECRET_KEY_PATH:- ""}
OM_AUTH_AIRFLOW_GOOGLE_AUDIENCE: ${OM_AUTH_AIRFLOW_GOOGLE_AUDIENCE:-"https://www.googleapis.com/oauth2/v4/token"}
# OpenMetadata Airflow Okta SSO Configuration
OM_AUTH_AIRFLOW_OKTA_CLIENT_ID: ${OM_AUTH_AIRFLOW_OKTA_CLIENT_ID:-""}
OM_AUTH_AIRFLOW_OKTA_ORGANIZATION_URL: ${OM_AUTH_AIRFLOW_OKTA_ORGANIZATION_URL:-""}
OM_AUTH_AIRFLOW_OKTA_PRIVATE_KEY: ${OM_AUTH_AIRFLOW_OKTA_PRIVATE_KEY:-""}
OM_AUTH_AIRFLOW_OKTA_SA_EMAIL: ${OM_AUTH_AIRFLOW_OKTA_SA_EMAIL:-""}
OM_AUTH_AIRFLOW_OKTA_SCOPES: ${OM_AUTH_AIRFLOW_OKTA_SCOPES:-[]}
# OpenMetadata Airflow Auth0 SSO Configuration
OM_AUTH_AIRFLOW_AUTH0_CLIENT_ID: ${OM_AUTH_AIRFLOW_AUTH0_CLIENT_ID:-""}
OM_AUTH_AIRFLOW_AUTH0_CLIENT_SECRET: ${OM_AUTH_AIRFLOW_AUTH0_CLIENT_SECRET:-""}
OM_AUTH_AIRFLOW_AUTH0_DOMAIN_URL: ${OM_AUTH_AIRFLOW_AUTH0_DOMAIN_URL:-""}
# OpenMetadata Airflow Custom OIDC SSO Configuration
OM_AUTH_AIRFLOW_CUSTOM_OIDC_CLIENT_ID: ${OM_AUTH_AIRFLOW_CUSTOM_OIDC_CLIENT_ID:-""}
OM_AUTH_AIRFLOW_CUSTOM_OIDC_SECRET_KEY: ${OM_AUTH_AIRFLOW_CUSTOM_OIDC_SECRET_KEY:-""}
OM_AUTH_AIRFLOW_CUSTOM_OIDC_TOKEN_ENDPOINT_URL: ${OM_AUTH_AIRFLOW_CUSTOM_OIDC_TOKEN_ENDPOINT_URL:-""}
# OpenMetadata Airflow JWT Token Configuration
OM_AUTH_JWT_TOKEN: ${OM_AUTH_JWT_TOKEN:-""}
# Database configuration for MySQL
DB_DRIVER_CLASS: ${DB_DRIVER_CLASS:-com.mysql.cj.jdbc.Driver}
DB_SCHEME: ${DB_SCHEME:-mysql}
DB_USE_SSL: ${DB_USE_SSL:-false}
DB_USER: ${DB_USER:-openmetadata_user}
DB_USER_PASSWORD: ${DB_USER_PASSWORD:-openmetadata_password}
DB_HOST: ${DB_HOST:-mysql}
DB_PORT: ${DB_PORT:-3306}
OM_DATABASE: ${OM_DATABASE:-openmetadata_db}
# Airflow SSL Configurations
AIRFLOW_VERIFY_SSL: ${AIRFLOW_VERIFY_SSL:-"no-ssl"}
AIRFLOW_SSL_CERT_PATH: ${AIRFLOW_SSL_CERT_PATH:-""}
# ElasticSearch Configurations
ELASTICSEARCH_HOST: ${ELASTICSEARCH_HOST:- elasticsearch}
ELASTICSEARCH_PORT: ${ELASTICSEARCH_PORT:-9200}
ELASTICSEARCH_SCHEME: ${ELASTICSEARCH_SCHEME:-http}
ELASTICSEARCH_USER: ${ELASTICSEARCH_USER:-""}
ELASTICSEARCH_PASSWORD: ${ELASTICSEARCH_PASSWORD:-""}
# Heap OPTS Configurations
OPENMETADATA_HEAP_OPTS: ${OPENMETADATA_HEAP_OPTS:--Xmx1G -Xms1G}
expose:
- 8585
- 8586
ports:
- "8585:8585"
- "8586:8586"
depends_on:
elasticsearch:
condition: service_started
mysql:
condition: service_healthy
networks:
- app_net
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:8586/healthcheck" ]
ingestion:
container_name: openmetadata_ingestion
image: openmetadata/ingestion:0.13.0
depends_on:
elasticsearch:
condition: service_started
mysql:
condition: service_healthy
openmetadata-server:
condition: service_healthy
environment:
AIRFLOW__API__AUTH_BACKENDS: airflow.api.auth.backend.basic_auth
AIRFLOW__CORE__EXECUTOR: LocalExecutor
AIRFLOW__OPENMETADATA_AIRFLOW_APIS__DAG_GENERATED_CONFIGS: "/opt/airflow/dag_generated_configs"
DB_HOST: ${AIRFLOW_DB_HOST:-mysql}
DB_PORT: ${AIRFLOW_DB_PORT:-3306}
AIRFLOW_DB: ${AIRFLOW_DB:-airflow_db}
AIRFLOW_DB_SCHEME: ${AIRFLOW_DB_SCHEME:-mysql+pymysql}
DB_USER: ${AIRFLOW_DB_USER:-airflow_user}
DB_PASSWORD: ${AIRFLOW_DB_PASSWORD:-airflow_pass}
entrypoint: /bin/bash
command:
- "/opt/airflow/ingestion_dependency.sh"
expose:
- 8080
ports:
- "8080:8080"
networks:
- app_net
volumes:
- ingestion-volume-dag-airflow:/opt/airflow/dag_generated_configs
- ingestion-volume-dags:/opt/airflow/dags
- ingestion-volume-tmp:/tmp
networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.240.0/24"
and for DataHub I followed the steps described here: https://datahubproject.io/docs/quickstart/
Everything runs locally on my Mac.
Now my problem: I also have a local Postgres up and running, and I can access the database via pgAdmin.
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: phd_test
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
In both cases, DataHub and OpenMetadata, I cannot connect to this local Postgres DB. In both cases I get errors like, for example:
'sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "127.0.0.1", port 5432 failed: Connection refused\n'
'\tIs the server running on that host and accepting TCP/IP connections?\n'
'\n'
'(Background on this error at: https://sqlalche.me/e/14/e3q8)\n',
"2022-12-04 13:58:08.971818 [exec_id=9e2a2163-24e0-4afb-8d2d-cb1b3033bc91] INFO: Failed to execute 'datahub ingest'",
I also tried different endpoints within DataHub and OpenMetadata for Postgres, like
127.0.0.1:5432
0.0.0.0:5432
pg_container:5432 (Name of the postgres container - this works in PG Admin)
172.25.0.2:5432 (Ip Address of the pg_container)
Does anybody else have a local connection between one of these tools and Postgres up and running? As the errors are similar in both tools, I think it could be a Docker network problem, such that the Postgres container cannot be seen by the other containers (like DataHub or OpenMetadata).
Maybe you figured it out already, but the problem here is that OpenMetadata is running in a Docker container; thus, localhost is the OpenMetadata container itself. If you are running Postgres locally, you need to reach the host's localhost, not the container's own address, so use the hostname host.docker.internal.
Similarly, if you are using a Docker container for Postgres and not exposing the port to your local network, you need to find the address of that container. Run docker network ls to list the Docker networks, then inspect the relevant one with docker inspect <NETWORK ID> to see the containers' IP addresses and IDs.
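On Linux, host.docker.internal is not defined by default; one way to get it (a sketch, assuming Docker 20.10 or newer) is to map it explicitly on the container that needs to reach the host, e.g. the ingestion service:
ingestion:
  extra_hosts:
    # host-gateway resolves to the Docker host's gateway IP (Docker 20.10+)
    - "host.docker.internal:host-gateway"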
I'm using Bitnami's Keycloak v20.0.0 (Quarkus) image with docker-compose. Everything works fine and I have no problems with any configuration. However, when I look at the tables in the PostgreSQL database, I can connect without problems but I don't see anything: no tables, and seemingly no database, exist.
I understand that I have to start Keycloak in dev mode, which I configured, but I still don't see anything.
What am I doing wrong?
This is my setup:
version: "3.7"
services:
keycloak:
image: bitnami/keycloak:20.0.1
container_name: keycloak_20
environment:
DB_VENDOR: POSTGRES
DB_ADDR: postgres
KEYCLOAK_ADMIN_USER: admin
KEYCLOAK_ADMIN_PASSWORD: admin
KEYCLOAK_DATABASE_HOST: postgres
KEYCLOAK_DATABASE_PORT: 5432
KEYCLOAK_DATABASE_NAME: postgres
KEYCLOAK_DATABASE_USER: postgres
KEYCLOAK_DATABASE_PASSWORD: postgres
KEYCLOAK_DATABASE_SCHEMA: public
KEYCLOAK_EXTRA_ARGS: "-Dkeycloak.profile.feature.scripts=enabled"
KC_HOSTNAME: postgres
ENV KC_HOSTNAME_STRICT: false
ENV KC_HTTP_ENABLED: true
ports:
- 8080:8080
volumes:
- ./keycloak/export:/tmp/export
- ./rus-theme:/opt/bitnami/keycloak/themes/my-theme
- ./keycloak/configuration/standalone-ha.xml:/bitnami/keycloak/configuration/standalone-ha.xml:ro
command:
- /bin/bash
- -c
- |
/opt/bitnami/keycloak/bin/kc.sh start-dev
depends_on:
- postgres
postgres:
image: postgres:10
container_name: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
mailhog:
# Conectarse al nombre del container para acceder
# Ejemplo: mailhog:1025
image: mailhog/mailhog
logging:
driver: 'none' # disable saving logs
container_name: mailhog
ports:
- 1025:1025 # smtp server
- 8025:8025 # web ui
volumes:
postgres_data:
driver: local
KEYCLOAK_DATABASE_* properties were used in the old (pre-Quarkus) versions of Keycloak.
The new properties are defined as KC_DB_* (see https://www.keycloak.org/server/all-config?q=db).
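With those properties, the database block might look like this (a sketch against the official quay.io Keycloak 20 image; host, database name, and credentials are placeholders matching the compose file above):
keycloak:
  image: quay.io/keycloak/keycloak:20.0.1
  command: start-dev
  environment:
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://postgres:5432/postgres  # service name of the DB container
    KC_DB_USERNAME: postgres
    KC_DB_PASSWORD: postgres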
I need to set up the Keycloak Docker server with an external Postgres database connection URL.
Here's my current YAML file content, which works with the Postgres Docker container image as shown below:
version: '3'
volumes:
  postgres_data:
    driver: local
services:
  postgres:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
    ports:
      - 5433:5432
  keycloak:
    image: jboss/keycloak:latest
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_LOGLEVEL: DEBUG
      ROOT_LOGLEVEL: DEBUG
    ports:
      - 8080:8080
      - 8443:8443
    depends_on:
      - postgres
I checked the official documentation for passing an external DB connection URL, but it wasn't clear exactly what changes are needed in the YAML file.
ref: https://hub.docker.com/r/jboss/keycloak/
I tried removing the postgres service and the depends_on section and passing the database connection details in the keycloak environment section of the YAML, but it did not work for me.
Can anyone suggest the correct YAML file changes to use a Postgres DB connection URL?
Thank you.
Docker containers can see each other by their service name, so here the service name postgres is effectively the connection URL for the keycloak container.
version: '3'
volumes:
  postgres_data:
    driver: local
services:
  postgres: # Service name
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
    ports:
      - 5433:5432
  keycloak:
    image: jboss/keycloak:latest
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres # <<< This is the address; change it to your external DB IP/domain
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_LOGLEVEL: DEBUG
      ROOT_LOGLEVEL: DEBUG
    ports:
      - 8080:8080
      - 8443:8443
    depends_on:
      - postgres
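So for an external database you can drop the postgres service and the depends_on entry entirely and point DB_ADDR at the external host (a sketch; db.example.com and the credentials are placeholders):
keycloak:
  image: jboss/keycloak:latest
  environment:
    DB_VENDOR: POSTGRES
    DB_ADDR: db.example.com   # external DB host (IP or domain) instead of the service name
    DB_PORT: 5432             # external DB port
    DB_DATABASE: keycloak
    DB_USER: keycloak
    DB_PASSWORD: password
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: password
  ports:
    - 8080:8080
    - 8443:8443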
I am trying to use my docker-compose file to run two instances of both my database and my REST API, so that I can run tests against a test instance of the database.
version: "3.8"
services:
db:
image: postgres:13.2-alpine
container_name: "db-prod"
ports:
- "5432:5432"
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
networks:
- fullstack
volumes:
- database_postgres:/var/lib/postgresql/data
db_test:
image: postgres:13.2-alpine
container_name: "db-test"
ports:
- "5433:5432"
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
networks:
- fullstack-test
volumes:
- database_postgres_test:/var/lib/postgresql/data
api:
build: .
container_name: "rest-api"
environment:
DB_USERNAME: "postgres"
DB_PASSWORD: "password"
DB_HOST: "db-prod"
DB_TABLE: "postgres"
DB_DB: "postgres"
DB_PORT: "5432"
ports:
- "8080:8080"
depends_on:
- db
networks:
- fullstack
api_test:
build: .
container_name: "rest-api-test"
environment:
DB_USERNAME: "postgres"
DB_PASSWORD: "password"
DB_HOST: "db-test"
DB_TABLE: "postgres"
DB_DB: "postgres"
DB_PORT: "5433"
ports:
- "8081:8080"
depends_on:
- db_test
networks:
- fullstack-test
volumes:
database_postgres:
database_postgres_test:
networks:
fullstack:
driver: bridge
fullstack-test:
driver: bridge
When I run this, my prod database starts, and my regular API connects to it fine.
My test DB also starts, and I can connect to it using
psql -U postgres -h localhost -p 5433
however my test REST API cannot connect, failing with:
dial tcp 192.168.112.2:5433: connect: connection refused
The goal is to have my Go tests run against the test DB and simply clear it after each test as needed, without affecting the prod DB.
I am not sure if I am going about this the right way (perhaps there is a better construct for this, and if so, please correct me). But regardless, I do not understand why I'm getting this error.
I don't get why one connection works fine while the other fails.
Edit: Also interesting: I just noticed that if I change the api_test container to use:
DB_HOST: "host.docker.internal"
it works. But I still don't understand why one can use a container name and the other cannot. And I can't leave it this way, as it needs to work on a Mac as well, and host.docker.internal doesn't work on my Mac (hence why the first one was changed to the container name).
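For what it's worth, the likely reason: a published port like "5433:5432" only applies on the host; container-to-container traffic on a shared network always targets the container's internal port. So when api_test reaches db-test by service name, the port should be the internal 5432, not the host-published 5433. A sketch of the changed environment:
api_test:
  environment:
    DB_HOST: "db-test"
    DB_PORT: "5432"   # internal container port; 5433 is only the host-side mapping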
I've done a docker-compose up and been able to run my web service attached to a PostgreSQL image. The problem is, I can't view the data in Postico when I try to access the database. The name of the image is db, and when I try to specify the hostname as "db" in Postico before connecting, I get an error saying the hostname is not found. I've entered my credentials, port, and database name the same way I keyed them in my docker-compose file.
Does anybody know how I can find the correct settings to connect to the database inside the container?
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.phoenix.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - ./my_app:/app
    # make sure we start mongodb when we start this service
    # links:
    #   - db
    depends_on:
      - db
      - redis
    environment:
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
      FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
  go:
    build:
      context: .
      dockerfile: Dockerfile.go.development
    ports:
      - 8080:8080
    volumes:
      - ./genesys-api:/go/src/github.com/sc4224/genesys-api
    depends_on:
      - db
      - redis
      - phoenix
  db:
    container_name: db
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
    volumes:
      - ./data/db:/data/db
    restart: always
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data/redis
    entrypoint: redis-server
    restart: always
Use localhost as the hostname.
You can't use the hostname db outside the internal Docker network; that only works for the applications running in the same network.
Since you exposed the db service on port 5432, it is published as 0.0.0.0:5432->5432/tcp and is therefore accessible with localhost as the host and 5432 as the port.
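For example, from the host machine (a sketch; the user and password are whatever your compose environment supplies via ${POSTGRES_USER} and ${POSTGRES_PASSWORD}):
psql -h localhost -p 5432 -U $POSTGRES_USER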