Docker-compose unable to make cross container requests

I'm having network issues running services in docker-compose. Essentially I'm just trying to make a GET request through Kong to a simple Flask API I have set up. The docker-compose.yml is below:
version: "3.0"
services:
  postgres:
    image: postgres:9.4
    container_name: kong-database
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=kong
      - POSTGRES_DB=kong
  web:
    image: kong:latest
    container_name: kong
    environment:
      - DATABASE=postgres
      - KONG_PG_HOST=postgres
    restart: always
    ports:
      - "8000:8000"
      - "443:8443"
      - "8001:8001"
      - "7946:7946"
      - "7946:7946/udp"
    links:
      - postgres
  ui:
    image: pgbi/kong-dashboard
    container_name: kong-dashboard
    ports:
      - "8080:8080"
  employeedb:
    build: test-api/
    restart: always
    ports:
      - "5002:5002"
I add the API to Kong with the command curl -i -X POST --url http://localhost:8001/apis/ --data name=employeedb --data upstream_url=http://localhost:5002 --data hosts=employeedb --data uris=/employees. I've tried this with many combinations of inputs, including different names, and passing in the Docker network IP and the name of the test-api service as the hostname for the upstream_url. After adding the API to Kong I get
HTTP/1.1 502 Bad Gateway
Date: Tue, 11 Jul 2017 14:17:17 GMT
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: kong/0.10.3
Additionally, I've gotten into the Docker containers by running docker exec -it <container-id> /bin/bash and attempted to make curl requests to the expected Flask endpoint. While on the container running the API I was able to make a successful call to both localhost:5002/employees as well as employeedb:5002/employees. However, when making it from the container running Kong I see
curl -iv -X GET --url 'http://employeedb:5002/employees'
* About to connect() to employeedb port 5002 (#0)
* Trying X.X.X.X...
* Connection refused
* Failed connect to employeedb:5002; Connection refused
* Closing connection 0
Am I missing some sort of configuration that exposes the containers to one another?

You need to make the employeedb container visible to Kong by defining a link, as you did with the PostgreSQL database. Just add it as an additional entry directly below - postgres and it should be reachable by Kong:
....
    links:
      - postgres
      - employeedb
....
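As an aside, on Compose file format 2+ all services in one docker-compose.yml already share a network with DNS by service name, so an explicit user-defined network is a more modern alternative to legacy links. A minimal sketch (service definitions abridged from the file above; the network name kongnet is my own choice):

```yaml
services:
  web:
    image: kong:latest
    networks:
      - kongnet
  employeedb:
    build: test-api/
    networks:
      - kongnet
networks:
  kongnet:
```

With this in place, the Kong container can resolve the hostname employeedb directly, which is what the upstream_url should point at instead of localhost.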

Related

Keycloak urls setup

I want to run Keycloak and play with it. So I run a container in Docker with the quay.io/keycloak/keycloak:20.0.1 image.
version: '3.8'
networks:
  default-dev-network:
    external: true
services:
  keycloak:
    container_name: keycloak
    image: quay.io/keycloak/keycloak:20.0.1
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgresdb:5432/keycloak
      KC_DB_USERNAME: postgres
      KC_DB_PASSWORD: pass
      KC_DB_SCHEMA: public
      KC_HOSTNAME: localhost
      KC_HTTPS_PORT: 8443
      KC_HTTPS_PROTOCOLS: TLSv1.3,TLSv1.2
      KC_HTTP_ENABLED: "true"
      KC_HTTP_PORT: 8080
      KC_METRICS_ENABLED: "true"
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
    ports:
      - 18080:8080
      - 8443:8443
    command: start-dev
    networks:
      - default-dev-network
Then I created a realm test-realm and a client test-client. Now I want to request a bearer token for it. I run
curl \
-d 'client_id=test-client' \
-d 'client_secret=xajewuZlBHL75rpiPttHday8t34aOnYa' \
-d 'grant_type=client_credentials' \
'http://localhost:18080/auth/realms/test-realm/protocol/openid-connect/token'
and I get
{"error":"RESTEASY003210: Could not find resource for full path: http://localhost:18080/auth/realms/test-realm/protocol/openid-connect/token"}
I'm reading the documentation at https://www.keycloak.org but there are so many details that I'm afraid it will take weeks to figure everything out. Is there a shorter guide?
New versions of Keycloak (after the rewrite in Quarkus) removed the /auth context path.
You can either remove it from the URL or set the property KC_HTTP_RELATIVE_PATH=/auth.
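For example, with the default relative path the request from the question becomes (same realm, client, and published port as above; only the /auth segment is dropped):

```shell
curl \
  -d 'client_id=test-client' \
  -d 'client_secret=xajewuZlBHL75rpiPttHday8t34aOnYa' \
  -d 'grant_type=client_credentials' \
  'http://localhost:18080/realms/test-realm/protocol/openid-connect/token'
```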

Golang project and postgres image with docker compose up doesn't fail, but doesn't work either

Not sure what I did wrong here. I'm trying to make a dockerized Go/Postgres project with a persistent DB. Below are the files. When I run go run main.go and then curl http://localhost:8081/ I get the expected output, but when I try this with docker compose up I have issues. Everything seems to be working, because I don't see any error messages (postgres-1 | 2022-08-29 05:31:59.703 UTC [1] LOG: database system is ready to accept connections), but when I try curl http://localhost:8081/ I get curl: (56) Recv failure: Connection reset by peer. I tried removing the postgres part entirely and I'm still getting the same problem. I can see that Docker is up and running and the port is listening:
sudo lsof -i -P -n | grep 8081
docker-pr 592069 root 4u IPv4 1760430 0t0 TCP *:8081 (LISTEN)
I'm using this on Ubuntu 22.04
main.go:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "this can be anything")
	})
	http.HandleFunc("/try-it", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "it worked!")
	})
	log.Fatal(http.ListenAndServe(":8081", nil))
}
Dockerfile:
FROM golang:1.19-alpine3.16
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY . .
RUN go build -o main .
RUN go build -v -o /usr/local/bin/app ./...
EXPOSE 8081
CMD [ "./main" ]
docker-compose.yml:
version: '3.9'
services:
  api:
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8081:8080"
    depends_on:
      - postgres
    networks:
      - backend
  postgres:
    image: postgres
    restart: unless-stopped
    ports:
      - "5001:5432"
    volumes:
      - psqlVolume:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    networks:
      - backend
networks:
  backend:
volumes:
  psqlVolume:
As @DavidMaze and @Brits explained in the comments, the Docker container is up and running, but the ports need to match between the container and the application. For example, if main.go calls http.ListenAndServe(":8084", nil), then :8084 must be the container side of the port mapping:
version: '3.9'
services:
  api:
    image: api
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8083:8084" # host_port:container_port
A curl request can then be made on the host port where Docker is listening (in this case :8083), for example curl http://localhost:8083/. This makes a request to the host machine; that request is captured by Docker, which listens on :8083, and is forwarded to the container, which listens on :8084 as specified in the docker-compose.yml. If the port mapping isn't correct, curl returns curl: (56) Recv failure: Connection reset by peer. Thank you for the learning experience and I appreciate all your help.

docker-compose - PHP instance seems not to communicate with database service

I'm developing a project based on the GitHub template dunglas/symfony-docker, to which I want to add a Postgres database.
It seems that my docker-compose.yml file is incorrectly configured, because the communication between PHP and Postgres is malfunctioning.
Indeed, when I try to perform a Symfony migration, Doctrine returns the following error:
password authentication failed for user "postgres"
When I inspect the PHP logs I notice that PHP is waiting for the database:
php_1 | Still waiting for db to be ready... Or maybe the db is not reachable.
My docker-compose.yml:
version: "3.4"
services:
  php:
    links:
      - database
    build:
      context: .
      target: symfony_php
      args:
        SYMFONY_VERSION: ${SYMFONY_VERSION:-}
        SKELETON: ${SKELETON:-symfony/skeleton}
        STABILITY: ${STABILITY:-stable}
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
    healthcheck:
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s
    environment:
      # Run "composer require symfony/orm-pack" to install and configure Doctrine ORM
      DATABASE_URL: ${DATABASE_URL}
      # Run "composer require symfony/mercure-bundle" to install and configure the Mercure integration
      MERCURE_URL: ${CADDY_MERCURE_URL:-http://caddy/.well-known/mercure}
      MERCURE_PUBLIC_URL: https://${SERVER_NAME:-localhost}/.well-known/mercure
      MERCURE_JWT_SECRET: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
  caddy:
    build:
      context: .
      target: symfony_caddy
    depends_on:
      - php
    environment:
      SERVER_NAME: ${SERVER_NAME:-localhost, caddy:80}
      MERCURE_PUBLISHER_JWT_KEY: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
      MERCURE_SUBSCRIBER_JWT_KEY: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
      - caddy_data:/data
      - caddy_config:/config
    ports:
      # HTTP
      - target: 80
        published: 80
        protocol: tcp
      # HTTPS
      - target: 443
        published: 443
        protocol: tcp
      # HTTP/3
      - target: 443
        published: 443
        protocol: udp
###> doctrine/doctrine-bundle ###
  database:
    image: postgres:${POSTGRES_VERSION:-13}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-app}
      # You should definitely change the password in production
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-ChangeMe}
      POSTGRES_USER: ${POSTGRES_USER:-symfony}
    volumes:
      - db-data:/var/lib/postgresql/data:rw
      # You may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./docker/db/data:/var/lib/postgresql/data:rw
###< doctrine/doctrine-bundle ###
volumes:
  php_socket:
  caddy_data:
  caddy_config:
###> doctrine/doctrine-bundle ###
  db-data:
###< doctrine/doctrine-bundle ###
Extract of my .env file:
POSTGRES_DB=proximityNL
POSTGRES_PASSWORD=postgres
POSTGRES_USER=postgres
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
Can you help me?
Best regards.
UPDATE:
Indeed I understood on Saturday that it was just necessary to remove orphans:
docker-compose down --remove-orphans --volumes
When running in a container, 127.0.0.1 refers to the container itself. Docker compose creates a virtual network where each container has its own IP address. You can address the containers by their service names.
So your connection string should point to database:5432 instead of 127.0.0.1:5432, like this:
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
You use database because that's the service name of your postgresql container in your docker compose file.
In Docker you can reach containers by their name.
So try using the service name of the database container in your config:
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
and maybe add a link between your php and database services:
services:
  php:
    links:
      - database
This is how I connect a Java app to a MySQL DB.
Docker should map DNS resolution from the Docker Host into your containers.
Networking in Compose link
Because of that, your DB URL should look like:
"postgresql://postgres:postgres@database:5432/..."
I understood on Saturday that it was just necessary to remove orphans:
docker-compose down --remove-orphans --volumes

CouchDB with docker-compose not reachable from host (but from localhost)

I am setting up CouchDB using docker-compose with the following docker-compose.yml (a minimal example):
version: "3.6"
services:
  couchdb:
    container_name: couchdb
    image: apache/couchdb:2.2.0
    restart: always
    ports:
      - 5984:5984
    volumes:
      - ./test/couchdb/data:/opt/couchdb/data
    environment:
      - 'COUCHDB_USER=admin'
      - 'COUCHDB_PASSWORD=password'
  couchdb_setup:
    depends_on: ['couchdb']
    container_name: couchdb_setup
    image: apache/couchdb:2.2.0
    command: ['/bin/bash', '-x', '-c', 'cat /usr/local/bin/couchdb_setup.sh | tr -d "\r" | bash']
    volumes:
      - ./scripts/couchdb_setup.sh:/usr/local/bin/couchdb_setup.sh:ro
    environment:
      - 'COUCHDB_USER=admin'
      - 'COUCHDB_PASSWORD=password'
The second container executes the script ./scripts/couchdb_setup.sh, which starts with:
until curl -f http://couchdb:5984; do
  sleep 1
done
Now, the issue is that the curl call always returns The requested URL returned error: 502 Bad Gateway. I figured that CouchDB is only listening on http://localhost:5984 but not on http://couchdb:5984, as is evident when I bash into the couchdb container and issue both curls: for http://localhost:5984 I get the expected response, while http://couchdb:5984 as well as http://<CONTAINER_IP>:5984 (http://192.168.32.2:5984 in my case) respond with server 192.168.32.2 is unreachable.
I looked into the configs, especially the [chttpd] settings and their bind_address argument. By default, bind_address is set to any, but I have also tried 0.0.0.0, to no avail.
I'm looking for hints about what I did wrong and for advice on how to set up CouchDB with docker-compose. Any help is appreciated.
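For reference, the setting in question lives under the [chttpd] section of CouchDB's local.ini; a minimal sketch of the relevant fragment (CouchDB 2.x):

```ini
[chttpd]
port = 5984
bind_address = 0.0.0.0
```

With bind_address = 0.0.0.0 CouchDB accepts connections on all interfaces, including the container's Docker-network address, so a 502 despite this setting suggests the request is being intercepted elsewhere (for example by an http_proxy environment variable in the calling container).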

Failing to execute nginx proxy_pass directive for a Dancer2 app inside a Docker container

I have tried to orchestrate a Dancer2 app, which runs on Starman, using docker-compose. I'm failing to integrate nginx: it crashes with a 502 Bad Gateway error, which inside my server log looks like this:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.22.0.1,
My docker-compose file looks like this:
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - pearlbee
    volumes_from:
      - pearlbee
  pearlbee:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    ports:
      - "5000:5000"
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=root
My nginx.conf file looks like this:
user root nogroup;
worker_processes auto;
events { worker_connections 512; }
http {
  include /etc/nginx/sites-enabled/*;
  upstream pb {
    # this is the localhost that starts starman
    #server 127.0.0.1:5000;
    # the name of the docker-compose service that creates the app
    server pearlbee;
    # both return the same error message
  }
  server {
    listen *:80;
    #root /usr/share/nginx/html/;
    #index index.html 500.html favico.ico;
    location / {
      proxy_pass http://pb;
    }
  }
}
You're right to use the service name as the upstream server for Nginx, but you need to specify the port:
upstream pb {
  server pearlbee:5000;
}
Within the Docker network, which Compose creates for you, services can access each other by name. Also, you don't need to publish ports for other containers to use them, only for access from outside. The Nginx container can reach port 5000 on your app container even if it isn't published to the host.
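Putting that together, a minimal nginx.conf sketch (same service name and port as the compose file above; the commented-out lines from the original are dropped):

```nginx
events { worker_connections 512; }
http {
  upstream pb {
    # compose service name + container port; no published port required
    server pearlbee:5000;
  }
  server {
    listen 80;
    location / {
      proxy_pass http://pb;
    }
  }
}
```

Without an explicit port, nginx defaults the upstream to port 80, which nothing in the pearlbee container listens on, hence the "Connection refused" upstream error.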