How to connect to a cockroachdb container from another container using docker-compose?

I've tried to mimic [the MySQL way][1] of doing this, but it didn't work for me.
I've also tried several variations, ranging from adding a network interface to
explicitly specifying the container IP; none of them worked (since the container IP
always changes).
The error message is:
could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
This is my code.
My flask app:
connection = psycopg2.connect(
    database=os.environ.get("DB_NAME"),
    user=os.environ.get("DB_USER"),
    password=os.environ.get("DB_PASSWORD"),
    sslmode=os.environ.get("DB_SSL"),
    host=os.environ.get("DB_HOST"),
    port=os.environ.get("DB_PORT"),
)
My docker-compose file:
version: '3'
services:
  flask-api:
    image: flask-api:0.7.0
    ports:
      - '5000:5000'
    environment:
      - DB_NAME = knotdb
      - DB_HOST = roach1
      - DB_PORT = 26257
      - DB_USER = root
      - DB_SSL = disable
    links:
      - roach1
  roach1:
    image: cockroachdb/cockroach:v1.1.3
    command: start --insecure --host=127.0.0.1
    ports:
      - "26257:26257"
    volumes:
      - ./cockroach-data/roach1:/cockroach/cockroach-data
My other various attempts:
# adding a network to both services: didn't work
# using an IP instead of an alias: didn't work
links:
  - roach1:127.0.0.1
[1]: How to handle IP addresses when linking docker containers with each other using docker-compose?
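For what it's worth, a minimal sketch of a compose file that should make this reachable (untested; based only on the files above). Two things stand out: the list-style environment entries must not contain spaces around =, otherwise DB_HOST is never actually set and psycopg2 falls back to the Unix socket, which is exactly the error shown; and cockroach must not be started with --host=127.0.0.1, because that binds it to the loopback interface inside its own container, unreachable from flask-api:
version: '3'
services:
  flask-api:
    image: flask-api:0.7.0
    ports:
      - '5000:5000'
    environment:
      - DB_NAME=knotdb
      - DB_HOST=roach1   # the service name is resolvable on the default compose network
      - DB_PORT=26257
      - DB_USER=root
      - DB_SSL=disable
  roach1:
    image: cockroachdb/cockroach:v1.1.3
    command: start --insecure   # no --host flag, so it listens on all interfaces
    ports:
      - "26257:26257"
    volumes:
      - ./cockroach-data/roach1:/cockroach/cockroach-data
links is omitted here because compose v3 services on the same default network already resolve each other by service name.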

Related

Access container from docker-compose using linuxserver/duckdns IP

I was looking for software like No-IP to dynamically update my IP using a free domain from them, like <domain>.zapto.org, but this time for use with docker containers. So I found out about duckdns and tried setting it up.
Well, perhaps I got it wrong, but from what I understood, I can create a service within my docker-compose services that sets up linuxserver/duckdns. When I do that, I suppose I can then access my other services from that same compose file using the domain created on duckdns, is that right?
For instance, I got this docker-compose:
version: "3.9"
services:
  dns_server:
    image: linuxserver/duckdns:version-13f609b7
    restart: always
    environment:
      TOKEN: ${DUCKDNS_TOKEN}
      TZ: ${TZ}
      SUBDOMAINS: ${DUCKDNS_SUBDOMAINS}
    depends_on:
      - server
      - db
      - phpmyadmin
  server:
    # ...
    restart: always
    ports:
      - "7171:7171"
      - "7172:7172"
    # ...
    command: sh -c "/wait && screen -S tfs ./tfs"
  # Database
  db:
    image: bitnami/mariadb:10.8.7-debian-11-r1
    restart: always
    ports:
      - "3306:3306"
    # ...
  # phpmyadmin
  phpmyadmin:
    # ...
    image: bitnami/phpmyadmin:5.2.1-debian-11-r1
    restart: always
    ports:
      - "8080:8080"
      - "8443:8443"
    # ...
That compose brings up all the containers as expected (screenshot of the container list omitted).
When I try to reach my server service by using 127.0.0.1:7171 or localhost:7171, and also access my phpmyadmin by 127.0.0.1:8080, it works, but it doesn't when I try using <mydomain>.duckdns.org:7171 or <mydomain>.duckdns.org:8080
What is wrong?
As far as I know, when you define the port as "7171:7171", it will be bound to localhost (127.0.0.1), which you can access. If you want to allow public access, try something like:
server:
  ports:
    - "0.0.0.0:7171:7171"
    - "0.0.0.0:7172:7172"
Then you can access the ports via your public IP address or your duckDNS hostname.
FYI: Beware of the security risks of exposing the service to the public.
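To check which interface a published port actually got bound to, docker port is handy (a generic Docker CLI check, not part of the original answer; substitute your own container name):
$ docker port <container-name>
7171/tcp -> 0.0.0.0:7171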

I'm trying to build a Docker image of my Strapi backend connecting to Postgres. The image builds, but running the container fails and I get an error [duplicate]

I'm building an app running on Node.js using PostgreSQL.
I'm using SequelizeJS as the ORM.
To avoid installing a real Postgres daemon and Node.js on my own device, I'm using containers with docker-compose.
When I run docker-compose up, it starts the pg database
database system is ready to accept connections
and the nodejs server, but the server can't connect to the database:
Error: connect ECONNREFUSED 127.0.01:5432
If I run the server without containers (with real Node.js and Postgres on my machine) it works, but I want it to work correctly with containers. I don't understand what I'm doing wrong.
Here is the docker-compose.yml file:
web:
  image: node
  command: npm start
  ports:
    - "8000:4242"
  links:
    - db
  working_dir: /src
  environment:
    SEQ_DB: mydatabase
    SEQ_USER: username
    SEQ_PW: pgpassword
    PORT: 4242
    DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
  volumes:
    - ./:/src
db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: username
    POSTGRES_PASSWORD: pgpassword
Could someone help me please?
(someone who likes docker :) )
Your DATABASE_URL refers to 127.0.0.1, which is the loopback adapter (more here). This means "connect to myself".
When you run both applications (without Docker) on the same host, they are both addressable on the same adapter (also known as localhost).
When you run both applications in containers, they are no longer both on localhost. Instead, you need to point the web container at the db container's IP address on the docker0 adapter, which docker-compose sets up for you.
Change 127.0.0.1 to the container name (e.g. db). For example, change:
DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
to
DATABASE_URL: postgres://username:pgpassword@db:5432/mydatabase
This works thanks to Docker links: the web container has a file (/etc/hosts) with a db entry pointing to the IP that the db container is on. This is the first place a system (in this case, the container) will look when trying to resolve hostnames.
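You can see that resolution from inside the web container (a generic check, not from the original answer; the exact entries and IP will differ on your machine):
$ docker-compose exec web cat /etc/hosts
...
172.17.0.3    db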
For future readers: if you're using Docker Desktop for Mac, use host.docker.internal instead of localhost or 127.0.0.1, as suggested in the docs. I came across the same "connection refused" problem; my backend api service couldn't connect to postgres using localhost/127.0.0.1. Below are my docker-compose.yml and environment variables for reference:
version: "2"
services:
api:
container_name: "be"
image: <image_name>:latest
ports:
- "8000:8000"
environment:
DB_HOST: host.docker.internal
DB_USER: <your_user>
DB_PASS: <your_pass>
networks:
- mynw
db:
container_name: "psql"
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_DB: <your_postgres_db_name>
POSTGRES_USER: <your_postgres_user>
POSTGRES_PASS: <your_postgres_pass>
volumes:
- ~/dbdata:/var/lib/postgresql/data
networks:
- mynw
If you pass the database variables separately, you can set the database host directly:
DB_HOST=<POSTGRES_SERVICE_NAME>   # in your case "db", from the docker-compose file
I had two containers, one called postgresdb and another called node.
I changed my node queries.js from:
const pool = new Pool({
  user: 'postgres',
  host: 'localhost',
  database: 'users',
  password: 'password',
  port: 5432,
})
to:
const pool = new Pool({
  user: 'postgres',
  host: 'postgresdb', // the database container's name
  database: 'users',
  password: 'password',
  port: 5432,
})
All I had to do was change the host to my container name ("postgresdb") and that fixed it for me. I'm sure this can be done better, but I just learned docker-compose and Node.js in the last two days.
If none of the other solutions worked for you, consider manually wrapping PgPool.connect() with a retry on ECONNREFUSED:
const pgPool = new Pool(pgConfig);
const pgPoolWrapper = {
  async connect() {
    for (let nRetry = 1; ; nRetry++) {
      try {
        const client = await pgPool.connect();
        if (nRetry > 1) {
          console.info('Now successfully connected to Postgres');
        }
        return client;
      } catch (e) {
        if (e.toString().includes('ECONNREFUSED') && nRetry < 5) {
          console.info('ECONNREFUSED connecting to Postgres, ' +
              'maybe container is not ready yet, will retry ' + nRetry);
          // Wait 1 second
          await new Promise(resolve => setTimeout(resolve, 1000));
        } else {
          throw e;
        }
      }
    }
  }
};
(See this issue in node-postgres for tracking.)
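Callers then acquire clients through the wrapper instead of the pool. A minimal usage sketch (my addition, not from the original answer):
const client = await pgPoolWrapper.connect();
try {
  const res = await client.query('SELECT NOW()');
  console.log(res.rows[0]);
} finally {
  client.release(); // always hand the client back to the pool
}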
As mentioned in the Docker Compose networking documentation (note its example maps the db ports as "8001:5432", hence the 8001 below):
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the above example, for db, the HOST_PORT is 8001 and the container port is 5432 (postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
So DATABASE_URL should be postgres://username:pgpassword@db:5432/mydatabase
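In compose terms, the docs' distinction looks like this (an illustrative fragment in the docs' style, not this question's file):
db:
  image: postgres
  ports:
    - "8001:5432"   # HOST_PORT:CONTAINER_PORT
# from another service on the same network: postgres://db:5432
# from the host machine:                    postgres://localhost:8001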
I'm here with a tiny modification on how to handle this.
As Andy says in his answer, "you need to point the web container to the db container's" IP address.
Also take into consideration the official documentation on docker-compose links:
"Links are not required to enable services to communicate - by default, any service can reach any other service at that service's name."
Because of that, you can keep your docker-compose.yml this way:
docker-compose.yml
version: "3"
services:
web:
image: node
command: npm start
ports:
- "8000:4242"
# links:
# - db
working_dir: /src
environment:
SEQ_DB: mydatabase
SEQ_USER: username
SEQ_PW: pgpassword
PORT: 4242
# DATABASE_URL: postgres://username:pgpassword#127.0.0.1:5432/mydatabase
DATABASE_URL: "postgres://username:pgpassword#db:5432/mydatabase"
volumes:
- ./:/src
db:
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_USER: username
POSTGRES_PASSWORD: pgpassword
But it's also nice to be verbose while coding, so your approach is fine as well.


Fedora: Application Container Cannot establish connection to DB Container

I am having trouble connecting to a db container from the application container on a Fedora host. I have verified that I can connect to the database using the same credentials via the psql command-line interface, but using the same information in my application does not work.
Here is my docker compose file:
version: '3.3'
services:
  postgrestest:
    build: ./vrs
    command: python3 app.py
    volumes:
      - ./vrs/:/appuser/
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER={{user}}
      - POSTGRES_PASSWORD={{password}}
      - POSTGRES_DB=sharepointvrs
volumes:
  postgres_data:
This is the code used to connect to the container, from within the application container:
dbconfig = environment["database"]
try:
    connection = psycopg2.connect(
        dbname=dbconfig["dbname"],    # sharepointvrs
        user=dbconfig["user"],
        password=dbconfig["password"],
        host=dbconfig["host"],        # tried 0.0.0.0, localhost, and the IP from docker inspect
        port=dbconfig["port"],        # 5432
    )
    connection.autocommit = True
except:
    print("Database initialization failed.")
I've tried 0.0.0.0, localhost, and the IP address obtained from running docker inspect.
In your app's config, set the database host to 'db'.
That name exists as a DNS alias, available to the other containers, based on the service name you set in your compose file:
services:
  db:
    # ...
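Applied to the question's psycopg2 call, only the host changes (a sketch; the other values stay whatever your config provides):
connection = psycopg2.connect(
    dbname=dbconfig["dbname"],
    user=dbconfig["user"],
    password=dbconfig["password"],
    host="db",                # the compose service name instead of localhost or an IP
    port=dbconfig["port"],
)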
The issue was due to firewall interface policies configured by the Docker installation on Fedora.
Docker must be added to the firewall's trusted zone before it gets installed.
More information here.
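If Docker is already installed, the usual firewalld remedy looks something like this (a hedged sketch of the standard firewalld CLI; interface and zone names may differ on your system):
$ sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
$ sudo firewall-cmd --reload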

Connecting to a database in another container with docker-compose

I'm trying to set up docker-compose so that one container has the database and the other one has the application. To the best of my knowledge, I need to connect the two containers with a network.
version: "3"
services:
psqldb:
image: "postgres"
container_name: "psqldb"
environment:
- POSTGRES_USER=usr
- POSTGRES_PASSWORD=pwd
- POSTGRES_DB=mydb
ports:
- "5432:5432"
expose:
- "5432"
volumes:
- $HOME/docker/volumes/postgres/:/var/lib/postgresql/data
networks:
- backend
sbapp:
image: "dsb"
container_name: "dsb-cont"
ports:
- "8080:8080"
expose:
- "8080"
depends_on:
- psqldb
networks:
- backend
networks:
backend:
I also tried setting the network driver to bridge (which didn't change the end result):
networks:
  backend:
    driver: bridge
I'm using the following URL to connect to the database:
spring.datasource.url=jdbc:postgresql://psqldb:5432/mydb
I also tried exposing the port and connecting using localhost:
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
I also tried port 5433 instead of 5432. No matter what combination I used, I got the following error in my app container:
Connection to psqldb:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
However, my database container stays up, and I can connect to it fine from the host with the URL
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
I can also connect to the database from the host if I remove the psqldb container entirely from docker-compose.yml (and use the latter connection URL).
If it makes any difference, I'm using Spring Boot for the application, with this Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar", "app.jar"]
and I'm building the image with the command
docker build --build-arg JAR_FILE=target/*.jar -t dsb .
What am I missing in the two container setup?
The issue I had was that Docker's depends_on only starts the containers in the defined order; it doesn't wait for them to actually be ready.
https://docs.docker.com/compose/startup-order/
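A common way to close that gap (a sketch, not from the original answer) is a healthcheck on the database plus the long-form depends_on; note that condition: service_healthy requires a Compose version that supports it (v2.1 files or the newer Compose Spec):
services:
  psqldb:
    image: "postgres"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U usr -d mydb"]
      interval: 5s
      timeout: 5s
      retries: 10
  sbapp:
    image: "dsb"
    depends_on:
      psqldb:
        condition: service_healthy   # wait until the healthcheck passes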