How can I link MongoDB with other services in docker-compose? - mongodb

I've got a problem.
I made a docker-compose file that runs Mongo and Node.
The problem is that I can't use Mongo from the container, so I cannot start my Node server.
Here is my docker-compose:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
But when I start Mongo outside of the container, my Node app can reach it; I don't know why...
Any idea?
Thanks!

Don't define ports in the DB service; most probably it will work then, but afterwards only the application will be able to access the DB. If you still want to access it from your PC, then you should define a network. Try this:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
And for creating a network:
version: '3'
networks:
  back-tier:
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database

All services in docker-compose are within the docker-compose created network, and can be addressed by their service names from other services. In your case the service names are database and backend, so for instance the database can be accessed by the backend with something like tcp://database:27017. You don't need to link them anymore.
https://runnable.com/docker/docker-compose-networking
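For example, with the compose file above the backend could get its connection string like this. This is only a sketch: the MONGO_URL variable name and the dashboard database name are placeholders I made up, so adjust them to whatever your Node code actually reads.
backend:
  build: ./Backend
  container_name: "dashboard_backend"
  ports:
    - "8080:8080"
  environment:
    # "database" is the compose service name, resolved by Docker's internal DNS;
    # MONGO_URL and the "dashboard" db name are assumptions, not from the original file
    - MONGO_URL=mongodb://database:27017/dashboard
  depends_on:
    - database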
Be aware that depends_on only waits until the process has been started; it does not wait for the process to be ready to accept connections.
https://docs.docker.com/compose/compose-file/#depends_on
https://docs.docker.com/compose/startup-order
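If you need the backend to wait until Mongo actually accepts connections, one option is a healthcheck combined with the long form of depends_on. This is only a sketch: the condition syntax needs Compose file version 2.1+ or a recent docker compose release (it is not part of the version 3 file format), and the ping command assumes the image still ships the legacy mongo shell (newer images only have mongosh).
services:
  database:
    build: ./Database
    healthcheck:
      # assumes the legacy "mongo" shell exists in the image
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  backend:
    build: ./Backend
    depends_on:
      database:
        condition: service_healthy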
The port mappings are only necessary if you want to make a service accessible from the local machine. In your example the backend service is accessible via localhost:8080.
If you want an external container to access a docker-compose service, then localhost:8080 won't work, because localhost in the container isn't the same localhost as on your local machine where the Docker containers are running. You can manually create a docker network and connect the container and the docker-compose services to it. See the docker-compose networking link above and take a look at the section Pre-existing Networks.
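A minimal sketch of the pre-existing network approach, assuming a network created beforehand with docker network create shared-net (the name shared-net is made up; use your own). Any external container attached to the same network can then reach the database by its service name:
# created beforehand on the host: docker network create shared-net
services:
  database:
    build: ./Database
    networks:
      - shared-net
networks:
  shared-net:
    external: true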
Does that help you?

Related

Docker: Cannot run multiple services with docker-compose

I set up docker compose for my project with 2 services: spring-boot and postgresql. I created a Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM openjdk:8-jdk-alpine
MAINTAINER linhan.com
COPY target/LinhAn-0.0.1-SNAPSHOT.jar linhan-server-1.0.0.jar
ENTRYPOINT ["java","-jar","/linhan-server-1.0.0.jar"]
docker-compose.yml:
version: '2'
services:
  spring_boot:
    image: 'linhan'
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      - SPRING_DATASOURCE_URL=jdbc:jdbc:postgresql://localhost:5432/test_db
      - SPRING_DATASOURCE_USERNAME=user
      - SPRING_DATASOURCE_PASSWORD=123456
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  postgres:
    image: 'postgres:13.1-alpine'
    container_name: db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
Then, when I type docker-compose up in the terminal, only postgres runs; spring boot still does not start.
I searched Google for a solution but it seems hopeless. Please help me, thanks a lot!
I think you need to change the SPRING_DATASOURCE_URL to reference your service name instead of localhost (and drop the duplicated jdbc: prefix). The service name is resolved automatically to your service, since all services are part of the default network created by docker-compose:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/test_db
Also, for clarity I would suggest you add the port to your docker-compose postgres service, so it is clear which port is being used, even if it is the default:
postgres:
  image: 'postgres:13.1-alpine'
  container_name: db
  ports:
    - "5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=123456
Also, another suggestion would be to try using a healthcheck to see if your database service has become available, instead of a simple depends_on. The short form marks the dependency as fulfilled as soon as the container is running, regardless of the availability of the database.
Either that, or you can add application logic to retry the database connection in case of failure.
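A rough sketch of the healthcheck variant, assuming your Compose file format supports the condition form of depends_on (2.1+ or a recent docker compose release; plain version '2' as in your file does not):
services:
  spring_boot:
    image: 'linhan'
    build: .
    depends_on:
      postgres:
        # wait until the healthcheck below passes, not just until the container starts
        condition: service_healthy
  postgres:
    image: 'postgres:13.1-alpine'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      timeout: 5s
      retries: 5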

Access Postgres database remotely that's hosted on Azure in docker container with webapi

I am new to Azure cloud services so excuse me if this is a dumb question.
I have a docker-compose file with a .NET Core web API and a Postgres database. I have it running on Azure as a web app and it's working (I can see when I query the API that there's data in the database). However, I would like to access the database remotely so that I can inspect the data via pgAdmin or something similar.
I did bind a port to my pgAdmin site in my docker-compose, but it does not seem like that port is open. I've read somewhere that only ports 80 and 443 can be exposed from Azure web apps when using multi-image containers. (This docker-compose works locally 100% and I can access the pgAdmin site and see the database with all its tables.)
So my question is, how do I run my web API with my Postgres database on Azure and have visibility into my database?
Docker-compose file:
version: '3.8'
services:
  web:
    container_name: 'bootcampapi'
    image: 'myimage'
    build:
      context: .
      dockerfile: backend.dockerfile
    restart: always
    ports:
      - 8080:80
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - bootcampbackend-network
  postgres:
    container_name: 'postgres'
    restart: always
    image: 'postgres:latest'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
      - POSTGRES_DB=database-name
      - PGDATA=database-data
    networks:
      - bootcampbackend-network
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - bootcampbackend-network
    volumes:
      - database-other:/var/lib/pgadmin/
networks:
  bootcampbackend-network:
    driver: bridge
As you have found, App Service only listens on one port. One solution around that is to use a reverse proxy like Nginx to route the traffic to both your containers.
BTW, build, depends_on and networks are unsupported there. See the docs.
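A very rough sketch of the reverse-proxy idea, just to show the shape of it: the nginx.conf with the actual proxy_pass rules is assumed and not shown here, and you would still have to fit this into what App Service supports.
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"   # the single port exposed to the outside
    volumes:
      # assumed config, e.g. routing /pgadmin to pgadmin:80 and everything else to web:80
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  web:
    image: 'myimage'
  pgadmin:
    image: dpage/pgadmin4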

Exception opening socket exception when trying to connect docker container to mongodb

I have a Spring Boot application which connects to a Mongo database. I have created a docker-compose file. The Spring Boot application has two instances. The first instance runs on 8080 and 27017, which works perfectly fine. The second instance runs on 8083 and 27018. I can easily connect to 27017 and 27018 through a Mongo GUI. However, when I run docker-compose up for the second instance, Spring Boot throws the exception.
Following are my docker-compose files:
First Instance (docker-compose.yml):
version: '3'
services:
  app:
    container_name: HR-BACKEND
    restart: always
    build: .
    ports:
      - "8081:8080" #VF Webservice
    links:
      - mongo
  mongo:
    container_name: MONGOHR
    image: mongo:4.0.2
    ports:
      - "27017:27017"
    volumes:
      - /data/hrdb:/data/db
First Instance (application.properties)
spring.data.mongodb.uri=mongodb://mongo:27017/tsp
Second Instance (docker-compose.yml):
version: '3'
services:
  app:
    container_name: VF-BACKEND
    restart: always
    build: .
    ports:
      - "8083:8080" #VF Webservice
    links:
      - mongovf
  mongovf:
    container_name: MONGOVF
    image: mongo:4.0.2
    ports:
      - "27018:27017"
    volumes:
      - /data/hrdb:/data/db
Second Instance (application.properties)
spring.data.mongodb.uri=mongodb://mongovf:27018/tsp
Docker version:
Docker version 18.09.1, build 4c52b90
I could not really find a solution on SO. Please let me know if more details are needed.
In your second instance you are trying to connect to Mongo on the wrong port. application.properties should be:
spring.data.mongodb.uri=mongodb://mongovf:27017/tsp
In a docker compose context, you connect directly to containers without going through the docker host. This is why you have to connect directly to the container port instead of trying to connect through the port published on the docker host.
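In other words, the two sides of a ports mapping mean different things. Annotated for your second compose file:
mongovf:
  image: mongo:4.0.2
  ports:
    - "27018:27017"   # host port 27018 -> container port 27017
# from another container on the same compose network: mongodb://mongovf:27017/tsp
# from the docker host (e.g. your Mongo GUI):         mongodb://localhost:27018/tsp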

Accessing postgres data in docker-compose network

I'm having trouble accessing a database created from a docker-compose file.
Given the following compose file, I should be able to connect to it from java using something like:
jdbc:postgresql://eprase:eprase@database:7000/eprase
However, the connection is rejected. I can't even use pgAdmin to connect to it using the same details to create a new server.
I've entered the database container and run psql commands to verify that the eprase user and database have been created according to the postgres Docker documentation; everything seems fine. I can't tell if the problem is within the database container or something I need to change in the compose network.
The client & server services can largely be ignored; the server is a Java-based web API and the client is an Angular app.
Compose file:
version: "3"
services:
client:
image: eprase/client:latest
build: ./client/eprase-app
networks:
api:
ports:
- "5000:80"
tty: true
depends_on:
- server
server:
image: eprase/server:latest
build: ./server
networks:
api:
ports:
- "6000:8080"
depends_on:
- database
database:
image: postgres:9
volumes:
- "./database/data:/var/lib/postgresql/data"
environment:
- "POSTGRES_USER=eprase"
- "POSTGRES_PASSWORD=eprase"
- "POSTGRES_DB=eprase"
networks:
api:
ports:
- "7000:5432"
restart: unless-stopped
pgadmin:
image: dpage/pgadmin4:latest
environment:
- "PGADMIN_DEFAULT_EMAIL=admin#eprase.com"
- "PGADMIN_DEFAULT_PASSWORD=eprase"
networks:
api:
ports:
- "8000:80"
depends_on:
- database
networks:
api:
The PostgreSQL database is listening on container port 5432. The 7000:5432 line is mapping host port 7000 to container port 5432. That allows you to connect to the database on port 7000. But, your services on a common network (api) should communicate with each other via the container ports.
So, from the perspective of the containers for the client and server services, the connection string should be:
jdbc:postgresql://eprase:eprase@database:5432/eprase

How to access a postgres docker container from another docker container without an IP address

How can I access a postgres docker container from another docker container without an IP address?
I want to store data in Postgres by using myweb. In the jar the host is given like localhost:5432/db..
Here is my compose file:
version: "3"
services:
myweb:
build: ./myweb
container_name: app
ports:
- "8080:8080"
- "9090:9090"
networks:
- front-tier
- back-tier
depends_on:
- "postgresdb"
postgresdb:
build: ./mydb
image: ppk:postgres9.5
volumes:
- dbdata:/var/lib/postgresql
ports:
- "5432:5432"
networks:
- back-tier
volumes:
dbdata: {}
networks:
front-tier:
back-tier:
Instead of localhost:5432/db.. use the postgresdb:5432/db.. connection string.
By default a container is reachable under the same hostname as its service name.
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a Postgres server. The most important part is the health check, which delays the start of the myweb container until Postgres is ready to accept connections.
Note that this can be executed directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
That is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are in the same network named back-tier and the Docker daemon resolves the service name like a DNS name, so the two containers can communicate. I think my solution will help you.