I have a Node.js application, a Next.js app for the front end, and Redis and Postgres as databases. I have dockerized Next.js and Node.js in separate containers.
My docker-compose.yaml is as follows:
version: '3'
services:
  redis-server:
    image: 'redis'
    restart: always
  postgres-server:
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - "5433:5432"
    volumes:
      - ./docker/postgres/data/data/pg_hba.conf:/var/lib/postgresql/app_data/pg_hba.conf
      - ./src/db/sql/CREATE_TABLES.sql:/docker-entrypoint-initdb.d/CREATE_TABLES.sql
      - ./src/db/sql/INSERT_TO_TABLES.sql:/docker-entrypoint-initdb.d/INSERT_TO_TABLES.sql
      - ./src/db/sql/CREATE_FUNCTIONS.sql:/docker-entrypoint-initdb.d/CREATE_FUNCTIONS.sql
  node-app:
    build: .
    ports:
      - "4200:4200"
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    container_name: client
    restart: always
    volumes:
      - ./:/app
      - /app/node_modules
      - /app/.next
    ports:
      - 3000:3000
When using SSR, I cannot make a request to localhost:4200. I understand that this is because the apps are in different containers: when a request does not come from the client side, the client container itself is checked for a server at port 4200. What I am not sure about is how to refer to the other container, using the container name or something similar, to make an API request for the SSR data (like fetch('node-app/users')).
docker-compose sets up a network for all the services in a compose file. To reach another container on that network, you use its service name as the hostname, and it will resolve to the right IP.
So in your case, fetch('http://node-app:4200/users') should do the trick.
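For example, in a Next.js page the server-side fetch uses the service name, while code running in the browser keeps using the port published on the host. A minimal sketch; the /users endpoint and the props shape are assumptions based on the question, and it presumes a server-side fetch implementation is available (Node 18+ or a polyfill):

// pages/users.js -- a sketch, assuming a JSON endpoint at /users
export async function getServerSideProps() {
  // Runs inside the client container during SSR, so the compose
  // service name resolves to the node-app container.
  const res = await fetch('http://node-app:4200/users');
  const users = await res.json();
  return { props: { users } };
}

export default function Users({ users }) {
  // Browser-side requests would instead target http://localhost:4200,
  // because 'node-app' only resolves inside the compose network.
  return <ul>{users.map((u) => <li key={u.id}>{u.name}</li>)}</ul>;
}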
I have tried many configurations and scenarios based around this, which is mostly a tutorial that stops at one Ghost instance. I am trying to scale it to 2 with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers, they work, but port 80 returns a 503.
version: "3.1"
volumes:
mysql-volume:
ghost-volume:
networks:
ghost-network:
services:
mysql:
image: mysql:5.7
container_name: mysql
volumes:
- mysql-volume:/var/lib/mysql
networks:
- ghost-network
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: db
MYSQL_USER: blog-user
MYSQL_PASSWORD: supersecret
ghost:
build: ./ghost
image: laminar/ghost:3.0
volumes:
- ghost-volume:/var/lib/ghost/content
networks:
- ghost-network
restart: always
ports:
- "2368"
environment:
database__client: mysql
database__connection__host: mysql
database__connection__user: blog-user
database__connection__password: supersecret
database__connection__database: db
depends_on:
- mysql
entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
command: ["node", "current/index.js"]
haproxy:
image: eeacms/haproxy
depends_on:
- ghost
ports:
- "80:5000"
- "1936:1936"
environment:
BACKENDS: "ghost"
DNS_ENABLED: "true"
LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error. The particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.
I needed to add a backend URL to the environment, and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
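For reference, a sketch of what those two additions might look like in the compose file; the exact variable names (Ghost's url setting and the proxy's BACKENDS_PORT) are assumptions based on the respective images' documentation, not a tested configuration:

services:
  ghost:
    environment:
      # ...existing database__* settings unchanged...
      url: http://localhost:5050    # assumption: Ghost's documented `url` config, set via env
  haproxy:
    environment:
      BACKENDS: "ghost"
      BACKENDS_PORT: "2368"         # assumption: tells the proxy which backend port to use
      DNS_ENABLED: "true"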
I have implemented a microservice architecture with several servers and databases. I have installed Elasticsearch with Docker, and when I do docker-compose up, everything seems to run fine.
However, I would like to integrate Elasticsearch with the several databases in the system (2 MongoDB instances in the sample below). How do I sync the two MongoDB instances, running in different containers, with Elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to add an elasticsearch section to any docker-compose file and start it; these are all independent Docker containers, and as long as their exposed ports on the host don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to the Elasticsearch multi-Docker installation using a Docker file for more info.
NOTE: You have not mentioned what exact issue you are facing; you have only said that all the Docker containers are running fine. Please explain in detail what exactly you are trying to solve.
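As a starting point for the sync itself, a service on the backend network can read from MongoDB and index into Elasticsearch, addressing both by their compose service names. A minimal sketch assuming the official mongodb and @elastic/elasticsearch Node.js clients; the database, collection, and index names are made up for illustration:

// sync.js -- a hedged sketch, not a production sync pipeline
const { MongoClient } = require('mongodb');
const { Client } = require('@elastic/elasticsearch');

async function syncNews() {
  // Compose service names resolve on the shared "backend" network
  const mongo = new MongoClient('mongodb://newsdb:27017');
  const es = new Client({ node: 'http://elasticsearch:9200' });

  await mongo.connect();
  const articles = await mongo.db('news').collection('articles').find().toArray();

  for (const article of articles) {
    // _id is a reserved metadata field in Elasticsearch, so strip it from the body
    const { _id, ...source } = article;
    await es.index({ index: 'news', id: String(_id), body: source });
  }
  await mongo.close();
}

syncNews().catch(console.error);

A real integration would typically use MongoDB change streams or a bulk indexer rather than a one-shot copy like this.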
I'm trying to set up a Flask <-> MongoDB <-> mongo_express application. I have 3 containers defined in my docker-compose.yml, and I start them successfully. However, while the Mongo part is OK (I can access the DB via the express API at localhost:8081), Flask can't access the DB.
What I'm looking for:
- I want to be able to send requests from the host machine (or any other machine on the network) to Flask (running on 0.0.0.0:5000), and from Flask to the DB (running on localhost:27017, accessed using a pymongo wrapper).
- I also want to have access to mongo_express (on localhost:8081) and from it to the DB. [This part is already working!]
In order to debug it, I removed the Flask container from the docker-compose.yml, restarted, and ran it locally, and voila! Everything works (meaning my pytest runs are OK; I have a test that sends a request to the Flask server, which, in turn, uses the pymongo wrapper to access the DB and returns data from it to the client).
I guess my network configuration is flawed, but I don't understand where.
Here is my docker-compose.yml (Flask is commented out, since it is currently running locally):
version: "3.8"
services:
# mgmt_server:
# build: ./server # Dockerfile just copies py files, installs pip requirements, and runs "python server.py"
# container_name: mgmt_server
# restart: always
# networks:
# - backend # Connect to flask
# ports:
# - "5000:5000"
mongo:
image: mongo:latest
container_name: mongodb
restart: always
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=pass
volumes:
- /data/db:/data/db
networks:
- backend # Connect to flask
- frontend # Connect to mongo_express
ports:
- "27017:27017"
mongo-express:
image: mongo-express:latest
container_name: mongo_express
restart: always
environment:
- ME_CONFIG_MONGODB_SERVER=mongo
- ME_CONFIG_MONGODB_PORT=27017
- ME_CONFIG_MONGODB_ENABLE_ADMIN=true
- ME_CONFIG_MONGODB_AUTH_DATABASE=admin
- ME_CONFIG_MONGODB_ADMINUSERNAME=root
- ME_CONFIG_MONGODB_ADMINPASSWORD=pass
# Uncomment if a secure login via browser is required
# - ME_CONFIG_BASICAUTH_USERNAME=root
# - ME_CONFIG_BASICAUTH_PASSWORD=pass
links:
- mongo
networks:
- frontend # Connect to mongo_express
ports:
- 8081:8081
networks:
backend:
driver: bridge
frontend:
driver: bridge
$ docker network ls   # When mgmt_server is run locally
NETWORK ID     NAME                 DRIVER    SCOPE
10577560a149   bridge               bridge    local
a037a11e12bb   host                 host      local
b153eea9db12   node_mgmt_backend    bridge    local
c8e1b58ffb44   node_mgmt_frontend   bridge    local
4f7b75b5695a   none                 null      local
The problem was that Flask tried to access Mongo on localhost. When running Flask on the host, this is OK, but when Flask is containerized, it gets its own IP, and so does Mongo, and localhost just doesn't point to the right place.
Editing the networks in docker-compose.yml and pointing the pymongo client at the updated Mongo IP fixed the issue:
version: "3.8"
services:
mgmt_server:
build: ./server
container_name: mgmt_server
restart: always
ports:
- "5000:5000"
networks:
app_net:
ipv4_address: 172.16.238.2
mongo:
image: mongo:latest
container_name: mongodb
restart: always
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=pass
volumes:
- /data/db:/data/db
ports:
- "27017:27017"
networks:
app_net:
ipv4_address: 172.16.238.3
mongo-express:
image: mongo-express:latest
container_name: mongo_express
restart: always
environment:
- ME_CONFIG_MONGODB_SERVER=mongo
- ME_CONFIG_MONGODB_PORT=27017
- ME_CONFIG_MONGODB_ENABLE_ADMIN=true
- ME_CONFIG_MONGODB_AUTH_DATABASE=admin
- ME_CONFIG_MONGODB_ADMINUSERNAME=root
- ME_CONFIG_MONGODB_ADMINPASSWORD=pass
# Uncomment if a secure login via browser is required
# - ME_CONFIG_BASICAUTH_USERNAME=root
# - ME_CONFIG_BASICAUTH_PASSWORD=pass
links:
- mongo
ports:
- 8081:8081
networks:
app_net:
ipv4_address: 172.16.238.4
networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.238.0/24"
This was helpful, but eventually I found this to be more informative.
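Note that the fixed IPs are not strictly required: since all three services share app_net, pointing the pymongo client at the service name (something like mongodb://root:pass@mongo:27017/, as an assumed connection string) works as well. A sketch of the same compose file without IPAM:

version: "3.8"
services:
  mgmt_server:
    build: ./server
    ports:
      - "5000:5000"
    networks:
      - app_net
  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    networks:
      - app_net
  # mongo-express omitted for brevity
networks:
  app_net:
    driver: bridge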
I've got a problem.
I made a docker-compose setup that runs Mongo and Node.
The problem is that I can't use Mongo from the container, so I cannot start my Node server.
Here is my docker-compose:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
But when I start Mongo outside the container, my Node app can reach it; I don't know why...
Any ideas?
Thanks!
Don't define ports in the DB service; afterwards, only the application will be able to access the DB. Most probably it will work then. If you still want to access the DB from your PC, then you should define a network. Try this:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
And for creating a network:
version: '3'
networks:
  back-tier:
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database
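Whichever variant you use, the Node server itself must connect to Mongo via the service name, not localhost; connecting to localhost works when Mongo runs directly on the host, which is likely why the non-container setup worked. A sketch assuming the official mongodb driver; the database name dashboard is hypothetical:

// db.js -- a sketch, not the asker's actual code
const { MongoClient } = require('mongodb');

// "database" is the compose service name and resolves from the backend
// container; "localhost" would point at the backend container itself.
const client = new MongoClient('mongodb://database:27017');

async function connect() {
  await client.connect();
  return client.db('dashboard'); // hypothetical database name
}

module.exports = { connect };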
All services in docker-compose are within the docker-compose created network, and can be addressed by their service names from other services. In your case the service names are database and backend, so for instance the database can be accessed by the backend with something like tcp://database:27017. You don't need to link them anymore.
https://runnable.com/docker/docker-compose-networking
Be aware that depends_on only waits until the process has been started; it does not wait for the process to be ready to accept connections.
https://docs.docker.com/compose/compose-file/#depends_on
https://docs.docker.com/compose/startup-order
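If startup order matters, a healthcheck combined with a depends_on condition can close that gap (the Postgres example further down in this document uses the same pattern). A sketch, assuming a mongo image that still ships the mongo shell and a Compose file version that supports depends_on conditions:

services:
  database:
    build: ./Database
    healthcheck:
      # Assumed check: ping the server via the legacy mongo shell
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      retries: 5
  backend:
    build: ./Backend
    depends_on:
      database:
        condition: service_healthy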
The port mappings are only necessary if you want to make a service accessible from the local machine. In your example, the backend service is accessible via localhost:8080.
If you want an external container to access a docker-compose service, localhost:8080 won't work, because localhost inside a container isn't the same localhost as on the machine where the Docker containers are running. You can manually create a Docker network and connect both the external container and the docker-compose services to it; see the docker-compose networking link above and take a look at the section Pre-existing Networks, sketched below.
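A sketch of the pre-existing network approach; the network name shared-net is an assumption, and it must be created first with docker network create shared-net:

version: '3'
networks:
  shared-net:
    external: true   # created outside this compose file
services:
  backend:
    build: ./Backend
    ports:
      - "8080:8080"
    networks:
      - shared-net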
Does that help you?
How do I access a Postgres Docker container from another Docker container without an IP address?
I want to store data in Postgres by using myweb. In the jar, the host is given like localhost:5432/db..
Here is my compose file:
version: "3"
services:
myweb:
build: ./myweb
container_name: app
ports:
- "8080:8080"
- "9090:9090"
networks:
- front-tier
- back-tier
depends_on:
- "postgresdb"
postgresdb:
build: ./mydb
image: ppk:postgres9.5
volumes:
- dbdata:/var/lib/postgresql
ports:
- "5432:5432"
networks:
- back-tier
volumes:
dbdata: {}
networks:
front-tier:
back-tier:
Instead of the localhost:5432/db.. connection string, use postgresdb:5432/db..
By default, a container has the same hostname as its service name.
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a Postgres server. The most important part is the health check, which delays the start of the myweb container until Postgres is ready to accept connections.
Note that this can be executed directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
This is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are on the same network, named back-tier, and the Docker daemon resolves the service name like a DNS name to enable communication between the two containers. I hope this solution helps you.