With docker-compose, my app container cannot see the mongodb container

Here is my docker compose file:
version: "3.3"
services:
test:
image: test
networks:
- mongo_net
ports:
- 4000:80
depends_on:
- mongodb
links:
- mongodb
mongodb:
image: mongo:latest
networks:
- mongo_net
ports:
- 27017:27017
volumes:
- local_data:/data/db
volumes:
local_data:
networks:
mongo_net:
driver: bridge
The 'test' container cannot reach the 'mongodb' instance.
My assumption was that the 'links' section would connect the two, but that is not happening.
What am I missing?

For your compose file, try just using depends_on. links is deprecated, and that may be why you are currently getting issues, since this is a v3 compose file.
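On a user-defined compose network, containers resolve each other by service name, so the app can connect to mongodb://mongodb:27017/<your-db> (the database name is whatever your app uses). A minimal sketch of the same file with links removed:
version: "3.3"
services:
  test:
    image: test
    networks:
      - mongo_net
    ports:
      - 4000:80
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    networks:
      - mongo_net
    ports:
      - 27017:27017
    volumes:
      - local_data:/data/db
volumes:
  local_data:
networks:
  mongo_net:
    driver: bridge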

Related

How to connect remotely to Mongodb running on Docker-compose?

How do I change this script so that I can connect remotely to my MongoDB running in docker-compose from different machines (machines that are not connected to the same network/internet provider)?
I want to allow all remote connections.
I don't care about security matters as it's just for practice purposes!
My docker-compose.yaml file:
version: "3.8"
services:
mongodb:
image: mongo
container_name: mongodb
ports:
- 27017:27017
volumes:
- data:/data
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=admin
mongo-express:
image: mongo-express
container_name: mongo-express
restart: always
ports:
- 8081:8081
environment:
- ME_CONFIG_MONGODB_ADMINUSERNAME=admin
- ME_CONFIG_MONGODB_ADMINPASSWORD=admin
- ME_CONFIG_MONGODB_SERVER=mongodb
volumes:
data: {}
networks:
default:
name: mongodb_network
I solved my issue by migrating my data to Atlas (cloud.mongodb.com).
Answer update: to provide more info as suggested by @nuhkoca, this is the video tutorial I followed to create a MongoDB Atlas cluster: https://www.youtube.com/watch?v=xrc7dIO_tXk&t=15s And this is how the database is referenced from the resources file in my Spring Boot backend API:
server.port=<Port_number>
spring.data.mongodb.uri=<Link_to_mongodb_atlas>
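For reference (not taken from the original answer), an Atlas connection string generally uses the mongodb+srv scheme; the placeholders below are hypothetical and should be replaced with the values shown in your cluster's Connect dialog:
# hypothetical placeholders, not values from the original answer
spring.data.mongodb.uri=mongodb+srv://<username>:<password>@<cluster-host>.mongodb.net/<database>?retryWrites=true&w=majority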

I'm running docker-compose up and the terminal shows me: (root) Additional property mongo is not allowed

I'm practicing with Docker, but I get this message in my terminal. Does anyone have a solution?
My docker-compose file:
mongo:
  image: mongo
  ports:
    - "27017:27017"
  restart: always
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - mongo
  command: node index.js
Terminal:
(root) Additional property mongo is not allowed
You are missing the top-level services keyword.
version: "3.9" # optional since v1.27.0
services:
mongo:
image: mongo
ports:
- "27017:27017"
restart: always
.....
See the official documentation.
Thanks, my problem was that web was at the same indentation level as services.
version: "3"
services:
mongo:
image: mongo
ports:
- "27017:27017"
restart: always
web:
build: .
ports:
- "3000:3000"
links:
- mongo
command: node index.js
Thanks, everyone.

Integrate elasticsearch with multiple mongodb instances in docker-compose

I have implemented a microservice architecture with several servers and databases. I have installed elasticsearch with Docker, and when I run docker-compose up, everything seems to run fine.
However, I would like to integrate elasticsearch with the several databases (2 mongodb instances in the sample below) in the system. How do I sync the two mongodb containers with elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an elasticsearch section to any docker-compose file and start it. All of these are independent docker containers, and as long as their exposed host ports do not interfere with each other and you have the correct configuration in place, it should work.
Please refer to the elasticsearch multi-container installation with Docker documentation for more info.
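As a small illustration of what "correct configuration" means here, a service on the backend network can reach Elasticsearch by its compose service name on port 9200. The environment variable name below is only an assumption for illustration, not part of the original setup:
weather-service:
  container_name: weather-service
  build: ./weather_service
  networks:
    - backend
  environment:
    # hypothetical variable name; the key point is that the hostname is the
    # compose service name "elasticsearch" and the port is its container port 9200
    - ELASTICSEARCH_URL=http://elasticsearch:9200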
NOTE: You have not mentioned what exact issue you are facing; you have only said that all the docker containers are running fine, so please explain in detail what exactly you are trying to solve.

Making migrations for hasura container and running console

I want to set up an offline development environment to develop my database with Hasura.
I know about the container image with the cli-migrations tag, but the command:
hasura-cli console
is not accessible from outside the container.
My configuration for the docker-compose.yml is:
version: '3'
services:
  hasura:
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://[some user]:[some pass]@db:5432/[some db]
      - HASURA_GRAPHQL_ENABLE_CONSOLE=false
    image: hasura/graphql-engine:v1.0.0-rc.1.cli-migrations
    container_name: hasura
    volumes:
      - ./hasura-migrations:/hasura-migrations
    networks:
      - hasura-db
    ports:
      - "8081:8080"
      - "8082:8081"
    restart: always
    command: hasura-cli console --console-port 8081 --no-browser
  db:
    environment:
      - POSTGRES_USER=[some user]
      - POSTGRES_PASSWORD=[some pass]
      - POSTGRES_DB=[some db]
    image: postgres:11.4-alpine
    container_name: db
    restart: always
    networks:
      - hasura-db
networks:
  hasura-db:
There is a pull request in the hasura graphql-engine project for this issue, but it is not merged yet.
I'm looking for a workaround in the meantime.
I found a workaround for this!
If I install the Hasura CLI on my machine and run
hasura console --console-port 8080 --endpoint http://127.0.0.1:8081
I can connect to the Hasura API and run the console locally.
This is my updated docker-compose.yml:
version: '3'
services:
  hasura:
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://[some user]:[some pass]@db:5432/[some db]
      - HASURA_GRAPHQL_ENABLE_CONSOLE=false
    image: hasura/graphql-engine:v1.0.0-rc.1.cli-migrations
    container_name: hasura
    volumes:
      - ./hasura-migrations:/hasura-migrations
    networks:
      - hasura-db
    ports:
      - "8081:8080"
    restart: always
  db:
    environment:
      - POSTGRES_USER=[some user]
      - POSTGRES_PASSWORD=[some pass]
      - POSTGRES_DB=[some db]
    image: postgres:11.4-alpine
    container_name: db
    restart: always
    networks:
      - hasura-db
networks:
  hasura-db:
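For completeness, the local CLI workflow is roughly the following sketch; the hasura init step and the project directory name are assumptions on my part (the CLI normally expects to run inside a project folder containing a config.yaml), not something taken from the compose file above:
# sketch, assuming the Hasura CLI is installed on the host machine
hasura init hasura-project    # assumed step: scaffolds config.yaml and a migrations/ directory
cd hasura-project
hasura console --console-port 8080 --endpoint http://127.0.0.1:8081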

docker-compose suppress mongodb output

This isn't a breaking issue for me, but I have about four images stitched together in one service: postgres, redis, mongodb, and my application, which is a Python Flask application.
What I want to do is disable the console output, mainly for the mongodb image because it produces a lot of it, so that after running docker-compose up I can see all the output from my Flask unit tests without scrolling up and visually sorting through the mongodb noise I don't need to see. My docker-compose yaml looks like this:
postgres:
  image: postgres:9.6.1
  ports:
    - '5432:5432'
  volumes:
    - ~/.docker-volumes/docker-login/postgresql/data:/var/lib/postgresql/data
redis:
  image: redis:3.0
  ports:
    - '6379:6379'
  volumes:
    - ~/.docker-volumes/docker-login/redis/data:/var/lib/redis/data
mongo:
  image: mongo:latest
  ports:
    - '27017:27017'
  volumes:
    - ~/.docker-volumes/docker-login/mongodb/data:/var/lib/mongo/data
workspace:
  build: .
  volumes:
    - .:/workspace
    - ./logs:/workspace/logs
  ports:
    - '5000:5000'
  links:
    - mongo
    - postgres
    - redis
  tty: true
  entrypoint:
    - bash
    - workspace/entrypoint.sh
From the official documentation, you can do:
version: "3.7"
services:
some-service:
image: some-service
logging:
driver: "none"
This works for me!
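Applied to the mongo service from the question, that would look roughly like the sketch below; note that the logging key assumes the file is moved to the versioned compose format used in the snippet above:
version: "3.7"
services:
  mongo:
    image: mongo:latest
    logging:
      driver: "none"   # discard all console output from the mongo container
    ports:
      - '27017:27017'
    volumes:
      - ~/.docker-volumes/docker-login/mongodb/data:/var/lib/mongo/data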
I would suggest running docker-compose up -d
and then following only the logs of the services you want to see:
docker-compose logs -f <service_name>
Logs Documentation
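With the compose file from the question, that could look like this (using the workspace service defined there):
docker-compose up -d
docker-compose logs -f workspace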