Cannot find the database in MongoDB - mongodb

I cannot find the database that my project (Rocket.Chat) is using. The database location is mongodb://mongodb:27017/rocketchat. When I look for the database in the mongo shell, it does not appear. Can anyone help me locate this database?
docker-compose.yml
volumes:
  mongodb_data: { driver: local }

services:
  rocketchat:
    image: registry.rocket.chat/rocketchat/rocket.chat:${RELEASE:-latest}
    restart: on-failure
    labels:
      traefik.enable: "true"
      traefik.http.routers.rocketchat.rule: Host(`localhost`)
      traefik.http.routers.rocketchat.tls: "true"
      traefik.http.routers.rocketchat.entrypoints: https
      traefik.http.routers.rocketchat.tls.certresolver: le
    environment:
      MONGO_URL: "mongodb://mongo:27017/rocketchat"
      MONGO_OPLOG_URL: "mongodb://mongodb:27017/local"
      ROOT_URL: "http://localhost:3000"
      PORT: ${PORT:-3000}
      DEPLOY_METHOD: docker
      DEPLOY_PLATFORM: ${Windows}
    depends_on:
      - mongodb
    expose:
      - ${PORT:-3000}
    ports:
      - "${BIND_IP:-0.0.0.0}:${HOST_PORT:-3000}:${PORT:-3000}"

  mongodb:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-4.4}
    restart: on-failure
    volumes:
      - mongodb_data:/bitnami/mongodb
    environment:
      MONGODB_REPLICA_SET_MODE: primary
      MONGODB_REPLICA_SET_NAME: ${MONGODB_REPLICA_SET_NAME:-rs0}
      MONGODB_PORT_NUMBER: ${MONGODB_PORT_NUMBER:-27017}
      MONGODB_INITIAL_PRIMARY_HOST: ${MONGODB_INITIAL_PRIMARY_HOST:-mongodb}
      MONGODB_INITIAL_PRIMARY_PORT_NUMBER: ${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}
      MONGODB_ADVERTISED_HOSTNAME: ${MONGODB_ADVERTISED_HOSTNAME:-mongodb}
      MONGODB_ENABLE_JOURNAL: ${MONGODB_ENABLE_JOURNAL:-true}
      ALLOW_EMPTY_PASSWORD: ${ALLOW_EMPTY_PASSWORD:-yes}
$ mongo
[legacy mongo shell startup banner; see https://docs.mongodb.com/mongodb-shell/install/]
================
---
The server generated these startup warnings when booting:
2022-09-03T07:38:42.415+05:30: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> show databases;
admin 0.000GB
config 0.000GB
local 0.000GB
>
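For what it's worth, a show databases listing that only contains admin, config and local usually means the shell is connected to a MongoDB instance that Rocket.Chat has never written to (for example a MongoDB installed on the host rather than the mongodb container from the compose file above), since MongoDB only creates a database once data is written to it. A minimal sketch of how one might check inside the container instead, assuming the stack is running and the service is named mongodb as above:

# open a mongo shell inside the mongodb service of this compose project
docker-compose exec mongodb mongo mongodb://localhost:27017/rocketchat

# inside the shell
show dbs            # rocketchat appears only after Rocket.Chat has written data
use rocketchat
show collections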

Related

What can be the reason why I can't connect to CrateDB with Grafana and locally with DBeaver?

Cheers. I am currently trying to reproduce the tutorial for developing an application / solution with Fiware shown in Desarrolla tu primer aplicación en Fiware, and I am having difficulty connecting Grafana with the Crate database (recommended by Fiware).
My docker-compose file configuration is:
version: "3.5"
services:
orion:
image: fiware/orion
hostname: orion
container_name: fiware-orion
networks:
- fiware_network
depends_on:
- mongo-db
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG -noCache
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
networks:
- fiware_network
ports:
- "27017:27017"
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
grafana:
image: grafana/grafana:8.2.6
container_name: fiware-grafana
networks:
- fiware_network
ports:
- "3000:3000"
depends_on:
- crate
quantumleap:
image: fiware/quantum-leap
container_name: fiware-quantumleap
networks:
- fiware_network
ports:
- "8668:8668"
depends_on:
- mongo-db
- orion
- crate
environment:
- CRATE_HOST=crate
crate:
image: crate:1.0.5
networks:
- fiware_network
ports:
# Admin UI
- "4200:4200"
# Transport protocol
- "4300:4300"
command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
volumes:
- cratedata:/data
volumes:
mongo-db:
cratedata:
networks:
fiware_network:
driver: bridge
After starting the containers, I get a positive response from the OCB (Orion) and from QuantumLeap, and after creating the subscription between Orion and QuantumLeap the data is stored and updated correctly in the Crate database.
Unfortunately, I am not able to visualize the data in Grafana.
I thought the reason was that CrateDB was removed as a Grafana plugin several versions ago, but after researching how to connect CrateDB to Grafana through a Postgres data source (I read: Visualizing time series data with Grafana and CrateDB), I am still having difficulty making the connection; Grafana shows "Query error: dbquery error: EOF".
[Grafana settings image]
The difference with respect to the guide is the listening port: with port 5432 I get a response indicating that it is not possible to connect to the server, so I am using port 4300.
After configuring the data source and trying to query from Grafana, I get the mentioned EOF error.
[EOF error in Grafana image]
I tried to connect from a database IDE (DBeaver) and I get exactly the same problem.
[EOF error in DBeaver image]
Is there something I am missing?
What should I change in my docker configuration, or ports, or anything else to fix this?
I think it is worth mentioning that I am studying this because a project I am working on requires visualizing context changes in real time with Fiware and Grafana.
Thanks in advance
The PostgreSQL port 5432 must also be exposed by the CrateDB Docker container.
Additionally, I highly recommend using a recent (or simply the latest) CrateDB version; the current stable release is 5.0.0, and the version you are using, 1.0.5, is very old and no longer maintained.
Full example entry:
crate:
  image: crate:latest
  networks:
    - fiware_network
  ports:
    # Admin UI
    - "4200:4200"
    # Transport protocol
    - "4300:4300"
    # PostgreSQL protocol
    - "5432:5432"
  command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  volumes:
    - cratedata:/data
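Once 5432 is published, the Grafana side is just a regular PostgreSQL data source pointing at the crate service. A minimal provisioning sketch, assuming the compose service name crate and CrateDB's default crate user and doc schema (the same values can be entered manually in the data source UI); the file path is hypothetical:

# grafana/provisioning/datasources/cratedb.yaml
apiVersion: 1
datasources:
  - name: CrateDB
    type: postgres
    url: crate:5432        # compose service name and the newly exposed PostgreSQL port
    user: crate            # CrateDB's default user (no password by default)
    database: doc          # CrateDB's default schema
    jsonData:
      sslmode: disable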

How to use fluent-bit with Docker-compose

I want to use the fluent-bit docker image to help me persist the ephemeral docker container logs to a location on my host (and later use it to ship logs elsewhere).
I am facing issues such as:
Cannot start service clamav: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused
I have read a number of posts, including configuring fluentbit with docker, but I'm still at a loss.
My docker-compose setup is made up of nginx, our app, Keycloak, Elasticsearch and ClamAV. I added fluent-bit and made it start first via depends_on, and I changed the other services to use the fluentd logging driver.
Part of the config:
clamav:
  container_name: clamav-app
  image: tiredofit/clamav:latest
  restart: always
  volumes:
    - ./clamav/data:/data
    - ./clamav/logs:/logs
  environment:
    - ZABBIX_HOSTNAME=clamav-app
    - DEFINITIONS_UPDATE_FREQUENCY=60
  networks:
    - iris-network
  expose:
    - "3310"
  depends_on:
    - fluentbit
  logging:
    driver: fluentd

fluentbit:
  container_name: iris-fluent
  image: fluent/fluent-bit:latest
  restart: always
  networks:
    - iris-network
  volumes:
    - ./fluent-bit/etc:/fluent-bit/etc
  ports:
    - "24224:24224"
    - "24224:24224/udp"
I have tried to proxy_pass 24224 to fluentbit in nginx and to start nginx first; that avoided the error for clamav and elasticsearch, but I got the same error with keycloak.
So how can I configure the services to use the host, or is it that localhost is not the "external" host?
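One detail that often trips this setup up: the fluentd logging driver runs in the Docker daemon on the host, not inside the compose network, so fluentd-address must be reachable from the host (the default is 127.0.0.1:24224), and fluent-bit has to be accepting connections before the other containers start, since depends_on only orders container startup. A sketch of the logging options matching the port mapping above, assuming Docker 20.10+ for the fluentd-async option:

clamav:
  logging:
    driver: fluentd
    options:
      # the daemon on the host connects here, so use the published host port,
      # not the compose service name
      fluentd-address: 127.0.0.1:24224
      # queue logs and retry instead of failing when fluent-bit is not up yet
      fluentd-async: "true"
      tag: clamav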

Docker MongoDB seed notification

I want to deploy a stack with a MongoDB database and a web server in Docker Swarm; the database must be filled with initial data, and I found this valid solution.
Given that stack services start in no particular order, how can I be sure that the web server will read the initial data correctly?
I probably need some notification system (Redis?), but I am new to MongoDB, so I am looking for well-known solutions to this problem (which I think is pretty common).
I would highly suggest looking at health checks in docker-compose.yml. You can set the health check command to a MongoDB-specific check; only once the health check passes should the web server start sending requests to the MongoDB container.
Have a look at the example file below and change the health check as needed:
version: "3.3"
services:
mongo:
image: mongo
ports:
- "27017:27017"
volumes:
- ./data/mongodb/db:/data/db
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
interval: 10s
timeout: 10s
retries: 5
start_period: 40s
webserver:
image: custom_webserver_image:latest
volumes:
- $PWD:/app
links:
- mongodb
ports:
- 8000:8000
depends_on:
- mongo
Ref:- https://docs.docker.com/compose/compose-file/#healthcheck
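Note that with the short depends_on form above, Compose only waits for the mongo container to be created, not for its health check to pass. To actually gate the web server on the health check you need the long depends_on form, which is supported by the 2.x file formats and by the current Compose Specification (docker compose v2), but not by the 3.x file versions. A sketch, assuming a Compose version that supports it:

  webserver:
    depends_on:
      mongo:
        condition: service_healthy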

Prisma Creating New Database in MongoDB

I've got Prisma connected to my MongoDB, but when I make any mutations a new database called default_default is created in MongoDB and the data is added there instead of the database I specified when going through the setup here:
https://www.prisma.io/docs/get-started/01-setting-up-prisma-existing-database-JAVASCRIPT-a003/
I'd expect the data to be saved in the database specified in the docker-compose.yml file:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.23
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mongo
            uri: 'mongodb://host.docker.internal:27017/my-db-name'
Has anyone experienced something similar and found a solution?
Thanks
You do this in the Prisma service config (prisma.yml) by setting the endpoint:
endpoint: http://localhost:4466/graphql/default
This will use the database graphql_default.
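For reference, the endpoint lives in the Prisma 1 service configuration (prisma.yml); its last two path segments are the service name and stage, which the MongoDB connector joins into the database name (graphql_default for the endpoint above, default_default when both are left at their defaults). A minimal sketch; the datamodel filename is an assumption:

# prisma.yml
endpoint: http://localhost:4466/graphql/default   # service "graphql", stage "default"
datamodel: datamodel.prisma                       # hypothetical datamodel file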

How to enable MongoDB access control using a Docker container?

I'm using a Dockerfile in combination with a docker-compose.yml to start two services:
My app service
A MongoDB service
My docker-compose.yml:
web:
  build: .
  ports:
    - "80:3000"
  environment:
    NODE_ENV: production
  links:
    - mongo

mongo:
  image: mongo
  command: --smallfiles
  ports:
    - "27017:27017"
I can't seem to figure out how to control access to the MongoDB container (like with the --auth flag), or how to allow external access (say from a GUI) with a username/password.
The two services get redeployed via Tutum by a webhook after a Docker Automated Build. In other words, I don't want to configure the database manually every time.
How do I control access, i.e. set a root/admin user, to secure my MongoDB database using the Dockerfile or the docker-compose.yml file?
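For the record, a sketch of what the official mongo image supports nowadays: setting MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD makes the image's entrypoint create a root user and start mongod with authentication enforced, but only on the first run against an empty data directory. The credential values below are placeholders:

mongo:
  image: mongo
  command: --smallfiles        # kept from the compose file above; only relevant to older MongoDB/MMAPv1
  ports:
    - "27017:27017"
  environment:
    # a root user is created on first run and authentication is enforced from then on
    MONGO_INITDB_ROOT_USERNAME: admin
    MONGO_INITDB_ROOT_PASSWORD: change-me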