Graylog2 mongo profiler plugin won't connect to mongo instance, error: "Exception opening socket" (everything dockerized) - mongodb

I am trying to play with graylog's mongo profiler plugin using docker to run everything. But I can't get any profiling logs into graylog.
When I start the mongo input from the graylog UI it eventually times out with an error:
Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:37017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}].
This is my setup based on following the graylog dockerhub installation and the mongo profiler plugin guide and modifying bits:
(1) I bring up graylog, mongo and elastic using a docker-compose file:
version: '2'
services:
  some-mongo:
    image: "mongo:3"
  some-elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.2.1-1
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    links:
      - some-mongo:mongo
      - some-elasticsearch:elasticsearch
    ports:
      - "9000:9000"
      - "514:514/udp"
      - "12202:12202"
      - "37017:37017"
That has worked fine so far and I've been able to send in syslog udp messages and gelf http messages.
(2) I created a separate mongo docker container with the port mapped, because I worried that if I used 27017, graylog might connect to its own internal mongodb container:
docker run -d -p 37017:27017 mongo:2.4
I start a mongo session and enable profiling for a "graylog" database:
$ mongo --port 37017
> use graylog
> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
> db.foo.insert({_id:1})
// Check that profiling data is being written to system.profile:
> db.system.profile.find().limit(1).sort( { ts : -1 } ).pretty()
{
    "op" : "query",
    "ns" : "graylog.foo",
    "query" : {
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    ....
    "allUsers" : [ ],
    "user" : ""
}
So it seems like the mongod instance is running and profiling is working.
(3) I download the plugin jar and docker cp it into the plugins dir inside the graylog docker container. Something like:
docker cp graylog-plugin-mongodb-profiler-2.0.1.jar e89a2decda37:/usr/share/graylog/plugin
Then restart graylog.
I can see that the file is there:
$ docker exec -it e89a2decda37 /bin/sh
# ls /usr/share/graylog/plugin
graylog-plugin-anonymous-usage-statistics-2.2.1.jar graylog-plugin-map-widget-2.2.1.jar
graylog-plugin-beats-2.2.1.jar graylog-plugin-mongodb-profiler-2.0.1.jar
graylog-plugin-collector-2.2.1.jar graylog-plugin-pipeline-processor-2.2.1.jar
graylog-plugin-enterprise-integration-2.2.1.jar
So that part seemed to work fine and I can see an entry "Mongo profiler input" in the list of input types in the graylog UI.
(4) I create a "Mongo profiler input" input with:
hostname: localhost
port: 37017
database: graylog
(5) After I click save, the input tries to start then eventually fails like above. Restarting graylog or trying to restart the input results in the same failures.
I have tried step (2) with different versions of mongo in case there was some driver incompatibility, but they all fail with the same error. I've tried these docker images:
mongo:3
mongo:2.6
mongo:2.4
Thanks in advance!

As Thilo suggested above, the hostname for the graylog input shouldn't be "localhost", since that points graylog at the docker container hosting it.
So I found the ip of the host machine using:
docker exec -it [CONTAINER ID] /bin/sh
/sbin/ip route|awk '/default/ { print $3 }'
and rewired the input and Bob's your Uncle!
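An alternative sketch that avoids host-IP discovery altogether (not something verified here; the service name profiled-mongo is made up): add the profiled mongod as another service in the same docker-compose.yml, since containers on one Compose network resolve each other by service name:
profiled-mongo:
  image: "mongo:2.4"
  ports:
    - "37017:27017"
The input would then use hostname profiled-mongo and port 27017.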

Related

Running command during docker compose or docker build failed

I am trying to build Mongo inside Docker, and I want to create a database, a collection, and a document inside that collection. I tried it with docker build; below is my Dockerfile:
FROM mongo
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'var document = {"_id": "61912ebb4b6d7dcc7e689914","name": "Test Account","email":"test@test.net", "role": "admin", "company_domain": "test.net","type": "regular","status": "active","createdBy": "61901a01097cb16e554f5a19","twoFactorAuth": false, "password": "$2a$10$MPDjDZIboLlD8xpc/RfOouAAAmBLwEEp2ESykk/2rLcqcDJJEbEVS"}; db.Users.insert(document);'
EXPOSE 27017
and using Docker Compose
version: '3.9'
services:
  web:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
  demeter_db:
    image: "mongo"
    volumes:
      - ./mongodata:/data/db
    ports:
      - "27017:27017"
    command: mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
  demeter_redis:
    image: "redis"
I want to add these records because the web server uses them in the backend. If there is a better way of doing it I would be thankful.
What I get is the below error
demeter_db_1 | Current Mongosh Log ID: 61dc697509ee790cc89fc7aa
demeter_db_1 | Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
demeter_db_1 | MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
However, when I connect to the interactive shell inside the mongo container and add them manually, things work fine.
root@8b20d117586d:/# mongosh 127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
Current Mongosh Log ID: 61dc64ee8a2945352c13c177
Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB: 5.0.5
Using Mongosh: 1.1.7
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting:
2022-01-10T16:52:14.717+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2022-01-10T16:52:15.514+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
------
{ ok: 1 }
root@8b20d117586d:/# exit
exit
Cheers
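For what it's worth, RUN steps execute while the image is being built, when no mongod is listening yet, and the command: override replaces mongod entirely, which is why mongosh gets ECONNREFUSED in both attempts. A common pattern (sketched here with a hypothetical init-demeter.js next to the compose file) is to let the official image run seed scripts on the first initialization of an empty data directory via /docker-entrypoint-initdb.d:
// init-demeter.js
db = db.getSiblingDB('demeter');
db.createCollection('Users');
db.Users.insertOne({ _id: '61912ebb4b6d7dcc7e689914', name: 'Test Account' /* ...remaining fields as in the Dockerfile above */ });
Then in docker-compose.yml, drop the command: override and mount the script:
demeter_db:
  image: "mongo"
  volumes:
    - ./mongodata:/data/db
    - ./init-demeter.js:/docker-entrypoint-initdb.d/init-demeter.js:ro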

mongo db docker image authentication failed

I'm using the https://hub.docker.com/_/mongo mongo image in my local Docker environment, but I'm getting an Authentication failed error. In docker-compose I add it like:
my-mongo:
  image: mongo
  restart: always
  container_name: my-mongo
  environment:
    MONGO_INITDB_ROOT_USERNAME: mongo
    MONGO_INITDB_ROOT_PASSWORD: asdfasdf
  networks:
    - mynet
I also tried to run the mongo CLI from inside the container, but I'm still getting the same error:
root@76e6db78228b:/# mongo
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c87c0f0e-fe83-41a6-96e9-4aa4ede8fa25") }
MongoDB server version: 4.2.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> use translations
switched to db translations
> db.auth("mongo", "asdfasdf")
Error: Authentication failed.
0
Also, I'm trying to create a separate user:
> use admin
switched to db admin
db.auth("mongo", "asdfasdf")
1
> db.createUser({
    user: "user",
    pwd: "asdfasdf",
    roles: [ { role: "readWrite", db: "translations" } ]
})
Successfully added user: {
    "user" : "user",
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "translations"
        }
    ]
}
> use translations
switched to db translations
> db.auth("user", "asdfasdf")
Error: Authentication failed.
0
and the same error. What am I doing wrong?
Updated:
root@8bf81ef1fc4f:/# mongo -u mongo -p asdfasdf --authenticationDatabase admin
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("02231489-eaf4-40be-a108-248cec88257e") }
MongoDB server version: 4.2.3
Server has startup warnings:
2020-02-26T16:24:12.942+0000 I STORAGE [initandlisten]
2020-02-26T16:24:12.943+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-02-26T16:24:12.943+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> db.createUser({user: "someuser", pwd: "asdfasdf", roles: [{role: "readWrite", db: "translations"}]})
Successfully added user: {
    "user" : "someuser",
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "translations"
        }
    ]
}
> use translations
switched to db translations
> db.auth("someuser", "asdfasdf")
Error: Authentication failed.
0
>
After some time, I figured it out.
In the same folder, create docker-compose.yml and init-mongo.js:
docker-compose.yml
version: '3.7'
services:
  database:
    image: mongo
    container_name: your-cont-name
    command: mongod --auth
    environment:
      - MONGO_INITDB_DATABASE=my_db
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=root
    ports:
      - '27017-27019:27017-27019'
    volumes:
      - mongodbdata:/data/db
      - ./init-mongo.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
volumes:
  mongodbdata:
    driver: local
init-mongo.js
db.createUser(
  {
    user: "your_user",
    pwd: "your_password",
    roles: [
      {
        role: "readWrite",
        db: "my_db"
      }
    ]
  }
);
db.createCollection("test"); // MongoDB creates the database when you first store data in it
Auth
First, execute bash inside the container:
docker exec -it your-cont-name bash
Now we can log in.
For the admin:
mongo -u admin -p root
For your_user, you have to specify the db (with --authenticationDatabase), otherwise you'll get an auth error:
mongo -u your_user -p your_password --authenticationDatabase my_db
After that, you should switch to the right db with
use my_db
If you don't execute this command, you'll be on the test db.
Note
To be sure of having the right config, I prefer to run:
docker-compose stop
docker-compose rm
docker volume rm <your-volume>
docker-compose up --build -d
as stated in the Docs
These variables, used in conjunction, create a new user and set that
user's password. This user is created in the admin authentication
database and given the role of root, which is a "superuser" role.
so you need to add --authenticationDatabase admin to your command, since mongod will be started with mongod --auth.
example:
mongo -u mongo -p asdfasdf --authenticationDatabase admin
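The same authSource requirement applies when connecting with a URI; a sketch using the credentials from the question:
mongo "mongodb://mongo:asdfasdf@localhost:27017/translations?authSource=admin"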
I had the same issue; after two hours of googling I finally solved it.
Solution: find the directory on the host machine that is mounted into the MongoDB container, delete it, then re-create the MongoDB container.
A MongoDB container created by docker-compose.yaml mounts a directory from the host machine into the container to persist the Mongo databases. When you remove the container, the mounted directory is not deleted, so the default username and password passed via env vars may be ones you set long ago. Changing the username and password now simply doesn't work, because re-creating the container will not re-create the "admin" database.
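A sketch of that cleanup with Compose (./mongodata is a hypothetical bind-mounted data directory; adjust to whatever your compose file actually mounts):
docker-compose down -v
rm -rf ./mongodata
docker-compose up -d
down -v also removes named volumes declared in the compose file; a bind-mounted host directory has to be deleted by hand, and only then will the re-created container apply the MONGO_INITDB_* env vars again.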
I fell into this trap and wasted a day while everything was correct.
I'm writing this for future me(s), because it wasn't mentioned anywhere else, and to help others avoid my mistake when setting up a user/pass combination that other services use to connect to the database.
Assuming everything is right:👇
If you are mounting some local folder of yours as storage for your database like below:
services:
  your_mongo_db:
    # ...some config
    volumes:
      - ./__TEST_DB_DATA__:/data/db
      - ./init-mongo.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    environment:
      - "MONGO_INITDB_ROOT_USERNAME=root"
      - "MONGO_INITDB_ROOT_PASSWORD=pass"
    # ...more config
Please remember to remove this folder before re-running your compose file. I think when you run the docker-compose command for the first time, Mongo creates and stores the user data there (like any other collection) and then reuses it the next time (since you mounted that volume).
I had the same problem myself.
First, remove the username and password credentials:
environment:
MONGO_INITDB_ROOT_USERNAME: mongo
MONGO_INITDB_ROOT_PASSWORD: asdfasdf
After you remove the credentials, you can check the dbs and users on your MongoDB:
show dbs
show users
Those commands also need auth, so if you can now see them (the lists may be empty), you've found your issue.
Then create an admin user:
use admin
db.createUser({user: "root", pwd: "root", roles:["root"]})
Then you can log out and try to connect to the shell with credentials as an admin.
In addition, if you are still having issues creating a new user: in my case I changed mechanisms to SCRAM-SHA-1 and then it worked like a charm. The full db.createUser document looks like this:
{
  user: "<name>",
  pwd: passwordPrompt(),      // Or "<cleartext password>"
  customData: { <any information> },
  roles: [
    { role: "<role>", db: "<database>" } | "<role>",
    ...
  ],
  authenticationRestrictions: [
    {
      clientSource: ["<IP>" | "<CIDR range>", ...],
      serverAddress: ["<IP>" | "<CIDR range>", ...]
    },
    ...
  ],
  mechanisms: [ "<SCRAM-SHA-1|SCRAM-SHA-256>", ... ],
  passwordDigestor: "<server|client>"
}
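Filled in for the case in the question, that might look like the following sketch (user, password and db taken from the question above):
use translations
db.createUser({
  user: "user",
  pwd: "asdfasdf",
  roles: [ { role: "readWrite", db: "translations" } ],
  mechanisms: [ "SCRAM-SHA-1" ]
})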
I had the same problem myself; follow these steps.
Steps 1 and 2 delete the old configuration so the new configuration can be set and applied; this is important:
Delete the mongo container:
docker rm mongo -f
If you have created volumes, delete them:
docker volume rm $(docker volume ls -q) -f
In the ports field of docker-compose.yml set:
- 27018:27017
It is important that the mapping is not 27017:27017; in my case that was generating a conflict.
Bring up Docker Compose:
docker-compose up
Now try the connection with authentication!
Example of docker-compose.yml:
mongo:
  container_name: mongo
  image: mongo:4.4
  restart: always
  environment:
    TZ: "Europe/Madrid"
    MONGO_INITDB_ROOT_USERNAME: "user"
    MONGO_INITDB_ROOT_PASSWORD: "admin1"
  volumes:
    - ./mongoDataBase:/data/db
  ports:
    - 27018:27017
Best regards!
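With that file, a login from the host goes through the remapped port; a sketch using the credentials above:
mongo --port 27018 -u user -p admin1 --authenticationDatabase admin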

Containerized .net core application using Docker Compose does not resolve MongoDb container name

I have developed in Visual Studio 2017 (version 15.9.16) a simple .Net Core 2.2 unit test project that connects to a MongoDb instance, with Container Orchestration Support enabled. My purpose is to run a container with a MongoDb instance that the unit tests connect to whenever they are launched from the Test Explorer window. The problem I am facing is that from the unit test code I cannot connect to the MongoDb container: the service name defined in docker-compose.yml cannot be resolved.
Here are the contents of the docker-compose.yml file:
version: '3.4'
services:
  myapp:
    image: ${DOCKER_REGISTRY-}myapp
    build:
      context: .
      dockerfile: myapp/Dockerfile
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
The contents of the Dockerfile of the app are the following:
FROM microsoft/dotnet:2.2-runtime AS base
WORKDIR /app
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY myapp/myapp.csproj myapp/
RUN dotnet restore myapp/myapp.csproj
COPY . .
WORKDIR /src/myapp
RUN dotnet build myapp.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish myapp.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
Inside the unit test code I try to connect to the MongoDb instance using var client = new MongoClient("mongodb://mongo:27017");. When trying to write to the database I get the following exception:
A timeout occured after 30000ms selecting a server using
CompositeServerSelector{ Selectors =
MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector,
LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000
} }. Client view of cluster state is { ClusterId : "1", ConnectionMode
: "Automatic", Type : "Unknown", State : "Disconnected", Servers : [{
ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/mongo:27017" }",
EndPoint: "Unspecified/mongo:27017", State: "Disconnected", Type:
"Unknown", HeartbeatException:
"MongoDB.Driver.MongoConnectionException: An exception occurred while
opening a connection to the server. --->
System.Net.Sockets.SocketException: Unknown host at
System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult) at
System.Net.Dns.EndGetHostAddresses(IAsyncResult asyncResult) at
System.Net.Dns.<>c.b__25_1(IAsyncResult
asyncResult) at
System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult
iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean
requiresSynchronization)
--- End of stack trace from previous location where exception was thrown --- at
MongoDB.Driver.Core.Connections.TcpStreamFactory.ResolveEndPointsAsync(EndPoint
initial) at
MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint
endPoint, CancellationToken cancellationToken) at
MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken
cancellationToken) --- End of inner exception stack trace --- at
MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken
cancellationToken) at
MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken
cancellationToken)", LastUpdateTimestamp:
"2019-10-03T10:00:40.7989018Z" }] }.'
If I try to resolve MongoDb container service name using System.Net.Dns.GetHostEntry("mongo") I get this exception:
System.Net.SocketException: 'Unknown host'
It seems clear to me that the .Net Core code inside the unit test container cannot resolve docker-compose.yml service names. On the other hand, if I start a session in the unit test container I can ping mongo or telnet mongo 27017 with success. I have also tried ensuring that the MongoDb container has started, as proposed in this question, but with no luck. Something must be missing in my code or the docker configuration files to enable service name resolution. Any help would be much appreciated.
You can declare a network and assign a static IP address to each service, then use the IP address of the mongo service instead of resolving its hostname:
version: '3.4'
services:
  myapp:
    image: ${DOCKER_REGISTRY-}myapp
    build:
      context: .
      dockerfile: myapp/Dockerfile
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 178.25.0.3
  mongo:
    image: mongo:latest
    networks:
      mynetwork:
        ipv4_address: 178.25.0.2
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 178.25.0.0/24
var client = new MongoClient("mongodb://178.25.0.2:27017");
Finally I found the reason for this behavior after running System.Net.Dns.GetHostName() inside my unit test code and seeing that the returned name was the host name, not the container name. I forgot to mention that I was running my unit tests from the Test Explorer window, and in that case container support is completely ignored. Contrary to what I expected, Docker Compose is not invoked, so neither the MongoDb container nor the unit test project container is launched; the unit test project just runs on the host. Running unit tests in a container when container support is enabled in Visual Studio is already requested at https://developercommunity.visualstudio.com/idea/554907/allow-running-unit-tests-in-docker.html. Please vote up this feature!
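Until that feature exists, a pragmatic workaround (a sketch, not part of the original answer) is to publish the mongo port to the host in docker-compose.yml and point the tests at localhost when they run from Test Explorer:
mongo:
  image: mongo:latest
  ports:
    - "27017:27017"
with var client = new MongoClient("mongodb://localhost:27017"); in the test code when it is not running inside a container.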

Docker MongoDB Cannot sign in as root

I'm trying to set up mongo using docker but I don't know the correct way to log in to mongo. It doesn't seem to have created the default root user as I have specified:
version: '3'
services:
  location-mongo-db:
    container_name: location-mongo-db
    image: mongo
    ports:
      - "27017:27017"
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
networks:
  default:
    external:
      name: environment_location-mongo
I run
docker-compose -f docker-compose/db.yml -p location-mongo-db up -d --build
I then do
docker exec -it location-mongo-db sh
to get into the shell, and run mongo from inside the container. I get the following:
# mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
> show dbs
2018-06-28T15:45:36.337+0000 E QUERY    [js] Error: listDatabases failed:{
    "ok" : 0,
    "errmsg" : "command listDatabases requires authentication",
    "code" : 13,
    "codeName" : "Unauthorized"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:865:19
shellHelper@src/mongo/shell/utils.js:755:15
@(shellhelp2):1:1
I don't know what the issue is. Here is my output from checking for the list of users:
> db.runCommand({connectionStatus : 1})
{
    "authInfo" : {
        "authenticatedUsers" : [ ],
        "authenticatedUserRoles" : [ ]
    },
    "ok" : 1
}
What is missing? I've tried signing in a few different ways, with -u root and -p etc., but nothing so far works.
When you start the mongo client without params, you connect as an unauthenticated user with very limited privileges.
The right command to connect with your root user credentials from your Docker tty (and from your host terminal, since you expose the mongodb port) is:
mongo -u root -p example --authenticationDatabase admin
To fully disable connections without authentication, you need to start the mongod process with the --auth option.
EDIT: According to the docs, it seems that with the two env variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD set, mongod in the Docker container should start with the --auth option automatically... but this behavior is not working for me.
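A quick way to verify that auth is actually enforced, using the container name and credentials from the question (a sketch):
docker exec -it location-mongo-db mongo -u root -p example --authenticationDatabase admin --eval 'db.runCommand({connectionStatus: 1})'
If the login worked, authenticatedUsers in the output will list root instead of being empty.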

MongoDB db.serverStatus() gives error when running using tunnel that is targeted to api.cloudfoundry.com

Following is the console session...
C:\Users\xxx>vmc tunnel myMongoDB
Getting tunnel connection info: OK
Service connection info:
username : uuuu
password : pppp
name : db
url : mongodb://uuuu:pppp@172.30.xx.xx:25200/db
Starting tunnel to myMongoDB on port 10000.
1: none
2: mongo
3: mongodump
4: mongorestore
Which client would you like to start?: 2
Launching 'mongo --host localhost --port 10000 -u uuuu -p pppp db'
MongoDB shell version: 2.0.6
connecting to: localhost:10000/db
> db.serverStatus()
{ "errmsg" : "need to login", "ok" : 0 }
>
Which credentials should I use to log in (assuming I should use db.auth) to get rid of the error "{ "errmsg" : "need to login", "ok" : 0 }"?
When I run the same in micro CF on my machine it works ok and gives me the expected output.
P.S. I'm trying this to find out the current connections on my application, written in node.js, while debugging some issues with connections to the DB. If there is any other alternative that I can use, please suggest that as well.
This should work! I'm not sure why your tunnel isn't connecting; my immediate suggestion is to try another instance of MongoDB and see if the same error occurs.
If you are trying to inspect the bound services on your node.js app, you should be able to inspect them in the VCAP_SERVICES environment variable. For example:
var http = require('http');
var util = require('util');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.write(util.inspect(process.env.VCAP_SERVICES));
res.write("\n\n************\n\n");
res.end(util.inspect(req.headers));
}).listen(3000);
This code is currently running at http://node-headers.cloudfoundry.com/ to serve as an example. However, mongodb connections for node.js applications should be configured automatically from the bound service. If this does not work for you, then please do let me know.
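As for the db.auth question itself: the credentials printed by vmc tunnel are the ones to try; a sketch from the shell prompt shown above, assuming they are valid for the db database:
> db.auth("uuuu", "pppp")
1
> db.serverStatus()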