I am trying to use the MongoDB MMS backup functionality. I am getting the following error when trying to connect in the "Add Host" part of the wizard:
Unable to detect host within check interval.
I have MongoDB 2.6.4 on my Windows 7 laptop. I've created an admin user with the following privileges:
> db.createUser(
... {
... user: "admin",
... pwd: "xxx",
... roles: [
... "clusterAdmin",
... "readAnyDatabase",
... "dbAdminAnyDatabase",
... "userAdminAnyDatabase"
... ]
... }
... );
I run mongod --auth.
Now I try to connect via MMS using the MONGODB-CR auth mechanism and I get the error described above.
In the log I get many errors like the following:
[2014/10/21 09:13:59] [monitoring.info] [monitoring-agent/components/agent.go:551]
Starting 2 marshal handlers
[2014/10/21 09:14:21] [monitoring.error] [monitoring-agent/components/agent.go:314]
Failed to fetch Conf
Failure getting conf. Op: Get Err: dial tcp [I've hidden the IP]:443: ConnectEx tcp:
A connection attempt failed because the connected party did not properly respond after
a period of time, or established connection failed because connected host has failed
to respond.
at monitoring-agent/components/conf.go:249
at monitoring-agent/components/agent.go:312
at mongodb.com/monitoring-agent/monitoring-agent-service.go:129
at winsvc/svc/service.go:200
at pkg/runtime/proc.c:1445
Using the Robomongo 0.8.4 client I was able to log in with that user and password.
I want MMS to be able to connect to my local machine and initialize a backup of the databases on it.
Thanks in advance.
I had this error while configuring my mongodb-mms. On my Ops Manager server I had configured my TLS connections correctly, but on the mongo server being monitored I had the incorrect TLS certificate. The log /var/log/mongodb-mms-automation/monitoring-agent.log on the agent I was trying to monitor helped me out:
[2020/04/26 02:05:47.363] [discovery.collector-mongo2:27017.error] [components/discovery.go:contexts:580] Discovery commands requiring authentication will be skipped.
Failed to get connectionStatus. Err: `auth error: round trip error: (UserNotFound) Could not find user "CN=mms,OU=TestClientCertificateOrgUnit,O=TestClientCertificateOrg,L=TestClientCertificateLocality,ST=TestClientCertificateState,C=US" for db "$external"`
at monitoring-agent/components/dialing.go:442
at monitoring-agent/components/dialing.go:200
at monitoring-agent/components/dialing.go:306
at monitoring-agent/components/dialing.go:323
at louisaberger/procexec/concurrency.go:45
at src/runtime/asm_amd64.s:1357
See this page on adding your MMS user so that it can authenticate correctly (or fix your certs if it's just a mix-up).
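If the user really is missing (rather than the certs being wrong), adding it means creating a user in the $external database whose name exactly matches the certificate subject from the error above. A minimal mongo-shell sketch (the role list is an assumption on my part; check the Ops Manager documentation for the exact roles your monitoring agent needs):
db.getSiblingDB("$external").runCommand({
    // the user name must match the client certificate subject verbatim
    createUser: "CN=mms,OU=TestClientCertificateOrgUnit,O=TestClientCertificateOrg,L=TestClientCertificateLocality,ST=TestClientCertificateState,C=US",
    // clusterMonitor is a guess at a sensible minimum for a monitoring agent
    roles: [ { role: "clusterMonitor", db: "admin" } ]
})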
Related
I have tried everything to connect my Chainlink node up to my PostgreSQL database with no luck. I have scoured the interwebs for answers to no avail...
Here is the error message I am receiving:
[ERROR] failed to initialize database, got error failed to connect to `host=/tmp user=root database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory)
Here is my .env file:
ROOT=/chainlink
LOG_LEVEL=debug
ETH_CHAIN_ID=42
MIN_OUTGOING_CONFIRMATIONS=2
LINK_CONTRACT_ADDRESS=0xa36085F69e2889c224210F603D836748e7dC0088
CHAINLINK_TLS_PORT=0
SECURE_COOKIES=false
GAS_UPDATER_ENABLED=true
ALLOW_ORIGINS=*
ETH_URL=wss://kovan.infura.io/ws/v3/id...
DATABASE_URL=https://chainlink-db-url://postgres:Password@chainlink-kovan:5432
I have tried every configuration of the connection string. Also, I am able to connect to the db via pgAdmin no problem, and the dbs are publicly accessible.
The postgresql database is on AWS.
Please change the syntax of your DATABASE_URL to:
DATABASE_URL=postgresql://"username":"password"@"public-ip-pg-server":5432/"database-name"
just change:
"username" : you need to configure a new user, because the default/admin user postgres will not work for it.
"password" : password of the user
"public-ip-pg-server" : the public ip address of your postgresql-server
"database-name" : the name of your database
PS: delete all the " characters in your actual syntax (;
Here is the link to the official documentation: https://docs.chain.link/docs/connecting-to-a-remote-database/
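Purely as an illustration of that template, with made-up values (the user, password, IP, and database name below are placeholders you would replace with your own), it ends up looking like this:
-- run once on the PostgreSQL server: create a dedicated user for the node
CREATE USER chainlink WITH PASSWORD 'SomeStrongPassword';
GRANT ALL PRIVILEGES ON DATABASE chainlink_db TO chainlink;
and in the .env file:
DATABASE_URL=postgresql://chainlink:SomeStrongPassword@203.0.113.10:5432/chainlink_db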
I made the following settings in /var/ossec/etc/ossec.conf and after that I restarted the agent, but it's not showing logs on the Kibana dashboard:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/mongodb/mongod.log</location>
I performed a basic installation of Wazuh + MongoDB on the agent side with the following results:
MongoDB by default writes to the syslog file located at /var/log/syslog.
Inside /var/log/mongodb/mongod.log there are internal mongo daemon logs that are more specific.
We can monitor such logs on the Wazuh agent with:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/syslog</location>
</localfile>
This rule is included by default on the agent, but it is good to remember anyway.
The other one, as you pointed out:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/mongodb/mongod.log</location>
</localfile>
I only see that you didn't copy the closing tag </localfile>, but it could be a copy mistake; in any case it is good to take a look at /var/ossec/logs/ossec.log to find any errors.
With that configuration we could receive alerts like this:
** Alert 1595929148.661787: - syslog,access_control,authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.8,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,
2020 Jul 28 09:39:08 (ubuntu-bionic) any->/var/log/mongodb/mongod.log
Rule: 2501 (level 5) -> 'syslog: User authentication failure.'
2020-07-28T09:39:07.431+0000 I ACCESS [conn38] SASL SCRAM-SHA-1 authentication failed for root on admin from client 127.0.0.1:52244 ; UserNotFound: Could not find user "root" for db "admin"
That alert is generated if we run mongo -u root (with a bad password) on the agent side.
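After editing ossec.conf, remember to restart the agent and check its own log for configuration errors. On a systemd-based agent that would be something like the following (the service name can differ between Wazuh/OSSEC versions):
systemctl restart wazuh-agent
grep -i -E "error|warn" /var/ossec/logs/ossec.log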
I've just followed this guide on setting up auth with MongoDB, as well as this guide to get a user set up as an administrator.
Running mongo > use admin > show users prints the following:
{
"_id" : "admin.root",
"user" : "root",
"db" : "admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
After this, I run the server again with --auth and use the following command:
mongo -u "root" -p "xxx" --authenticationDatabase "admin"
This prints the following:
MongoDB shell version: 3.2.19
connecting to: test
2018-03-29T15:52:32.329+0200 E QUERY [thread1] Error: Authentication failed. :
DB.prototype._authOrThrow@src/mongo/shell/db.js:1441:20
@(auth):6:1
@(auth):1:2
exception: login failed
Trying to run this without the --auth parameter lets me log in just fine.
The --auth parameter also gives me the following output in the server console:
I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
But I'm actually unsure about why it isn't picking up any root/admin user I create. When trying to connect with Robo 3T, the terminal prints the following:
I NETWORK [initandlisten] connection accepted from xxx:44924 #2 (2 connections now open)
I ACCESS [conn2] SCRAM-SHA-1 authentication failed for root on admin from client xxx ; UserNotFound: Could not find user root@admin
I NETWORK [conn2] end connection xxx:44924 (1 connection now open)
Solution by OP.
Issue fixed by following this article.
It seems that, despite using --auth when starting the server, by not commenting out the line bindIp: 127.0.0.1 and not adding authorization: 'enabled' to the security section in /etc/mongod.conf, I was only allowing access from the local machine - the server itself. The error messages could have been worded a bit better, but that's security, I guess.
Whilst this was a very silly oversight, no documentation I had previously looked at had covered this issue.
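For anyone else hitting this, the relevant part of /etc/mongod.conf ends up looking roughly like the sketch below (the bindIp value depends on which interfaces you actually want to expose):
net:
  port: 27017
  # bindIp: 127.0.0.1   # commented out, or set to the interfaces you want to listen on
security:
  authorization: 'enabled'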
I am having difficulty getting the gcloud SQL proxy working on my local machine. I have gone through all the steps here, however I am getting the following errors, and it is unclear to me what is actually going wrong. Important things to note: I am not using a service account, I am authenticating with my own account through gcloud auth login, and I am following the TCP sockets steps.
MySQL
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
GCloud SQL Proxy
couldn't connect to "<my instance connection name>": googleapi: Error 400: This operation isn't valid for this instance., invalidOperation
GCloud Logs
... status: {
code: 2
message: "UNKNOWN"
}
}
receiveTimestamp: "2017-09-08T15:45:10.179994989Z"
resource: {…}
severity: "ERROR"
timestamp: "2017-09-08T15:45:04.289Z"
}
The most likely reason you are receiving this error message is that you are using a First Generation instance.
Proxy connectivity is only supported for Second Generation instances.
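You can confirm which generation your instance is with something along these lines (the backendType field comes from the Cloud SQL Admin API; the instance name is a placeholder):
gcloud sql instances describe <my-instance-name> --format="value(backendType)"
This should print SECOND_GEN for a Second Generation instance and FIRST_GEN for a First Generation one.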
I have installed MongoDB on a different server. I have modified my old code
db: 'mongodb://localhost/dev',
to new code.
My username: proUseAdd
Password: mongodb@23
IP: 169.213.64.127
I modified it like this:
db: 'mongodb://proUseAdd:mongodb@23@169.213.64.127:27017/dav',
I am getting this error
Could not connect to MongoDB!
Error: failed to connect to
[23@169.213.64.127:27017]
UPDATE
I have tried changing my password to mongodb23.
Query string:
db: 'mongodb://proUseAdd:mongodb23@169.213.64.127:27017/dav',
Still I'm getting an error:
Could not connect to MongoDB!
Error: failed to connect to
[169.213.64.127:27017]
Change your password to something which doesn't contain @.
While parsing your connection string, the MongoDB driver looks for the @ character to separate your credentials and the host name. Since your password has an @ in it, it recognizes your credentials as proUseAdd:mongodb and your host name as 23@169.213.64.127:27017.
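For illustration, here is the connection string both ways: with a new password that has no special characters (a placeholder value), and, alternatively, keeping the existing password and percent-encoding the @ as %40 (standard URI escaping supported by the MongoDB drivers):
// password changed to one without special characters (placeholder)
db: 'mongodb://proUseAdd:newPassword23@169.213.64.127:27017/dav',
// or keep mongodb@23 and escape the @ in the password as %40
db: 'mongodb://proUseAdd:mongodb%4023@169.213.64.127:27017/dav',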