KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN while connecting to MongoDB with GSSAPI - mongodb

I have set up Active Directory with Kerberos authentication on Windows Server 2012 R2 and set up the MongoDB server on a second machine. I started MongoDB with GSSAPI authentication. Now if I try to connect to MongoDB using the following URL
mongo.exe --host Mongo32Test.ihubtest.com.com --authenticationMechanism=GSSAPI --authenticationDatabase=$external -u mongoService@ihubtest.com --verbose
I am getting the following message.
Error: SASL(-1): generic failure: SSPI: InitializeSecurityContext: The specified target is unknown or unreachable
I have installed wireshark and the packet contains this message
"KRB5 167 KRB Error: KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN"
Searching around, I figured out that it is related to the service principal name.
mongoService@ihubtest.com is a domain user and is part of the $external database in MongoDB.
I verified the service principal name, and it looks fine.
C:\>setspn -l mongoService
Registered ServicePrincipalNames for CN=mongo Service,CN=Users,DC=ihubtest,DC=com:
mongodb/Mongo32test.ihubtest.com@IHUBTEST.COM
I tried the troubleshooting steps mentioned on this page, https://docs.mongodb.com/manual/tutorial/troubleshoot-kerberos/. Am I missing something in the Active Directory configuration?

If you have not looked into it yet, the MongoDB team has a closed ticket with some troubleshooting steps:
https://jira.mongodb.org/browse/SERVER-13885

I believe you mistyped your hostname as "Mongo32Test.ihubtest.com.com" instead of "Mongo32Test.ihubtest.com".
Please verify whether the provided hostname is correct.
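Assuming the single ".com" hostname is the correct one, a quick re-check from the client (names taken from the question; adjust for your environment) would be:
nslookup Mongo32Test.ihubtest.com
setspn -l mongoService
mongo.exe --host Mongo32Test.ihubtest.com --authenticationMechanism=GSSAPI --authenticationDatabase=$external -u mongoService@ihubtest.com --verbose
If the lookup resolves and the SPN list still shows the host, the KDC should be able to find the service principal.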

Related

gcloud beta sql connect "server closed the connection unexpectedly"

When trying to get a psql shell (not using an IAM user) I am receiving:
> gcloud alpha sql connect pg-instance --database mydb --user myuser --project my-project
Starting Cloud SQL Proxy: [/Users/me/google-cloud-sdk/bin/cloud_sql_proxy -instances my-project:us-central1:pg-instance=tcp:9470 -credential_file /Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json]
2022/03/15 14:47:59 Rlimits for file descriptors set to {Current = 8500, Max = 9223372036854775807}
2022/03/15 14:47:59 using credential file for authentication; path="/Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json"
2022/03/15 14:48:00 Listening on 127.0.0.1:9470 for my-project:us-central1:pg-instance
2022/03/15 14:48:00 Ready for new connections
Connecting to database with SQL user [myuser].Password:
psql: error: connection to server at "127.0.0.1", port 9470 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I had the same error message when connecting to Postgres (Cloud SQL) using a service account.
In my setup I ran cloud_sql_proxy inside a Docker container.
In order to make it work, I had to add the extra configuration defined in step #9 of https://cloud.google.com/sql/docs/sqlserver/connect-docker#connect-client
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:5432:5432 \
gcr.io/cloudsql-docker/gce-proxy:1.33.1 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config
The missing bits were the host IP in the port mapping and the 0.0.0.0: bind address in the cloud_sql_proxy command.
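With that configuration in place, connecting through the proxy from the host would look something like this (hypothetical database, user, and password taken from the question):
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb"
The client now targets the published 127.0.0.1:5432 mapping rather than the container's address.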
There are a few things I would like to point out. The best starting point for me would be the About connection options page; both the Overview and the Before you begin sections are very helpful for getting the full picture of the process and how to properly configure the user. The most important part is the Connection Options section: for the message connection to server at "127.0.0.1" I'm guessing it is a private IP, but please make sure this section is covered before starting to debug.
In your case, the logs are saying there was an error in the connection to the server…
I used the Troubleshoot guide that includes the Diagnose issues link to get to the Debug connection issues page that has a lot of useful information on how to debug any connectivity issue.
Generally, connection issues fall into one of the following three areas:
Connecting - are you able to reach your instance over the network?
Authorizing - are you authorized to connect to the instance?
Authenticating - does the database accept your database credentials?
Each of those can be further broken down into different paths for investigation.
Once you determine the connection method, different questions will help guide you through the possible troubleshooting paths.
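As a quick first pass on the "Connecting" area (hypothetical instance and project names taken from the question), you can check whether the instance is reachable at all:
gcloud sql instances describe pg-instance --project my-project --format="value(ipAddresses)"
nc -vz <INSTANCE_IP> 5432
If the TCP check fails, the problem is network reachability rather than authorization or credentials.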
If using these guides doesn't get you a solution, please make sure to update your question with the results, steps, and information you followed so we can provide further help. This would be a good example, as it has the same log error, and this other question shows that there are a few different troubleshooting paths for this specific log message, plus they have useful information for you.

Error: querySrv ESERVFAIL _mongodb._tcp.cluster0.abcd0.mongodb.net

My Node.js app was working fine with the MongoDB connection, and suddenly this error appeared. Then I tried to connect to MongoDB with MongoDB Compass, and the same error is there. I could not find any reason for this.
Error: querySrv ESERVFAIL _mongodb._tcp.cluster0.abcd0.mongodb.net
[nodemon] app crashed - waiting for file changes before starting...
Then I changed the MongoDB connection URL to the old-style URL, and after that I got this error.
Error: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
[nodemon] app crashed - waiting for file changes before starting...
I have already whitelisted my IPs, and my configuration is correct (I double-checked).
0.0.0.0/0 (includes your current IP address)
What is the reason for this ?
Thank you.
querySrv ESERVFAIL is a DNS error.
This means that your local machine is not able to get a response from your DNS resolver for the SRV record _mongodb._tcp.cluster0.abcd0.mongodb.net (I assume that's not your real hostname, but it will work for an example)
From your local machine, test SRV lookup from a command line, possibly one of these:
nslookup -type=SRV _mongodb._tcp.cluster0.abcd0.mongodb.net
host -t SRV _mongodb._tcp.cluster0.abcd0.mongodb.net
If that fails, feel free to say bad things about your DNS provider.
Then go to the Atlas UI and get the pre-3.6 connection string. It will start with mongodb:// and not mongodb+srv://.
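For illustration only (hypothetical shard hostnames and replica set name; copy the real string from the Atlas UI), a pre-3.6 string has this shape:
mongodb://cluster0-shard-00-00.abcd0.mongodb.net:27017,cluster0-shard-00-01.abcd0.mongodb.net:27017,cluster0-shard-00-02.abcd0.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin
Because it lists the hosts explicitly, it needs no SRV lookup, which sidesteps the failing DNS query.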
Joe's identification of the problem is spot on and helped me reach a resolution. This was fixed for me after adding Google's DNS server (8.8.8.8) to the Wi-Fi settings of my computer.
On macOS it's in Settings > Network > Wi-Fi (select the appropriate network) > Advanced > DNS.
Then add the DNS server 8.8.8.8.
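Equivalently from a terminal, assuming your network service is named "Wi-Fi":
networksetup -setdnsservers Wi-Fi 8.8.8.8
networksetup -getdnsservers Wi-Fi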
I am a Windows 10 user and I was facing exactly the same problem. I figured out it's a DNS problem, and the following process worked for me. (Check this if you are not a Windows 10 user.)
Stop the server, then run your server again, and it will solve the problem.
Hey guys!
So I was having this weird error below :(
So what might be causing this error?
Make sure the database you are trying to connect to in your MongoDB cluster exists; for me it was "userDB" that was the issue!
mongoose.connect(
  `mongodb+srv://admin-eniola:${process.env.PASSWORD}@cluster0.velr6at.mongodb.net/userDB`
);
Make sure you check whatever password you are using; it must match your database user's password, not your account password!
Check where your password is stored in your program, either a dotenv or secrets file, and make sure it matches your database user's password.
Thanks, and I hope this solution works for you as well!
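As a quick sanity check that the app reads the value you think it does (hypothetical .env contents):
cat .env
# PASSWORD=yourDatabaseUserPassword
node -e "require('dotenv').config(); console.log(process.env.PASSWORD)"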

Red Hat Linux 7 Kerberos client is returning localhost in the Kerberos trace when it should be the fully qualified domain name

We have a RHEL7 Mongo server configured for Kerberos authentication for Mongo connections. The Mongo instance starts successfully, which tells us the server principal keytab is defined correctly in AD and the KRB5_KTNAME value is correct. A kinit is successful for the ID that we want to authenticate with, telling us the user keytab is valid. However, when attempting to authenticate, "Kerberos server not found" is returned. Looking at the Kerberos trace, it's reporting "localhost" instead of the FQDN.
Mongo Support reviewed the DNS definitions and they are correct, so they referred us to Red Hat support. The relevant message in the trace is:
Getting credentials userid@DOMAIN -> mongodb/localhost@ using ccache FILE: filename (values changed to protect me)
Does anyone have an idea why localhost is in this message instead of the FQDN, as it should be? Again, the DNS entries look correct. The "server not found" message is issued because localhost isn't defined in AD, of course.
Help is appreciated.
Problem solved. When executing the shell on the same host as the Mongo server, you must include the --host parameter and not let it default. Kerberos uses the hostname value when sending requests to the KDC.
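For illustration (hypothetical FQDN and user principal), the working invocation from the server itself looks like:
mongo --host mongoserver.example.com --authenticationMechanism=GSSAPI --authenticationDatabase=$external -u userid@DOMAIN
With --host set, the Kerberos library builds the service principal from the FQDN instead of localhost.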

Why do I get a "message len 1347703880 is invalid. Min 16 Max: 48000000" error when trying to connect to an OKD pod running a simple mongo container?

I have created a Mongo container using only the base mongo:3.6.4 official docker image and deployed it to my OpenShift OKD cluster, but cannot connect to this MongoDB instance using a Mongo client from outside the cluster.
I can access the pod at http://mongodb.my.domain and successfully get the "It looks like you are trying to access MongoDB over HTTP on the native driver port." message.
When using the terminal on the pod I can successfully log in using:
mongo "mongodb://mongoadmin:pass@localhost" --authenticationDatabase admin
But when trying to connect from outside OKD the connection fails.
My client needs to pass through a proxy before it can access the OKD pods and I do have a .der certificate file but am unsure if this is related to the issue.
Some commands I have tried:
mongo "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
mongo --ssl "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
I expected to be able to connect successfully but instead get this error message:
MongoDB shell version v3.4.20
connecting to: mongodb://mongoadmin:pass@mongodb.my.domain:80
2019-05-15T11:32:25.514+0100 I NETWORK [thread1] recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2019-05-15T11:32:25.514+0100 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongodb.my.domain:80' :
connect#src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
I am unsure if it an issue with how I am using my MongoDB client or potentially some proxy settings on my OKD cluster. Any help would be appreciated.
The problem here is that external OpenShift routes aren't great at handling database connections. When you attempt to connect to the Mongo pod via the route, the route accepts the connection and transmits it to the Mongo service. I believe this transmission wraps the connection in an HTTP wrapper, which Mongo doesn't handle well. The OKD documentation highlights that path-based route traffic should be HTTP-based, which causes the connection to fail.
You can see evidence of this when trying to connect to a MongoDB database and it returns "It looks like you are trying to access MongoDB over HTTP on the native driver port." to the browser. The user relief.malone explains this and has proposed a couple of solutions / workarounds in their answer to this question.
To add to relief.malone's answer, I would suggest that you port forward from the MongoDB pod to your local machine for development/debugging. In production, you could deploy an application to OKD that references the MongoDB service via its internal DNS name, which will look something like this: mongodb.project_namespace.svc:27017. This way you will avoid the route interfering with the connection.
The OpenShift OKD documentation on port-forwarding isn't that informative, but since oc runs the kubectl command under the hood, you can read this Kubernetes guide to get some more information.
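A minimal sketch of the port-forwarding approach (hypothetical pod name; credentials taken from the question):
oc port-forward mongodb-pod 27017:27017
mongo "mongodb://mongoadmin:pass@localhost:27017" --authenticationDatabase admin
The shell then speaks the raw MongoDB wire protocol over the forwarded TCP connection, with no HTTP-aware route in the path.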

mongodb replicaset auth not working

I have a problem with replica sets.
After I add the keyFile path to mongodb.conf, I can't connect. This is my mongodb.conf:
logpath=/path/to/log
logappend=true
replSet = rsname
fork = true
keyFile = /path/to/key
And this is what is showed in the command line:
XXXX@XXXX:/etc$ sudo service mongodb restart
stop: Unknown instance:
mongodb start/running, process 10540
XXXX@XXXX:/etc$ mongo
MongoDB shell version: 2.4.6
connecting to: test
Mon Sep 30 18:44:20.984 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
XXXX@XXXX:/etc$
If I comment out the keyFile line in mongodb.conf, it works fine.
I solved the problem.
It was related to the key file permissions; I fixed the permissions and ownership and it works like a charm.
As the root user I did:
$ chmod 700 keyfile
$ chown mongodb:mongodb keyfile
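To verify (hypothetical listing; the exact size and timestamp will differ):
$ ls -l keyfile
-rwx------ 1 mongodb mongodb 1024 Sep 30 18:44 keyfile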
If authentication were the problem, you would get a different message (and you would still be able to start the shell; the unauthenticated session would just prevent you from running most commands).
This one looks more like a socket exception: there is no service listening where you are trying to connect. You can check with netstat whether the process is listening on the ip:port shown in the message. I assume the mongod process has not started, which can happen for several reasons; check the logs for the current one. One possibility is that the keyfile does not exist at the specified path, or that the appropriate privileges have not been set on it.
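For example, using the port from the error message and the logpath from the config above:
$ netstat -tlnp | grep 27017
$ tail -n 50 /path/to/log
If netstat shows nothing listening on 27017, the mongod process never started, and the log tail should say why.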
Adding a keyfile automatically turns on the auth option too. This means you have to authenticate as a user, but you can bootstrap past this with the localhost exception. Read the documentation.
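A minimal sketch of using the localhost exception to create the first admin user (hypothetical user name and password; syntax for the 2.4 shell shown in this thread, where the command was db.addUser; 2.6+ uses db.createUser):
$ mongo --port 27017
> use admin
> db.addUser({ user: "siteAdmin", pwd: "changeMe", roles: ["userAdminAnyDatabase"] })
The localhost exception only applies while no user exists yet, so run this right after enabling the keyfile.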