Red Hat Linux 7 Kerberos client is returning localhost in the Kerberos trace when it should be the fully qualified domain name - MongoDB

We have a RHEL 7 MongoDB server configured for Kerberos authentication of Mongo connections. The Mongo instance starts successfully, which tells us the server principal keytab is defined correctly in AD and the KRB5_KTNAME value is correct. A kinit is successful for the ID that we want to authenticate with, telling us the user keytab is valid. However, when attempting to authenticate, "Kerberos server not found" is returned. Looking at the Kerberos trace, it's reporting "localhost" instead of the FQDN.
MongoDB Support reviewed the DNS definitions and they are correct, so they referred us to Red Hat support. The relevant message in the trace is:
Getting credentials userid@DOMAIN -> mongodb/localhost@ using ccache FILE:filename (values changed to protect me)
Does anyone have an idea why localhost is in this message instead of the FQDN as it should be? Again, the DNS entries look to be correct. The "server not found" message is issued because localhost isn't defined in AD, of course.
Help is appreciated.

Problem solved. When executing the shell on the same host as the Mongo server, you must include the --host parameter and not let it default. Kerberos uses the hostname value when constructing the service principal it requests from the KDC.
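For illustration (hostname and principal are placeholders, not our real values), an invocation that avoids the localhost default looks roughly like:
# Pass the FQDN explicitly so the Kerberos layer requests
# mongodb/<fqdn>@REALM instead of mongodb/localhost@REALM
mongo --host mongoserver.example.com --authenticationMechanism=GSSAPI --authenticationDatabase='$external' -u userid@EXAMPLE.COM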

Related

gcloud beta sql connect "server closed the connection unexpectedly"

When trying to get a psql shell (not using an IAM user) I am receiving:
> gcloud alpha sql connect pg-instance --database mydb --user myuser --project my-project
Starting Cloud SQL Proxy: [/Users/me/google-cloud-sdk/bin/cloud_sql_proxy -instances my-project:us-central1:pg-instance=tcp:9470 -credential_file /Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json]
2022/03/15 14:47:59 Rlimits for file descriptors set to {Current = 8500, Max = 9223372036854775807}
2022/03/15 14:47:59 using credential file for authentication; path="/Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json"
2022/03/15 14:48:00 Listening on 127.0.0.1:9470 for my-project:us-central1:pg-instance
2022/03/15 14:48:00 Ready for new connections
Connecting to database with SQL user [myuser].Password:
psql: error: connection to server at "127.0.0.1", port 9470 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I had the same error message when connecting to Postgres(Cloud Sql) using a service account.
In my setup I ran cloud_sql_proxy inside a Docker container.
In order to make it work I had to add the extra configuration defined in step #9 of https://cloud.google.com/sql/docs/sqlserver/connect-docker#connect-client
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:5432:5432 \
gcr.io/cloudsql-docker/gce-proxy:1.33.1 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config
The missing bits were the host IP in the port mapping and the 0.0.0.0: prefix in the cloud_sql_proxy command.
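Once the proxy reports "Ready for new connections", connecting from the host is then along these lines (user and database names are placeholders):
# psql connects to the proxy's published port on the host
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb"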
There are a few things I would like to point out. The best starting point for me would be the About connection options page; both the Overview and the Before you begin sections are very helpful for getting the full idea of the process and how to properly configure the user. The most important part, though, is the Connection Options section. For the message connection to server at "127.0.0.1" I'm guessing it is a private IP, but please make sure this section is covered before starting to debug.
In your case, the logs are saying there was an error in the connection to the server…
I used the Troubleshoot guide that includes the Diagnose issues link to get to the Debug connection issues page that has a lot of useful information on how to debug any connectivity issue.
Generally, connection issues fall into one of the following three areas:
Connecting - are you able to reach your instance over the network?
Authorizing - are you authorized to connect to the instance?
Authenticating - does the database accept your database credentials?
Each of those can be further broken down into different paths for investigation.
Once determining the connection method, there are different questions that will help to guide you through the possible troubleshooting paths.
If using these guides doesn't get you to a solution, please make sure to update your question with the results, steps, and information you followed so we can provide further help. This would be a good example, as it has the same log error, and this other question shows that there are a few different troubleshooting paths for this specific log message; both have useful information for you.

Error: querySrv ESERVFAIL _mongodb._tcp.cluster0.abcd0.mongodb.net

My Node.js app was working fine with the MongoDB connection, and suddenly this error appeared. Then I tried to connect to MongoDB with MongoDB Compass and got the same error. I could not find any reason for this.
Error: querySrv ESERVFAIL _mongodb._tcp.cluster0.abcd0.mongodb.net
[nodemon] app crashed - waiting for file changes before starting...
Then I changed the MongoDB connection URL to the old URL, and after that I got this error:
Error: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
[nodemon] app crashed - waiting for file changes before starting...
I have already whitelisted my IPs and my configurations are correct (I double-checked).
0.0.0.0/0 (includes your current IP address)
What is the reason for this?
Thank you.
querySrv ESERVFAIL is a DNS error.
This means that your local machine is not able to get a response from your DNS resolver for the SRV record _mongodb._tcp.cluster0.abcd0.mongodb.net (I assume that's not your real hostname, but it will work as an example).
From your local machine, test SRV lookup from a command line, possibly one of these:
nslookup -type=SRV _mongodb._tcp.cluster0.abcd0.mongodb.net
host -t SRV _mongodb._tcp.cluster0.abcd0.mongodb.net
If that fails, feel free to say bad things about your DNS provider.
Then go to the Atlas UI and get the pre-3.6 connection string. It will start with mongodb:// and not mongodb+srv://.
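As a rough illustration only (hosts and the replica set name are placeholders; copy the real string from the Atlas UI), the non-SRV form lists the cluster members explicitly, so no SRV lookup is needed:
mongodb://cluster0-shard-00-00.abcd0.mongodb.net:27017,cluster0-shard-00-01.abcd0.mongodb.net:27017,cluster0-shard-00-02.abcd0.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin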
Joe's identification of the problem is spot on and helped me reach a resolution. This was fixed for me after adding Google's DNS server (8.8.8.8) to the Wi-Fi settings of my computer.
On macOS it's in Settings > Network > Wi-Fi (select the appropriate network) > Advanced > DNS.
Then add the DNS server 8.8.8.8.
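Before changing the setting, you can confirm the resolver is the culprit by querying 8.8.8.8 directly (hostname is the placeholder from above):
# If this succeeds while the default resolver fails, your DNS server is the problem
nslookup -type=SRV _mongodb._tcp.cluster0.abcd0.mongodb.net 8.8.8.8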
I am a Windows 10 user and I was facing exactly the same problem. I figured out it's a DNS problem; the following process worked for me.
Check this if you are not a Windows 10 user.
Stop the server, then run your server again, and it should solve the problem.
Hey guys!
So I was having this weird error below :(
So what might be causing this error?
Make sure the database you are trying to use in your MongoDB cluster exists; for me it was "userDB" that was the issue!
mongoose.connect(
  `mongodb+srv://admin-eniola:${process.env.PASSWORD}@cluster0.velr6at.mongodb.net/userDB`
);
Make sure you check whatever password you are using; it must be your database user's password, not your account password!
Check where your password is stored in your program, either a dotenv or secrets file, and make sure it matches your database user's password.
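A quick way to sanity-check that the variable actually loads (assuming the dotenv package is installed and a .env file containing PASSWORD=... sits in the project folder; this prints the secret, so run it locally only):
node -e 'require("dotenv").config(); console.log(process.env.PASSWORD)'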
Thanks, and I hope this solution works for you as well!

kinit asking for password inspite of pki configuration

Invoking kinit with the principal name to get a Kerberos ticket asks for a password, even though I want it to authenticate with the client certs I have configured in /etc/krb5.conf. What is the way to force it to skip the password and only use the client cert in the AS-REQ, doing PKINIT authentication to get the Kerberos ticket? I have the principal in the UPN of kclientcert2.pem, whose private key is kclientkey2.pem, and it is issued by kclientca.pem. I have all of them in my /root folder. I then invoke kinit with the principal name as a parameter, and I am prompted for a password.
My /etc/krb5.conf realm config looks like this:
[realms]
  myrealm = {
    kdc = <ldap server IP>:88
    kdc_tcp_ports = 88
    pkinit_eku_checking = kpServerAuth
    pkinit_anchors = FILE:/root/kclientca.pem
    pkinit_identities = FILE:/root/kclientcert2.pem,/root/kclientkey2.pem
  }
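For reference, MIT kinit also accepts the PKINIT identity directly on the command line, which should behave the same as the config above and helps rule out config lookup problems (paths as above; the principal is a placeholder):
kinit -X X509_user_identity=FILE:/root/kclientcert2.pem,/root/kclientkey2.pem <principal>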
Now I installed the krb5-pkinit package on one Ubuntu client. After this it did not prompt for a password there, but it gave the message "kinit: KDC name mismatch while getting initial credentials". The tcpdump shows a 2nd AS-REQ, with an AS-REP with code 11.
On an embedded client running Linux, I couldn't install the package like on Ubuntu, so I copied the krb5 shared libs like preauth/pkinit.so and other .so files to the /usr/lib path, which is the standard path on the embedded client, but it still prompts for a password. @michael-o (https://stackoverflow.com/users/696632/michael-o), we got disconnected on the other thread; can you please help me understand why the KDC name mismatch occurs on Ubuntu and which library I am missing on the embedded client?

How to enforce SSL in keycloak with Azure PostgreSQL

I am trying to configure Keycloak to run with PostgreSQL (using Azure Database for PostgreSQL) in a Docker container. I was able to do this as instructed in the Keycloak documentation here.
The problem that I am facing is that Azure Database for PostgreSQL has the option "Enforce SSL connection" set to "Enabled" by default, and the Keycloak server is not working with that. It throws the following error at server startup:
ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 49) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./auth:
Caused by: org.postgresql.util.PSQLException: FATAL: SSL connection is required. Please specify SSL options and retry.
If the option "Enforce SSL connection" is disabled, it works fine.
I would like to know how to specify this option so it works with Keycloak.
I am using a custom Dockerfile to download and boot the Keycloak server, passing the data-source parameters as environment variables with the docker run command. I have tried this approach, which worked fine when I pointed it at my PostgreSQL data source without any modifications. But when I change it to be compatible with my own Dockerfile it gives the same error.
Thanks in advance.
So here's what I did.
I looked over the latest Dockerfile shared by JBoss (which is available here) and adopted the lines that I needed for the version I require. Earlier I was trying to add the PostgreSQL configuration on my own, as the Keycloak documentation suggests. Since it is now supported out of the box, I changed my Dockerfile to be compatible with the JBoss Dockerfile for Keycloak.
I also introduced a new environment variable to enforce the SSL connection by setting ssl=true as guided here.
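Roughly, the relevant parts of my setup ended up like this; this is a sketch, not my exact Dockerfile, and the values are placeholders (JDBC_PARAMS is the variable the jboss/keycloak image appends to the JDBC URL):
docker run -d \
  -e DB_VENDOR=postgres \
  -e DB_ADDR=<server-name>.postgres.database.azure.com \
  -e DB_DATABASE=keycloak \
  -e DB_USER=<user>@<server-name> \
  -e DB_PASSWORD=<password> \
  -e JDBC_PARAMS="ssl=true" \
  -p 8080:8080 \
  jboss/keycloak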
FATAL: SSL connection is required. Please specify SSL options and retry.
I've seen this happen when the client IP address isn't included in the firewall rules on the PostgreSQL server. Try confirming the firewall is open for your IP on the Connection Security page in the portal, or in the Azure CLI using az postgres server firewall-rule list --resource-group <resource-group> --server-name <server-name>
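If your IP turns out to be missing, a rule can be added along these lines (names and addresses are placeholders):
az postgres server firewall-rule create \
  --resource-group <resource-group> \
  --server-name <server-name> \
  --name AllowMyIP \
  --start-ip-address <your-ip> \
  --end-ip-address <your-ip>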

KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN while connecting to MongoDB with GSSAPI

I have set up Active Directory with Kerberos authentication on Windows Server 2012 R2 and set up the MongoDB server on a 2nd machine. I started MongoDB with GSSAPI authentication. Now if I try to connect to MongoDB using the following URL:
mongo.exe --host Mongo32Test.ihubtest.com.com --authenticationMechanism=GSSAPI --authenticationDatabase=$external -u mongoService@ihubtest.com --verbose
I am getting the following message:
Error: SASL(-1): generic failure: SSPI: InitializeSecurityContext: The specified target is unknown or unreachable
I installed Wireshark and the packet contains this message:
"KRB5 167 KRB Error: KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN"
Searching around, I figured that it is related to the service principal name.
mongoService@ihubtest.com is a domain user and is part of the $external database in MongoDB.
I verified the service principal name, and it looks fine:
C:>setspn -l mongoService
Registered ServicePrincipalNames for CN=mongo Service,CN=Users,DC=ihubtest,DC=com:
mongodb/Mongo32test.ihubtest.com@IHUBTEST.COM
I tried the troubleshooting steps mentioned on this page, https://docs.mongodb.com/manual/tutorial/troubleshoot-kerberos/. Am I missing something in the Active Directory configuration?
If you have not yet looked into it, the MongoDB team has a closed ticket with some steps:
https://jira.mongodb.org/browse/SERVER-13885
I believe you mistyped your hostname as "Mongo32Test.ihubtest.com.com" instead of "Mongo32Test.ihubtest.com".
Please verify whether the provided hostname is correct.
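That is, assuming the extra .com is just a typo, the connection attempt should look like:
mongo.exe --host Mongo32Test.ihubtest.com --authenticationMechanism=GSSAPI --authenticationDatabase=$external -u mongoService@ihubtest.com --verbose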