I have a little problem with MongoDB: when I connect to the HTTP interface I have no problems, but if I try to connect after enabling authentication, the browser asks me for a username and password.
So far so good, but when I try to log in with the users I have created (one root on the admin db, one userAdminAnyDatabase on admin, and one dbOwner on my personal db), none of them lets me in! Does anyone know why? Thanks
I'll start with the usual caveat that you should not use the HTTP interface on any production system, ever - turn it off for prod. With that said, are you using MongoDB 3.0 (and in particular SCRAM-SHA-1 credentials)?
The HTTP interface does not support that auth mechanism, per the page linked:
Neither the HTTP status interface nor the REST API support the
SCRAM-SHA-1 challenge-response user authentication mechanism
introduced in version 3.0.
Hence, to use auth with the HTTP interface you will have to make sure you are using 2.6, or at least 2.6-style (MONGODB-CR) credentials.
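If you do need the HTTP interface with auth on a 3.0 server, one possible workaround (a sketch only, untested; the connection string, user name and password below are placeholders) is to downgrade the authentication schema before creating the users, so that new users are given 2.6-style MONGODB-CR credentials. With pymongo:

    # Downgrade the auth schema so that users created afterwards get
    # MONGODB-CR (2.6-style) credentials instead of SCRAM-SHA-1.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    admin = client["admin"]

    # currentVersion 3 = MONGODB-CR (2.6-style), 5 = SCRAM-SHA-1 (3.0 default)
    admin["system.version"].update_one(
        {"_id": "authSchema"},
        {"$set": {"currentVersion": 3}},
    )

    # Users created from now on will have MONGODB-CR credentials,
    # which the HTTP interface can authenticate against.
    admin.command("createUser", "httpuser",
                  pwd="httppass", roles=["userAdminAnyDatabase"])

Users created before the downgrade keep their SCRAM-SHA-1 credentials, so you would have to drop and recreate them.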
We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect (they only know the application's URL, not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
That way, the entire OpenID Connect authentication is transparent to the legacy devices.
This design could be further improved by caching the access tokens for as long as they are valid (using the credentials as the cache key) and refreshing them when they expire.
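To make the idea concrete, here is a minimal sketch of such a proxy in Python (Flask + requests). All URLs, the realm, the client id/secret and the upstream service are placeholder assumptions, and the Keycloak token endpoint path may differ depending on your Keycloak version:

    import time

    import requests
    from flask import Flask, Response, request

    # Placeholder values - adjust to your environment
    KEYCLOAK_TOKEN_URL = "https://keycloak.example.com/realms/iot/protocol/openid-connect/token"
    PROXY_CLIENT_ID = "legacy-iot-proxy"      # the proxy acts as the OAuth Client
    PROXY_CLIENT_SECRET = "change-me"
    UPSTREAM = "https://app.example.com"      # the proxied service

    app = Flask(__name__)
    _token_cache = {}  # (username, password) -> (access_token, expiry timestamp)


    def get_token(username, password):
        """Exchange the device's credentials for an access token (ROPC flow), with caching."""
        key = (username, password)
        cached = _token_cache.get(key)
        if cached and cached[1] > time.time():
            return cached[0]
        resp = requests.post(KEYCLOAK_TOKEN_URL, data={
            "grant_type": "password",         # Resource Owner Password Credentials
            "client_id": PROXY_CLIENT_ID,
            "client_secret": PROXY_CLIENT_SECRET,
            "username": username,
            "password": password,
        })
        resp.raise_for_status()
        body = resp.json()
        _token_cache[key] = (body["access_token"], time.time() + body.get("expires_in", 60) - 10)
        return body["access_token"]


    @app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
    @app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(path):
        auth = request.authorization  # parsed Basic auth header
        if auth is None:
            return Response(status=401, headers={"WWW-Authenticate": "Basic"})
        token = get_token(auth.username, auth.password)
        upstream = requests.request(
            request.method,
            f"{UPSTREAM}/{path}",
            headers={"Authorization": f"Bearer {token}"},  # enrich the original request
            data=request.get_data(),
        )
        return Response(upstream.content, status=upstream.status_code)


    if __name__ == "__main__":
        app.run(port=8080)

This sketch only covers the Basic Authentication case; the SSL client certificate case would need a different way to obtain the token, since there is no password to exchange.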
Now, this idea seems like such a no-brainer, that we were surprised that we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials flow would be semantically more correct for such device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL Client Authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 client certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OP (OpenID Provider) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from OAuth 2.1.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.
The Mongo C++ driver has two compilation options. From the driver documentation:
--ssl Enables SSL support. You will need a compatible version of the SSL libraries available. The default authorization mechanism since MongoDB version 3.0 is SCRAM-SHA-1. If you want to use standard MongoDB authentication, you should compile with the --ssl option for SCRAM-SHA-1 mechanism support.
--use-sasl-client Enables SASL, which MongoDB uses for the Kerberos authentication available on MongoDB Enterprise. You will need a compatible version of the SASL implementation libraries available. The Cyrus SASL libraries are what we test with, and are recommended.
I wonder about clients not using authentication (a typical scenario in which the CB-MongoDB connection is secured by other means, e.g. level 3 firewalling, or the user simply doesn't want it, for whatever reason) from the point of view of performance. I mean, it is fine that users wanting authentication pay a price for it (in terms of the performance penalty of the SSL CB-MongoDB communication needed for authentication), but users not wanting authentication shouldn't be affected.
Is the driver clever enough that, even when compiled with --ssl and --use-sasl-client, clients not using authentication get the same performance as if the driver had been compiled without these options?
Note: I know this question is about the legacy Mongo C++ driver, which is a legacy piece of software. However, a similar question may also apply to the new driver (assuming it has similar option-based compilation configurability), so I think the question is meaningful anyway.
At the beginning of the year, lots of MongoDB databases were hacked. This also included my database. Yesterday I noticed my brand new database with authorization enabled was hacked as well. The username and password are very secure (a 16+ character password with random characters and symbols).
I've now decided to fully secure it, but I honestly don't know where to proceed. I already have:
security:
  authorization: enabled
and that should be enough (after sudo service mongod restart). I only have 1 database and no admin user, but anonymous access from a remote connection is still allowed. I keep reading in many places that I should run mongod with --auth, but that it's the same as enabling authorization as I've done above.
At this point I'm struggling to disable anonymous authentication on the server. What did I miss? Why can I authenticate without an account?
To enable security you'll want to follow the Security Checklist on the MongoDB Website.
Here you are provided with role-based authorization and authentication instructions. It's also advised that you disable listening on all network interfaces and bind your MongoDB ports only to the interfaces you'd like exposed.
For a guide to network hardening, you will want to review these instructions, but the most important aspect is to avoid unwanted network exposure. Consider using a firewall or security groups (if in the cloud).
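For example (a sketch; the bind address is a placeholder - list only the interfaces you actually need), a mongod.conf combining both points could look like:

    net:
      bindIp: 127.0.0.1        # or a comma-separated list of trusted interfaces
      port: 27017
    security:
      authorization: enabled

Also create an administrative user before (or right after) enabling authorization - with authorization on and no users, the localhost exception only lets you create that first user from the local machine.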
How do I pass in a Kerberos token for authentication to a kerberized Mongo cluster through RESTHeart? Should I do some custom implementation?
FYI, Kerberos authentication works fine when I use the native Mongo client in Java/Scala.
Thanks in advance
The current RESTHeart version 1.0.3 does not support Kerberos authentication.
However if you get the latest development version from github, it allows defining the MongoDB connection via a connection URI.
This should allow you to use Kerberos authentication. However, I haven't tried it yet.
The new configuration option is called mongo-uri.
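For example (untested; host, user and realm are placeholders), a MongoDB connection string for Kerberos uses the GSSAPI mechanism, with the principal URL-encoded and $external as the auth source, so the RESTHeart setting might look something like:

    mongo-uri: mongodb://mongouser%40EXAMPLE.COM@mongo-host:27017/?authMechanism=GSSAPI&authSource=%24external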
I am looking for a single sign-on approach for an ODBC connection to a Postgres database.
The plan is to log in to a web application and then use a single sign-on scheme such as OAuth or CAS to automatically log in to a client application.
The client application does not verify the credentials itself, but uses them via ODBC to connect to the Postgres database server. Unlike web applications, we cannot use a single database user here, but need individual database accounts for security reasons.
In theory Postgres supports PAM, and PAM supports both CAS and OAuth. But I was not able to find any documentation on that. Especially the part of how to specify the token in ODBC is unclear to me.
With PAM auth, keep in mind that this is a broad field and books could be written about it. I do something similar to what you do though and can answer the part about ODBC. The following provides a walkthrough for a related service you may find helpful:
http://www.wikidsystems.com/support/wikid-support-center/how-to/how-to-secure-postgresql-using-two-factor-authentication-from-wikid
The big thing to remember is that with PAM the password provided is passed on to the PAM module, so you have to pass in the username and password. This gets sent to PAM as if the user were logging on to the system. Beyond that, it's up to you to configure PAM appropriately for your service.
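In practice that means the ODBC side is just an ordinary username/password login; whatever secret your PAM stack validates (a CAS/OAuth-derived ticket, one-time password, etc.) goes in the password field. A hypothetical sketch (the driver name, host and pg_hba.conf line are assumptions for illustration):

    # pg_hba.conf on the server, delegating password checks to PAM:
    #   host  all  all  10.0.0.0/8  pam  pamservice=postgresql
    import pyodbc

    conn = pyodbc.connect(
        "Driver={PostgreSQL Unicode};"
        "Server=db.example.com;Port=5432;Database=appdb;"
        "Uid=alice;"           # the individual database account
        "Pwd=ticket-or-otp;"   # whatever secret your PAM module validates
    )
    print(conn.execute("SELECT current_user").fetchone())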