How to use single sign on with ODBC? - postgresql

I am looking for a single sign on approach for an ODBC connection to a Postgres database.
The plan is to log in to a web application and then use a single sign-on scheme such as OAuth or CAS to automatically log in to a client application.
The client application does not verify the credentials itself, but uses them via ODBC to connect to the Postgres database server. Unlike web applications, we cannot use a single database user here; we need individual database accounts for security reasons.
In theory Postgres supports PAM, and PAM supports both CAS and OAuth. But I was not able to find any documentation on that. In particular, how to pass the token through ODBC is unclear to me.

With PAM auth, keep in mind that this is a broad field and books could be written about it. I do something similar, though, and can answer the ODBC part. The following walkthrough for a related service may be helpful:
http://www.wikidsystems.com/support/wikid-support-center/how-to/how-to-secure-postgresql-using-two-factor-authentication-from-wikid
The big thing to remember is that with PAM the password provided is passed on to the PAM module, so you have to pass in the username and password. These get sent to PAM as if the user were logging on to the system. Beyond that it's up to you to configure PAM appropriately for your service.
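To make the moving pieces concrete, here is a minimal sketch, assuming a PAM service named "postgresql" and the psqlODBC driver; all hostnames, user names, and the token-fetching step are placeholders. On the server, pg_hba.conf hands authentication for remote connections to PAM:

    # pg_hba.conf: delegate authentication to the "postgresql" PAM service
    host    all    all    0.0.0.0/0    pam    pamservice=postgresql

On the ODBC side there is nothing token-specific: whatever credential your PAM stack expects (a CAS ticket, an OAuth token, a one-time password) simply travels in the ordinary password field of the connection string, for example with pyodbc:

    import pyodbc

    # hypothetical: however your client application receives the token from
    # the web application's SSO handshake
    sso_token = "token-from-sso"

    conn = pyodbc.connect(
        "DRIVER={PostgreSQL Unicode};"                # assumes psqlODBC
        "SERVER=db.example.com;PORT=5432;DATABASE=appdb;"
        "UID=alice;"
        "PWD=" + sso_token
    )

PAM then sees "alice" with that token as her password, exactly as if she had logged on to the system.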

Related

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect (they also only know the application's URL, not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way, the entire OpenID Connect authentication is transparent to the legacy devices.
This design could be further improved by caching the access tokens for as long as they are valid (using the credentials as the cache key) and refreshing them when they expire.
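For illustration, the exchange in step 3 is a single POST to Keycloak's token endpoint. A minimal sketch of the proxy's token-fetching logic with the caching described above; the realm, client ID/secret, and URL are assumptions (older Keycloak versions prefix the path with /auth):

    import time
    import requests

    # assumed realm "iot"; adjust to your Keycloak installation
    TOKEN_URL = "https://keycloak.example.com/realms/iot/protocol/openid-connect/token"

    _cache = {}  # (username, password) -> (access_token, expiry timestamp)

    def exchange_credentials(username, password):
        """Resource Owner Password Credentials grant with expiry-based caching."""
        key = (username, password)
        hit = _cache.get(key)
        if hit and hit[1] > time.time():
            return hit[0]
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "password",
            "client_id": "legacy-proxy",      # assumption: a confidential client
            "client_secret": "***",
            "username": username,
            "password": password,
        })
        resp.raise_for_status()
        body = resp.json()
        # refresh slightly before the token actually expires
        _cache[key] = (body["access_token"], time.time() + body["expires_in"] - 10)
        return body["access_token"]

The proxy would then set Authorization: Bearer <token> on the request it forwards to the proxied service.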
Now, this idea seems like such a no-brainer that we were surprised we couldn't find any existing gateways, reverse proxies, or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped as a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials flow would be semantically more correct for such device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL client authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 certificate user authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OpenID Provider (OP) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from modern OAuth 2.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

SSO using Kerberos on Windows and Linux

We have a client/server based application that is developed internally. Clients and server communicate over a TCP/IP connection with an application-specific protocol. The clients run on Windows and the server runs on Linux. All machines are in the same Active Directory/Kerberos domain/realm.
Currently, the user enters a username and password when they start the application. The server checks the username and password (authentication). Based on the username, the server also determines access to resources (authorization).
We want to add Single Sign-On (SSO) capabilities to the application. That is, we do not want the user to enter a username and password; instead, we want to automatically log on as the current Windows user.
Of course, determining the current Windows user has to be done securely.
I have come up with the following setup:
I use SSPI (Negotiate) on Windows and GSSAPI on Linux.
When the client connects to the server, it uses AcquireCredentialsHandle (Negotiate) to get the credentials of the current Windows user.
The client uses InitializeSecurityContext (Negotiate) to generate a token based on these credentials.
The client sends the token to the server.
The server uses gss_acquire_cred() to get the credentials of the service. These are stored in a .keytab file.
The server receives the token from the client.
The server uses gss_accept_sec_context() to process the token. This call also returns the "source name", that is the current Windows user of the client.
The server uses the "source name" as the username: the server performs no additional authentication. The server still performs authorization.
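For reference, the accept side of this handshake maps quite directly onto the high-level python-gssapi bindings. A minimal sketch, assuming the service principal already sits in the keytab referenced by KRB5_KTNAME; all names and paths are placeholders:

    import os
    import gssapi

    os.environ.setdefault("KRB5_KTNAME", "/etc/myapp/myapp.keytab")  # assumed path

    # gss_acquire_cred(): obtain the service's credentials from the keytab
    service_name = gssapi.Name("myservice@server.example.com",
                               gssapi.NameType.hostbased_service)
    server_creds = gssapi.Credentials(name=service_name, usage="accept")

    def authenticate(token_from_client):
        # gss_accept_sec_context(): validate the token sent by the client
        ctx = gssapi.SecurityContext(creds=server_creds, usage="accept")
        out_token = ctx.step(token_from_client)  # may yield a reply token
        if not ctx.complete:
            # e.g. NTLM fallback under Negotiate needs extra round trips
            raise RuntimeError("continue needed: send out_token to the client")
        # the authenticated "source name", e.g. ALICE@EXAMPLE.COM
        return str(ctx.initiator_name), out_token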
This works but I do have some questions:
Is this secure? It should not be possible for the client to specify any username other than the Windows user of the client process. If a user has the credentials to create a process as another user (either legitimately or illegitimately), then this is allowed.
Should I perform additional checks to verify the username?
Are there alternative ways to achieve SSO in this setup? What are their pros and cons?
What you've described here is the correct way to authenticate the user. You should not have to worry about the user specifying a different name; that's what Kerberos takes care of for you.
If the client is able to obtain a service ticket, then they must have been able to authenticate against the KDC (Active Directory). The KDC creates a service ticket that includes the user's name, and encrypts it with the service's secret key.
The client would not be able to create a ticket for the server with a fake name, because it doesn't have the necessary key to encrypt the ticket.
Of course, this all assumes that you've set everything up correctly; the client should not have access to the service's keytab file, for example, and the service should not have any principals in its keytab except its own.
There's a pretty detailed explanation of how it works here.

How to securely lock down a MongoDB database?

At the beginning of the year, lots of MongoDB databases were hacked, including mine. Yesterday I noticed my brand-new database with authorization enabled was hacked as well. The username and password are very secure (a 16+ character password with random characters and symbols).
I've now decided to fully secure it, but I honestly don't know how to proceed. I already have:
security:
  authorization: enabled
and that should be enough (after sudo service mongod restart). I only have one database and no admin user, but anonymous access from a remote connection is still allowed. I keep reading in many places that I should run mongod with --auth, but that is the same as enabling authorization as I've done above.
At this point I'm struggling to disable anonymous authentication on the server. What did I miss? Why can I authenticate without an account?
To enable security you'll want to follow the Security Checklist on the MongoDB website.
It provides role-based authorization and authentication instructions. It's also advised that you disable listening on all network interfaces and bind MongoDB's ports only to the interfaces you'd like exposed.
For a guide to network hardening, you will want to review these instructions, but the most important aspect is to avoid unwanted network exposure. Consider using a firewall or security groups (if in the cloud).
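Concretely, the two changes that close off anonymous remote access are binding mongod to specific interfaces and creating a user administrator (until the first user exists, the localhost exception still lets anyone on the box create one). A sketch of /etc/mongod.conf, where the addresses are placeholders:

    security:
      authorization: enabled
    net:
      port: 27017
      bindIp: 127.0.0.1,10.0.0.5   # loopback plus one trusted internal interface

and the first user, created from a local mongo shell (name and password are hypothetical):

    use admin
    db.createUser({
      user: "siteAdmin",
      pwd: "choose-a-strong-password",
      roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    })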

Application user validation with LDAP

My web application is currently configured to connect to LDAP for user validation without relying on application server settings. In other words, my application uses naming params to connect to LDAP, hence it's agnostic to the application server, i.e. JBoss or WebSphere.
Naming params used are as follows:
ldapURL
ldapPrincipal (bind user)
ldapCredentials (bind user's password)
ldapAuthentication
ldapSearchBase
The requirement now is to allow an encrypted password in the ldapCredentials naming param. One way out of this situation is using a custom SecurityLoginModule to encrypt the password and supply it to the application via a naming param. My application would then decrypt it and proceed with LDAP user validation. However, this results in an additional application installation step.
So I was wondering if there is a way to use the application server's security domain (or some other mechanism) to store the user credentials securely on the application server, such that the application would later pick them up at the time of user validation with LDAP, without writing server-specific code in my application. I know that we can use a security domain to perform a data source connection without writing server-specific code. But if I do this for LDAP, then I make the server talk to LDAP, which is not what I am looking for. Basically, I may still continue to use federated users instead of LDAP.
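For context, the validation the application performs with those naming params is the usual search-then-rebind pattern: bind with the service account, look up the user's DN under the search base, then bind again as that user. A minimal sketch using Python's ldap3, where all DNs and hostnames are placeholders:

    from ldap3 import Server, Connection
    from ldap3.utils.conv import escape_filter_chars

    def validate_user(username, user_password):
        server = Server("ldap://ldap.example.com:389")            # ldapURL
        # bind with the service account (ldapPrincipal/ldapCredentials)
        conn = Connection(server,
                          user="cn=binduser,dc=example,dc=com",
                          password="service-account-secret",
                          auto_bind=True)
        # locate the user under the search base (ldapSearchBase)
        conn.search("ou=people,dc=example,dc=com",
                    "(uid=%s)" % escape_filter_chars(username),
                    attributes=["cn"])
        if not conn.entries:
            return False
        user_dn = conn.entries[0].entry_dn
        # re-binding as the user validates the supplied password
        return Connection(server, user=user_dn, password=user_password).bind()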
Any decent application server (including JBoss and WebSphere) has a server-provided LDAP registry, which you can configure and use without any application-specific code, and I'd strongly suggest utilizing that instead of writing your own LDAP connection code.
Regarding encryption:
for WebSphere traditional, you can plug your own class into the server infrastructure to encrypt passwords; see Plug point for custom password encryption
for WebSphere Liberty, you have out-of-the-box support for AES encryption and hashing.
for JBoss, the first link in Google showed me this: How do I encrypt the bindCredential password in Wildfly, but maybe JBoss experts will guide you to something different.

mongodb http interface authentication

I have a little problem with MongoDB: when I connect to the HTTP interface I have no problems, but if I try to connect after enabling authentication, the browser asks me for a username and password.
So far so good, but if I try to log in with the users I have created (one root on the admin db, one userAdminAnyDatabase on admin, and one dbOwner on my personal db), none of them allows me to access! Does anyone know why? Thanks
I'll start with the usual caveat that you should not use the HTTP interface on any production system, ever - turn it off for prod. With that said, are you using MongoDB 3.0 (and in particular SCRAM-SHA-1 credentials)?
The HTTP interface does not support that auth method, per the page linked:
Neither the HTTP status interface nor the REST API support the SCRAM-SHA-1 challenge-response user authentication mechanism introduced in version 3.0.
Hence, to use auth with the interface you will have to make sure you are using 2.6, or at least 2.6-style (MONGODB-CR) credentials.
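If you're not sure which credential style a deployment has, the auth schema version is visible from the mongo shell; per the MongoDB upgrade docs, 3 corresponds to 2.6-style MONGODB-CR credentials and 5 to SCRAM-SHA-1:

    use admin
    db.system.version.find({ _id: "authSchema" })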