Kerberos through SASL: Do I need to get the ticket?

I am designing the authentication system for a piece of software and need some guidance on how SASL and Kerberos services interact.
Here is the situation:
I have a client/server application that is itself pretty standard: only registered users can perform actions. As an MVP I would typically implement a pretty standard solution:
Database stores username + salted hash of password
Authentication attempt from client over HTTP includes username/password over TLS
Backend checks that username/password are valid and returns a bearer token that can be used for the duration of the session
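Concretely, the MVP flow above would look roughly like the sketch below (the DB accessor and session store are hypothetical placeholders, not anything I've built yet):

    import hashlib
    import hmac
    import secrets

    def hash_password(password: str, salt: bytes) -> bytes:
        # PBKDF2 as a stand-in for whatever password hash is actually used
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def login(username: str, password: str, db) -> str | None:
        row = db.get_user(username)            # hypothetical DB accessor
        if row is None:
            return None
        candidate = hash_password(password, row.salt)
        if not hmac.compare_digest(candidate, row.password_hash):
            return None
        token = secrets.token_urlsafe(32)      # bearer token for the session
        db.store_session(token, username)      # hypothetical session store
        return token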
In this case, however, there is a complicating factor. Some users of our system use Kerberos internally for user authentication for all internal services. As a feature, we would like to integrate our software with Kerberos so that they don't have to manage an additional set of users.
A more senior engineer recommended I look into SASL so that we might support several auth protocols simultaneously; standard customers can authenticate their users with the PLAIN method (over TLS), for instance, while other customers could limit authentication to only the GSSAPI method.
Up to this point, I have a clear idea of how things might be set up to achieve the desired goals. However, there is one more complicating factor. Some of the customers that want our system's auth to support Kerberos have other resources that our system will rely on (like HDFS) that also require authentication with Kerberos.
My understanding of Kerberos is this:
A client authenticates with Kerberos's ticket granting server
Upon successful authentication a TGT is returned that can be used for any future interaction with any Kerberos service in the system
Now to the point: How can I make all of these technologies work in harmony? What I want is:
- Client logs into my server
- My server authenticates client using customer's Kerberos system
- Client is given the OK
- Client asks for something from my server
- My server needs access to customer's HDFS, which requires Kerberos auth
- Server authenticates without asking the client to authenticate again
One possible solution I see to this is the following:
Make my server itself a Kerberos user
When the server needs to perform an action on HDFS, have it authenticate using its own credentials
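In Python I picture that looking something like this (the principal name and keytab path are made up, and it assumes the python-gssapi bindings plus a Kerberos library that supports the credential-store extension):

    import gssapi

    service_principal = gssapi.Name(
        "myservice/host.example.com@EXAMPLE.COM",
        gssapi.NameType.kerberos_principal,
    )
    # Acquire the server's own credentials from its keytab
    server_creds = gssapi.Credentials(
        name=service_principal,
        usage="initiate",
        store={"client_keytab": "/etc/myservice.keytab"},
    )
    # server_creds can now initiate a security context against HDFS (or any
    # other Kerberized service) as the server's own identity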
There is a big downside to this, though: pretend the customer's Kerberos system has two realms: one with access to HDFS and one without. If users of both realms are allowed to use my system, but only one set can use HDFS, then I will need my own logic (and potentially objects in a DB) to determine who can perform actions that will require access to HDFS and who cannot.
Any pointers are going to be super helpful; in case it isn't obvious, I am quite new to all of this.
Thanks in advance!

It's not clear exactly what your question(s) are, but I'll do my best to address everything I think you're asking.
Firstly, I just want to clear this up:
Upon successful authentication a TGT is returned that can be used for any future interaction with any Kerberos service in the system
That's not quite correct. The TGT enables the user to request service
tickets from the KDC for specific services. The service ticket is what
gives the user access to a specific service. The TGT is used to prove the
user's identity to the KDC when requesting a service ticket.
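To make that concrete, here is a rough sketch, using the python-gssapi bindings, of a client using a TGT it already has (e.g. from kinit) to talk to one specific service; the service principal name is made up:

    import gssapi

    # Assumes a TGT is already in the credential cache (e.g. from `kinit`).
    service = gssapi.Name(
        "HTTP/app.example.com@EXAMPLE.COM",
        gssapi.NameType.kerberos_principal,
    )
    ctx = gssapi.SecurityContext(name=service, usage="initiate")
    # Stepping the context makes the Kerberos library request a service
    # ticket for this specific service from the KDC, using the TGT to prove
    # who the user is. The resulting token (an AP-REQ) is what gets sent to
    # the service itself.
    token = ctx.step()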
Client asks for something from my server - My server needs access to customer's HDFS, which requires Kerberos auth - Server authenticates without asking the client to authenticate again
This is a common enough problem and the Kerberos solution is called delegation. You should try to use Kerberos delegation in preference to coming up with your own solution. That said, how well supported it is depends on the technology stack you're using.
There are 2 kinds of delegation supported by Kerberos. The first kind is just called "delegation" and it works by sending the user's TGT to the service along with the service ticket. The service can then use the TGT to get new service tickets from the KDC on behalf of the user. The disadvantage of this approach is that once a service gets a user's TGT, it can effectively impersonate that user to any service that the user would be able to access. You might not want the service to have that level of freedom.
The second kind of delegation is called constrained delegation (also known as Service for User, or S4U). With this approach, the client doesn't send its TGT to the service, but the service is allowed to ask the KDC for a service ticket to impersonate the user anyway. The services that can do this have to be whitelisted on the KDC, along with the services that they can request tickets for. This ultimately makes for a more secure approach because the service can't impersonate the user to just any service.
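As a very rough illustration of constrained delegation, this is approximately what it looks like through the python-gssapi bindings, assuming both the library and the KDC support the S4U extensions and the service has been allowed to delegate to HDFS (all principal names are made up):

    import gssapi

    # The service's own credentials (from its keytab / credential cache)
    service_creds = gssapi.Credentials(usage="initiate")

    # Ask the KDC for credentials that impersonate the already-authenticated user
    user = gssapi.Name("alice@EXAMPLE.COM", gssapi.NameType.kerberos_principal)
    user_creds = service_creds.impersonate(user)

    # Use the impersonated credentials to request a ticket for the delegation target
    hdfs = gssapi.Name("nn/hdfs.example.com@EXAMPLE.COM",
                       gssapi.NameType.kerberos_principal)
    ctx = gssapi.SecurityContext(name=hdfs, creds=user_creds, usage="initiate")
    token = ctx.step()  # fails if the KDC does not permit this delegation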
A more senior engineer recommended I look into SASL so that we might support several auth protocols simultaneously; standard customers can authenticate their users with the PLAIN method (over TLS), for instance, while other customers could limit authentication to only the GSSAPI method
Yes, this is a good idea. Specifically, I'd recommend that you use the exact same session authentication mechanism for all users. The only difference for Kerberos users should be the way in which they get a session. You can set up a Kerberos-protected login URL that gets them a session without challenging them for credentials. Any user that hits this URL and doesn't have Kerberos credentials can just be redirected to a login page, which ultimately gets them the same session object (once they log in).
On the back end, the credential checking logic can use SASL to pass Kerberos users through to the KDC, and others through to your local authentication mechanism. This gives you a seamless fallback mechanism for situations when Kerberos doesn't work for the Kerberos users (which can happen easily enough due to things like clock skew etc.)
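A minimal sketch of what that Kerberos-protected login URL could look like, assuming Flask and python-gssapi; issue_session_token() is a hypothetical stand-in for whatever session mechanism you already have:

    import base64

    import gssapi
    from flask import Flask, Response, redirect, request

    app = Flask(__name__)

    def issue_session_token(principal: str) -> Response:
        # Hypothetical helper: creates the same session a form login would
        resp = Response(f"logged in as {principal}")
        resp.set_cookie("session", f"token-for-{principal}")  # placeholder
        return resp

    @app.route("/login/kerberos")
    def kerberos_login():
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Negotiate "):
            # Challenge the browser to present SPNEGO credentials
            return Response(status=401, headers={"WWW-Authenticate": "Negotiate"})
        in_token = base64.b64decode(auth[len("Negotiate "):])
        # Accept the token using the service's keytab-backed credentials
        ctx = gssapi.SecurityContext(creds=gssapi.Credentials(usage="accept"),
                                     usage="accept")
        ctx.step(in_token)
        if not ctx.complete:
            return redirect("/login")  # fall back to the normal login page
        return issue_session_token(str(ctx.initiator_name))

In practice the 401/Negotiate handshake and the fallback behaviour need a bit more care than this, but the idea is that both paths end in the same session object.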
There is a big downside to this, though: pretend the customer's Kerberos system has two realms: one with access to HDFS and one without. If users of both realms are allowed to use my system, but only one set can use HDFS, then I will need my own logic (and potentially objects in a DB) to determine who can perform actions that will require access to HDFS and who cannot.
This kind of thing is exactly the reason that you should use Kerberos delegation instead of coming up with your own custom solution. With Kerberos delegation, the KDC administrators control who can access what. If your service tries to impersonate a user to HDFS, and they are not allowed to access it, that authentication step will simply fail and everything will be OK.
If you try to shadow the KDC's authorization rules in your own application, sooner or later they'll get out of sync and bad things will happen.

Related

Which is better between basic auth and token auth from a security perspective

I am currently developing a RESTful API server, and I am choosing between using ID and password or using a token to authenticate a user.
Let me explain my situation first. I need to include static authentication information in my library so a client can communicate with my server, or provide it to a partner company so their server can communicate with my server. When I researched other services in a similar situation to ours, they are using tokens now (for example, Bugfender uses a token to identify a user).
However, what I think is that using an ID and password and using a token are the same, or that using an ID and password is better, because there are two values to check for correctness.
Is there any reason why other services are using a token?
Which one is better from a security perspective, or is there a better way to do this?
I think that if you are going to use a fixed username/password or a fixed token on your client, then the level of security is the same.
A username and password is not considered multi-factor authentication. Multi-factor means that you are authenticating someone by more than one of these factors:
What you know. This can be the combination of username and password, or some special token.
What you have. This might be some hardware or app that generates an additional one-time password (the Google Authenticator app on your phone, for instance), or an SMS with an OTP that expires after a short time.
What you are. This is, for example, your fingerprint or a retina scan.
Where you are. This can be the IP address of the origin if it is applicable for your setup.
How you behave. What is your normal way of using the service.
etc.
It probably goes without saying that both the token and the username/password combination have to be carried in encrypted requests (I assume you are using HTTPS); otherwise the client's identity can be stolen.
How are you going to provide the credentials to your client library? I think this is the trickiest part. If those credentials are saved as configuration (or worse, hard-coded) on their server, is that storage secure enough? Who is going to have access to it? Can you avoid that?
What would happen if your partner company realizes that the username/password is compromised? Can they change it easily themselves? How fast can you revoke the permissions of stolen credentials?
My advice is also to keep audit logs on your server, recording the activity of the client requests. Remember the GDPR if you work with servers in Europe, and check for similar regulations in your country based on what you are going to put in the audit log.
If the credentials (ID and password) and the token are transferred the same way (say, via a header in a REST request) over a TLS-secured channel, the only difference lies in the entropy of the password vs. the entropy of the token. Since that is something for you to decide in both cases, there is no real difference from a security perspective.
NOTE: I don't count the ID as a secret, as it usually is something far easier to guess than a secret should be.
I'd go for a solution that is easier to implement and manage.
IMHO this would be HTTP basic authentication, as you usually get full support from your framework/web server with little danger of making security mistakes in authentication logic. You know, friends don't let friends write their own auth. ;)
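For what it's worth, the client side of HTTP basic authentication is about as small as it gets; here is a sketch with the Python requests library (the URL and credentials are placeholders):

    import requests

    # requests builds the Authorization: Basic header for you; TLS keeps it
    # from being read in transit
    resp = requests.get(
        "https://api.example.com/v1/reports",
        auth=("service-account", "correct horse battery staple"),
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())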

Microservice: Best practice for Authentication

I am looking into using microservices for my application. However the application involves authentication. For example, there is a service for users to upload their images if they are authenticated. There is also a service for them to write reviews if they are authenticated.
How should I design the microservices to ensure that the user only needs to authenticate once to access the different services? Should I have an API gateway layer that does the authentication and have this API gateway talk to the different services?
You can do authentication at the gateway layer if authentication is all you need. If you are interested in authorization, you may have to pass the tokens forward for the application to consider. You can also do that if you trust the other services behind the gateway; that is, service 1 is free to call service 2 without authentication.
You also lose a bit of information about the principal; however, you can write it back onto the request in the gateway while forwarding.
Another point to consider is JWTs. They are lightweight and, more importantly, can be validated without calling the auth server explicitly, which saves you time, especially in microservices. So even if you have to do auth at every service layer, you are doing it at minimal cost. You can explore that as well.
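As a sketch of that local validation, using the PyJWT library (the signing secret and issuer are placeholders):

    import jwt  # PyJWT

    SECRET = "shared-signing-secret"  # or a public key for RS256 tokens

    def validate(token: str) -> dict | None:
        try:
            # Signature, expiry and issuer are all checked locally; no call
            # to the auth server is needed
            return jwt.decode(token, SECRET, algorithms=["HS256"],
                              issuer="https://auth.example.com")
        except jwt.InvalidTokenError:
            return None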
The final call, however, depends on how strong your security needs are compared to everything else; based on that you can make a decision. Stripping auth at the API gateway saves you code duplication but is less secure, since the other services can do as they wish.
The same goes for tokens: you can authenticate without an explicit call to the auth server, but tokens are valid for some period of time and the bearer is free to do as they wish once they have the token; you cannot invalidate it.
While Anunay's answer is one of the most common solutions, there is another solution I would like to point out. You could use a distributed session management system to persist sessions in RAM or on disk. As an example, we have an authentication module that creates a token and persists it in Redis. Since Redis itself can be distributed and can persist less-used data on disk, it is a good choice for all microservices to check client tokens against.
With this method, there is nothing to do at the API gateway. The gateway just passes tokens to the microservices. So even if you are thinking about different authentication methods on different services, it will be a good solution. (Although I can't think of a reason for that need right now, I have heard of it.)
Inside a session, you can store user roles and permissions. That's how you can restrict users' access to some APIs. And for your private APIs, you could generate a token with the role ADMIN. Then each microservice can call another one with that token, so your APIs will be safe.
You can also rapidly invalidate any session and store anything you want in those sessions. In our system, the Spring framework generates an X-AUTH-TOKEN that can be set in the headers. The token points to a session key in Redis. This works with cookies too. (And if I'm not wrong, you could even use this method with OAuth and JWT.)
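A small sketch of that lookup with the redis-py client (the key layout and session fields are made up):

    import json

    import redis

    r = redis.Redis(host="redis.internal", port=6379)

    def load_session(x_auth_token: str) -> dict | None:
        raw = r.get(f"session:{x_auth_token}")
        if raw is None:
            return None  # unknown, expired or explicitly invalidated session
        return json.loads(raw)  # e.g. {"user": "alice", "roles": ["ADMIN"]}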
In a clean architecture you can create a security module that applies this validation method to your APIs and add it to every microservice that you want to protect.
There are other options when it comes to session persistence too: database, LDAP, Redis, Hazelcast... the choice depends on your needs.

Getting a SAML assertion after creating a session via API

Related to Accessing Third Party Apps After Creating A Session Via API Token and to AWS API credentials with OneLogin SAML and MFA
Since AWS assumeRoleWithSAML temporary security credentials are only valid for one hour, and we have a few different roles to assume, it would be very annoying for the user to enter their username/password every time they need to switch roles or get new credentials because of the short validity. This is completely at odds with web-based OneLogin usage, where they are logged in once for the whole day or even week (depending on the policy).
I know how to get a session via the API. At least this would reduce the number of times the user needs to enter their username/password to two: once on the web, once on the CLI.
But is there any way to generate a SAML assertion via the API using this session token, instead of submitting the username/password to the API endpoint?
I don't want to store the user's credentials locally for this. And with MFA enabled, that wouldn't work in a seamless way anyway.
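For reference, once I do have an assertion, the AWS side of the exchange looks roughly like this with boto3 (the ARNs are placeholders):

    import boto3

    assertion = "<base64-encoded SAMLResponse from the IdP>"

    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/ExampleRole",
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/ExampleIdP",
        SAMLAssertion=assertion,
    )
    # Temporary credentials: AccessKeyId, SecretAccessKey, SessionToken, Expiration
    creds = resp["Credentials"]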
While being able to generate a SAML assertion for any user (without the need for MFA and a user/pass) seems like a good workaround, this is unfortunately fraught with security perils.
An API that bypasses the usual authentication effectively gives that endpoint the ability to assume any user in AWS. The "assume user" privilege is locked down pretty tightly in OneLogin, and is not the sort of thing that's given out lightly.
Basically, an API to do this seems dangerous from a security perspective. This might be something we'd consider as part of an oAuth flow (or OpenID Connect resource endpoint) but that'll take some more thinking on our part before we'd implement it.
The only compromise solution I can think of that could be implemented today would be to temporarily cache the users' credentials for a longer period of time in your code. This way they could be reused to generate new SAML assertions for a longer period of time, but would effectively be thrown away after (say) eight hours.
This wouldn't allow for MFA on an app policy, but we are building out the ability to request and verify MFA via API (coming soon) so you could implement MFA in your app (independent of any app policy) once this becomes available.
Obviously the ideal solution would be for AWS to let users configure the session length, but so far they've been unwilling to allow this.

Machine to machine REST authentication

If you have multiple RESTful web services running on different subdomains (accounts.site.com, training.site.com, etc) what is a good authentication mechanism when one service needs to consume another?
Human authentication is easy because they supply their login credentials and get back a JSON Web Token which is then sent with every request to authenticate them.
A machine having a username and password just seems odd, so I was wondering: what are some proven ways of solving this problem?
It depends. From the service's perspective, the other service is just a REST client, so let's stick with these terms.
If you want to access different user accounts with your REST client, then you must register your client with the service and you will get an API key. Users can grant privileges to that API key, so you can act in the name of the service's users if they allow it.
On the other hand, if your client only wants to access its own account, then it is just a regular user of the service and it can have a username and password just like the other users.
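Either way, the calling service just presents its key (or token) on every request; here is a sketch with the Python requests library, with a made-up URL, header value and payload:

    import requests

    API_KEY = "key-issued-at-registration"  # ideally loaded from a secrets store

    resp = requests.post(
        "https://accounts.site.com/api/v1/sync",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"since": "2024-01-01T00:00:00Z"},
        timeout=10,
    )
    resp.raise_for_status()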

Skip IDP authentication in SAML

I am creating a service provider which talks to a third-party IdP for authentication. But I have a concern: I have a set of dedicated machines (desktops, tablets) which are highly trusted, so is there a way in SAML that, when a request is sent from such machines, the user is directly authenticated without the need to type a username and password?
You want a user that tries to access a resource from his desktop (which is trusted) to be automatically authenticated? If this is the case, it seems that you need to identify the user using Active Directory or something similar.
If that is the case, read a bit about Kerberos or ADFS - it might serve your needs.
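To give an idea of what the client side can look like once Kerberos is in place: on a trusted, domain-joined machine the existing ticket can be used via SPNEGO, so no password prompt is needed. A sketch assuming the requests-gssapi package and a placeholder URL:

    import requests
    from requests_gssapi import HTTPSPNEGOAuth

    # Uses the Kerberos ticket the user already has from logging into the
    # trusted machine; no username or password is typed
    resp = requests.get(
        "https://sp.example.com/protected",
        auth=HTTPSPNEGOAuth(),
    )
    print(resp.status_code)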