Below is my understanding of Kerberos delegation:
1] Unconstrained delegation (since W2000): Windows 2000 allows an authorized user to forward a TGT: he asks for a forwardable TGT (Authentication Service exchange) and can then request a forwarded TGT (Ticket Granting Service exchange). He may then forward this TGT (with its session key) to a service in a KRB_CRED message. The service may then request a service ticket on the user's behalf for any service, and may in turn forward the TGT and session key to any other service [proxiable/proxy tickets are out of scope, since they seem unused in practice due to the prerequisites they require],
2] Constrained delegation (since W2003): An IT admin can configure a service account in AD to be authorized to request a service ticket on behalf of a user for a defined set of services (SPNs): the "Allowed-To-Delegate-To" (A2D2) attribute. Moreover, a new extension (S4U2Proxy) allows a service to request a service ticket on the user's behalf for another service, provided it can present a valid, forwardable service ticket to itself obtained on behalf of the user (so there is no need anymore to obtain the user's TGT and its associated session key). To get such a forwardable ticket for itself, the service must be flagged "Trusted-To-Authenticate-For-Delegation" (T2A4D),
3] Protocol transition (S4U2self) (since W2003): A service may ask the KDC for a service ticket to itself on behalf of a user without presenting any evidence to the KDC that the user was authenticated by Kerberos. This is enabled with the "Use any authentication protocol" option in the delegation configuration of the account in AD. The service can then use constrained delegation if the proper flags (T2A4D and A2D2) are set for it,
4] Constrained delegation across domains (since W2012):
¤ Whereas before it wasn't possible to use delegation across domains (because a SPN outside the current domain could not be set in A2D2), the authorization can now be configured on the target service instead of the source service (conceptually, this is more logical).
¤ A specific SID ($$) may be configured on the target service to authorize or refuse delegation when the user was not explicitly authenticated by the KDC (i.e., when a service used its protocol transition ability to get a ticket to itself for that user): for this to work, I guess the service ticket granted to the source service for the target service must carry this information,
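As a minimal sketch of where these settings live: the behaviours above are mostly driven by bits of the AD userAccountControl attribute (plus the msDS-AllowedToDelegateTo list for A2D2). The flag values below are Microsoft's documented bit masks; the example account value is hypothetical.

```python
# Documented userAccountControl bit masks that drive Kerberos delegation:
TRUSTED_FOR_DELEGATION = 0x80000            # unconstrained delegation (W2000-style)
NOT_DELEGATED = 0x100000                    # "Account is sensitive and cannot be delegated"
TRUSTED_TO_AUTH_FOR_DELEGATION = 0x1000000  # T2A4D: protocol transition allowed

def delegation_settings(uac):
    """Return which delegation-related flags are set on an account."""
    return {
        "unconstrained": bool(uac & TRUSTED_FOR_DELEGATION),
        "protected_from_delegation": bool(uac & NOT_DELEGATED),
        "protocol_transition": bool(uac & TRUSTED_TO_AUTH_FOR_DELEGATION),
    }

# Hypothetical service account configured for protocol transition:
settings = delegation_settings(TRUSTED_TO_AUTH_FOR_DELEGATION)
```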
What is not clear to me:
1) After reading MS articles, I understand that forwarded TGTs cannot be used for constrained delegation, although the "forwarded" flag is clearly present in the TGT. Indeed, this is quite different from a service using its own TGT together with a service ticket to itself, because with the user's TGT the service is authenticated as the user when it requests a service ticket. Is there any point in using the "address field" of a ticket request, which is intended to contain the IP/DNS address of the requestor (and could of course be modified)? Is there any parameter to refuse a ticket request when a forwarded TGT is used? Why not use the (client) address field to check the associated rights? Is it because it is unreliable (the address may be spoofed) or because it is not precise enough to identify a SPN?
2) Introducing this SID implies to me that the forwardable service ticket contains a specific piece of information saying whether the ticket was obtained through the S4U2self extension or directly by the user. But I don't know what it is,
3) Forwarded TGTs seem to be "deprecated" if they mean delegation cannot be constrained. So I don't understand why klist shows one forwardable and one forwarded TGT (so, set up for delegation) in my ticket cache (Windows 10 machine in a corporate environment, observed at two different companies). Is this a standard and recommended practice, or am I missing something?
Thanks a lot for your feedback!
Have a great day.
Arachnide.
So, when management tells us our website needs to "support SSO through SAML 2.0", with no additional details, what are they thinking?
What will our customers expect?
Note: this is not an open website where anyone can join. To log in you need to be a configured user in the system. The customer's admins need to create an account in our system for each user.
So we aren't going to let just anyone who has an account with an IdP in to our website. We'll have to have some mechanism for mapping a SAML identity to our users.
How would our customers expect that to work?
Based on hints in your question, I am going to presume that you will be acting as a service provider.
To be what I would call a "good" service provider, I would expect the following:
You sign your AuthnRequests.
You provide a metadata endpoint that is kept up to date with your SP metadata to include current public keys for encrypting attributes (if necessary) to be sent to you as well as validating your AuthnRequest signatures.
You support dynamic consumption of my identity provider's metadata endpoint to keep your side of the connection up to date, especially with regard to my signing certificate.
You expose management of my identity provider configuration inside of your service provider mechanism to my IdP administrators through a web or API interface.
You either support a mechanism to automatically manage my users (like via SCIM or Graph or something else), or you support Just-In-Time provisioning based on an incoming assertion.
You allow me to decide my SAML Name ID format, and that format is per-tenant. As an example, I may want to use email address as the identifier, while another IdP may want to use sAMAccountName; e.g., john.doe@domain.com vs. johndoe.
You support Service-Provider-Initiated SSO. That means that a user who shows up at partner1.yourdomain.com gets redirected for authentication to that partner's IdP, while going to partner2.yourdomain.com redirects to a different IdP.
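The last point can be sketched as a simple hostname-to-IdP routing table (all hostnames and endpoints here are hypothetical):

```python
# Sketch: per-tenant SP-initiated SSO routing. Each tenant hostname maps
# to that tenant's own IdP SSO endpoint (hypothetical names throughout).
TENANT_IDP = {
    "partner1.yourdomain.com": "https://idp.partner1.example/saml/sso",
    "partner2.yourdomain.com": "https://idp.partner2.example/saml/sso",
}

def sso_redirect_target(host):
    """Pick the IdP SSO endpoint for the tenant that owns this hostname."""
    try:
        return TENANT_IDP[host]
    except KeyError:
        raise ValueError("unknown tenant host: " + host)
```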
As a service provider, you should make using your service easy and secure. By shifting to SAML, it allows you to get out of the business of password and user management because you get to put that back on the identity provider. It allows your users to not have to type in a password (or more, if you're doing MFA) to use your service, removing friction caused by security. It allows you to put the onus of authenticating the user back on the organization that owns the identity.
Your customers would expect that if they have an application that uses the SAML 2.0 client-side stack then when the application sends an AuthnRequest, they will see a login page on your site and once authenticated, the application will receive a set of assertions (claims) from your IDP via an AuthnResponse.
One of these assertions is NameID. This is the "primary key" between their system and yours. Normally this is UPN or email.
This mapping is outside of the SAML spec. There needs to be some kind of "on-boarding" for the customers.
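One way that on-boarding could look, as a sketch (tenant names and attribute choices are hypothetical): each tenant declares at setup time which attribute its NameID carries, and the SP resolves incoming assertions against its local user store accordingly.

```python
# Sketch: per-tenant NameID mapping established at on-boarding time.
# Tenant names and attribute choices below are hypothetical.
NAMEID_ATTRIBUTE = {
    "tenant-a": "email",           # NameID carries john.doe@domain.com
    "tenant-b": "sAMAccountName",  # NameID carries johndoe
}

def local_user_lookup(tenant, name_id, directory):
    """Map the SAML NameID from an assertion to a local account."""
    attr = NAMEID_ATTRIBUTE[tenant]
    for user, attrs in directory.items():
        if attrs.get(attr) == name_id:
            return user
    return None  # no local account: reject, or JIT-provision here
```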
I’ve done a few OpenID integrations, so I understand the problem I’m about to describe is something OpenID and OAuth can solve, but I’m new to SAML and trying to wrap my head around one particular use case:
So, say a user visits Site1, which asks the user to confirm which one of the other sites he or she belongs to (Site2, Site3, etc.)
Site1 cannot authenticate user and relies on Site2 or Site3 to do authentication, so it presents user a list of sites to authenticate against.
User chooses either Site2 or Site3, performs authentication there and is redirected back to Site1. Site1 acknowledges the user and it now knows where the user is from.
Question: Is this a valid problem for SAML to solve?
There are two problems in your question:
Service provider (Site1) needs to redirect the user to the identity provider (Site2, Site3, ...) that "owns" the user identity (can authenticate them)
Authentication at the identity provider and propagation of this fact back to the service provider
SAML can solve both of these but read on for caveats:
One of the profiles in SAML is an Identity Provider Discovery profile. Its definition from the spec (section 4.3):
...a profile by which a service provider can
discover which identity providers a principal is using with the Web
Browser SSO profile. In deployments having more than one identity
provider, service providers need a means to discover which identity
provider(s) a principal uses. The discovery profile relies on a cookie
that is written in a domain that is common between identity providers
and service providers in a deployment. The domain that the deployment
predetermines is known as the common domain in this profile, and the
cookie containing the list of identity providers is known as the
common domain cookie.
As you can see, this profile relies on a common domain cookie that is issued by a discovery service hosted on a common (shared) domain:
When a service provider needs to discover which identity providers a
principal uses, it invokes an exchange designed to present the common
domain cookie to the service provider after it is read by an HTTP
server in the common domain.
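Concretely, the profile specifies that the cookie value is a space-separated list of base64-encoded IdP identifiers, with the most recently used IdP last. A minimal parser might look like this:

```python
import base64

def parse_common_domain_cookie(value):
    """Decode a SAML common domain cookie: a space-separated list of
    base64-encoded IdP entityIDs, most recently used last."""
    return [base64.b64decode(tok).decode("utf-8") for tok in value.split() if tok]

def most_recent_idp(value):
    """The profile says the SP should prefer the last (most recent) entry."""
    ids = parse_common_domain_cookie(value)
    return ids[-1] if ids else None
```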
The common domain (and associated cookie) requirements were an early take, and some developers didn't think it was an elegant approach that met everyone's needs. This led to the profile being revised and later issued as a separate specification called Identity Provider Discovery Service Protocol and Profile. From this spec:
This specification defines a browser-based protocol by which a
centralized discovery service can provide a requesting service
provider with the unique identifier of an identity provider that can
authenticate a principal. The profile for discovery defined in section
4.3 of [SAML2Prof] is similar, but has different deployment properties, such as the requirement for a shared domain. Instead, this
profile relies on a normative, redirect-based wire protocol that
allows for independent implementation and deployment of the service
provider and discovery service components, a model that has proven
useful in some large-scale deployments in which managing common domain
membership may be impractical.
The mention of large-scale deployments is important. The first attempt (cookie-based profile) was simple but messy; the improved spec does everything the "SAML way"...and greatly increases the complexity of the implementation. It's only worth it if your collection of identity providers is sizable, like all universities in your country for example.
There are many non-SAML options for solving the identity provider discovery problem. The simplest option is "ask the user" by employing a UX-friendly technique for the user to select their identity provider. This blog does a good job of summarizing these options. For a real-world, complex solution to this problem take a look at Swiss universities' implementation.
This is a common scenario where SAML is a good fit. Take a look at SAML Technical Overview for more details.
Note: OAuth does not authenticate the user; it authenticates the client app. OpenID Connect is based on OAuth and can authenticate the user via the ID token. The ID token and the associated exchange were heavily influenced by SAML.
IdPs handle this via Home Realm Discovery (HRD).
If an IDP is configured with Sites 1, 2 and 3 then when the application redirects to the IDP, there will be an HRD screen asking the user to pick one of the three.
The user selects one, authenticates and is redirected through the IDP back to the application.
It's not a protocol feature - more of an IDP feature - since it does this irrespective of the protocol.
I am designing the authentication system for a piece of software and need some guidance on how SASL and Kerberos services interact.
Here is the situation:
I have a client/server application that is itself pretty standard: only registered users can perform actions. As an MVP I would typically implement a pretty standard solution:
Database stores username + salted hash of password
Authentication attempt from client over HTTP includes username/password over TLS
Backend checks that username/password are valid and returns a bearer token that can be used for the duration of the session
In this case, however, there is a complicating factor. Some users of our system use Kerberos internally for user authentication for all internal services. As a feature, we would like to integrate our software with Kerberos so that they don't have to manage an additional set of users.
A more senior engineer recommended I look into SASL so that we might support several auth protocols simultaneously; standard customers can authenticate their users with the PLAIN method (over TLS), for instance, while other customers could limit authentication to only the GSSAPI method.
Up to this point, I have a clear idea of how things might be set up to achieve the desired goals. However, there is one more complicating factor. Some of the customers that want our system's auth to support Kerberos have other resources that our system will rely on (like HDFS) that also require authentication with Kerberos.
My understanding of Kerberos is this:
A client authenticates with Kerberos's ticket granting server
Upon successful authentication a TGT is returned that can be used for any future interaction with any Kerberos service in the system
Now to the point: How can I make all of these technologies work in harmony? What I want is:
- Client logs into my server
- My server authenticates client using customer's Kerberos system
- Client is given the OK
- Client asks for something from my server
- My server needs access to customer's HDFS, which requires Kerberos auth
- Server authenticates without asking the client to authenticate again
One possible solution I see to this is the following:
Make my server itself a Kerberos user
When the server needs to perform an action on HDFS, have it authenticate using its own credentials
There is a big downside to this, though: pretend the customer's Kerberos system has two realms, one with access to HDFS and one without. If users of both realms are allowed to use my system, but only one set can use HDFS, then I will need my own logic (and potentially objects in a DB) to determine who can perform actions that require access to HDFS and who cannot.
Any pointers are going to be super helpful; in case it isn't obvious, I am quite new to all of this.
Thanks in advance!
It's not clear exactly what your question(s) are, but I'll do my best to address everything I think you're asking.
Firstly, I just want to clear this up:
Upon successful authentication a TGT is returned that can be used for
any future interaction with any Kerberos service in the system
That's not quite correct. The TGT enables the user to request service
tickets from the KDC for specific services. The service ticket is what
gives the user access to a specific service. The TGT is used to prove the
user's identity to the KDC when requesting a service ticket.
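To make the distinction concrete, here is a toy model of the two exchanges (no real cryptography; all names are hypothetical): the TGT is only ever presented to the KDC, and each application service requires its own service ticket.

```python
# Toy model of the two-step Kerberos flow. The TGT only talks to the KDC;
# each service needs its own service ticket. No real crypto is involved.
def as_exchange(kdc, user, password):
    """Authentication Service exchange: password proof -> TGT."""
    if kdc["users"].get(user) != password:
        raise PermissionError("pre-authentication failed")
    return {"type": "TGT", "user": user}

def tgs_exchange(kdc, tgt, service_spn):
    """Ticket Granting Service exchange: TGT -> ticket for ONE service."""
    if tgt.get("type") != "TGT":
        raise PermissionError("a TGT is required to talk to the TGS")
    if service_spn not in kdc["services"]:
        raise KeyError("unknown SPN: " + service_spn)
    return {"type": "service_ticket", "user": tgt["user"], "spn": service_spn}
```

The client presents the service ticket, never the TGT, to the application service; handing the TGT itself to a service is precisely what the first kind of delegation does.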
Client asks for something from my server - My server needs access to
customer's HDFS, which requires Kerberos auth - Server authenticates
without asking the client to authenticate again
This is a common enough problem and the Kerberos solution is called delegation. You should try to use Kerberos delegation in preference to coming up with your own solution. That said, how well supported it is depends on the technology stack you're using.
There are 2 kinds of delegation supported by Kerberos. The first kind is just called "delegation" and it works by sending the user's TGT to the service along with the service ticket. The service can then use the TGT to get new service tickets from the KDC on behalf of the user. The disadvantage of this approach is that once a service gets a user's TGT, it can effectively impersonate that user to any service that the user would be able to access. You might not want the service to have that level of freedom.
The second kind of delegation is called constrained delegation (also known as Services for User, or S4U). With this approach, the client doesn't send its TGT to the service, but the service is allowed to ask the KDC for a service ticket to impersonate the user anyway. The services that can do this have to be whitelisted on the KDC, along with the services that they can request tickets for. This ultimately makes for a more secure approach because the service can't impersonate the user to just any service.
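The whitelist check behind constrained delegation can be sketched as follows (a toy model with hypothetical SPNs, not a real KDC implementation):

```python
# Toy model of the KDC-side check behind constrained delegation (S4U2Proxy):
# only whitelisted (service, target) pairs may impersonate users.
ALLOWED_TO_DELEGATE_TO = {
    "HTTP/web.example": {"hdfs/namenode.example"},  # hypothetical SPNs
}

def s4u2proxy(service_spn, evidence_ticket, target_spn):
    """Issue a ticket for target_spn on behalf of the user named in
    evidence_ticket, but only if the whitelist allows it."""
    if not evidence_ticket.get("forwardable"):
        raise PermissionError("evidence ticket must be forwardable")
    if target_spn not in ALLOWED_TO_DELEGATE_TO.get(service_spn, set()):
        raise PermissionError(service_spn + " may not delegate to " + target_spn)
    return {"user": evidence_ticket["user"], "spn": target_spn}
```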
A more senior engineer recommended I look into SASL so that we might
support several auth protocols simultaneously; standard customers can
authenticate their users with the PLAIN method (over TLS), for
instance, while other customers could limit authentication to only the
GSSAPI method
Yes this is a good idea. Specifically, I'd recommend that you use the exact same session authentication mechanism for all users. The only difference for Kerberos users should be the way in which they get a session. You can set up a Kerberos-protected login URL that gets them a session without challenging them for credentials. Any user that hits this URL and doesn't have Kerberos credentials can just be redirected to a login page, which ultimately gets them the same session object (once they log in).
On the back end, the credential checking logic can use SASL to pass Kerberos users through to the KDC, and others through to your local authentication mechanism. This gives you a seamless fallback mechanism for situations when Kerberos doesn't work for the Kerberos users (which can happen easily enough due to things like clock skew etc.)
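As a sketch of that back-end dispatch (customer names and mechanism lists are hypothetical), the server can keep a per-customer mechanism preference and negotiate against whatever the client offers:

```python
# Sketch: picking a SASL mechanism per customer, with PLAIN (over TLS) as
# the fallback when GSSAPI/Kerberos is unavailable. Names are hypothetical.
CUSTOMER_MECHS = {
    "acme": ["GSSAPI", "PLAIN"],   # Kerberos first, password fallback
    "initech": ["PLAIN"],          # password-only customer
}

def choose_mechanism(customer, client_offers):
    """First mechanism both sides support, in server preference order."""
    for mech in CUSTOMER_MECHS.get(customer, ["PLAIN"]):
        if mech in client_offers:
            return mech
    raise ValueError("no mutually supported SASL mechanism")
```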
There is a big downside to this, though: pretend the customer's
Kerberos system has two realms: one with access to HDFS and one
without. If users of both reals are allowed to use my system, but only
one set can use HDFS, then I will need my own logic (and potentially
objects in a DB) to determine who can perform actions that will
require access to HDFS and who cannot.
This kind of thing is exactly the reason that you should use Kerberos delegation instead of coming up with your own custom solution. With Kerberos delegation, the KDC administrator controls who can access what. If your service tries to impersonate a user to HDFS, and they are not allowed to access it, that authentication step will just fail and everything will be ok.
If you try to shadow the KDC's authorization rules in your own application, sooner or later they'll get out of sync and bad things will happen.
If you have multiple RESTful web services running on different subdomains (accounts.site.com, training.site.com, etc) what is a good authentication mechanism when one service needs to consume another?
Human authentication is easy because they supply their login credentials and get back a JSON Web Token which is then sent with every request to authenticate them.
A machine having a username and password just seems odd so I was wondering what are some proven ways of solving this problem?
It depends. From the service's perspective, the other service is just a REST client, so let's stick with these terms.
If you want to access different user accounts with your REST client, then you must register your client with the service and you will get an API key. Users can grant privileges to that API key, so you can act in the name of the service's users if they allow it.
On the other hand if your client wants to access only its own account, then it is just a regular user of the service and it can have username and password just like the other users.
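A minimal sketch of the API-key variant (header name, key format, and key registry are all hypothetical):

```python
# Sketch: machine-to-machine call authenticated with an API key instead
# of a username/password. Key values and storage are hypothetical.
VALID_API_KEYS = {"k-3f9a": "training.site.com"}  # key -> registered client

def build_request_headers(api_key):
    """Headers the calling service attaches to each request."""
    return {"Authorization": "Bearer " + api_key, "Accept": "application/json"}

def authenticate_client(headers):
    """Server side: resolve the API key back to a registered client."""
    scheme, _, key = headers.get("Authorization", "").partition(" ")
    if scheme != "Bearer" or key not in VALID_API_KEYS:
        return None
    return VALID_API_KEYS[key]
```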
I am creating a service provider which talks to a third-party IdP for authentication. But I have a concern: I have a set of dedicated machines (desktops, tablets) which are highly trusted, so is there a way in SAML that, when a request is sent from such a machine, the user is directly authenticated without the need to type a username and password?
You want a user who tries to access a resource from his (trusted) desktop to be automatically authenticated? If this is the case, it seems that you need to identify the user using Active Directory or something similar.
If so, look into Kerberos or ADFS; either might serve your needs.