Change RelayState in AD FS - single-sign-on

Consider the following situation: we're currently in a migration phase where the majority of our users should still be forwarded to the existing application A. Other users that fulfil certain criteria (let's call them beta testers) should instead be forwarded to the new application B.
Users reach our AD FS with a POST request that contains the SAMLResponse and the RelayState. The RelayState parameter tells our AD FS the desired target application. Up to now it always contains "site A" since the users don't know about site B yet ;-)
I'm wondering if there's a way to dynamically change the process by which our AD FS determines the target application based on the value of the RelayState parameter.
So what I'm looking for is a way to somehow modify the RelayState based on a certain claim the user provides. E.g. if the user has a "beta-tester" entry in her role claim, then our AD FS should forward her to site B instead of site A.
Is there a way to hook into the AD FS processing pipeline? The only thing I found so far is this article describing how to "inject" a custom authentication method, but that's obviously not what I'm looking for.
So could anybody tell me if there are any other extension points I could use to achieve what I described above?

Sorry, no - there is no way to dynamically change RelayState.
AD FS is locked down (as it is a security system) and doesn't have extension points.
Could you have two RPs during the transition?

One approach is to set up a proxy site where you can apply custom logic as necessary for scenarios like this. In my experience there are numerous times when it's handy to have a point of entry into the federation process, i.e. a pseudo-extension point, where you can apply custom logic. So everyone from the IdP may go to https://proxy.mysite.com, and that site would then decide, based on claims and perhaps query string, posted variables or header attributes, where to send (redirect) the user next: https://a.mysite.com or https://b.mysite.com.
DNS can also be folded in, to do things like point https://a.mysite.com at the proxy site; the proxy can then look at the hostname of the request and know that the user intended to go to a.mysite.com, but decide whether the user is a beta tester and direct them to b.mysite.com or the actual A site.
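A minimal sketch of the proxy idea in Flask. All names (the header, the role value, the site URLs beyond those in the answer) are illustrative; in a real deployment the claims would come from a validated SAML assertion or token, handled by middleware not shown here.

```python
# Hypothetical proxy site: inspects the user's claims and redirects
# to site A or site B. Claim extraction is faked via a header here;
# a real proxy would validate the incoming federation token first.
from flask import Flask, redirect, request

app = Flask(__name__)

SITE_A = "https://a.mysite.com"
SITE_B = "https://b.mysite.com"

def get_roles():
    # Placeholder for real claim extraction from a validated token.
    raw = request.headers.get("X-User-Roles", "")
    return [r.strip() for r in raw.split(",") if r.strip()]

@app.route("/", methods=["GET", "POST"])
def route_user():
    # Beta testers go to the new site; everyone else to the old one.
    target = SITE_B if "beta-tester" in get_roles() else SITE_A
    return redirect(target, code=302)
```

The same dispatch logic can take the request hostname into account, as described above, by branching on `request.host` as well.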

Related

What is the best approach to stop your platform's users from "sniffing" frontend requests to the backend and modifying them?

So I have a platform that works like this: users can create accounts by logging in with their Google account (I use Auth0) and then they can create "Projects" which contain lots of other stuff not relevant to my current problem (like to-do lists, the ability to upload files etc.; they can also edit a project by changing some of its attributes like name, description, theme and so on). There is a home page where everyone can see each other's projects and access them (but not upload files or change the tasks in the to-do lists; that is possible only for the person that owns the project).
By using a tool like Burp, people can see the request made from frontend to backend, for example when accessing one of the projects, and modify it on the fly.
This is what it looks like inside Burp when they access one of the projects:
As you can see there is a GET request to /projects/idOfTheProject; they can replace the GET with DELETE, for example, and they will successfully delete the project; they can also see what is sent to the backend when a project is edited (name changed, description, thumbnail picture etc.) and change anything they want about it.
How should I prevent this?
What I've looked at so far:
a. JWT - probably the best fit for my situation, but requires the most work (my platform is almost finished with no such security measure implemented yet, so I may need to rewrite a lot of things in both backend and frontend)
b. Sending the id of the user that initiated the action to the backend and verifying that they have the necessary privileges - the worst solution, as users can access each other's profiles and see the id, then just change another field in the request's JSON
c. Having a sort of token for each user and sending that instead of the user's id - this way somebody can't get your token just by looking at the communication between frontend and backend (unless they are using YOUR account). That token could maybe come from Auth0 when they create their account, if it provides something like that; or I can just create it myself and store it alongside the other user variables. You would still see the requests in plain text, but even if you modified something you would still have to guess the owner's token, which would be practically impossible.
For the frontend I use Next.js and for the backend Flask.
Thank you in advance!
The TL;DR is that you don’t. A determined user will always be able to see what requests are being sent out by the code running on their computer and over their network. What you are describing when asking how to prevent people from “sniffing” these requests is security through obscurity, which isn’t actually secure at all.
What you should do instead is have an authorization system on your backend which will check if the current user can perform a given action on a given resource. For example, verifying that a user is an administrator before allowing them to delete a blog post, or making sure that the current user is on the same account as another user before allowing the current user to see details about the other user.
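Since the question mentions a Flask backend, here is a minimal sketch of such a server-side check, assuming a project record with an `owner_id` and a `current_user_id()` resolved from the verified session/JWT by code not shown. All names are illustrative.

```python
# Server-side authorization sketch: the DELETE verb succeeds only if
# the authenticated user owns the project, regardless of what the
# client sends. The in-memory dict stands in for a real database.
from flask import Flask, abort, jsonify

app = Flask(__name__)

PROJECTS = {1: {"id": 1, "owner_id": "alice", "name": "Demo"}}

def current_user_id():
    # Placeholder: derive this from the verified Auth0 token on the
    # server, never from a user-supplied field in the request body.
    return "bob"

@app.route("/projects/<int:project_id>", methods=["DELETE"])
def delete_project(project_id):
    project = PROJECTS.get(project_id)
    if project is None:
        abort(404)
    # The crucial check: ownership is verified on every request.
    if project["owner_id"] != current_user_id():
        abort(403)
    del PROJECTS[project_id]
    return jsonify({"deleted": project_id})
```

With this in place it no longer matters that a user can see or replay the request in Burp; the modified request simply gets a 403.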

REST API design for resource modification: catch all POST vs multiple endpoints

I'm trying to figure out best or common practices for API design.
My concern is basically this:
PUT /users/:id
In my view this endpoint could be used for a wide array of functions.
I would use it to change the user name or profile, but what about ex, resetting a password?
From a "model" point of view, that could be a flag, a property of the user, so it would "work" to send a modification.
But I would expect more something like
POST /users/:id/reset_password
But that means that for almost every modification I could create a different endpoint according to the meaning of the modification, e.g.
POST /users/:id/enable
POST /users/:id/birthday
...
or even
GET /users/:id/birthday
compared to simply
GET /users/:id
So basically I don't understand when to stop using a single POST/GET endpoint and start creating different endpoints instead.
It looks to me like a simple matter of choice; I just want to know if there is some standard way of doing this or some guideline. After reading and looking at examples I'm still not really sure.
Disclaimer: In a lot of cases, people ask about REST when what they really want is an HTTP compliant RPC design with pretty URLs. In what follows, I'm answering about REST.
In my view this endpoint could be used for a wide array of functions. I would use it to change the user name or profile, but what about, e.g., resetting a password?
Sure, why not?
I don't understand when to stop using a single POST/GET and creating instead different endpoints.
A really good starting point is Jim Webber's talk Domain Driven Design for RESTful systems.
First key idea - your resources are not your domain model entities. Your REST API is really a facade in front of your domain model, which supports the illusion that you are just a website.
So your resources are analogous to documents that represent information. The URI identifies the document.
Second key idea - that URI is used by clients to cache representations of the resource, so that we don't need to send requests back to the server all the time. Instead, we have built into HTTP a bunch of standard ways for communicating caching meta data from the server to the client.
Critical to that is the rule for cache invalidation: a successful unsafe request invalidates previously cached representations of the same resource (ie, the same URI).
So the general rule is, if the client is going to do something that will modify a resource they have already cached, then we want the modification request to go to that same URI.
Your REST API is a facade to make your domain model look like a web site. So if we think about how we might build a web site to do the same thing, it can give us insights to how we arrange our resources.
So to borrow your example, we might have a web page representation of the user. If we were going to allow the client to modify that page, then we might think through a bunch of use cases (enable, change birthday, change name, reset password). For each of these supported cases, we would have a link to a task-specific form. Each of those forms would have fields allowing the client to describe the change, and a url in the form action to decide where the form gets submitted.
Since what the client is trying to achieve is to modify the profile page itself, we would have each of those forms submit back to the profile page URI, so that the client would know to invalidate the previously cached representations if the request were successful.
So your resource identifiers might look like:
/users/:id
/users/:id/forms/enable
/users/:id/forms/changeName
/users/:id/forms/changeBirthday
/users/:id/forms/resetPassword
Where each of the forms submits its information to /users/:id.
That does mean, in your implementation, you are probably going to end up with a lot of different requests routed to the same handler, and so you may need to disambiguate them there.
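The "many forms, one target URI" shape can be sketched as a single handler that disambiguates on a field in the submitted payload. The operation names below mirror the forms above; the `op` field and the in-memory user dict are illustrative assumptions.

```python
# One handler behind /users/:id receives every form submission and
# dispatches on an operation discriminator in the payload.
def handle_user_update(user: dict, payload: dict) -> dict:
    op = payload.get("op")
    if op == "enable":
        user["enabled"] = True
    elif op == "changeName":
        user["name"] = payload["name"]
    elif op == "changeBirthday":
        user["birthday"] = payload["birthday"]
    elif op == "resetPassword":
        user["password_reset_pending"] = True
    else:
        raise ValueError(f"unsupported operation: {op!r}")
    return user
```

Because every form targets the same URI, a successful submission invalidates the client's cached representation of /users/:id, which is exactly the cache-invalidation behavior described above.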

Handling User Preferences/States in REST API

We're starting to migrate our Website to a REST Service based system and are in the process of developing the core right now.
In our current setup a user has one or more "accounts" assigned which define what data he can see on the website. Only one account can be active for a given user at any time. Right now we store the selected account in the database and use it to filter all queries.
Now I'm not sure how to handle this properly in a REST environment. Possible solutions I found are:
Sending the requested account with every request
Storing the current account in the auth token. (We're using JWT for that)
Having the current account stored on the server and calling a specific resource to change it
Each of these has its pros and cons for our setup. Currently we're using the 3rd approach on our website. But what would be the correct way to handle such a thing in a REST environment?
Yeah, the design you are dealing with is fairly bad, and what you really want to do is remove this state from the system completely.
For that reason the first option is by far superior:
Sending the requested account with every request
If this is simply an id, there's a very simple way to do this: just prefix all your (relevant) routes/URIs with the account id. For example:
http://api.example.org/accounts/{id}/...
This way the 'state' is maintained by virtue of which url you are accessing, and the server can be unaware of the state.
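A small Flask sketch of the account-scoped routing, with an in-memory dict standing in for the real data store (the resource name "orders" and the ids are illustrative):

```python
# Account-scoped routes: the account id lives in the URI, so every
# request is self-describing and the server stores no "current
# account" state between requests.
from flask import Flask, jsonify

app = Flask(__name__)

ORDERS = {"acct-1": ["order-a"], "acct-2": ["order-b"]}

@app.route("/accounts/<account_id>/orders")
def list_orders(account_id):
    # Filtering happens per request from the URI path segment.
    return jsonify(ORDERS.get(account_id, []))
```

Switching accounts then means nothing more than following links under a different /accounts/{id}/ prefix; no server-side "active account" needs to be toggled.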

Pass through incoming claims

Is it possible to send a SAML claim to ADFS and then have ADFS use values from that incoming claim to generate its own?
Basically, we need to send a) information about the user (fairly straightforward), and b) information about the target (the question at hand). The target is chosen by the user at time of SSO.
I've had it suggested to me to store the dynamic data in a database and then pull it in ADFS, but that runs the risk of creating issues if a user tries to open two targets in two windows at the same time.
EDIT: When a user SSOs into the target application, they will be taken to a screen that shows information about a specific item. We need to provide which item the user will need to see - and that will be selected by the user in the source application.
Essentially, user goes to Site A, clicks on Item 2, which SSOs them into Site B with Item 2 in context. If the user selects Item 7 instead, they SSO into Site B with Item 7 in context. This information isn't tied to the user because the user can access any of the items, but it needs to be provided in the SAML token to Site B.
First of all, "maweeras" is very authoritative. You can trust his answer/comment to be correct :-).
As maweeras said: to get it into the SAML token you have to use "claims rules". The trouble is getting it into the input set of the claim rules. That input can come from: a. something specific to the user (you said you don't want that; the multiple-windows problem could be fixed, but it is awful indeed), b. another SAML token issuer, or c. some very specific HTTP headers.
As you specify it, only option c remains. That is already tough, and I must warn you to be extremely cautious, because those headers may have specific consequences. Some people would say that you are abusing them; shooting yourself in the foot.
Not an answer, but a tip: you do not specify why you want it in the SAML token. If possible I would try to put it in a query parameter of a redirect from app A to app B. That will be preserved in the wctx (if authentication kicks in). You may already have to add several other things there to make sure the user gets the correct SSO (IdP, authn level etc.). If you need it signed, sign it before you stuff it into the redirect.

How much authentication is necessary with Restful PUT/POST?

I have a single organization that needs to send me a predetermined set of very sensitive data. My current process looks like this:
Created web page https://mywebsite.com/random/
The page requires HTTPS and only accepts POST/PUT requests or it redirects
The first thing I do is check for two variables, "unique_id_1" and "unique_id_2". Each of those variables must match exactly to accounts already in my database.
At this point, a malicious person would first have to find the web page, then figure out the names of those two variables and also fill them with the correct matching data. How likely is that scenario?
I've thought about adding a 3rd variable, "shared_key" and then share a string of text with the submitter to include with every PUT/POST request. How helpful would this be?
Another thought I had was both of us hashing the date with a pre-shared key. They send the variable and I match it against my own; that way the key changes every single day. Overkill?
What about Basic Authentication, is it even that secure? I currently reject and redirect incorrect visitors/data. It would seem that the website asking for authentication would only do more to tip off potential hacking programs.
It would seem that the website asking for authentication would only do more to tip off potential hacking programs.
This is a terrible reason to not implement authentication. You don't need to do it for the whole site, you can do it for just your API endpoint.
If your data is "very sensitive" you might want to consider some or all of the following in addition to HTTPS:
Make sure your HTTPS itself is secure with the Qualys SSL checker.
Have the API user register their IP address and lock down the service so that it answers only to that IP.
Require a client certificate (that you create), like with SSLVerifyClient require.
Use basic or digest authentication on top of the request. This obviates the need for your id1/id2 parameters.
If you feel sufficiently motivated, implement OAuth.
Instead of your 3rd "shared key" parameter, implement URL signing.
Also:
Don't compare a hash of a client date against a hash of server date. It will break near midnight, especially if client and server are in different timezones or have drifting clocks.