We have implemented a custom STS solution (there are sensible reasons for this, and I don't want to make that the point of the question). In this STS, users can update their profile data (first name, last name, email, etc.), and by doing so they obviously update their own claims.
However, I am still trying to figure out how to notify the RPs that the claims for one particular user have changed. I have checked other threads on SO regarding RP claims invalidation, and most answers point to http://garrettvlieger.com/blog/2010/03/refreshing-claims-in-a-wif-claims-aware-application/.
However, that link is (1) old and (2) assumes the RP is doing the update, which in my case it isn't.
So: how does one go about pinging each RP to update its claims? Is there a built-in mechanism, or do I have to roll my own?
Sub-question 1: an acceptable solution for me would be to invalidate each FedAuth cookie (on each RP), which means I just have to perform a massive sign-out across all the RPs. Any thoughts on this?
There's nothing explicitly in the WS-Fed protocol for this. Think about it -- it's like a driver's license -- how can the DMV invalidate your license? They'd need some back-channel and not every RP would be in a position to check.
Perhaps you could trigger a signout cleanup (wsignoutcleanup1.0) from the IdP -- that's certainly possible and would achieve what you're looking for in a forceful way.
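A rough sketch of what that could look like, assuming the STS keeps a list of RP URLs (the RP list and the page-rendering hook here are hypothetical; wa=wsignoutcleanup1.0 is the standard WS-Federation cleanup message that WIF relying parties understand):

```python
# Hypothetical list of RP base URLs; in a real STS this would come from
# the relying-party configuration store.
RELYING_PARTIES = [
    "https://rp1.example.com/",
    "https://rp2.example.com/",
]

def signout_cleanup_page() -> str:
    """Render the HTML the STS could return after a profile update.

    Each <img> points at an RP with wa=wsignoutcleanup1.0, so the user's
    own browser (which holds the FedAuth cookies) makes the request and
    each WIF relying party drops its local session. The next request to
    any RP then round-trips to the STS and picks up the fresh claims.
    """
    imgs = "\n".join(
        f'<img src="{rp}?wa=wsignoutcleanup1.0" alt="" width="0" height="0" />'
        for rp in RELYING_PARTIES
    )
    return f"<html><body>Your profile was updated.\n{imgs}\n</body></html>"
```

The important detail is that the cleanup requests go through the user's browser, since a server-to-server call would not carry the FedAuth cookies you want to clear.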
I am designing an API for a domain admin to manage user cookie sessions, specifically:
GET users/{userKey}/sessions to get a list of a user's all sessions
DELETE users/{userKey}/sessions/{sessionId} to delete a user's specific session
I want to expose another method for the admin to delete (reset) all of a user's sessions. I am considering two options, and I wonder which one is more RESTful:
DELETE users/{userKey}/sessions - with {sessionId} left blank to delete all sessions
POST users/{userKey}/sessions/reset
REST was never designed for bulk transaction support; it's for representing the state of individual objects. That said, API design is very opinionated and you have to balance REST "pureness" with functionality. If I were designing this, I would go with option 1 and use DELETE at the "sessions" endpoint, since you are removing all of the user's sessions and not just a single one or a subset.
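As a sketch of option 1 (Flask-style routes with a purely hypothetical in-memory store standing in for the real session backend):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for whatever session store the backend really uses.
# Keys are user keys, values are {session_id: session_data} dicts.
SESSIONS = {
    "alice": {"s1": {"created": "2024-01-01"}, "s2": {"created": "2024-01-02"}},
}

@app.route("/users/<user_key>/sessions", methods=["GET"])
def list_sessions(user_key):
    # GET users/{userKey}/sessions - list all of a user's sessions.
    return jsonify(SESSIONS.get(user_key, {}))

@app.route("/users/<user_key>/sessions/<session_id>", methods=["DELETE"])
def delete_session(user_key, session_id):
    # DELETE users/{userKey}/sessions/{sessionId} - revoke one session.
    SESSIONS.get(user_key, {}).pop(session_id, None)
    return "", 204

@app.route("/users/<user_key>/sessions", methods=["DELETE"])
def delete_all_sessions(user_key):
    # Option 1: DELETE on the collection revokes every session at once.
    SESSIONS.pop(user_key, None)
    return "", 204
```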
This answer may be opinion-based, so take it as such.
I would use DELETE if you are removing the resource (since you are going to be removing sessions).
If you keep the sessions (but change some data in those resources, e.g. a sliding expiration), then I would consider using PATCH, as you're modifying (resetting, not replacing) existing sessions.
I would go with DELETE users/sessions.
If you think about it, a reset is simply an admin dropping a session. The user gets their new session when/if they return. So a reset route does not make much sense as you are not reissuing sessions to all of your users in this action.
My preference is users/sessions rather than users/{*}/sessions. The latter route suggests that you want to remove all sessions of the parent resource, in this case a single user.
I want to expose another method for the admin to delete (reset) a user's all sessions. I am considering 2 options, I wonder which one is more Restful....
You probably want to be using POST.
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” -- Fielding, 2008.
HTTP DELETE isn't often the right answer
Relatively few resources allow the DELETE method -- its primary use is for remote authoring environments, where the user has some direction regarding its effect. -- RFC 7231
HTTP Methods belong to the "transfer documents over a network" domain, not to your domain.
REST doesn't actually care about the spelling of the target URI -- that's part of the point. General-purpose HTTP components don't assume that the URI has any specific semantics encoded into it; it is just an opaque identifier.
That means that you can apply whatever URI design heuristics you like. It's a lot like choosing a name for a variable or a namespace in a general-purpose programming language; the compiler/interpreter doesn't usually care whether the symbol "means" anything or not. We choose names that make things easier for the human beings who interact with the code.
And so it is with URIs as well. You'll probably want to use a spelling that is consistent with the other identifiers in your API, so that it looks as though the API were designed by "one mind".
A common approach starts from the notion that a resource is any information that can be named (Fielding, 2000). Therefore, it's our job first to (a) figure out the name of the resource that handles this request, then (b) figure out an identifier that matches that name. Resources are closely analogous to documents, so if you can think of the name of the document in which you would write this message, you are a good way toward figuring out the name (e.g., we write expiring sessions into the "security log", or into the "sessions log"; great -- now figure out the corresponding URI.)
If I ran the zoo: I imagine that
GET /users/{userKey}/sessions
would probably return a representation of the user's cookie sessions. Because this representation changes when we delete all of the user's sessions, I would want to post the delete request to this same target URI
POST /users/{userKey}/sessions
because doing it that way makes the cache invalidation story a bit easier.
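As a sketch of that idea (Flask-style, with a hypothetical in-memory store and an illustrative "expire-all" action name):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store; replace with the real session backend.
SESSIONS = {"alice": {"s1": {}, "s2": {}}}

@app.route("/users/<user_key>/sessions", methods=["GET"])
def get_sessions(user_key):
    return jsonify(sorted(SESSIONS.get(user_key, {})))

@app.route("/users/<user_key>/sessions", methods=["POST"])
def post_sessions(user_key):
    # The body says what to do; the target URI is the resource whose
    # representation changes, which lets generic HTTP caches invalidate
    # any stored response for GET /users/{userKey}/sessions.
    if request.get_json(silent=True) == {"action": "expire-all"}:
        SESSIONS[user_key] = {}
        return "", 204
    return "", 400
```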
I am investigating a slow login time and some profile synchronisation problems in a large enterprise AEM project. The system has around 1.5m users, and the website is served by 10 publishers.
The way this project is built is that SAML_login is enabled for all these end users, and there is a third-party IdP which I assume SAML_login talks to. I'm no expert on SSO and the SAML_login process, so as a first step I'm trying to understand whether this is the correct way to go.
Because of this setup and the number of users, the SAML_login call takes 15 seconds on average. This is becoming more unacceptable by the day as the user count rises. Even more importantly, the synchronization between the 10 publishers occasionally fails, so some users sometimes can't use the system as expected.
Because the users are stored in the JCR for SAML_login, you cannot even go and check the /home/users folder from the CRX browser; it times out, as it is impossible to show 1.5m rows at once. My educated guess is that this is also why the SAML_login call is taking so long.
I've come across articles that explain how to set up SAML_login on AEM, which makes it sound legitimate for the way it is used in this case. But in my opinion this is a very poor setup, as the JCR is not designed as a quick-access data store for this kind of usage scenario.
My understanding so far is that this approach might work well with a limited number of users, but with this many users it is not a workable approach. So my first question would be: am I right? :)
If I'm not right, there is certainly a bottleneck somewhere that I'm not aware of yet -- what might that bottleneck be, and how can it be improved?
The AEM SAML authentication handler has some performance limitations with the default configuration. When your browser does an HTTP POST request to AEM under /saml_login, it includes a base64-encoded "SAMLResponse" request parameter. AEM processes that response directly and does not contact any external systems.
Even though the SAML response is processed on AEM itself, the bottlenecks of the /saml_login call are the following:
Initial login, where AEM creates the user node for the first time - you can avoid this by creating the nodes ahead of time, for example with a script that creates the SAML user nodes (under /home/users) in advance (a sketch of such a script follows below).
During each login, when the session is first created - a token node is created under the user node at /home/users/.../{usernode}/.tokens. This can be avoided by enabling the encapsulated token feature.
Finally, the last bottleneck occurs when AEM saves the SAMLResponse XML under the user node (for later use, required for SAML-based logout). This can be avoided by not implementing SAML-based logout. The latest com.adobe.granite.auth.saml bundle supports turning off the saving of the SAML response; service packs AEM 6.4.8 and AEM 6.5.4 include this feature. To enable it, set the OSGi configuration properties storeSAMLResponse=false and handleLogout=false, and the SAML response will not be stored.
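For the first point, a rough sketch of a pre-creation script might look like the following. It assumes the out-of-the-box Granite authorizables POST servlet; the host, credentials, user list, and servlet path are assumptions you should verify against your AEM version before relying on this.

```python
import secrets
import requests

AEM_HOST = "https://author.example.com"       # assumption
ADMIN_AUTH = ("admin-user", "admin-password") # assumption
USER_IDS = ["jdoe", "asmith"]                 # e.g. an export from the IdP

def precreate_saml_users():
    """Pre-create user nodes under /home/users so the first SAML login
    does not have to pay the node-creation cost.

    Uses the Granite authorizables POST servlet that ships with AEM;
    check the path and parameters against your AEM version.
    """
    for user_id in USER_IDS:
        resp = requests.post(
            f"{AEM_HOST}/libs/granite/security/post/authorizables",
            data={
                "createUser": "",
                "authorizableId": user_id,
                # SAML users never log in with this password; a random
                # value just satisfies the servlet's requirements.
                "rep:password": secrets.token_urlsafe(16),
            },
            auth=ADMIN_AUTH,
            timeout=30,
        )
        resp.raise_for_status()
```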
I was at a developer conference where the speaker argued that the following set of URLs is not RESTful:
/users/username/changepassword
/users/username/resetpassword
The main reason given was that the same URLs might be used in different contexts and that this didn't facilitate HATEOAS in a meaningful way.
He then continued to argue that a more viable approach is to use the following URLs:
/account/changepassword
/administration/server/users/username/resetpassword
According to the speaker, this latter approach allowed each use case to have a specifically tailored (HTML) form for each URL, which could then be posted to the same URL. No more problems with the same URL being used in different contexts.
I would spontaneously say that neither of these URL sets is RESTful, simply because they are both centered around actions (verbs), which in my eyes do not really qualify as resources except in exceptional cases (like search). I feel like this setup is very RPC-like.
I would have suggested something more noun-like and granular, like:
//Change password
PUT /users/username/account/password
//Register reset
POST /users/username/account/password/resets
//Verify reset
PUT /users/username/account/password/resets/0/verification_code
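To make these proposed routes concrete, here is a minimal sketch, assuming a Flask-style server and purely hypothetical in-memory stores (field names like new_password_hash and verification_code are illustrative, not part of any standard):

```python
import secrets
from flask import Flask, request

app = Flask(__name__)

# Hypothetical in-memory stores; a real service would persist these.
PASSWORDS = {"alice": "old-hash"}
RESETS = {}  # username -> verification code sent by email

@app.route("/users/<username>/account/password", methods=["PUT"])
def change_password(username):
    # Change password: the authenticated user replaces the password resource.
    PASSWORDS[username] = request.get_json()["new_password_hash"]
    return "", 204

@app.route("/users/<username>/account/password/resets", methods=["POST"])
def register_reset(username):
    # Register reset: create a reset resource and mail out its code.
    RESETS[username] = secrets.token_urlsafe(8)
    return "", 202

@app.route("/users/<username>/account/password/resets/<reset_id>/verification_code",
           methods=["PUT"])
def verify_reset(username, reset_id):
    # Verify reset: supplying the mailed code completes the reset.
    body = request.get_json()
    if RESETS.get(username) == body.get("verification_code"):
        PASSWORDS[username] = body["new_password_hash"]
        return "", 204
    return "", 403
```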
What is your opinion? Is the speaker's approach RESTful or not, or is there simply not enough information here?
I agree; the whole idea of a RESTful interface (as I understand it) is to allow access to "resources". So neither of those URL schemes seems very nice to me.
Having said that, REST isn't set in stone; it is more of a guide than a set of rules. Some things don't sit that well with it, so you have to get as close as you can using just the HTTP verbs.
A password reset isn't a resource, but a password is. So I would say something along these lines for a password reset operation:
GET /users/antonyscott/password
PUT /users/antonyscott/password
The second call would require authentication of some sort derived from the first call and would pass in the new password. Actually, that's more of a straight password change than a reset. If you're after a reset (i.e. following a link in an email to confirm the reset), then what you had seems okay.
Obviously designing an API is an iterative process, so I would say have a go and see how it works, then refine it.
I don't mind so much about piracy and the like, but I want to ensure that the backend (Rails-based) isn't open to automated services that could DoS it, etc. Therefore I'd like to simply ensure that all access to the backend (which will be a few REST queries to GET and PUT data) is via a valid iPhone application, and not some script running on a machine.
I want to avoid the use of accounts so that the user experience is seamless.
My first intention is to hash the UDID and a secret together, and provide that (and the UDID) over an HTTPS connection to the server. This will either allow an authenticated session to be created or return an error.
If eavesdropped on, an attacker could take the hash and replay it, leaving this scheme open to replay attacks. However, shouldn't the HTTPS connection protect me against eavesdropping?
Thanks!
Like bpapa says, it can be spoofed. But then, as you say, you aren't worried about that so much as about somebody coming along and just sending a thousand requests to your server in a row, with your server having to process each one.
Your idea of the hash is a good start. From there, you could also append the current timestamp to the pre-hashed value and send that along as well. If the given timestamp is more than one day different from the server's current time, disallow access. This stops replay attacks that happen more than a day later, anyway.
Another option would be to use a nonce. Anybody can request a nonce from your server, but then the device has to append it to the pre-hash data before sending the hash to the server. Generated nonces would have to be stored, or could simply be the server's current timestamp. The device then appends the server's timestamp instead of its own to the pre-hashed data, allowing a much shorter window than a full day for a replay attack to occur.
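A minimal server-side sketch of that timestamp-plus-nonce check (plain Python rather than Rails; the field layout, shared secret, and one-day window are assumptions matching the description above):

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"app-secret"   # assumption: baked into the app
SEEN_NONCES = set()             # in production, use an expiring store
MAX_SKEW = 24 * 60 * 60         # one day, as suggested above

def is_valid(udid: str, timestamp: int, nonce: str, client_hash: str) -> bool:
    # Reject stale timestamps first; this bounds the replay window.
    if abs(time.time() - timestamp) > MAX_SKEW:
        return False
    # Reject nonces we have already seen inside the window.
    if nonce in SEEN_NONCES:
        return False
    # Recompute the hash over the same fields the client hashed.
    expected = hashlib.sha256(
        f"{udid}:{timestamp}:{nonce}".encode() + SHARED_SECRET
    ).hexdigest()
    if not hmac.compare_digest(expected, client_hash):
        return False
    SEEN_NONCES.add(nonce)
    return True
```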
Use SSL with a client certificate. Have a private key in your client and issue a certificate for it, and your web server can require this client cert to be present in order for the sessions to proceed.
I can't give code details for Rails, but architecture-wise it's the most secure thing to do, even though it might be a bit overkill. SSL with certificates is a standard industry solution and libraries exist for both the iPhone/client end and the server end, so you don't have to invent anything or implement much, just get them to work nicely together.
You could also consider an HMAC, like HMAC-SHA1, which is basically a standardization of the hashing schemes other people here are talking about. If you added nonces to it, you'd also be safe against replay attacks. For an idea of how to implement HMAC-SHA1 with nonces, you could look at the OAuth protocol (not the whole flow, just how it ties the nonce and other parameters together into an authenticated request).
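For illustration, a signature in that spirit might be computed like this (the canonicalization and parameter names are assumptions, loosely modeled on OAuth 1.0 signing rather than an exact implementation of it):

```python
import base64
import hashlib
import hmac

def sign_request(secret: bytes, method: str, url: str, params: dict) -> str:
    """Tie the method, URL, parameters and nonce together into one signature,
    so tampering with any of them (or replaying under a new nonce) fails."""
    canonical = "&".join(
        [method.upper(), url] + [f"{k}={params[k]}" for k in sorted(params)]
    )
    digest = hmac.new(secret, canonical.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Usage: params should include a nonce (and typically a timestamp) so the
# server can reject replays of a previously seen signature.
sig = sign_request(b"shared-secret", "POST", "https://api.example.com/scores",
                   {"nonce": "d3b07384", "timestamp": "1700000000", "score": "42"})
```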
There is no way to ensure it, since it can be spoofed.
If you really want to go this route (honestly, unless you're doing something really super mission-critical here, you are probably wasting your time), you could pass along the iPhone device token, or maybe hash it and then pass it along. Of course, you have no way to validate it on the server side or anything, but if a bad guy really wants to take you down, this is roadblock #1 that he will have to deal with first.
I'm designing an iPhone app that communicates with a server over HTTP.
I only want the app, not arbitrary HTTP clients, to be able to POST to certain URLs on the server. So I'll set up the server to only accept POSTs that include a secret token, and set up the app to include that secret token. All requests that include this token will be sent only over an HTTPS connection, so that it cannot be sniffed.
Do you see any flaws with this reasoning? For example, would it be possible to read the token out of the compiled app using "strings", a hex editor, etc? I wouldn't be storing this token in a .plist or other plain-text format, of course.
Suggestions for an alternate design are welcome.
In general, assuming that a determined attacker can't discover a key that is embedded in an application on a device under his physical control (and that, most likely, he owns anyway) is unwarranted. Look at all of the broken DRM schemes that relied on this assumption.
What really matters is who's trying to get the key, and what their incentive is. Sell a product aimed at a demographic that isn't eager to steal. Price your product so that it's cheaper to buy it than it is to discover the key. Provide good service to your customers. These are all marketing and legal issues, rather than technological.
If you do embed a key, use a method that requires each attacker to discover the key for themselves, such as using a different key for each client. You don't want a situation where one attacker can discover the key and publish it, granting everyone access.
The iPhone does provide the Keychain API, which can help the application hide secrets from the device owner, for better or worse. But anything is breakable.
The way I understand it, yes, the key could be retrieved from the app one way or another. It's almost impossible to hide something in the Objective-C runtime due to the very nature of it. To the best of my knowledge, only Omni have managed it with their serial numbers, apparently by keeping the critical code in C (Cocoa Insecurity).
It might be a lot of work (I've no idea how complex it is to implement), but you might want to consider using push notifications to send the app an authentication key, valid for one hour, every hour. This would largely offload the problem of verifying that it's your app to Apple.
I suggest adding a checksum (MD5/SHA-1) based on the sent data and a secret key that both your app and the server know.
Applications can be disassembled, so an attacker could find your key.
More information is needed to determine whether the approach is sound. It may be sound for one asset being protected and unsound for another, all based on the value of the asset and the cost if the asset is revealed.
Several earlier posters have alluded to the fact that anything on the device can be revealed by a determined attacker. So the best you can do is determine how valuable the asset is and put enough hurdles in the way of the attacker that the cost of the attack exceeds the value of the asset.
One could add client-side certificates for the SSL to your scheme. One could bury that cert and the key for the token deep in some obfuscated code. One could probably craft a scheme using public/private key cryptography to further obscure the token. One could implement a challenge/response protocol with a time-boxed response, wherein the server challenges the app and the app has X milliseconds to respond before it's disconnected.
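For instance, a time-boxed challenge/response check might be sketched like this (the window length and how the expected response is derived are assumptions):

```python
import secrets
import time

CHALLENGES = {}          # challenge -> time it was issued
RESPONSE_WINDOW = 0.250  # the "X milliseconds" above, here 250 ms

def issue_challenge() -> str:
    """Hand the app a fresh random challenge and remember when it was issued."""
    challenge = secrets.token_hex(16)
    CHALLENGES[challenge] = time.monotonic()
    return challenge

def check_response(challenge: str, response: str, expected: str) -> bool:
    """Accept the response only if it matches and arrives inside the window.
    How `expected` is derived (e.g. from a shared secret) is left open here."""
    issued = CHALLENGES.pop(challenge, None)
    if issued is None:
        return False
    if time.monotonic() - issued > RESPONSE_WINDOW:
        return False
    return secrets.compare_digest(response, expected)
```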
The number and complexity of the hurdles all depend on the value of the asset.
Jack
You should look into the Entrust Technologies (www.entrust.com) product line for two-factor authentication tied to all sorts of specifics (e.g., device, IMEI, application serial number, user ID, etc.).