Is there a way of handling Page Expired errors in PingFederate? - single-sign-on

The SSO product PingFederate produces a "Page Expired" error when it cannot find the request in its table of recent requests. They state, in a manner reminiscent of "640K ought to be enough for anybody":
This is unlikely since PingFederate's state table handles up to 10000 requests by default.
Well, guess what: this PingFederate server is kind of busy (producing 10MB of logs per minute), so if the user waits on the logon screen for, say, an hour, another 10K requests have come in and the state table no longer contains the lookup key (cookie+nonce).
So, apart from trying to keep the user from staying on the logon screen, is there a way I can instruct PF to "redirect the user back to the logon screen in case of Page Expired"?
Logout requests have exactly this feature, through the InErrorResource parameter, so a counterpart for logon seems likely to exist.

PingFederate does support an InErrorResource parameter for both the IdP-initiated and SP-initiated SSO endpoints. That said, I doubt the InErrorResource value is kept once the state is dropped; PingFederate may end up with no knowledge of the user or request, resulting in the same error.
If the environment is as busy as you state, it would make more sense to adjust the size limits so the state is not lost. The documentation explains how these limits can be configured and what each limit controls. It's worth noting that increasing these limits will increase memory usage, so handle with care.

Related

REST API - "GET /user" changes user in database

We have a simple User API, including "GET /user" to request user information. When processing the request, we store the current datetime as "lastVisit" in our database. As a result, we have a GET request updating the user in our database, which seems to be bad practice.
As we don't handle the login process ourselves, GET /user is the first request to our backend, so we cannot use /login to retrieve and store "lastVisit".
Is this bad practice? How can we solve the issue?
There's nothing wrong with updating your database when you receive a GET request - the uniform interface of HTTP constrains what the GET method token means, but you have a lot of freedom in how your server implements the handling of that request.
So that much is fine.
"lastVisit", however, may be a problem - which is to say, your interpretation of what it means that somebody asked for a copy of the page ignores various edge cases: a web spider following links to index the documents (think Google), or a smart browser that is trying to reduce latency by downloading a link before the user clicks on it.
You don't know, from the request, whether the fetch was triggered by the client, or by the general purpose agent acting in the client's stead. Similarly, you don't know about any requests for the resource that were intercepted and handled by a cache that had a valid copy of the resource.
Using request-handling time as a proxy for last visit may be a good-enough, cost-effective approximation of what you want, but keep in mind that it is an estimate, not a truth.
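To make that caveat concrete, here is a minimal JAX-RS sketch; UserRepository and User are hypothetical stand-ins for your own persistence layer, and the Purpose/Sec-Purpose prefetch headers are advisory hints that only some browsers send, so the recorded value remains an estimate:

import java.time.Instant;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;

@Path("/user")
public class UserResource {
    // Hypothetical repository; replace with your own data access layer.
    private final UserRepository repository = new UserRepository();

    @GET
    public User getUser(@Context HttpHeaders headers) {
        User user = repository.findCurrentUser();
        // Best-effort filter: skip the lastVisit update when the request
        // announces itself as a speculative prefetch rather than a visit.
        String purpose = headers.getHeaderString("Sec-Purpose");
        if (purpose == null) purpose = headers.getHeaderString("Purpose");
        if (purpose == null || !purpose.contains("prefetch")) {
            repository.updateLastVisit(user.getId(), Instant.now());
        }
        return user;
    }
}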

POST/PUT response REST in a CQRS/ES system

I'm implementing a CQRS/ES based system with a RESTful interface which is used by a webapp.
When performing certain actions, e.g. creating a new profile, I need to be able to check certain conditions, such as the uniqueness of the profile ID, or that the person has the right to create a resource under a group. This means I have a couple of options:
Context: POST /profiles { "email": "unique@example.com" }
From my REST API, return 202 from my service with the location of the new resource where my client can poll for it. In this case, however, how do I handle errors, as in effect the view will not exist, or will never exist?
Create a saga on the initial request, then dispatch the event. Once my service creates the view or finds the error, the result is written to the saga. When the saga has completed, return the result to the user.
Of these two options, the second seems more reasonable to me, albeit more complex. Is this a viable option for building RESTful request/response models on a CQRS/ES event-sourced backend?
Yes, the second solution seems to better fit the business.
From what I understand of your case, from the DDD point of view, the creation of a user profile is a business process with more than one step (verifying the uniqueness of the profile, creating the profile, and recovering from a duplicate-profile situation). This process acts like an entity: it starts, runs, and ends with a result (success or error). Being an entity, it has an ID and can be viewed as a REST resource. A saga will be responsible for executing it.
So, in response to the client's request you send the URI of the process resource where the client can poll for the status. In case of error, it reads the error message. In case of success, it gets the URI of its profile.
The first solution can still be used if the use-case is simpler, if the command can be executed synchronously and the client gets the final result (error or success) as an immediate response.
From my REST API, return 202 from my service with the location of the new resource where my client can poll for it. In this case, however, how do I handle errors, as in effect the view will not exist, or will never exist?
The usual answer here is that, as part of the 202 Accepted response, you include monitoring information:
The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
In other words, a link to a resource that will change when the accepted request is finally run.
So in describing the protocol, in addition to the resource that you create, you'll also need to document the representation used when you defer the work for later, and the representation used by the monitor.
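To make that concrete, here is a minimal JAX-RS sketch of the 202 + monitor protocol; the in-memory map is a hypothetical stand-in for the saga/process state, which a real system would persist and update when the command completes:

import java.net.URI;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/")
public class ProfileResource {
    // In-memory stand-in for the saga/process state; a real system would
    // persist this and update it when the command finally completes.
    static final Map<String, String> processStatus = new ConcurrentHashMap<>();

    @POST
    @Path("profiles")
    public Response createProfile(String body) {
        String processId = UUID.randomUUID().toString();
        processStatus.put(processId, "pending");
        // ...dispatch the CreateProfile command asynchronously here...
        return Response.accepted()                      // 202 Accepted
                .location(URI.create("/processes/" + processId))
                .entity("{\"status\":\"pending\"}")
                .build();
    }

    @GET
    @Path("processes/{id}")
    public Response getStatus(@PathParam("id") String id) {
        String status = processStatus.get(id);
        if (status == null) return Response.status(404).build();
        // On success the monitor representation would include the URI of the
        // new profile; on failure, the error the saga recorded.
        return Response.ok("{\"status\":\"" + status + "\"}").build();
    }
}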
When the saga has completed, return the result to the user.
Depending on the work, that may be overkill.
Which is to say, you are raising two different questions here; one of those is whether the request should be handled synchronously (don't respond until the work is done) or asynchronously (return right away, but give the client the means to monitor progress).
The other question is how the work looks from the business layer. If you are going to need multiple transactions to make the change, and if you may need to "revert" previously committed transactions in some variants of the process, then a saga (or a process manager) makes sense.
Set validation -- the broader term for enforcing an invariant like "uniqueness" -- is awkward. Make sure you study it, and ensure that you and the business understand the impact of a failure.
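One well-known way to tame set validation for uniqueness (my own suggestion, not something the answer above prescribes) is to claim the value in a small, strongly consistent store before dispatching the command; the table and class names below are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class EmailReservations {
    private final Connection connection;

    public EmailReservations(Connection connection) {
        this.connection = connection;
    }

    // Returns true if the email was free and is now reserved; the table has
    // a UNIQUE constraint on email, so the check is one small ACID operation.
    public boolean tryReserve(String email) throws SQLException {
        String sql = "INSERT INTO profile_email_reservations (email) VALUES (?)";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, email);
            stmt.executeUpdate();
            return true;
        } catch (SQLException e) {
            // SQLState class 23 = integrity constraint violation: someone
            // already holds this email.
            if (e.getSQLState() != null && e.getSQLState().startsWith("23")) {
                return false;
            }
            throw e;
        }
    }
}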

Lightweight stateful server session vs. stateless token (that requires server revocation list)

When I use stateful server sessions, I always use them for lightweight purposes, such as storing just the user ID, name, timezone, and last page hit of the user. For 10,000 users this might end up being ~3MB. It's not much memory, it is easy to keep in sync with other servers, and it is easy to log out/revoke the session ID.
It seems that if I used a stateless token such as a JWT, I would need to check the token for every request to the server and see if it's on a revocation list. That revocation list would need at least two fields: the token ID and how long the token was originally valid for (so that the entry could eventually be removed; otherwise the revocation list would just keep growing). Also, for security reasons, users need to be able to see all of their logged-in sessions, so I would still need to keep details on every active token, including the user ID it belongs to and the last page hit for the token.
So, are there benefits to using a stateless token (that requires a server-side revocation list) over a lightweight stateful server session?
There are a couple of benefits of a truly stateless server:
Scaling. Having to replicate the state, however small it is, will limit the number of nodes.
Load balancing. If you don't replicate the state, you have to use sticky sessions (same user always connects to the same instance), which will eventually cause uneven loads.
Rolling updates. When updating servers one by one (if you do continuous delivery, for example), both session replication and sticky sessions will make things more complicated, and may prevent rolling updates altogether.
Anyway, if you can schedule downtime for updates and have a limited number of users, a stateful session will not be a problem, and may be easier to implement.
These benefits apply to cryptographic tokens like JWTs, where no server session is required. A revocation list complicates things, but as it does not need to be stored on any specific server, it does not interfere with the above.
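For illustration, the two-field revocation list the question describes can be as small as this sketch; the in-memory map is a stand-in for a shared store (e.g. Redis, which can also expire entries for you), so it need not live on any specific server:

import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RevocationList {
    // Token ID -> the token's original expiry: exactly the two fields the
    // question describes, so entries can be pruned once the token would
    // have expired on its own anyway.
    private final Map<String, Instant> revoked = new ConcurrentHashMap<>();

    public void revoke(String tokenId, Instant tokenExpiry) {
        revoked.put(tokenId, tokenExpiry);
    }

    public boolean isRevoked(String tokenId) {
        return revoked.containsKey(tokenId);
    }

    // Call periodically to keep the list from growing without bound.
    public void prune() {
        Instant now = Instant.now();
        revoked.values().removeIf(expiry -> expiry.isBefore(now));
    }
}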

Application Request Limit issue (occurring randomly, with random scenarios)

I have tried raising this concern on Facebook/Support/Bugs, but they said I should post implementation issues here. I have read about it everywhere, and it seems to be quite an open issue still. I am not sure if it can be solved or not.
So, what we are doing is this: we have clients on Android and iOS.
The Android/iOS apps allow users to log into the app and generate a token on the basis of the permission set we have, and we pass this token to the server for fetching further data as and when the client requires it. As our user base is increasing, we are hitting the application request limit quite often.
We are fetching photos of users and their friends using FQL. When fetching photos for around 8-10 different users in parallel, we sometimes reach the application request limit, which is quite random, and we are not aware of the actual scenario in which it breaks or how. According to Facebook the limit is 1M calls per day, but we are making around 80K-100K API calls a day; as users increase it is stretching a bit further, at roughly 200 calls per user or fewer. We tried batch calls as well, and hit the application request limit there too.
If any of you could help us understand the complete concept of the API limit and how it can be handled, we would really appreciate it. We want to understand how the API limit is decided and over what interval its rate is calculated, so that we can configure our side accordingly.
Earlier in the day, we ran into a unique API call issue. Our server started to fail on API calls for user tokens we hold. When we (on our own systems, other than the server) tried fetching data for those tokens (simple calls like /me or /me/home), it worked fine for us but not for the server. We then set up another server and redirected the requests to it, and the new server worked well for the same set of users. We are not sure what went wrong in this case or how it breaks. Please help.
Many Thanks,
Reno Jones
Did you look at the Insights -> Developer section of developer.facebook.com for your app?
This will show you a breakdown per API call, including warnings and calls that are currently being throttled, and why.
Also, are you sure you're using User token authorization and not just your App token?
Beyond that, we use the information from Insights to find API calls to cache on our side rather than hitting Facebook every time. You will likely have to do something similar if you're not already. They have limits for calling too often, as well as for requesting too much data; for the latter, we had to reduce how much historical data we requested.
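As a rough illustration of that caching idea (the class is hypothetical; tune the TTL per endpoint based on what Insights shows you), something like this keeps repeated lookups from counting against the daily limit:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class ApiResponseCache {
    private record Entry(String body, Instant expiresAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Duration ttl;

    public ApiResponseCache(Duration ttl) {
        this.ttl = ttl;
    }

    // Returns a cached response if still fresh; otherwise performs the real
    // API call exactly once and stores the result.
    public String get(String endpoint, Supplier<String> fetchFromFacebook) {
        Entry entry = cache.get(endpoint);
        if (entry != null && entry.expiresAt().isAfter(Instant.now())) {
            return entry.body();
        }
        String body = fetchFromFacebook.get(); // the one real API call
        cache.put(endpoint, new Entry(body, Instant.now().plus(ttl)));
        return body;
    }
}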

REST and HttpSession object

I know that REST is not supposed to use HttpSession.
From the other side, the REST service is running within a servlet container.
From what I saw, the HttpSession object will be created only when this code is executed:
HttpSession session = request.getSession();
Is that always the case, aside from JSPs (which create a session by default)?
My question is: will HttpSession objects be created when a REST method is executed or not?
Let's say I use the JAX-RS framework, if that makes any difference.
If such objects are not created, it means the server's memory usage need not grow regardless of how many clients use the server.
HTTP sessions are actually used quite often with REST interfaces, but they should never contain anything truly critical. They can hold, for example, the fact that you've authenticated, or your preferred default ordering of some list; in the former case you can also support other authentication mechanisms at the same time, allowing fully stateless operation, and in the latter you can easily support explicit overrides as well. So long as you don't require a session (say, for the sake of argument, your site uses HTTP Basic auth; if you're using OAuth, you need sessions enabled to stop performance from being crippled), you're still potentially reasonably close to RESTful, in this area at least; REST is not "don't use sessions", after all.
Is there a concern about how long a session lasts before timing out? Maybe, but not really. A session is really an object that you've mapped into some database table, and you can configure the expiry policy so that sessions last long enough to support effective use without being over-burdensome. What that means depends, of course, on how many clients use the site at once, what their usage patterns are, and what hardware resources you have available.
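As a quick empirical check of the original question: request.getSession(false) never creates a session, so a throwaway resource like this sketch (the class name is illustrative) will tell you whether anything in your stack has already created one:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;

@Path("/session-check")
public class SessionCheckResource {
    @GET
    public String check(@Context HttpServletRequest request) {
        // getSession(false) returns null instead of creating a new session.
        HttpSession session = request.getSession(false);
        return session == null ? "no session" : "session " + session.getId();
    }
}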
I think this is a limitation of the Java EE framework at the moment; I haven't seen it done otherwise on any other server yet. If you need a container-managed security-constraint, a session will be created.
That being said, you are not required to use container-managed authentication. People do implement authentication/login mechanisms themselves, or with libraries like Shiro.
If you're concerned about scalability, you may have to handle authentication on your own. However, before you go down this path, consider the following: how many people do you expect to use your app? Unless you're a really big and popular service like Facebook or Google, current hardware/cloud offerings should be able to handle your load with HTTP sessions, with a lot of room to spare.
However, if you wanted to implement it yourself, then I suggest the following (a sketch follows the list):
unauthenticated client passes credentials (the Authorization header is the easiest to test with)
credentials are validated and a token is returned. The token is an encoded, encrypted string containing the client ID, an expiration, and a reauth token. This token is passed back to the client with Set-Cookie
Client makes future requests with the Cookie containing the token
The token can be used as long as it hasn't expired; validating it is just crypto calculations on a server node, and thus can be scaled across multiple servers if needed, since there's no single data store to deal with.
The reauth token can be used to generate a new token for the client should it expire (this is useful for user applications where the interaction can last for minutes).
You can add an enterprise cache to store which tokens are still valid, at the expense of an extra backend call.
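A minimal sketch of steps 2-4 using AES-GCM from the JDK; key management, cookie flags, and the reauth flow are left out, and all names are illustrative:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class TokenService {
    private static final SecureRandom RANDOM = new SecureRandom();
    private final SecretKey key; // shared by all server nodes

    public TokenService(SecretKey key) {
        this.key = key;
    }

    // Issue a token: "clientId|expiresAtEpochSeconds|reauthToken",
    // AES-GCM encrypted (so it is both confidential and tamper-evident),
    // then base64url-encoded for use in a Set-Cookie header.
    public String issue(String clientId, long ttlSeconds, String reauthToken) throws Exception {
        long expiresAt = Instant.now().getEpochSecond() + ttlSeconds;
        String payload = clientId + "|" + expiresAt + "|" + reauthToken;
        byte[] iv = new byte[12];
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        byte[] token = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, token, 0, iv.length);
        System.arraycopy(ciphertext, 0, token, iv.length, ciphertext.length);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(token);
    }

    // Decrypt and validate; returns the client ID, or null if the token is
    // malformed, tampered with, or expired. Pure computation: no data store.
    public String validate(String token) {
        try {
            byte[] raw = Base64.getUrlDecoder().decode(token);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, raw, 0, 12));
            byte[] plain = cipher.doFinal(raw, 12, raw.length - 12);
            String[] parts = new String(plain, StandardCharsets.UTF_8).split("\\|");
            long expiresAt = Long.parseLong(parts[1]);
            return Instant.now().getEpochSecond() < expiresAt ? parts[0] : null;
        } catch (Exception e) {
            return null;
        }
    }
}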