Token Record Lifecycle - doorkeeper

We are using Doorkeeper to handle authentication with a Ruby on Rails API. When I was looking through a database on the server, I noticed that there are a lot of records in the oauth_tokens table, and a good number of them have already expired! To be fair, our tokens expire every 2 hours... but that will still add up across a lot of users over time.
I have looked through the documentation and the code, and I am still lost.
Is there a way for Doorkeeper to automatically delete old, expired access tokens? (I'd prefer a set-and-forget sort of solution.)

Related

Having a body on a delete request as an additional layer of security

I'm developing a web API and trying to follow the principles of REST. I'm now at the stage of creating the endpoint where a user can delete their own account. I have also implemented JWT functionality, where the JWT is valid for 1 day.
I want to add an extra layer of security when the user deletes their own account. My idea is that the user has to provide their current password in the body of the DELETE request. I did some googling, which suggested that having a body in a DELETE request is a bad idea because some clients do not support it (e.g. the Angular HttpClient), some web services might strip the body, etc.
I know GitHub has similar functionality when deleting a repository: you have to provide your password. I like this feature because it prevents unauthorized persons who have obtained the JWT from performing critical operations, right?
What are your recommendations?
Proceed with using DELETE and deal with the potential problems that might come along with this approach?
Instead use POST, PUT or PATCH even though it would look semantically wrong?
Other solution?
I would not recommend using other HTTP methods like PUT or PATCH if you really want to delete the account and not just disable it. That would not be intuitive for the API user and could lead to misunderstandings.
One solution for your use case is to introduce an additional resource (e.g. deletionRequest) to request (but not immediately execute) the deletion of the profile with a POST call. You could then do the actual deletion with a delay (preferably longer than the token lifespan) and inform the user via email about the pending deletion, so the real user has a chance to revoke it. If the user does not react in time, the deletion is executed.
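As a rough illustration of that pattern (the endpoint names, store, and helper are only assumptions, sketched here in Express style): the client POSTs a deletion request, the server schedules the actual deletion for later than the token lifespan, and a second endpoint lets the real user revoke it.

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

// Hypothetical in-memory store; a real service would persist pending requests.
const pendingDeletions = new Map<string, { userId: string; executeAt: Date }>();

const TOKEN_LIFETIME_MS = 24 * 60 * 60 * 1000;  // the JWT is valid for 1 day
const GRACE_PERIOD_MS = 2 * TOKEN_LIFETIME_MS;  // delay the deletion beyond the token lifespan

// Request (but do not immediately execute) deletion of the caller's account.
app.post("/account/deletion-requests", (req, res) => {
  const userId = req.body.userId;  // in practice taken from the verified JWT, not the body
  const requestId = randomUUID();
  const executeAt = new Date(Date.now() + GRACE_PERIOD_MS);
  pendingDeletions.set(requestId, { userId, executeAt });
  // sendDeletionNoticeEmail(userId, requestId);  // hypothetical helper: notify the real user
  res.status(202).json({ requestId, executeAt });
});

// Let the real user revoke the pending deletion before it is executed.
app.delete("/account/deletion-requests/:id", (req, res) => {
  pendingDeletions.delete(req.params.id);
  res.status(204).end();
});

// A background job would periodically execute deletions whose executeAt has passed
// and that were not revoked in time.
```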

Quarkus, Keycloak and OIDC token refresh

I'm currently working on a PoC with multiple Quarkus services and Keycloak RBAC. It works like a charm and is easy to bootstrap and start implementing features with.
But I have encountered an issue that I cannot work out. Imagine:
User accesses a protected service
The quarkus-oidc extension does its fancy token obtaining via HTTP redirects; the JWT in the cookie lasts 30 minutes
User is authenticated and gets returned to the web application
User works in application, fills in forms and data
Data is stored via JWT-enriched REST calls (we do validation with hibernate-validator)
User works again, taking longer than 30 min
Wants to store another entry, but the token from step 3 is now expired and the API call fails
User won't be happy, so neither will I
Possible ways to solve:
Make the JWT last longer than the current 30 minutes, but that just postpones the issue and opens some security doors
Store users' input in local storage to restore it later after a token refresh (we would also do that so as not to lose users' work)
Refresh the token "silently" in JS without the user knowing. Is there a best practice for that?
I have missed something important, and the internet can now tell me a better architecture for my application.
Thank you internet!
Re step 3: in Quarkus 1.5.0, adding quarkus.oidc.token.refresh-expired=true will get the ID token refreshed and the user session extended if the refresh grant has succeeded.
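For reference, a minimal application.properties sketch of that setup might look like the following; the server URL, client id, and secret are placeholders, and only refresh-expired comes from the answer above.

```properties
# Code-flow web application protected by Keycloak (placeholder values)
quarkus.oidc.auth-server-url=https://keycloak.example.com/realms/my-realm
quarkus.oidc.client-id=my-webapp
quarkus.oidc.credentials.secret=secret
quarkus.oidc.application-type=web-app

# Refresh the expired ID token and extend the user session,
# provided the refresh grant succeeds (Quarkus 1.5.0+)
quarkus.oidc.token.refresh-expired=true
```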
For such use cases, I tend to prefer the reverse of JWT: I keep the user data in a shared data service (a data grid like Infinispan or Redis), so that the data is keyed by the user and available, and I control the TTL of that data in the shared data service.
It can either be specific to one app or shared between a small number of apps. It does bring some coupling, but so does the JWT property structure.
For Quarkus, there are client integrations for Infinispan, Hazelcast, MongoDB, and AWS DynamoDB, and you can bring other libraries.
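A minimal sketch of the "shared data service" idea, illustrated at the Redis level with Node's ioredis client purely for brevity (the Quarkus Infinispan/Redis integrations expose the same put-with-lifespan concept; key names and TTL are assumptions):

```typescript
import Redis from "ioredis";

const redis = new Redis("redis://localhost:6379");

const USER_DATA_TTL_SECONDS = 60 * 60; // e.g. keep per-user data for one hour

// Store the user's working data keyed by user, with a TTL we control ourselves
// instead of encoding everything into the JWT.
async function saveUserData(userId: string, data: unknown): Promise<void> {
  // "EX" sets the key's time-to-live in seconds
  await redis.set(`user-data:${userId}`, JSON.stringify(data), "EX", USER_DATA_TTL_SECONDS);
}

async function loadUserData<T>(userId: string): Promise<T | null> {
  const raw = await redis.get(`user-data:${userId}`);
  return raw ? (JSON.parse(raw) as T) : null;
}
```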

Application Request Limit issue (Occurring Randomly with Random Scenarios)

I have tried raising this concern with Facebook Support/Bugs, but they said I should post implementation issues here. I have read about it everywhere, and it seems to be quite an open issue so far; I am not sure if it will be solved or not.
Here is what we are doing: we have Android and iOS clients.
The apps let users log in and generate a token based on the permission set we have, and we pass this token to the server for fetching further data as and when the client requires it. As our user base grows, we are hitting "Application request limit reached" quite often.
We are fetching photos of users and their friends using FQL. When fetching photos for around 8-10 different users in parallel, we sometimes reach the application request limit, which is quite random, and we do not know the actual scenario in which it breaks or how. According to Facebook the limit is 1M calls per day, but we are only making around 80K-100K API calls a day (roughly 200 or fewer calls per user), though that is stretching further as users increase. We tried batch calls as well and still hit the application request limit.
If any of you could help us understand how the API limit works and how it can be handled, we would really appreciate it. We want to understand how the API limit is decided and over which interval its rate is calculated, so that we can configure our side accordingly.
Earlier in the day, we ran into a strange API call issue. Our server started failing on API calls for user tokens we hold. On our own systems (other than the server) we tried fetching data for those tokens (simple calls such as /me or /me/home) and it worked fine for us, but not for the server. We then set up another server and redirected the requests to it, and the new server worked well for the same set of users. We are not sure what went wrong in this case or how it breaks. Please help.
Many Thanks,
Reno Jones
Did you look at the Insights -> Developer section of developer.facebook.com for your app?
This will show you a breakdown per API call, including warnings and calls that are currently being throttled, and why.
Also, are you sure you're using User token authorization and not just your App token?
Beyond that, we use the information from Insights to find API calls to cache on our side rather than hitting Facebook every time. You will likely have to do something similar if you're not already. They have limits for calling too often as well as for requesting too much data; for the latter, we had to reduce the amount of historical data we requested.
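A rough sketch of that caching idea, assuming a Node-style server with global fetch; the endpoint path, TTL, and in-memory map are illustrative, and you would pick what to cache based on what Insights shows you call most:

```typescript
// Cache Graph API responses to cut down on calls that count against the request limit.
const cache = new Map<string, { expiresAt: number; body: unknown }>();
const CACHE_TTL_MS = 10 * 60 * 1000; // serve cached results for 10 minutes

async function fetchGraph(path: string, accessToken: string): Promise<unknown> {
  const key = `${path}?token=${accessToken}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.body; // no Facebook call, no hit against the request limit
  }
  const res = await fetch(
    `https://graph.facebook.com/${path}?access_token=${encodeURIComponent(accessToken)}`
  );
  const body = await res.json();
  cache.set(key, { expiresAt: Date.now() + CACHE_TTL_MS, body });
  return body;
}

// e.g. fetchGraph("me/photos", userToken)
```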

How to avoid too many sessions stored?

I'm using Perl Catalyst with Catalyst::Plugin::Session::State::Cookie and Catalyst::Plugin::Session::Store::Redis. I have at most 2,000 users logged in, but I have more than 2 million keys in my Redis store.
Most of the authentication is done through an API key. I wonder if each API call gets a new session created and stored (there is likely no cookie in the API call), or if every new visitor to the web site gets a session created automatically.
It looks like a solution would be to set a very short expiration by default (a few minutes) and override it with a longer expiration when users log in through the web interface.
I was wondering what the best way is to keep the number of stored sessions to a minimum.
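The short-default-expiration idea above is stack-agnostic: at the Redis level it amounts to giving every session a short TTL and only extending it for interactive web logins (in Catalyst this would presumably be driven by the session plugin's expiration configuration). A rough sketch, using Node's ioredis client purely for illustration; key names and TTL values are assumptions:

```typescript
import Redis from "ioredis";

const redis = new Redis();

const DEFAULT_TTL_SECONDS = 5 * 60;          // short default: API-key calls and anonymous visits
const WEB_LOGIN_TTL_SECONDS = 24 * 60 * 60;  // longer lifetime once a user logs in via the web UI

// Give every new session the short TTL by default...
async function createSession(sessionId: string, payload: unknown): Promise<void> {
  await redis.set(`session:${sessionId}`, JSON.stringify(payload), "EX", DEFAULT_TTL_SECONDS);
}

// ...and only extend it when the user actually logs in through the web interface.
async function promoteToWebSession(sessionId: string): Promise<void> {
  await redis.expire(`session:${sessionId}`, WEB_LOGIN_TTL_SECONDS);
}
```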
The Redis timeout is meant for this purpose. Unless you have a specific, pressing use case that requires sessions never to expire (I can't see one), you should set it to a practical time limit (default: 300).
However, this has problems in older versions of Redis, so before testing this feature you should install the latest Redis.

With offline_access migration enabled, trying to extend access tokens to 60 days seems to fail ~70% of the time. Why would that happen?

I'm extending client-side access tokens as documented here: https://developers.facebook.com/roadmap/offline-access-removal/#extend_token
But I've noticed that the request very frequently returns the short-lived access token instead of extending it. Across all of our users, this appears to be the case ~70% of the time after enabling the migration.
Is there something that we can do to fix it? Why might this be failing so frequently?
Thanks!
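For context, the extension call described at that link boils down to an HTTP request along these lines; the app credentials are placeholders, and the response parsing assumes the older URL-encoded format that endpoint returned at the time:

```typescript
// Sketch of the client-side token extension request (fb_exchange_token grant).
// APP_ID, APP_SECRET, and shortLivedToken are placeholders.
async function extendAccessToken(shortLivedToken: string): Promise<string> {
  const params = new URLSearchParams({
    grant_type: "fb_exchange_token",
    client_id: "APP_ID",
    client_secret: "APP_SECRET",
    fb_exchange_token: shortLivedToken,
  });
  const res = await fetch(`https://graph.facebook.com/oauth/access_token?${params}`);
  // Historically the body was URL-encoded: access_token=...&expires=...
  const text = await res.text();
  const token = new URLSearchParams(text).get("access_token");
  // If this equals shortLivedToken, the token was not extended (the behavior described above).
  return token ?? shortLivedToken;
}
```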