Different Idle Times for Clients - Keycloak

I'm using Keycloak as an auth service for my applications.
We have two applications that will use the same realm for login, but we would like to have a different SSO Session Idle time for each application.
Example:
Application A - we would like to allow an idle time of up to 30 minutes.
Application B - we would like to allow an idle time of up to 45 minutes.
However, the setting that controls the idle time lives in the realm settings, not in the client settings, which makes the scenario above hard to solve.
Is there any way to solve this in Keycloak - or perhaps by making a background request from Application B after X amount of idle time?
Thanks
Daniel

What you are trying to achieve contradicts what SSO is. SSO means a single session for all applications in your environment. For example, I open your application A and then open application B in a separate browser tab. After 30 minutes I should be logged out of application A by timeout, but that means my SSO session has to be killed, which in turn leads to an automatic logout from application B.
So if you really want to go that far, you have to move the idle logic into your applications: they keep the global SSO session alive and track the current idle time for every user of every application, as sketched below.
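For illustration only, a minimal sketch of such application-side idle tracking in Python. The function name on_request and the in-memory dict are hypothetical, not a Keycloak API; wiring the actual local logout (and, if desired, a call to Keycloak's logout endpoint) into your framework is left out.

    import time

    # Hypothetical application-side idle tracking; not a Keycloak API.
    # The application keeps the global SSO session alive and enforces
    # its own, stricter idle limit locally.

    IDLE_LIMIT_SECONDS = 30 * 60   # Application A: 30 minutes (Application B would use 45)

    last_activity = {}             # user_id -> timestamp of the user's last request


    def on_request(user_id: str) -> bool:
        """Call on every authenticated request.

        Returns True while the user is within the idle limit, False when the
        application should invalidate its local session (and optionally redirect
        to Keycloak's logout endpoint to end the SSO session as well).
        """
        now = time.time()
        last_seen = last_activity.get(user_id)
        if last_seen is not None and now - last_seen > IDLE_LIMIT_SECONDS:
            last_activity.pop(user_id, None)
            return False
        last_activity[user_id] = now
        return True

Application B would run the same logic with a 45-minute limit, so the shared SSO session in Keycloak stays untouched while each application enforces its own idle policy.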

Related

HashiCorp Vault: DB credentials retrieved from Vault expire - what are the best practices for this case?

I want to understand this case properly from a deployment perspective. Suppose DB credentials are retrieved from Vault and they are going to expire. The app that uses these credentials handles financial transactions, and those transactions have to be processed without any issue. How do I make sure that transactions get processed even while the DB credentials are being updated?
Also, there are side questions, such as what happens when Vault updates the DB credentials but takes a bit of time to update the app, or when Vault itself crashes (what I read is that even if one node goes down, Vault starts in a sealed state, so when it recovers it will again be sealed and hence will require a good amount of time), which in my opinion can cause fairly big production outages.
I need expert opinions on my thoughts and questions.
You will want to run a side process (also known as a sidecar) that renews your secret for as long as the current deployment of your application is running.
For example: your app starts, and Vault generates a secret for the app with an expiry of 6 hours. A sidecar process runs and checks whether the expiry is less than 1 hour; if so, it renews the token for another 6 hours.
After two days your application crashes. The sidecar process stops with it and the token expires.
You can look at this as an example of a renewal sidecar.
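A minimal sketch of such a renewal sidecar, assuming the sidecar is given a Vault token and the lease ID of the database credentials (the environment variable names DB_CREDS_LEASE_ID and INITIAL_LEASE_DURATION are made up for this example). It uses Vault's lease-renewal HTTP endpoint, /v1/sys/leases/renew; the 6-hour increment and 1-hour threshold mirror the numbers above.

    import os
    import time

    import requests

    VAULT_ADDR = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
    VAULT_TOKEN = os.environ["VAULT_TOKEN"]        # token the sidecar authenticates with
    LEASE_ID = os.environ["DB_CREDS_LEASE_ID"]     # lease ID of the DB credentials (hypothetical variable name)
    RENEW_INCREMENT = 6 * 60 * 60                  # ask for another 6 hours
    RENEW_THRESHOLD = 60 * 60                      # renew when less than 1 hour remains


    def renew_lease() -> int:
        """Renew the lease and return the new lease duration in seconds."""
        resp = requests.put(
            f"{VAULT_ADDR}/v1/sys/leases/renew",
            headers={"X-Vault-Token": VAULT_TOKEN},
            json={"lease_id": LEASE_ID, "increment": RENEW_INCREMENT},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["lease_duration"]


    def main() -> None:
        # Assume the initial lease duration is handed to the sidecar by the deployment.
        expires_at = time.time() + int(os.environ.get("INITIAL_LEASE_DURATION", RENEW_INCREMENT))
        while True:
            if expires_at - time.time() < RENEW_THRESHOLD:
                expires_at = time.time() + renew_lease()
            time.sleep(60)  # check roughly once a minute


    if __name__ == "__main__":
        main()

When the application's process dies, the sidecar dies with it, renewals stop, and Vault revokes the credentials once the lease runs out, which is exactly the behaviour described above.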

How appropriate is it to use SAML_login with AEM with more than 1m users?

I am investigating slow login times and some profile synchronisation problems on a large enterprise AEM project. The system has around 1.5m users, and the website is served by 10 publishers.
The way this project is built, SAML_login is enabled for all of these end users, and there is a third-party IDP which I assume SAML_login talks to. I'm no expert on this SSO / SAML_login process, so I'm trying to understand first whether this is the correct way to go.
Because of this setup and the number of users, the SAML_login call takes 15 seconds on average. This is becoming more unacceptable day by day as the user count rises. Even more importantly, the synchronization between the 10 publishers fails occasionally, so some users sometimes can't use the system as expected.
Because the users are stored in the JCR for SAML_login, you cannot even go and check the /home/users folder from the CRX browser; it times out, as it is impossible to show 1.5m rows at once. My educated guess is that this is also why the SAML_login call takes so long.
I've come across articles that describe how to set up SAML_login on AEM, which makes this usage sound legitimate. But in my opinion this is the worst setup ever, as the JCR is not a well-designed, quick-access data store for this kind of usage scenario.
My understanding so far is that this approach might work well with a limited number of users, but with this many users it is not a workable solution. So my first question would be: am I right? :)
If I'm not right, there is certainly a bottleneck somewhere which I'm not aware of yet; what could that bottleneck be, and where should I improve?
The AEM SAML authentication handler has some performance limitations with the default configuration. When your browser sends an HTTP POST request to AEM under /saml_login, it includes a base64-encoded "SAMLResponse" request parameter. AEM processes that response directly and does not contact any external systems.
Even though the SAML response is processed on AEM itself, the bottlenecks of the /saml_login call are the following:
Initial login, where AEM creates the user node for the first time - you can create these nodes ahead of time, for example with a script that pre-creates the SAML user nodes under /home/users.
Each login where a session is first created - a token node is created under the user node at /home/users/.../{usernode}/.tokens; this can be avoided by enabling the encapsulated token feature.
Finally, the last bottleneck occurs when AEM saves the SAMLResponse XML under the user node (for later use in SAML-based logout). This can be avoided by not implementing SAML-based logout: the latest com.adobe.granite.auth.saml bundle supports turning off the saving of the SAML response (included in the AEM 6.4.8 and AEM 6.5.4 service packs). To enable this, set the OSGi configuration properties storeSAMLResponse=false and handleLogout=false, and the SAML response will no longer be stored.
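For reference, a sketch of what the corresponding OSGi configuration could look like, assuming the handler's PID is com.adobe.granite.auth.saml.SamlAuthenticationHandler and that you deploy configurations as .cfg.json files (verify both the PID and your deployment convention in /system/console/configMgr; only the two properties named above come from the answer):

    {
      "storeSAMLResponse": false,
      "handleLogout": false
    }

The file name and placement (for example a factory config such as com.adobe.granite.auth.saml.SamlAuthenticationHandler~mysite.cfg.json in your config package) are assumptions for this sketch, not something stated in the answer above.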

How does Fabric Answers send data to the server? Should events be submitted periodically or immediately?

I've used Fabric for quite a few applications, but I was curious about the performance when a single application submits potentially hundreds of events per minute.
For this example I'm going to use a pedometer application, in which I would want to keep track of the number of steps users take in my application. Considering the average user walks 100 steps per minute, I wouldn't want the application to be sending several dozen updates to the server.
How would Fabric handle this - would it just tell the server "Hey, there were 273 step events in the last 5 minutes with this metadata", or would it send 273 individual step events?
Pedometer applications typically run in the background, so how would we get data to Fabric without the user opening the application?
Great question! Todd from Fabric here. These get batched and sent at time intervals, and certain events (like installs) also trigger an upload of the queued event data. You can watch our traffic in the Xcode debugger if you are curious about the specifics for your app.
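To make the batching idea concrete, here is a rough sketch of the general pattern (queue locally, flush on an interval or on a trigger event) in Python. It is an illustration only, not the Fabric SDK's actual implementation, and the names log_event and send_batch are invented for the example.

    import time

    # Illustrative client-side event batching; not the Fabric SDK's real internals.
    FLUSH_INTERVAL = 5 * 60        # send queued events every 5 minutes
    TRIGGER_EVENTS = {"install"}   # some events force an immediate upload

    queue = []
    last_flush = time.time()


    def send_batch(events):
        # Placeholder for the actual network upload of all queued events in one request.
        print(f"uploading {len(events)} events in one request")


    def log_event(name, attributes=None):
        global last_flush
        queue.append({"name": name, "attributes": attributes or {}, "ts": time.time()})
        if name in TRIGGER_EVENTS or time.time() - last_flush >= FLUSH_INTERVAL:
            send_batch(queue)
            queue.clear()
            last_flush = time.time()

With this approach the 273 step events above would go out as a single request at the next flush rather than as 273 separate calls.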

Azure Mobile Services Latency

I am noticing latency in the REST data the first time I visit a web site that is served via Azure Mobile Services. Is there a cache, or does a connection time out after a set amount of time? I am worried about the user experience while waiting 7-8 seconds for the data to load (and there is not a lot of data; I am testing with 10 records returned). Once the first connection is made, subsequent visits load quickly... but if I don't visit the site for a while, I am back to 7-8 seconds on first load.
Reason: the latency comes from the "shared" mode. When the first call to the service is made, it performs a "cold start" (initializing and starting the virtual server, etc.).
As you described in your question, after the service goes unused for a while, it is put into "sleep mode" again.
Solution: if you do not want this waiting time, you can set your service to "reserved" mode, which forces the service to stay active even when you do not access it for a while. But be aware that this requires you to pay some extra fees.

How to implement Worklight server-side serverSessionTimeout?

I am trying to implement serverSessionTimeout on the Worklight server. I enabled serverSessionTimeout=5 and sso.cleanup.taskFrequencyInSeconds=5 in worklight.properties, but no luck. We have a user DB entry for each user login. Ideally it should remove the user DB entry once the session reaches 5 minutes, but I am not able to clean up the user DB entry from the server side. I would appreciate it if anybody could help me with this.
As Iddo mentioned in the comments:
sso.cleanup.taskFrequencyInSeconds is related to an entirely different feature.
serverSessionTimeout instructs the application server to invalidate sessions after the specified amount of time, but the actual cleanup can occur at the application server's discretion (see jaalger2's answer to this question).
So in order to control the session, you need to set the values to your liking, as in the sketch below. After that, simply let the application server handle the memory and threads.
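For illustration, the relevant part of worklight.properties could look like the following. The values are the ones from the question; serverSessionTimeout is in minutes, and the cleanup happens at the application server's discretion rather than at exactly that moment.

    # worklight.properties (illustrative values)
    # Idle sessions are invalidated by the application server after this many minutes,
    # at the server's discretion - do not expect cleanup at exactly the 5-minute mark.
    serverSessionTimeout=5

    # Unrelated to the session timeout: frequency of the SSO cleanup task, in seconds.
    sso.cleanup.taskFrequencyInSeconds=5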
Is there any particular reason why, after the above, you also need to access the database and delete rows from it? This should be handled automatically, not "manually".