Keycloak: users have been disabled/blocked with no clear explanation

On several occasions, I've noticed that some users in Keycloak (version 19.0.1) have been disabled for no clear reason.
In one case, only 2 minutes after I manually re-enabled a user, it was disabled again by some sort of trigger. It does not always happen this fast; sometimes the same behavior shows up only after a few days.
What could disable or block users in Keycloak, other than the limit on failed access attempts? Is there any kind of configuration that governs this?
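One thing worth ruling out first: if Brute Force Detection is enabled together with the Permanent Lockout option (Realm Settings → Security Defenses), Keycloak sets the user to disabled once the failure threshold is hit, and re-enabling the user by hand does not stop it from happening again while the failed logins continue, which would match the "re-disabled after 2 minutes" behavior. Below is a minimal sketch of checking the brute-force detector's status for a user over the Admin REST API, assuming a Keycloak 19 Quarkus-style URL (no /auth prefix); the host, realm, credentials, and user ID are placeholders:

```python
import requests

BASE = "https://keycloak.example.com"  # placeholder host (Quarkus dist: no /auth prefix)
REALM = "myrealm"                      # placeholder realm
USER_ID = "0f3d..."                    # placeholder ID of the user that keeps getting disabled

# Obtain an admin token via the admin-cli client (password grant).
token = requests.post(
    f"{BASE}/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "username": "admin",           # placeholder credentials
        "password": "admin-password",
    },
).json()["access_token"]

# Ask the attack-detection endpoint whether the brute-force detector
# currently considers this user locked, and how many failures it has seen.
status = requests.get(
    f"{BASE}/admin/realms/{REALM}/attack-detection/brute-force/users/{USER_ID}",
    headers={"Authorization": f"Bearer {token}"},
).json()
print(status)  # e.g. {"numFailures": 5, "disabled": true, ...}
```

If the detector isn't the culprit, the Admin Events log (Events → Admin Events, with saving enabled) can show whether the disable arrives through the Admin API, e.g. from an external user-sync job.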

Related

On Keycloak's update-password screen, is it possible to show all password requirements before submitting?

We are using Keycloak as our authentication mechanism and also to manage user credentials (e.g., resetting passwords).
When changing a password, the password requirements (at least 1 upper case, at least 1 lower case, at least 1 special char, etc.) are only shown after submitting, and one at a time.
This is a little annoying for the user.
The question, then, is whether it is possible to show all password requirements together? And/or whether it is possible to show all requirements as soon as the page loads, before a password is submitted?
This is the template used for that page:
https://github.com/keycloak/keycloak/blob/main/themes/src/main/resources/theme/base/account/password.ftl
After reviewing the code, it seems that this is not possible straight from Keycloak.
However, the Keycloakify project provides support for real-time input validation, among other customization options.
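As a stopgap while customizing the theme, the full policy can at least be fetched and rendered up front: the realm representation returned by the Admin REST API includes the configured policy as a single passwordPolicy string. A rough sketch, assuming an admin access token and placeholder host/realm, that turns that string into human-readable requirements which could then be injected into a customized password.ftl:

```python
import re
import requests

BASE = "https://keycloak.example.com"  # placeholder host
REALM = "myrealm"                      # placeholder realm
TOKEN = "..."                          # admin access token, obtained separately

# The realm representation exposes the password policy as one string,
# e.g. "length(8) and upperCase(1) and specialChars(1)".
realm = requests.get(
    f"{BASE}/admin/realms/{REALM}",
    headers={"Authorization": f"Bearer {TOKEN}"},
).json()
policy = realm.get("passwordPolicy", "")

# Split the policy string into individual rules so they can all be
# shown on the password page before the user submits anything.
labels = {
    "length": "at least {} characters",
    "upperCase": "at least {} upper-case letter(s)",
    "lowerCase": "at least {} lower-case letter(s)",
    "digits": "at least {} digit(s)",
    "specialChars": "at least {} special character(s)",
}
for name, arg in re.findall(r"(\w+)\(([^)]*)\)", policy):
    print(labels.get(name, name + " {}").format(arg))
```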

Whitelisting recaptcha tokens to avoid asking users to generate new tokens, is there a better solution?

I have a website with a form protected by a reCAPTCHA. The form has a lot of server-side validation that can't be moved to the client side, so it's common for users to submit the form several times with the same reCAPTCHA token. The problem is that reCAPTCHA is not well suited to multiple validations.
According to Google's reCAPTCHA documentation:
Each reCAPTCHA user response token can only be verified once.
If one attempts to validate the same token more than once, Google's API returns timeout-or-duplicate.
So, for the form to offer a smooth user experience and not ask the user to fill in the captcha again every time a submission fails server-side validation, I either need to postpone captcha token validation until the end of the server-side validation (which slows the server down), or I need to whitelist captcha tokens for, say, 3 minutes. However, whitelisting a captcha for 3 minutes means someone can build a robot that attacks my website for 3 minutes...
I feel that the above solution might be a compromise in security, so I wanted to know what the common practice is, or if you guys have better solutions. Thanks!
I believe that the standard practice is to set a cookie or session variable when a user passes reCAPTCHA the first time, and use that indicator to decide whether to display/check reCAPTCHA or not. You can then set a period of validity for that indicator, or keep it indefinitely.
Now on to the security question. The purpose of reCAPTCHA and other humanity-validation mechanisms isn't necessarily to prevent bots from using your service entirely, but to reduce the volume of bots using your service and the rate at which an attacker can attempt new attacks. You are adding a step to any attack that requires either manual intervention or an immense dedication of resources; either way, you are increasing the time it takes an attacker to attempt one attack and limiting the absolute number of simultaneous attacks they can attempt. An attacker can't spin up 50 clients and launch 50 attacks instantaneously if they need to solve 50 distinct reCAPTCHAs first. This idea, making attacks more difficult and slower rather than outright impossible, is the basic concept behind most security systems and patterns.
With that in mind, forcing your users to solve a reCAPTCHA on every request gives you little advantage over having them solve a single reCAPTCHA at the start of their session; I'd argue that the user-experience concerns outweigh the security gains. Let the initial reCAPTCHA do its job of making it difficult for an attacker to start multiple simultaneous attacking sessions, and use some simple user-activity heuristics (e.g., making 20 form submissions in as many seconds) to find and kick out the attacking sessions they do create.
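For completeness, here is a minimal sketch of the verify-once-then-flag pattern described above, using Flask and Google's real siteverify endpoint; the secret key, route, and 3-minute validity window are placeholders:

```python
import time

import requests
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "change-me"          # placeholder; required for session cookies

RECAPTCHA_SECRET = "your-secret-key"  # placeholder reCAPTCHA server-side secret
CAPTCHA_VALIDITY = 3 * 60             # trust a passed captcha for 3 minutes

def captcha_ok() -> bool:
    """Verify the reCAPTCHA token once, then trust a session flag."""
    passed_at = session.get("captcha_passed_at")
    if passed_at and time.time() - passed_at < CAPTCHA_VALIDITY:
        return True  # verified recently; don't burn another token check
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={
            "secret": RECAPTCHA_SECRET,
            "response": request.form.get("g-recaptcha-response", ""),
        },
    ).json()
    if resp.get("success"):
        session["captcha_passed_at"] = time.time()
        return True
    return False

@app.route("/submit", methods=["POST"])
def submit():
    if not captcha_ok():
        return "captcha failed", 400
    # ...heavy server-side validation goes here; a validation failure no
    # longer forces the user to solve a fresh captcha within the window.
    return "ok"
```

Because the flag lives in a signed session cookie rather than a server-side whitelist of tokens, the timeout-or-duplicate problem never comes up: the token itself is still verified exactly once.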

How to Limit Kubernetes Dashboard Users from Seeing Secrets?

The Kubernetes Dashboard allows users to see all secrets, including their raw values, with just a couple of clicks. These secrets will likely contain very sensitive data, such as production database passwords and private keys.
How do you limit users of the Dashboard, so that they can't see the sensitive data?
This is a known issue and it is simply not officially supported at the moment - the Dashboard is a super-user level administration tool. This should not be the case forever, but more help is needed to get it there.
There are some workarounds discussed in that issue thread that work currently. Here are some notable quirks around them to be aware of beforehand:
Should the dashboard run under a dedicated dashboard user, and be limited by that? If so, as Anirudh suggested, you can neuter parts of the Dashboard and it will work fine; users will get 403s if they access the Secrets panel (a minimal RBAC sketch follows after this list).
Should the dashboard run under the logged-in user, and be limited to what that user can see? This means kubectl proxy will be necessary, unless some browser plugin or MITM proxy attaches the needed auth to Dashboard server calls, but it is possible.
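For the first approach, the RBAC side is straightforward: give the identity the Dashboard uses a Role whose rules simply omit secrets. A sketch using the official Kubernetes Python client; the role name, namespace, and resource list are placeholders to adapt:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl access).
config.load_kube_config()

# A namespaced Role that lets Dashboard users browse common workload
# resources but deliberately omits "secrets", so the Secrets panel
# returns 403 for them.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="dashboard-no-secrets", namespace="default"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],
            resources=["pods", "services", "configmaps"],
            verbs=["get", "list", "watch"],
        ),
        client.V1PolicyRule(
            api_groups=["apps"],
            resources=["deployments", "replicasets"],
            verbs=["get", "list", "watch"],
        ),
    ],
)
client.RbacAuthorizationV1Api().create_namespaced_role(namespace="default", body=role)
```

A RoleBinding is still needed to tie this Role to the service account or user the Dashboard authenticates as.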

Oak Login Token Configuration

I'm facing an issue in AEM 6.1 where users have an invalid login-token (due to an expired session). They make a request to the AEM author instance, which then errors because they are effectively an anonymous user attempting to view a page. The access problem results in a 404. The ACS error page tries to handle it, but the error page, like everything else on author, is not readable to an anonymous user. This causes a Java exception, and the user is left with a white screen of death.
The login-token cookies in the browser have no expiration. They appear to be configured to stick around until the session is closed by the user. I would like to set an expiration on the login-token cookies.
I've researched around but do not see how this is done. The aemstuff site http://www.aemstuff.com/#article964 points to "Apache Jackrabbit Oak TokenConfiguration", but this was already set to 43200000, and further changes do not affect the login cookie expiration as far as I can see.
My question for SO is: is there a way to set the login-token expiration on the cookie? It seems like a bug with "Apache Jackrabbit Oak TokenConfiguration", or is it?
Create Apache Jackrabbit Oak TokenConfiguration as a config node.
In case others face similar problems, here are some of the things I learned about this:
The login-token expiration can be configured as suggested by disha. It is used on the backend only; I think it's very typical for browser session cookies not to have an expiry set.
The WSOD issue we experienced may have been helped by adjusting this session timeout to slightly less than the IdP timeout.
I think it's very important to use Java 8 with AEM 6.1, and perhaps other versions as well. When we upgraded from AEM 6, Java 7 remained our running version. Possibly the SAML SSO relies on Java 8 features. After moving to Java 8, users never saw the WSOD again.
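For anyone who wants to script the TokenConfiguration change disha mentions rather than clicking through the console, the setting can be pushed through the Felix Web Console endpoint that the configMgr UI itself posts to. This is only a rough sketch: the PID, parameter name, and credentials are assumptions you should verify against your own instance under /system/console/configMgr:

```python
import requests

HOST = "http://localhost:4502"  # placeholder author instance
# Assumed PID of the Oak token configuration service; confirm it in the console.
PID = "org.apache.jackrabbit.oak.security.authentication.token.TokenConfigurationImpl"

# Mimic what the Felix configMgr UI sends when you click Save.
requests.post(
    f"{HOST}/system/console/configMgr/{PID}",
    data={
        "apply": "true",
        "action": "ajaxConfigManager",
        "propertylist": "tokenExpiration",
        "tokenExpiration": "3600000",  # backend token expiration in ms (1 hour)
    },
    auth=("admin", "admin"),  # placeholder credentials
)
```

Note that, as observed above, this governs the backend token only; the browser cookie remains a session cookie either way.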

How to avoid too many sessions stored?

I'm using Perl Catalyst with Catalyst::Plugin::Session::State::Cookie and Catalyst::Plugin::Session::Store::Redis. I have at most 2,000 users logged in, but more than 2 million keys in my Redis store.
Most of the authentication is done through an API key. I wonder if each API call gets a new session created and stored (there is likely no cookie in the API call), or if every new visitor to the web site gets a session created automatically.
It looks like a solution would be to set a very short expiration by default (a few minutes), and override it with a longer expiration when users log in through the web interface.
I was wondering what is the best way to keep the number of stored sessions to a minimum.
Redis's key timeout (TTL) is meant for this purpose. Unless you have a specific, pressing use case for preventing your sessions from expiring (I can't see any), you should set it to a practical time limit (default: 300).
However, this had problems in older versions of Redis, so before testing the feature you should install the latest Redis.
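The strategy from the question, a short default TTL that is extended once a user actually logs in through the web interface, is easy to express directly against Redis. A language-agnostic sketch in Python with the redis client (the key prefix and TTL values are placeholders); in Catalyst the same knobs are reachable through the session plugin's expires configuration:

```python
import json

import redis

r = redis.Redis()  # assumes a local Redis instance

SHORT_TTL = 180    # anonymous/API-key sessions expire after 3 minutes
LONG_TTL = 86400   # web logins keep their session for a day

def store_session(session_id: str, data: dict, logged_in: bool = False) -> None:
    """Write a session with a TTL so abandoned sessions clean themselves up."""
    ttl = LONG_TTL if logged_in else SHORT_TTL
    r.setex(f"session:{session_id}", ttl, json.dumps(data))

def promote_session(session_id: str) -> None:
    """Extend the TTL once the user logs in through the web UI."""
    r.expire(f"session:{session_id}", LONG_TTL)
```

With this in place, the two-million-keys problem solves itself: API calls that never log in leave sessions that evaporate within minutes.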