I've updated all my users to email_verified = true. The PostgreSQL database gets updated, but the admin console still shows the users as not having verified their emails. I'm making the changes through the CLI on Rancher.
The command I am using is:
UPDATE user_entity SET email_verified = true WHERE email_verified = false
The only help I was able to find here was Bulk update of users in KeyCloak.
Is there more complexity to updating users in bulk?
Are there other ways to mass-update users?
My guess is that the old data is still around in Keycloak's cache. Some options are:
Restart Keycloak
Clear the cache
Turn off caching permanently
For #2, you can clear the user or realm caches at runtime in the "Realm Settings -> Cache" section of the Keycloak admin console.
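If you would rather script option #2, the same caches can be cleared through the Admin REST API. Here is a rough sketch, assuming an admin access token in $TOKEN, a realm named myrealm, and the legacy /auth context path (drop /auth on newer Quarkus-based versions):
# Clear the cached user data for the realm; clear-realm-cache does the same for realm settings.
curl -X POST "http://localhost:8080/auth/admin/realms/myrealm/clear-user-cache" \
  -H "Authorization: Bearer $TOKEN"
curl -X POST "http://localhost:8080/auth/admin/realms/myrealm/clear-realm-cache" \
  -H "Authorization: Bearer $TOKEN"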
For #3, you can follow the instructions in the documentation below: https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.0/html/server_installation_and_configuration_guide/server_cache_configuration
8.3. Disabling Caching
To disable the realm or user cache, you must edit the keycloak-server.json file in your distribution. Where this file lives depends on your operating mode. Here's what the config looks like initially:
"userCache": {
"default" : {
"enabled": true
}
},
"realmCache": {
"default" : {
"enabled": true
}
},
To disable the cache set the enabled field to false for the cache you want to disable. You must reboot your server for this change to take effect.
8.4. Clearing Caches at Runtime
To clear the realm or user cache, go to the Red Hat Single Sign-On admin console Realm Settings→Cache Config page. On this page you can clear the realm cache or the user cache. This will clear the caches for all realms and not only the selected realm.
I'm building a multitenant application and I'm using Keycloak for authentication and authorization.
For each tenant, the idea is to have a dedicated Keycloak realm. Each tenant will have exactly the same roles and clients.
I have tried to export one existing realm, use it as a template and import it for a new tenant. Problem: I'm facing database constraint violations due to internal IDs.
Question: Is there an elegant way to achieve this, i.e. having a template to create a new realm?
Be sure that the feature for uploading scripts is enabled. For a docker-compose deployment, just add this:
command: -Dkeycloak.profile.feature.upload_scripts=enabled
Export your realm (the one to be used as a model)
Remove all lines containing "id:" and "_id:"
Search and replace the template realm name with the new realm name
In the Keycloak admin console, choose Add realm, provide the file, and that is all.
You can use the cleaned exported file as a template.
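As a rough sketch of the id-stripping and renaming steps above (assuming the export is in realm-export.json, the file has one key per line, and the template realm is called template-realm; all file and realm names here are illustrative):
# Drop every line holding an internal id and rename the realm in one pass.
sed -e '/"id" *:/d' -e '/"_id" *:/d' \
    -e 's/template-realm/new-tenant-realm/g' \
    realm-export.json > new-tenant-realm.json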
I can't comment due to reputation, but I'd like to add to @Youssouf Maiga's answer that you should also modify any fields that contain values under "authenticationFlowBindingOverrides".
Replace any entries that have values assigned under "direct_grant" or "browser", i.e.
"authenticationFlowBindingOverrides": {
"direct_grant": "f5d1wb45e-27eb-4466-937439-9cc8a615ad65e",
"browser": "5b23141a1c-7af8d-410e-a9b451f-0eec12039c72e9"
},
replaced with
"authenticationFlowBindingOverrides": {},
I tried cloning my realm based on this and got an error saying:
"Unable to resolve auth flow binding override for: direct_grant" when importing the modified realm export.
Keycloak version 16.1.1
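If you would rather automate that edit than do it by hand, a jq one-liner along these lines should work (a sketch, assuming the export is realm-export.json and has a top-level clients array):
# Blank out every client's flow binding overrides before importing the export.
jq '.clients[].authenticationFlowBindingOverrides = {}' realm-export.json > cleaned-realm.json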
What you could do is configure everything using the Keycloak Terraform provider. That way you only have to define the configuration once, in code, and then apply it using Terraform. See the documentation: https://registry.terraform.io/providers/mrparkers/keycloak/latest/docs
An advantage of this is that you can put your code in an SCM tool (e.g. git), so you can track your changes, and go back to a previous version if necessary.
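A minimal sketch of what that could look like, assuming the mrparkers/keycloak provider and a local Keycloak with an admin user; every name and credential below is a placeholder, not a recommendation:
cat > main.tf <<'EOF'
terraform {
  required_providers {
    keycloak = {
      source = "mrparkers/keycloak"
    }
  }
}

# The provider logs in through the admin-cli client using the password grant.
provider "keycloak" {
  client_id = "admin-cli"
  username  = "admin"
  password  = "admin"
  url       = "http://localhost:8080"
}

# One realm per tenant; duplicate or parameterize this block (e.g. with for_each) per tenant.
resource "keycloak_realm" "tenant_a" {
  realm   = "tenant-a"
  enabled = true
}
EOF
terraform init && terraform apply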
Objective:
Our objective is to update an entire realm given a JSON file.
Problem:
The issue at hand is that we cannot seem to update the realm in its entirety so that the client changes are included as well.
Actions taken:
Option 1:
Based on the Keycloak Admin CLI documentation, a Keycloak realm can be updated from a JSON file using the following command:
kcadm.sh update realms/demorealm -f demorealm.json
However, when making an update to a property within the clients section of the JSON file (i.e. a client's description), the change is not reflected within the Keycloak realm.
We also took a look at kcadm.sh help update. We tried to utilize the merge flag ("Merge new values with existing configuration on the server. Merge is automatically enabled unless --file is specified"). We do have a file specified and therefore tried to enable merging explicitly using the flag, but without success: the clients did not change as expected.
Option 2:
We have tried the partial import command found in Keycloak documentation
$ kcadm.sh create partialImport -r demorealm -s ifResourceExists=OVERWRITE -o -f demorealm.json
With ifResourceExists set to OVERWRITE, it accurately changes clients. However, it alters other realm configuration such as assigned user roles. For example: after manually creating a new user via the Keycloak UI and setting roles for the user, the roles are lost after running the command with the OVERWRITE flag set. Setting ifResourceExists to SKIP does not properly update values for a client, as the client is skipped altogether.
Question:
Is it possible, either with a different command or different flags, to update a Keycloak realm in its entirety with a single Keycloak admin command? Neither Option 1 nor Option 2 listed above worked for us. We want to avoid making individual update-client calls when updating the realm.
Notes:
We have properly authenticated and confirmed that changes made at the realm level are reflected in Keycloak.
After further research, the approach we decided to go with is to update realm-level settings with:
kcadm.sh update realms/demorealm -f demorealm.json
We then iterate over the clients and add/update them with:
kcadm.sh update clients/{clients-uuid} -f clientfile.json
Since the previous command does not update client roles, we must then use the following command to add the roles:
kcadm.sh update clients/{clients-uuid}/roles/{role-name} -f rolefile.json
Finally, to add in composite roles, we use this command:
kcadm.sh add-roles --cclientid {clientID} --rid {id of client role} --rolename {name of role to add}
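Putting the client part together, here is a rough sketch of that loop, assuming an already-authenticated kcadm session and per-client JSON files named after the clientId (the realm name and file layout are illustrative):
# Fetch every client UUID in the realm, then push the corresponding client definition.
for uuid in $(kcadm.sh get clients -r demorealm --fields id --format csv --noquotes); do
  clientId=$(kcadm.sh get clients/$uuid -r demorealm --fields clientId --format csv --noquotes)
  kcadm.sh update clients/$uuid -r demorealm -f clients/$clientId.json
done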
I created 2 databases and one user per database, but user 2 can insert into db1?
use shop
db.createUser({user: "appdev",pwd:"appdev", roles:["readWrite"]})
db.auth("appdev","appdev")
show collections
db.products.insertOne({name: "A book for appdev"})
db.logout()
use shop2
db.createUser({user: "appdev2",pwd:"appdev2", roles:["readWrite"]})
db.auth("appdev2","appdev2")
show collections
db.products.insertOne({name: "A book for appdev2"})
Here, I am still logged in as appdev2 and can insert into the shop db.
use shop
db.products.insertOne({name:"i-am-appdev2"})
{
"acknowledged" : true,
"insertedId" : ObjectId("5d8fdba878f7555a2060f1ec")
}
If your transcript is accurate, it looks like either:
You haven't enabled access control by setting security.authorization to enabled in your mongod config or starting mongod with --auth (see the config sketch just after this list).
You are logged in as a more privileged user (since you are able to run db.createUser() in the same session).
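For reference, a minimal sketch of enabling access control; the config path and service name vary by platform, and this assumes mongod.conf has no existing security section:
# Append the security section to the config, then restart; alternatively start mongod with --auth.
cat >> /etc/mongod.conf <<'EOF'
security:
  authorization: enabled
EOF
sudo systemctl restart mongod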
To investigate these possibilities in the mongo shell:
Make sure authorization is enabled via the output of db.serverCmdLineOpts(). If access control is enabled, the output of the parsed server configuration options should include a section like:
"security" : {
"authorization" : "enabled"
},
Check users & roles for the current session via db.runCommand({connectionStatus:1}). As noted in Authenticate a User:
Authenticating multiple times as different users does not drop the credentials of previously-authenticated users. This may lead to a connection having more permissions than intended by the user, and causes operations within a logical session to raise an error.
Access control is a separate option from configuring Role-Based Access Control (RBAC) to allow for scenarios like resetting admin access. Multiple concurrent logins provide additive permissions, but are mostly a legacy carryover from more simplistic versions of access control in earlier versions of MongoDB.
For more information on available security measures, see the Security Checklist in the MongoDB manual.
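To run both checks non-interactively, something along these lines works (a sketch using mongosh; the legacy mongo shell takes the same --eval form, and the connection strings are placeholders):
# 1. Confirm access control is on: look for security.authorization in the parsed options
#    (may require authenticating as an admin user if access control is already enabled).
mongosh "mongodb://localhost:27017/admin" --eval 'db.serverCmdLineOpts().parsed.security'
# 2. Show which users and roles the current connection is actually authenticated as.
mongosh "mongodb://localhost:27017/shop" --eval 'db.runCommand({connectionStatus: 1}).authInfo'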
I tried to edit the master realm of my Keycloak standalone installation via the Admin REST API. I already created a whole realm and everything works fine. If I now try to update the client roles inside the master realm, the server responds with "No content" but the data is not changed. What am I doing wrong?
relative url:
/auth/admin/realms/master/groups/654dc766-d307-4e44-9b6c-d53f16a2eedf
body:
{
    "id": "654dc766-d307-4e44-9b6c-d53f16a2eedf",
    "name": "TECHNICAL",
    "path": "/TECHNICAL",
    "attributes": null,
    "realmRoles": null,
    "clientRoles": {
        "test-client-realm": [
            "manage-realm",
            "manage-users",
            "view-realm",
            "view-users"
        ]
    },
    "subGroups": []
}
Updating a client goes via PUT /admin/realms/{realm}/clients/{id}, not via the URL stated in your question. In your example you would update a group instead.
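A rough sketch of such a request, assuming a valid admin access token in $TOKEN and a placeholder client UUID (the /auth prefix applies to the older WildFly-based distribution used in the question):
# Send the client JSON representation; here only the description, as a minimal example.
curl -X PUT "http://localhost:8080/auth/admin/realms/master/clients/<client-uuid>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"description": "updated via the admin API"}'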
We have a Mongo Database for testing purposes on a cloud server.
Recently, this server almost ran out of space (97% of the disk used), and that resulted in Mongo writes failing. I decided to resize the server to have more free space.
An important detail is that I set the auth parameter in the config to true, so each client had to authenticate before using the db. I thought this was normal. I created a user with the following command, which worked:
db.addUser( { user: "username",pwd: "password",roles: [ "userAdminAnyDatabase" ] } );
Now, what happened is that after the resize and the mongod restart, I cannot get any reads/writes to the database; it only works if I set auth = false in the config. I couldn't even add a user from localhost.
The other interesting thing is that I switched off auth and recreated the same user, and it succeeded, which means the user got lost!
OK, so I have lost the user after a restart. That's bad. What's worse is that I still can't get this user to authenticate from the remote clients.
I have no idea why this is happening or what went wrong.
The data that was originally created still exists; count() returns 111090914, which is about what is expected. I can also do find(), so the data is OK.