I’ve been looking into Keycloak as an on-prem IAM and SSO solution for my company. One thing I’m still unclear on from reading the documentation is whether Keycloak’s clustered mode can handle our requirements for instance federation across sites.
We have some remote manned sites that occasionally run critical telemetry-gathering processes. Our AD domain is replicated to those sites.
The issue is that there is a single internet link to the sites. If we had Keycloak at the main office and the internet link went down for a day, any software at the remote site that relies on Keycloak to authenticate would stop working (which would be a big problem).
Can we set up Keycloak in a cluster mode (i.e. putting an instance at each site), so that if this link goes down, remote users can connect to their local instance automatically and authenticate with local apps? And what happens when the connection is restored and the databases are out of sync - does Keycloak repair this automatically?
Cheers
In general the answer is "yes": you can set up two Keycloak instances in different locations and link them together in a cluster (under the hood this is Infinispan cache replication). But it depends on the details of your infrastructure.
The main goal of a Keycloak cluster is to replicate the session cache between nodes. So in the simplest case you can set up two nodes that point to the same DB instance; when the first node goes down, the second handles the whole job, but if the DB also goes down the second node is useless. In that case each site should have both its own Keycloak node and a DB replica (how to achieve DB replication is out of scope of this topic). A third option is to use the multitenancy feature of the Keycloak application adapter, where you secure the application with two separate Keycloak instances that know nothing about each other.
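As a rough illustration of the per-site option, here is a minimal Python sketch of an application at a remote site falling back to its site-local Keycloak node when the head-office node is unreachable. The hostnames, realm and client are made up, it uses the standard OpenID Connect password grant (which requires Direct Access Grants enabled on the client), and older Keycloak versions prefix the URL with /auth:

```python
import requests

# Hypothetical endpoints: the central instance at the main office and the
# replica running at the remote site. Older Keycloak versions prefix the
# path with /auth (e.g. https://host/auth/realms/...).
KEYCLOAK_NODES = [
    "https://keycloak.headoffice.example.com",
    "https://keycloak.site1.example.com",
]
REALM = "telemetry"          # made-up realm name
CLIENT_ID = "telemetry-app"  # made-up client; Direct Access Grants must be enabled on it

def get_token(username: str, password: str) -> str:
    """Try each Keycloak node in order and return the first access token obtained."""
    for base in KEYCLOAK_NODES:
        try:
            resp = requests.post(
                f"{base}/realms/{REALM}/protocol/openid-connect/token",
                data={"grant_type": "password", "client_id": CLIENT_ID,
                      "username": username, "password": password},
                timeout=5,
            )
            if resp.ok:
                return resp.json()["access_token"]
        except requests.RequestException:
            continue  # e.g. the WAN link is down and the head-office node is unreachable
    raise RuntimeError("no Keycloak node reachable")
```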
Start with this documentation article:
https://www.keycloak.org/docs/latest/server_installation/index.html#crossdc-mode
Related
I am currently designing the architecture for an HA WebRTC installation (using Liveswitch Server). Among the requirements is the setup of an auto-failover scenario for a PostgreSQL database.
Since I already intend to deploy nginx as a load balancer in a different part of the system, I was wondering whether the above PostgreSQL scenario can be accomplished using nginx as well.
IMPORTANT: notification of a failover event to the admin via email or similar is a must (of course).
Is this possible using nginx, and which setup should be chosen for the database instances (hot standby, warm standby, etc.)?
If not: what would be the solution of choice?
I'm using a standalone Keycloak server. I have created one realm and a user inside that realm. When I log the user in and then restart the Keycloak server, the session gets lost.
I am aware that Keycloak stores user session data in Infinispan. But is there any way I can save/persist this user session data?
Or should I create a multi-node cluster and replicate the Keycloak session data?
Please suggest what's best.
We also have this problem.
So far I think there are 2 solutions, neither of which is perfect:
1. Keep sessions in Infinispan
You can use an external Infinispan instance as described in the docs. This is cumbersome because you have to run and maintain that external instance.
If you don't want to use an external Infinispan instance, you can set CACHE_OWNERS_COUNT in the docker image to >=2. This will rebalance the cache between nodes and make sure that an entry is saved on at least CACHE_OWNERS_COUNT nodes. If you have many sessions (>1 million) you will run out of memory, and startup time will increase substantially because the cache needs to be rebalanced at each deploy.
Another issue is that you would lose the sessions when updating the Infinispan instance.
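A quick way to verify that the replication actually works is to log in, restart one node, and then try to refresh the token against the surviving node. A minimal sketch (made-up hostname, realm and client; assumes a public client with the password grant enabled, and older versions need the /auth prefix):

```python
import requests

BASE = "https://keycloak-node2.example.com"  # the node that stayed up (made-up hostname)
REALM = "myrealm"                            # hypothetical realm
CLIENT_ID = "my-client"                      # hypothetical public client

# Refresh token obtained from the original login against the node that was later restarted.
refresh_token = "<refresh token from the original login>"

resp = requests.post(
    f"{BASE}/realms/{REALM}/protocol/openid-connect/token",
    data={"grant_type": "refresh_token", "client_id": CLIENT_ID,
          "refresh_token": refresh_token},
)
# 200 means the user session was replicated to this node and survived the restart;
# 400 with "invalid_grant" means the session was lost.
print(resp.status_code, resp.json().get("error"))
```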
2. Use offline sessions
Offline sessions are kept in the DB, and some of them are kept in the offline_sessions_cache. You can limit the number of offline sessions Keycloak keeps in memory and stop preloading offline sessions for faster startup, as described here (a sketch of requesting an offline token follows the list of drawbacks below).
This also has drawbacks:
SSO will not work after the server is restarted
You cannot change any offline session notes
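For completeness, obtaining an offline session is just a matter of requesting the offline_access scope when logging in. A minimal sketch (made-up hostname, realm and client; assumes the password grant is enabled and the client is allowed to request offline_access):

```python
import requests

BASE = "https://keycloak.example.com"  # made-up hostname; older versions need the /auth prefix
REALM = "myrealm"                      # hypothetical realm
CLIENT_ID = "my-client"                # hypothetical client allowed to request offline_access

resp = requests.post(
    f"{BASE}/realms/{REALM}/protocol/openid-connect/token",
    data={"grant_type": "password", "client_id": CLIENT_ID,
          "username": "alice", "password": "secret",
          "scope": "offline_access"},  # this is what creates an offline session
)
tokens = resp.json()
# The returned refresh_token is an offline token: the corresponding offline session
# is persisted in the database and survives server restarts (with the drawbacks above).
print(tokens["refresh_token"][:40], "...")
```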
I think there are some PRs in Keycloak to get a persistent session store built in, so please keep an eye on the progress on the Keycloak GitHub page.
I have a cluster with 2 servers in HA. Is there some configuration so that when I make a change, for example to a user's password or to a role assignment, the change is applied immediately on both servers?
The problem is that when a user's password is changed, the change does not show up on the other server immediately. The same happens when a user is assigned a role mapping: it never updates on both servers until the server is rebooted.
OS: Linux (Ubuntu 16.04)
Keycloak version: 11.0
Thanks for the help
I can't tell from the first paragraph of your question whether Keycloak has ever propagated the changes to the other servers or not.
Does the setup you use usually propagate changes?
If not, it sounds like there are issues with your cluster setup.
Do the nodes discover each other? You can check the logs on startup; there is a good illustration on the Keycloak blog of how to check this.
In general it would be a good idea to look over the recommended clustering setup in the docs.
You could change the number of owners in the cluster, so that both nodes own the data before it is put in the database. That might help if the issue is that the changes are not propagated quickly enough.
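One way to check the propagation directly is to change the password through the Admin REST API on one node and immediately attempt a login against the other. This is only a rough sketch: the hostnames, realm, user ID, client and passwords are placeholders, the URLs assume the /auth context path used by Keycloak 11, and the test client needs Direct Access Grants enabled:

```python
import requests

NODE1 = "https://keycloak-node1.example.com/auth"  # node where the change is made (made-up hostname)
NODE2 = "https://keycloak-node2.example.com/auth"  # node where the change should be visible
REALM = "myrealm"                                  # hypothetical realm
USER_ID = "<user id from the admin console>"       # placeholder
NEW_PASSWORD = "new-password"                      # placeholder

# 1. Get an admin token from node 1 (admin-cli password grant in the master realm).
admin = requests.post(
    f"{NODE1}/realms/master/protocol/openid-connect/token",
    data={"grant_type": "password", "client_id": "admin-cli",
          "username": "admin", "password": "<admin password>"},
).json()

# 2. Reset the user's password on node 1 via the Admin REST API.
requests.put(
    f"{NODE1}/admin/realms/{REALM}/users/{USER_ID}/reset-password",
    headers={"Authorization": f"Bearer {admin['access_token']}"},
    json={"type": "password", "value": NEW_PASSWORD, "temporary": False},
)

# 3. Immediately try to log in with the new password against node 2
#    (the test client must have Direct Access Grants enabled).
resp = requests.post(
    f"{NODE2}/realms/{REALM}/protocol/openid-connect/token",
    data={"grant_type": "password", "client_id": "my-client",
          "username": "alice", "password": NEW_PASSWORD},
)
print("node 2 accepts the new password:", resp.status_code == 200)
```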
I am implementing a service broker for my SaaS application on Cloud Foundry.
On create-service of my SaaS application, I also create an instance of another service (say, service-A), i.e. a new service instance of service-A is created for every tenant that on-boards my application.
The details of the newly created service instance (service-A) are passed to my service broker via an environment variable.
To be able to process this newly injected environment variable, the service broker needs to be restaged/restarted.
This means downtime for the service broker for every newly on-boarded customer.
I have following questions:
1) How are these kinds of use cases handled in Cloud Foundry?
2) Why did Cloud Foundry choose environment variables to pass the information required to use a service? It seems limiting, as it requires an application restart.
As a first guess, your service could be some kind of API provided to a customer. This API must store the data it is sent in some database (e.g. MongoDB or MySQL). So MongoDB or MySQL would be what you call service-A.
Since you want the performance of the API endpoints for your customers to be independent of each other, you are provisioning dedicated databases for each of your customers, that is, for each of the service instances of your service.
You are right in that you would need to restage your service broker if you were to get the credentials to these databases from the environment of your service broker. Or at least you would have to re-read the VCAP_SERVICES environment variable. Yet there is another solution:
Use the CC-API to create the services, and bind them to whatever app you like. Then use the CC-API again to query the bindings of this app. This will include the credentials. Here is the link to the API docs on this endpoint:
https://apidocs.cloudfoundry.org/247/apps/list_all_service_bindings_for_the_app.html
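A rough sketch of calling that endpoint from Python; the API URL, app GUID and token are placeholders (the token can come from `cf oauth-token`), and this targets the v2 Cloud Controller API that the linked docs describe:

```python
import requests

CF_API = "https://api.system.example.com"                 # placeholder Cloud Controller URL
APP_GUID = "<guid of the app the services are bound to>"  # placeholder
TOKEN = "<output of `cf oauth-token`>"                    # placeholder, already includes "bearer "

resp = requests.get(
    f"{CF_API}/v2/apps/{APP_GUID}/service_bindings",
    headers={"Authorization": TOKEN},
)
for binding in resp.json()["resources"]:
    # Each binding carries the credentials of the bound service instance,
    # e.g. the connection details of the tenant's service-A instance.
    print(binding["entity"]["service_instance_guid"],
          binding["entity"]["credentials"])
```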
It sounds like you are not using services in the 'correct' manner. It's very hard to tell without more detail of your use case. For instance, why does your broker need to have this additional service attached?
To answer your questions:
1) Not like this. You're using service bindings to represent data, rather than using them as backing services. Many service brokers (I've written quite a few) need to dynamically provision things like Cassandra clusters, but they keep some state about which Cassandra clusters belong to which CF service in a data store of their own. The broker does not bind to each thing it is responsible for creating (see the sketch after point 2 below).
2) Because 12-factor applications should treat backing services as attached, static resources. It is not normal to, say, add a new MySQL database to a running application.
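As a very rough sketch of the bookkeeping pattern from point 1 - the broker provisions the tenant's backing resource itself and records which resource belongs to which service instance in its own store, instead of binding that resource to the broker app - here is a skeletal provision endpoint. Flask, the provision_backing_resource() helper and the SQLite table are illustrative stand-ins, not part of any real broker:

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

# The broker's own state store (SQLite is just a stand-in for illustration).
db = sqlite3.connect("broker_state.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS instances (instance_id TEXT PRIMARY KEY, credentials TEXT)")

def provision_backing_resource(instance_id: str) -> dict:
    """Stand-in for creating the tenant's dedicated service-A resource."""
    return {"uri": f"postgres://db-for-{instance_id}.internal:5432/app"}

@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    creds = provision_backing_resource(instance_id)
    # The broker itself records which resource belongs to which service instance,
    # instead of binding the resource to the broker app and restaging.
    db.execute("INSERT OR REPLACE INTO instances VALUES (?, ?)", (instance_id, repr(creds)))
    db.commit()
    return jsonify({}), 201
```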
So I am a little confused after reading the documentation.
I want to setup AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the domain.
Can I have AppFabric1 be the main host but also part of the cache cluster?
What about AppFabric2, or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e. set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it. Nothing about computers joined to a domain.
I want to setup AppFabric caching and hosting.
Caching and Hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously named Velocity. The cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host without restarting the cluster; the cluster configuration is kept in the "storage location" (XML or SQL Server).
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there are powerful PowerShell modules to do the same thing.
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (auto-start, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically you just have to configure the monitoring/persistence DB for each server.
Just try it!
When you are adding the second node to the AppFabric cluster, make sure to choose the option Join Cluster (instead of New Cluster) and point to the path of the share where you stored the configuration (assuming that you used FILE SHARE to store the cluster configuration). The share that you used should be accessible from AppFabric2.