I noticed that sessions that exceed the SSO Session Idle and SSO Session Max timeouts aren't immediately deleted. They seem to be invalidated, and therefore useless, but they are not removed right away: I can still see them in the Sessions tab of the admin console.
Since I can't find an explanation for this, or for how the mechanism works internally (I didn't look into the code), I was wondering if anyone could elaborate on what is going on. Is everything working as it should?
Keycloak relies heavily on Infinispan for caching. Many types of entities have dedicated caches configured for them, and sessions are no exception.
When starting Keycloak, you specify an operation mode and a configuration file (the latter via the -c parameter). For example, when I run Keycloak via Docker I get the following command line:
java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED -Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties -jar /opt/jboss/keycloak/jboss-modules.jar -mp /opt/jboss/keycloak/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/keycloak -Djboss.server.base.dir=/opt/jboss/keycloak/standalone -Djboss.bind.address=172.19.0.3 -Djboss.bind.address.private=172.19.0.3 -c=standalone-ha.xml -Dkeycloak.profile.feature.token_exchange=enabled -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled
You can see -D[Standalone] (the operation mode) and -c=standalone-ha.xml, which points to the XML configuration file.
In that file, you will find a section along these lines:
<subsystem xmlns="urn:jboss:domain:infinispan:11.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<local-cache name="realms">
<heap-memory size="10000"/>
</local-cache>
<local-cache name="users">
<heap-memory size="10000"/>
</local-cache>
<local-cache name="sessions"/>
<local-cache name="authenticationSessions"/>
<local-cache name="offlineSessions"/>
<local-cache name="clientSessions"/>
<local-cache name="offlineClientSessions"/>
<local-cache name="loginFailures"/>
<local-cache name="work"/>
<local-cache name="authorization">
<heap-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<heap-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<local-cache name="actionTokens">
<heap-memory size="-1"/>
<expiration interval="300000" max-idle="-1"/>
</local-cache>
</cache-container>
...
...
...
</subsystem>
You can try tweaking the expiration/lifespan attributes of the various session caches.
Have a look at the Cache Configuration section of the manual, and also at the Infinispan subsystem configuration schema (the xmlns declared on the subsystem element).
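For example, a minimal sketch that adds explicit expiration bounds to the sessions cache (the values are illustrative and in milliseconds; note that Keycloak also enforces its own realm-level SSO timeouts independently of these cache settings):
<local-cache name="sessions">
<!-- Illustrative values: drop entries 1 hour after creation, or 30 minutes after last access -->
<expiration lifespan="3600000" max-idle="1800000"/>
</local-cache>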
Recently, I hardened my Keycloak deployment to use a dedicated Infinispan cluster as a remote-store, adding an extra layer of persistence for Keycloak's various caches. The change itself went reasonably well, but afterwards we started seeing a lot of login failures with the expired_code error message:
WARN [org.keycloak.events] (default task-2007) type=LOGIN_ERROR, realmId=my-realm, clientId=null, userId=null, ipAddress=192.168.50.38, error=expired_code, restart_after_timeout=true
This error message is typically repeated dozens of times within a short period, all from the same IP address. The cause appears to be the end-user's browser redirecting infinitely on login until the browser itself stops the loop.
I have seen various GitHub issues (e.g. https://github.com/helm/charts/issues/8355) that document the same behavior, and the consensus seems to be that it is caused by the Keycloak cluster failing to discover its members correctly via JGroups.
This explanation makes sense when you consider that, in the default standalone-ha.xml configuration, some of the Keycloak caches are distributed across the Keycloak nodes. However, I have changed these caches to local caches with a remote-store pointing at my new Infinispan cluster, and I believe I have made some incorrect assumptions about how this works, which is what started this error.
Here is how my Keycloak caches are configured:
<subsystem xmlns="urn:jboss:domain:infinispan:7.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<transport lock-timeout="60000"/>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<local-cache name="sessions">
<remote-store cache="sessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="authenticationSessions">
<remote-store cache="authenticationSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="offlineSessions">
<remote-store cache="offlineSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="clientSessions">
<remote-store cache="clientSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="offlineClientSessions">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="loginFailures">
<remote-store cache="loginFailures" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="actionTokens">
<remote-store cache="actionTokens" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<replicated-cache name="work">
<remote-store cache="work" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</replicated-cache>
</cache-container>
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
<cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport lock-timeout="60000"/>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<invalidation-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</invalidation-cache>
<replicated-cache name="timestamps"/>
</cache-container>
</subsystem>
Note that most of this cache configuration is unchanged compared to the default standalone-ha.xml. The changes I made convert the following caches to local caches and point them at my remote Infinispan cluster:
sessions
authenticationSessions
offlineSessions
clientSessions
offlineClientSessions
loginFailures
actionTokens
work
Here is the outbound socket binding configuration for my remote-cache server:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<!-- Default socket bindings from standalone-ha.xml are not listed here for brevity -->
<outbound-socket-binding name="remote-cache">
<remote-destination host="${env.INFINISPAN_HOST}" port="${remote.cache.port:11222}"/>
</outbound-socket-binding>
</socket-binding-group>
Here is how my caches are configured on the Infinispan side:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default">
<transport lock-timeout="60000"/>
<global-state/>
<replicated-cache-configuration name="replicated-keycloak" mode="SYNC">
<locking acquire-timeout="3000" />
</replicated-cache-configuration>
<replicated-cache name="work" configuration="replicated-keycloak"/>
<replicated-cache name="sessions" configuration="replicated-keycloak"/>
<replicated-cache name="authenticationSessions" configuration="replicated-keycloak"/>
<replicated-cache name="clientSessions" configuration="replicated-keycloak"/>
<replicated-cache name="offlineSessions" configuration="replicated-keycloak"/>
<replicated-cache name="offlineClientSessions" configuration="replicated-keycloak"/>
<replicated-cache name="actionTokens" configuration="replicated-keycloak"/>
<replicated-cache name="loginFailures" configuration="replicated-keycloak"/>
</cache-container>
</subsystem>
I believe I have made some incorrect assumptions about how local caches with remote stores work, and I was hoping someone would be able to clear this up for me. My intention was to make the Infinispan cluster the source of truth for all of Keycloak's caches. By making every cache local, I assumed that data would be replicated to each Keycloak node through the Infinispan cluster, such that a write to the local authenticationSessions cache on keycloak-0 would be synchronously persisted to keycloak-1 through the Infinispan cluster.
What I believe is happening is that the write to a local cache on Keycloak is not synchronous with respect to persisting that value to the remote Infinispan cluster. In other words, when a write is performed to the authenticationSessions cache, it does not block while waiting for this value to be written to the Infinispan cluster, so an immediate read for this data on another Keycloak node results in a cache miss, locally and in the Infinispan cluster.
I'm looking for some help with identifying why my current configuration is causing this issue, and some clarification on the behavior of a remote-store - is there a way to get cache writes to a local cache backed by a remote-store to be synchronous? If not, is there a better way to do what I'm trying to accomplish here?
Some other potentially relevant details:
Both Keycloak and Infinispan are deployed to the same namespace in a Kubernetes cluster.
I am using KUBE_PING for JGroups discovery.
Using the Infinispan console, I can verify that the caches are replicated to all of the Infinispan nodes and contain some entries; they aren't completely unused.
If I add a new realm on one Keycloak node, it successfully shows up on the other Keycloak nodes, which leads me to believe that the work cache is being propagated across all Keycloak nodes.
If I log in on one Keycloak node, my session is visible on the other Keycloak nodes, which leads me to believe that the session-related caches are being propagated across all Keycloak nodes.
I'm using sticky sessions for Keycloak as a temporary fix for this, but I believe fixing these underlying cache issues is a more permanent solution.
Thanks in advance!
I will try to clarify some points to keep in mind when you configure Keycloak in a cluster.
On the subject of infinite redirects: I experienced a similar problem in development environments years ago. While the Keycloak team has fixed several bugs related to infinite loops (e.g. KEYCLOAK-5856, KEYCLOAK-5022, KEYCLOAK-4717, KEYCLOAK-4552, KEYCLOAK-3878), sometimes it happens due to configuration issues.
One thing to check, if the site is served over HTTPS, is that you are accessing the Keycloak server over HTTPS as well.
I remember suffering a similar infinite-loop problem when Keycloak was placed behind an HTTPS reverse proxy and the required headers were not propagated to Keycloak (the X-Forwarded-* headers). It was solved by setting up the environment properly. A similar problem can occur when node discovery in the cluster (JGroups) does not work correctly.
About the expired_code error message: I would verify that the clocks of all nodes are synchronized, since clock skew can lead to exactly this kind of expired token/code error.
Now that I understand your configuration better, using local-cache mode with a remote-store pointing to the Infinispan cluster does not seem inappropriate.
That said, a shared store (such as a remote-cache) is usually combined with an invalidation-cache, which avoids replicating the complete data across the cluster (see this comment, which may apply here: https://developer.jboss.org/message/986847#986847); even so, there may not be big differences compared with a distributed or invalidation cache.
I believe a distributed-cache with a remote-store would be a better fit (or an invalidation-cache, to avoid replicating heavy data to the owners). However, I cannot say for certain how a local-cache behaves with a shared remote store, since I have never tried this kind of configuration.
I would first test a distributed-cache or an invalidation-cache, given how they handle evicted/invalidated data. Normally, local caches do not synchronize with other nodes in the cluster. If the implementation keeps a local in-memory map, it is likely that, even when the data in the remote store is modified, those changes will not be reflected in some situations.
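For instance, a sketch of the invalidation variant for one of the session caches, reusing the remote-store definition from your own configuration (untested; treat it as a starting point for the comparison):
<invalidation-cache name="sessions">
<remote-store cache="sessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
</remote-store>
</invalidation-cache>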
I can give you a JMeter test file so that you can run your own tests against both configurations.
Returning to your configuration, you also have to take into account that replicated caches have certain limitations and are usually a little slower than distributed ones, which replicate data only to the defined owners (replicated caches write to all nodes). There is also a variant called scattered-cache that performs better but, for example, lacks transaction support (see the comparison chart at https://infinispan.org/docs/stable/user_guide/user_guide.html#which_cache_mode_should_i_use).
Replication usually performs well only in small clusters (under 8 or 10 servers), due to the number of replication messages that need to be sent. A distributed cache allows Infinispan to scale linearly by defining the number of replicas per entry.
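As a small illustration (the owners value is arbitrary), this distributed cache keeps two copies of each entry somewhere in the cluster, rather than a copy on every node:
<distributed-cache name="sessions" owners="2"/>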
The main reason to use a configuration like the one you are attempting, instead of the one proposed by Keycloak (standalone-ha.xml), is when you need to scale the Infinispan cluster independently of the application, or when you want to use Infinispan as a persistent store.
I will explain how Keycloak manages its caches and how it divides them into basically two or three groups, so you can better understand the configuration you need.
Usually, to configure Keycloak in a cluster, you simply start and configure Keycloak in HA mode, just as you would with a traditional Wildfly instance. If you compare the standalone.xml and standalone-ha.xml files that ship with the Keycloak installation, you will notice that support is added for JGroups and modcluster, and the caches that were previously local are distributed between the Wildfly/Keycloak (HA) nodes.
In detail:
the jgroups subsystem is added, which is responsible for connecting the cluster nodes and handling messaging/communication within the cluster. JGroups provides reliable network communication plus features such as node discovery, point-to-point and multicast communication, failure detection, and data transfer between cluster nodes.
the EJB3 cache goes from a SIMPLE cache (local, in-memory, without transaction handling) to a DISTRIBUTED one. However, in my experience extending the project, Keycloak does not require EJB3.
the "realms", "users", "authorization", and "keys" caches are kept local, since they are only used to reduce load on the database.
the "work" cache becomes REPLICATED, since it is the cache Keycloak uses to notify cluster nodes that a cache entry must be evicted/invalidated because its state has changed.
the "sessions", "authenticationSessions", "offlineSessions", "clientSessions", "offlineClientSessions", "loginFailures", and "actionTokens" caches become DISTRIBUTED, because they perform better than a replicated-cache (see https://infinispan.org/docs/stable/user_guide/user_guide.html#which_cache_mode_should_i_use): data only has to be replicated to the owners.
The other changes Keycloak proposes for its default HA configuration are distributing the "web" and "ejb" cache containers (shown above) and changing the "hibernate" cache to an invalidation-cache (like a local cache, but with synchronized invalidation).
I think your configuration should define "sessions", "authenticationSessions", "offlineSessions", "clientSessions", "offlineClientSessions", "loginFailures", and "actionTokens" as distributed-cache instead of local-cache. However, because you use a shared remote store, you should test it to see how it behaves, as I said before.
Also, the cache named "work" should be a replicated-cache, and the others ("keys", "authorization", "realms", and "users") should remain local-cache.
On the Infinispan cluster side you can define the caches as distributed-cache (or keep replicated-cache).
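For example, a sketch of the distributed variant on the Infinispan side, mirroring your replicated-keycloak template (the configuration name and owners value are illustrative):
<distributed-cache-configuration name="distributed-keycloak" mode="SYNC" owners="2">
<locking acquire-timeout="3000"/>
</distributed-cache-configuration>
<distributed-cache name="sessions" configuration="distributed-keycloak"/>
<distributed-cache name="clientSessions" configuration="distributed-keycloak"/>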
Remember that:
In a replicated cache all nodes in a cluster hold all keys, i.e. if a key exists on one node, it will also exist on all other nodes. In a distributed cache, a number of copies are maintained to provide redundancy and fault tolerance, however this is typically far fewer than the number of nodes in the cluster. A distributed cache provides a far greater degree of scalability than a replicated cache.
A distributed cache is also able to transparently locate keys across a cluster, and provides an L1 cache for fast local read access of state that is stored remotely. You can read more in the relevant User Guide chapter.
Infinispan doc. ref: cache mode
As the Keycloak (6.0) documentation says:
Keycloak has two types of caches. One type of cache sits in front of the database to decrease load on the DB and to decrease overall response times by keeping data in memory. Realm, client, role, and user metadata is kept in this type of cache. This cache is a local cache. Local caches do not use replication even if you are in the cluster with more Keycloak servers. Instead, they only keep copies locally and if the entry is updated an invalidation message is sent to the rest of the cluster and the entry is evicted. There is separate replicated cache work, which task is to send the invalidation messages to the whole cluster about what entries should be evicted from local caches. This greatly reduces network traffic, makes things efficient, and avoids transmitting sensitive metadata over the wire.
The second type of cache handles managing user sessions, offline tokens, and keeping track of login failures so that the server can detect password phishing and other attacks. The data held in these caches is temporary, in memory only, but is possibly replicated across the cluster.
Doc. Reference: cache configuration
If you want to read another good document, take a look at the cross-dc section (Cross-Datacenter Replication Mode), especially the "Infinispan caches" section (3.4.6).
I tried this with Keycloak 6.0.1 and Infinispan 9.4.11.Final; here is my test configuration (based on the standalone-ha.xml file).
Keycloak infinispan subsystem:
<subsystem xmlns="urn:jboss:domain:infinispan:8.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<transport lock-timeout="60000"/>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<distributed-cache name="sessions" owners="1" remote-timeout="30000">
<remote-store cache="sessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="1" remote-timeout="30000">
<remote-store cache="authenticationSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="offlineSessions" owners="1" remote-timeout="30000">
<remote-store cache="offlineSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="clientSessions" owners="1" remote-timeout="30000">
<remote-store cache="clientSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="offlineClientSessions" owners="1" remote-timeout="30000">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="loginFailures" owners="1" remote-timeout="30000">
<remote-store cache="loginFailures" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<replicated-cache name="work"/>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<distributed-cache name="actionTokens" owners="1" remote-timeout="30000">
<remote-store cache="actionTokens" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
<object-memory size="-1"/>
<expiration max-idle="-1" interval="300000"/>
</distributed-cache>
</cache-container>
</subsystem>
Keycloak socket bindings:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
<socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
<socket-binding name="jgroups-mping" interface="private" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
<socket-binding name="jgroups-tcp" interface="private" port="7600"/>
<socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
<socket-binding name="modcluster" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
<outbound-socket-binding name="remote-cache">
<remote-destination host="my-server-domain.com" port="11222"/>
</outbound-socket-binding>
<outbound-socket-binding name="mail-smtp">
<remote-destination host="localhost" port="25"/>
</outbound-socket-binding>
</socket-binding-group>
Infinispan cluster configuration:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional">
<transaction mode="NON_XA" locking="PESSIMISTIC"/>
</distributed-cache-configuration>
<distributed-cache-configuration name="async" mode="ASYNC"/>
<replicated-cache-configuration name="replicated"/>
<distributed-cache-configuration name="persistent-file-store">
<persistence>
<file-store shared="false" fetch-state="true"/>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="indexed">
<indexing index="LOCAL" auto-config="true"/>
</distributed-cache-configuration>
<distributed-cache-configuration name="memory-bounded">
<memory>
<binary size="10000000" eviction="MEMORY"/>
</memory>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-file-store-passivation">
<memory>
<object size="10000"/>
</memory>
<persistence passivation="true">
<file-store shared="false" fetch-state="true">
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</file-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-file-store-write-behind">
<persistence>
<file-store shared="false" fetch-state="true">
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</file-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-rocksdb-store">
<persistence>
<rocksdb-store shared="false" fetch-state="true"/>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-jdbc-string-keyed">
<persistence>
<string-keyed-jdbc-store datasource="java:jboss/datasources/ExampleDS" fetch-state="true" preload="false" purge="false" shared="false">
<string-keyed-table prefix="ISPN">
<id-column name="id" type="VARCHAR"/>
<data-column name="datum" type="BINARY"/>
<timestamp-column name="version" type="BIGINT"/>
</string-keyed-table>
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</string-keyed-jdbc-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache name="default"/>
<replicated-cache name="repl" configuration="replicated"/>
<replicated-cache name="work" configuration="replicated"/>
<replicated-cache name="sessions" configuration="replicated"/>
<replicated-cache name="authenticationSessions" configuration="replicated"/>
<replicated-cache name="clientSessions" configuration="replicated"/>
<replicated-cache name="offlineSessions" configuration="replicated"/>
<replicated-cache name="offlineClientSessions" configuration="replicated"/>
<replicated-cache name="actionTokens" configuration="replicated"/>
<replicated-cache name="loginFailures" configuration="replicated"/>
</cache-container>
</subsystem>
P.S. Change the "owners" attribute from 1 to your preferred value.
I hope this helps.
Great exchange here, guys. Incredibly, I had exactly the same assumptions as you, Michael: I configured my local-cache to use a remote-store and expected that keys would always be read from and written to the remote store, but apparently that is not how it works.
Sadly, from all the exchange here, I couldn't find out why that is, or why we can't configure the local Infinispan to serve only as a proxy to a remote Infinispan, which would keep these instances stateless and make redeployment easier.
We are running Keycloak (v4.4, standalone mode) inside 2 Docker containers. We want these containers to be stateless, so we must persist all cached data to a backing store (either a database or another caching solution such as Redis). We cannot allow cached data to exist only in memory, because either of our containers may be destroyed at any time.
Ideally, we would like to persist cached data to our own Redis instance. Since Keycloak uses Infinispan, this appears to be the way to configure Infinispan to use Redis: http://infinispan.org/docs/cachestores/redis/.
Naively, I tried to have Keycloak store session information in Redis by updating my standalone-4.4.0.xml file to look like this (notice the redis-store element inside the sessions cache):
<subsystem xmlns="urn:jboss:domain:infinispan:6.0">
<cache-container name="keycloak">
<local-cache name="sessions">
<persistence passivation="false">
<redis-store xmlns="urn:infinispan:config:store:redis:8.0"
topology="server" socket-timeout="10000" connection-timeout="10000">
<redis-server host="server1" />
<connection-pool min-idle="6" max-idle="10" max-total="20" min-evictable-idle-time="30000" time-between-eviction-runs="30000" />
</redis-store>
</persistence>
</local-cache>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<local-cache name="authenticationSessions"/>
<local-cache name="offlineSessions"/>
<local-cache name="clientSessions"/>
<local-cache name="offlineClientSessions"/>
<local-cache name="loginFailures"/>
<local-cache name="work"/>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<local-cache name="actionTokens">
<object-memory size="-1"/>
<expiration max-idle="-1" interval="300000"/>
</local-cache>
</cache-container>
<cache-container name="server" default-cache="default" module="org.wildfly.clustering.server">
<local-cache name="default">
<transaction mode="BATCH"/>
</local-cache>
</cache-container>
<cache-container name="web" default-cache="passivation" module="org.wildfly.clustering.web.infinispan">
<local-cache name="passivation">
<persistence passivation="true">
<redis-store xmlns="urn:infinispan:config:store:redis:8.0"
topology="server" socket-timeout="10000" connection-timeout="10000">
<redis-server host="server1" />
<connection-pool min-idle="6" max-idle="10" max-total="20" min-evictable-idle-time="30000" time-between-eviction-runs="30000" />
</redis-store>
</persistence>
</local-cache>
</cache-container>
<cache-container name="ejb" aliases="sfsb" default-cache="passivation" module="org.wildfly.clustering.ejb.infinispan">
<local-cache name="passivation">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="true" purge="false"/>
</local-cache>
</cache-container>
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<local-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="timestamps"/>
</cache-container>
</subsystem>
But when I start Keycloak, I get this error:
'persistence' isn't an allowed element here.
Question: Is there a straightforward way to configure Keycloak to save cached data in Redis or another persistent data store?
The subsystem version urn:jboss:domain:infinispan:6.0 doesn't know about the schema used in your XML, so you would have to either update the subsystem or, if you are using the latest Keycloak image (6.0.1), it may be easier to implement a new InfinispanConnectionProviderFactory, which basically involves running this against Wildfly:
/subsystem=keycloak-server/spi=connectionsInfinispan/:remove()
/subsystem=keycloak-server/spi=connectionsInfinispan/:add(default-provider=custom)
/subsystem=keycloak-server/spi=connectionsInfinispan/provider=custom/:add(properties={},enabled=true)
For that, of course, you would have to implement an extension and deploy it. But then, at the code level, you can use the full power of the latest Infinispan.
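If everything goes well, the keycloak-server subsystem in your configuration should end up with a section roughly like this (a sketch; "custom" stands for the provider id of your own implementation):
<spi name="connectionsInfinispan">
<default-provider>custom</default-provider>
<provider name="custom" enabled="true"/>
</spi>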
I see that you want to use Redis; that brings another big problem with it. Please read this answer, https://stackoverflow.com/a/57362238/571689, where I describe the problems you will run into.
I'm using Wildfly 10, but I do not want to use the DistributableSessions that Wildfly uses out of the box (I am having some session-handling issues and need to debug things at a basic level). I see that Undertow has an InMemorySessionManager which I would rather use instead, but I haven't been able to figure out how to specify a different SessionManager.
I've tried to configure my Wildfly cache as a local cache:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default" mode="SYNC">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
<cache-container name="web" default-cache="passivation" module="org.wildfly.clustering.web.infinispan">
<local-cache name="passivation">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="true" purge="false"/>
</local-cache>
<local-cache name="persistent">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="false" purge="false"/>
</local-cache>
</cache-container>
...
...
However, when debugging my application, I still see that Wildfly is using the DistributableSessionManager and DistributableSessions.
Is there any way to enable Undertow's InMemorySessionManager instead? Do I have to go to the effort of creating my own ServletExtension and factory and registering it in META-INF/services/io.undertow.servlet.ServletExtension, or is there an out-of-the-box way to enable existing functionality via the config file? Or do the required classes already exist as part of the Undertow/Wildfly packaging?
There are only two conditions that result in the use of the distributed session manager:
1. A <distributable/> element in web.xml
2. Using shared sessions across web applications within an ear, via shared-session-config.xml (see the sketch below)
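For reference, condition #2 is triggered by the presence of a descriptor along these lines in the ear's META-INF (an illustrative sketch; the max-active-sessions value is arbitrary):
<shared-session-config xmlns="urn:jboss:shared-session-config:1.0">
<max-active-sessions>10</max-active-sessions>
</shared-session-config>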
Given that you've already stated that #1 is not the case, I'll assume #2. To disable the use of the distributed session manager for shared sessions, remove the org.wildfly.clustering.web.undertow module from your distribution.
We are working with this platform:
JBoss 6.1.0.GA
Modeshape 3.6.0
I just need to create a new workspace and put the images, JavaScript, and other files we need for a webapp we are developing inside it.
I tried to connect via WebDAV to our ModeShape repository and create a new test directory inside it, but I always receive this exception:
2015-02-03 16:47 WARN [org.modeshape.web.jcr.webdav.ModeShapeWebdavStore] (http-/0.0.0.0:8021-1) Cannot obtain a session for the repository 'repository': The workspace test was not found
I looked on Stack Overflow and in the official ModeShape guide, but I still cannot work out how to do this seemingly easy task.
There seems to be no documentation that explains how to manually create a new workspace in a repository.
Here is the cache configuration from the standalone.xml I'm using:
<subsystem xmlns="urn:jboss:domain:infinispan:1.4">
<cache-container name="hibernate" default-cache="local-query" module="org.jboss.as.jpa.hibernate:4">
<local-cache name="entity">
<transaction mode="NON_XA"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="local-query">
<transaction mode="NONE"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="timestamps">
<transaction mode="NONE"/>
<eviction strategy="NONE"/>
</local-cache>
</cache-container>
<cache-container name="modeshape" default-cache="repository" module="org.modeshape">
<local-cache name="repository">
<transaction mode="NON_XA"/>
<string-keyed-jdbc-store datasource="java:/jdbc/blablablaDatasource" shared="true" passivation="false" purge="false">
<property name="databaseType">
oracle
</property>
<property name="createTableOnStart">
true
</property>
<string-keyed-table prefix="CONTENT_REPO_STRING">
<id-column name="id_column" type="VARCHAR2(255)"/>
<data-column name="data_column" type="BLOB"/>
<timestamp-column name="timestamp_column" type="NUMBER(20)"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</local-cache>
</cache-container>
<cache-container name="binary_cache_container" default-cache="binary_fs">
<local-cache name="binary_fs">
<transaction mode="NON_XA"/>
<string-keyed-jdbc-store datasource="java:/jdbc/blablablaDatasource" shared="true" preload="false" passivation="false" purge="false">
<write-behind flush-lock-timeout="1" modification-queue-size="1024" shutdown-timeout="25000" thread-pool-size="1"/>
<property name="databaseType">
oracle
</property>
<string-keyed-table prefix="CONTENT_REPO">
<id-column name="id_column" type="VARCHAR(255)"/>
<data-column name="data_column" type="BLOB"/>
<timestamp-column name="timestamp_column" type="NUMBER(20)"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</local-cache>
<local-cache name="binary_fs_meta">
<transaction mode="NON_XA"/>
<string-keyed-jdbc-store datasource="java:/jdbc/blablablaDatasource" shared="true" preload="false" passivation="false" purge="false">
<write-behind flush-lock-timeout="1" modification-queue-size="1024" shutdown-timeout="25000" thread-pool-size="1"/>
<property name="databaseType">
oracle
</property>
<string-keyed-table prefix="CONTENT_REPO">
<id-column name="id_column" type="VARCHAR(255)"/>
<data-column name="data_column" type="BLOB"/>
<timestamp-column name="timestamp_column" type="NUMBER(20)"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</local-cache>
</cache-container>
</subsystem>
and here is the ModeShape configuration:
<subsystem xmlns="urn:jboss:domain:modeshape:1.0">
<repository name="repository" security-domain="modeshape-internal-security">
<workspaces default-workspace="default" allow-workspace-creation="true">
<workspace name="ops">
<initial-content>
initial-content-default.xml
</initial-content>
</workspace>
<workspace name="other"/>
<workspace name="extra">
<initial-content>
initial-content-default.xml
</initial-content>
</workspace>
<workspace name="default"/>
</workspaces>
<indexing rebuild-upon-startup="ALWAYS"/>
<cache-binary-storage data-cache-name="binary_fs" metadata-cache-name="binary_fs_meta" cache-container="binary_cache_container"/>
<sequencers>
<sequencer name="fixed-width-text-sequencer" classname="org.modeshape.sequencer.text.FixedWidthTextSequencer" module="org.modeshape.sequencer.text" commentMarker="#" path-expression="/files(//*.txt[*])/jcr:content[#jcr:data] => /derived/text/fixedWidth/$1"/>
<sequencer name="xml-sequencer" classname="xml" module="org.modeshape.sequencer.xml" path-expression="/files(//*.xml[*])/jcr:content[#jcr:data] => /derived/xml/$1"/>
<sequencer name="image-sequencer" classname="image" module="org.modeshape.sequencer.image" path-expression="/files(//*.(png|jpg|gif)[*])/jcr:content[#jcr:data] => /derived/image/$1"/>
</sequencers>
<text-extractors>
<text-extractor name="tika-extractor" classname="tika" module="org.modeshape.extractor.tika"/>
</text-extractors>
</repository>
</subsystem>
You can create a new workspace programmatically using the standard JCR API (see this Stack Overflow question), but you can also define workspaces in the ModeShape configuration file.
Since you're deploying ModeShape to JBoss EAP, you can configure new workspaces in the ModeShape subsystem configuration within the installation's standalone-modeshape.xml file. Here's an example (adapted from that configuration file) that defines three workspaces named default, other, and extra upon startup, provides some initial content for the workspace named default, and enables the programmatic creation of workspaces:
<repository name="artifacts">
<!-- ... -->
<!-- Define 3 workspaces to exist upon startup -->
<workspaces default-workspace="default" allow-workspace-creation="true">
<workspace name="default">
<initial-content>initial-content-default.xml</initial-content>
</workspace>
<workspace name="other"/>
<workspace name="extra"/>
</workspaces>
<!-- ... -->
</repository>
The structure of this XML fragment is dictated by the modeshape_1_0.xsd file in your EAP installation (or the modeshape_2_0.xsd file in Wildfly installations).
For those not deploying ModeShape in JBoss EAP (or Wildfly for ModeShape 4.x), you can do the same thing in ModeShape's JSON configuration file. For example, this defines exactly the same workspaces described above:
"workspaces" : {
"predefined" : ["other", "extra"],
"default" : "default",
"allowCreation" : true,
"initialContent" : {
"default" : "initial-content-default.xml"
}
},
See ModeShape's JSON schema for more details and options.
Also, when you log in to a Session, be sure to specify the workspace name correctly.
I managed to get it to work only after changing the platform to this:
JBoss 6.3.0.GA
Modeshape 3.8.1