ModeShape workspace creation - JBoss

We are working with this platform:
JBoss 6.1.0.GA
ModeShape 3.6.0
I just need to create a new workspace and put into it the images, JavaScript files, and other files I need for a webapp we are developing.
I tried to connect to our ModeShape repository via WebDAV and create a new test directory inside it, but I always receive this exception:
2015-02-03 16:47 WARN [org.modeshape.web.jcr.webdav.ModeShapeWebdavStore] (http-/0.0.0.0:8021-1) Cannot obtain a session for the repository 'repository': The workspace test was not found
I looked on Stack Overflow and in the official ModeShape guide, but I still can't figure out how to do this "easy" task.
There seems to be no documentation that explains how to manually create a new workspace in a repository.
Here is the cache configuration from the standalone.xml I'm using:
<subsystem xmlns="urn:jboss:domain:infinispan:1.4">
<cache-container name="hibernate" default-cache="local-query" module="org.jboss.as.jpa.hibernate:4">
<local-cache name="entity">
<transaction mode="NON_XA"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="local-query">
<transaction mode="NONE"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="timestamps">
<transaction mode="NONE"/>
<eviction strategy="NONE"/>
</local-cache>
</cache-container>
<cache-container name="modeshape" default-cache="repository" module="org.modeshape">
<local-cache name="repository">
<transaction mode="NON_XA"/>
<string-keyed-jdbc-store datasource="java:/jdbc/blablablaDatasource" shared="true" passivation="false" purge="false">
<property name="databaseType">
oracle
</property>
<property name="createTableOnStart">
true
</property>
<string-keyed-table prefix="CONTENT_REPO_STRING">
<id-column name="id_column" type="VARCHAR2(255)"/>
<data-column name="data_column" type="BLOB"/>
<timestamp-column name="timestamp_column" type="NUMBER(20)"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</local-cache>
</cache-container>
<cache-container name="binary_cache_container" default-cache="binary_fs">
<local-cache name="binary_fs">
<transaction mode="NON_XA"/>
<string-keyed-jdbc-store datasource="java:/jdbc/blablablaDatasource" shared="true" preload="false" passivation="false" purge="false">
<write-behind flush-lock-timeout="1" modification-queue-size="1024" shutdown-timeout="25000" thread-pool-size="1"/>
<property name="databaseType">
oracle
</property>
<string-keyed-table prefix="CONTENT_REPO">
<id-column name="id_column" type="VARCHAR(255)"/>
<data-column name="data_column" type="BLOB"/>
<timestamp-column name="timestamp_column" type="NUMBER(20)"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</local-cache>
<local-cache name="binary_fs_meta">
<transaction mode="NON_XA"/>
<string-keyed-jdbc-store datasource="java:/jdbc/blablablaDatasource" shared="true" preload="false" passivation="false" purge="false">
<write-behind flush-lock-timeout="1" modification-queue-size="1024" shutdown-timeout="25000" thread-pool-size="1"/>
<property name="databaseType">
oracle
</property>
<string-keyed-table prefix="CONTENT_REPO">
<id-column name="id_column" type="VARCHAR(255)"/>
<data-column name="data_column" type="BLOB"/>
<timestamp-column name="timestamp_column" type="NUMBER(20)"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</local-cache>
</cache-container>
</subsystem>
and here is the ModeShape configuration:
<subsystem xmlns="urn:jboss:domain:modeshape:1.0">
<repository name="repository" security-domain="modeshape-internal-security">
<workspaces default-workspace="default" allow-workspace-creation="true">
<workspace name="ops">
<initial-content>
initial-content-default.xml
</initial-content>
</workspace>
<workspace name="other"/>
<workspace name="extra">
<initial-content>
initial-content-default.xml
</initial-content>
</workspace>
<workspace name="default"/>
</workspaces>
<indexing rebuild-upon-startup="ALWAYS"/>
<cache-binary-storage data-cache-name="binary_fs" metadata-cache-name="binary_fs_meta" cache-container="binary_cache_container"/>
<sequencers>
<sequencer name="fixed-width-text-sequencer" classname="org.modeshape.sequencer.text.FixedWidthTextSequencer" module="org.modeshape.sequencer.text" commentMarker="#" path-expression="/files(//*.txt[*])/jcr:content[#jcr:data] => /derived/text/fixedWidth/$1"/>
<sequencer name="xml-sequencer" classname="xml" module="org.modeshape.sequencer.xml" path-expression="/files(//)*.xml[*]/jcr:content[#jcr:data] => /derived/xml/$1"/>
<sequencer name="image-sequencer" classname="image" module="org.modeshape.sequencer.image" path-expression="/files(//*.(png|jpg|gif)[*])/jcr:content[#jcr:data] => /derived/image/$1"/>
</sequencers>
<text-extractors>
<text-extractor name="tika-extractor" classname="tika" module="org.modeshape.extractor.tika"/>
</text-extractors>
</repository>
</subsystem>

You can create a new workspace programmatically using the standard JCR API (see this Stack Overflow question), but you can also define workspaces in the ModeShape configuration file.
Since you're deploying ModeShape to JBoss EAP, you can configure new workspaces in the ModeShape subsystem configuration within the installation's standalone-modeshape.xml file. Here's an example (which actually appears in that configuration file) that defines three workspaces named default, other, and extra to exist upon startup, defines some initial content for the default workspace, and controls programmatic workspace creation via the allow-workspace-creation attribute (set to false in this snippet):
<repository name="artifacts">
<!-- ... -->
<!-- Define 3 workspaces to exist upon startup -->
<workspaces default-workspace="default" allow-workspace-creation="false">
<workspace name="default">
<initial-content>initial-content-default.xml</initial-content>
</workspace>
<workspace name="other"/>
<workspace name="extra"/>
</workspaces>
<!-- ... -->
</repository>
The structure of this XML fragment is dictated by the modeshape_1_0.xsd file in your EAP installation (or the modeshape_2_0.xsd file in WildFly installations).
For those not deploying ModeShape in JBoss EAP (or WildFly for ModeShape 4.x), you can do the same thing in ModeShape's JSON configuration file. For example, this defines the same workspaces described above:
"workspaces" : {
"predefined" : ["other", "extra"],
"default" : "default",
"allowCreation" : true,
"initialContent" : {
"default" : "initial-content-default.xml"
}
},
See ModeShape's JSON schema for more details and options.
Also, be sure that when you log in to a Session you specify the workspace name correctly.
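If you go the programmatic route, here is a minimal sketch using only the standard JCR 2.0 API. How the Repository instance is obtained and which credentials are valid depend on your deployment; the "admin"/"admin" credentials and the workspace names below are placeholders:
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

public class CreateWorkspaceExample {
    // 'repository' must be obtained from your environment (e.g. a JNDI lookup on JBoss);
    // the credentials below are placeholders.
    public static void createAndUse(Repository repository) throws Exception {
        SimpleCredentials creds = new SimpleCredentials("admin", "admin".toCharArray());

        // Log into an existing workspace first, then create the new one.
        // This requires allow-workspace-creation="true" in the repository configuration.
        Session session = repository.login(creds, "default");
        try {
            session.getWorkspace().createWorkspace("test");
        } finally {
            session.logout();
        }

        // Later sessions must name the new workspace explicitly, otherwise you get
        // "The workspace test was not found"-style errors like the one in the question.
        Session testSession = repository.login(creds, "test");
        testSession.logout();
    }
}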

I managed to get it working only after changing the setup to this:
JBoss 6.3.0.GA
ModeShape 3.8.1

Related

Configure Infinispan for Keycloak 17

I want to run Keycloak 17 (Quarkus edition) in HA mode with the provided Infinispan. Because we are running Keycloak on several stages, I want to specify an Infinispan cluster name. As I understood from the documentation, I should configure this in the provided Infinispan config XML, ./conf/cache-ispn.xml
I altered
<transport lock-timeout="60000"/>
to
<transport cluster="myClusterName" lock-timeout="60000"/>
After that I ran .\kc.bat build --cache=ispn --cache-config-file=conf/cache-ispn.xml
and started up the server with .\kc.bat start
Sadly, the log output shows this:
[org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
[org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [MyHostName-14281|0] (1) [MyHostName-14281]
As seen in the logs, the cluster name still is the default "ISPN".
I already consulted the infinispan docs here: https://infinispan.org/docs/stable/titles/configuring/configuring.html
as well as the Keycloak docs:
https://www.keycloak.org/server/caching
https://www.keycloak.org/server/configuration
Can anyone help me out? Is this a bug related to Keycloak 17 or am I missing something in the infinispan config?
Full Infinispan Config:
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright 2019 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the #author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
xmlns="urn:infinispan:config:11.0">
<cache-container name="keycloak">
<transport cluster="myClusterName" lock-timeout="60000"/>
<local-cache name="realms">
<encoding>
<key media-type="application/x-java-object"/>
<value media-type="application/x-java-object"/>
</encoding>
<memory max-count="10000"/>
</local-cache>
<local-cache name="users">
<encoding>
<key media-type="application/x-java-object"/>
<value media-type="application/x-java-object"/>
</encoding>
<memory max-count="10000"/>
</local-cache>
<distributed-cache name="sessions" owners="2">
<expiration lifespan="-1"/>
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="2">
<expiration lifespan="-1"/>
</distributed-cache>
<distributed-cache name="offlineSessions" owners="2">
<expiration lifespan="-1"/>
</distributed-cache>
<distributed-cache name="clientSessions" owners="2">
<expiration lifespan="-1"/>
</distributed-cache>
<distributed-cache name="offlineClientSessions" owners="2">
<expiration lifespan="-1"/>
</distributed-cache>
<distributed-cache name="loginFailures" owners="2">
<expiration lifespan="-1"/>
</distributed-cache>
<local-cache name="authorization">
<encoding>
<key media-type="application/x-java-object"/>
<value media-type="application/x-java-object"/>
</encoding>
<memory max-count="10000"/>
</local-cache>
<replicated-cache name="work">
<expiration lifespan="-1"/>
</replicated-cache>
<local-cache name="keys">
<encoding>
<key media-type="application/x-java-object"/>
<value media-type="application/x-java-object"/>
</encoding>
<expiration max-idle="3600000"/>
<memory max-count="1000"/>
</local-cache>
<distributed-cache name="actionTokens" owners="2">
<encoding>
<key media-type="application/x-java-object"/>
<value media-type="application/x-java-object"/>
</encoding>
<expiration max-idle="-1" lifespan="-1" interval="300000"/>
<memory max-count="-1"/>
</distributed-cache>
</cache-container>
</infinispan>
I figured it out:
First, I copied cache-ispn.xml to a new file in the same directory and named it cache.xml.
Then I changed the build parameter --cache-config-file=conf/cache-ispn.xml to --cache-config-file=cache.xml.
So I just removed the folder prefix, since Keycloak seems to resolve the cache config file relative to the conf folder automatically; the resulting command sequence is sketched below.
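For reference, the working sequence under that assumption (same file names as above) looks like this; the custom cluster name should then show up in the JGroups startup log lines instead of the default ISPN:
copy conf\cache-ispn.xml conf\cache.xml
rem edit conf\cache.xml so the transport element reads <transport cluster="myClusterName" lock-timeout="60000"/>
.\kc.bat build --cache=ispn --cache-config-file=cache.xml
.\kc.bat start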

Keycloak: remote-store configuration for dedicated Infinispan cluster

Recently, I hardened my Keycloak deployment to use a dedicated Infinispan cluster as a remote-store for an extra layer of persistence for Keycloak's various caches. The change itself went reasonably well, although after making this change, we started seeing a lot of login errors due to the expired_code error message:
WARN [org.keycloak.events] (default task-2007) type=LOGIN_ERROR, realmId=my-realm, clientId=null, userId=null, ipAddress=192.168.50.38, error=expired_code, restart_after_timeout=true
This error message is typically repeated dozens of times all within a short period of time and from the same IP address. The cause of this appears to be the end-user's browser infinitely redirecting on login until the browser itself stops the loop.
I have seen various GitHub issues (https://github.com/helm/charts/issues/8355) that also document this behavior, and the consensus seems to be that this is caused by the Keycloak cluster not able to correctly discover its members via JGroups.
This explanation makes sense when you consider that some of the Keycloak caches are distributed across the Keycloak nodes in the default configuration within standalone-ha.xml. However, I have modified these caches to be local caches with a remote-store pointing to my new Infinispan cluster, and I believe I have made some incorrect assumptions about how this works, causing this error to start happening.
Here is how my Keycloak caches are configured:
<subsystem xmlns="urn:jboss:domain:infinispan:7.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<transport lock-timeout="60000"/>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<local-cache name="sessions">
<remote-store cache="sessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="authenticationSessions">
<remote-store cache="authenticationSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="offlineSessions">
<remote-store cache="offlineSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="clientSessions">
<remote-store cache="clientSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="offlineClientSessions">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="loginFailures">
<remote-store cache="loginFailures" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<local-cache name="actionTokens">
<remote-store cache="actionTokens" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</local-cache>
<replicated-cache name="work">
<remote-store cache="work" remote-servers="remote-cache" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</replicated-cache>
</cache-container>
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
<cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport lock-timeout="60000"/>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<invalidation-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</invalidation-cache>
<replicated-cache name="timestamps"/>
</cache-container>
</subsystem>
Note that most of this cache configuration is unchanged compared to the default standalone-ha.xml configuration file. The change I have made is converting the following caches to local caches that point to my remote Infinispan cluster:
sessions
authenticationSessions
offlineSessions
clientSessions
offlineClientSessions
loginFailures
actionTokens
work
Here is the configuration for my remote-cache server:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<!-- Default socket bindings from standalone-ha.xml are not listed here for brevity -->
<outbound-socket-binding name="remote-cache">
<remote-destination host="${env.INFINISPAN_HOST}" port="${remote.cache.port:11222}"/>
</outbound-socket-binding>
</socket-binding-group>
Here is how my caches are configured on the Infinispan side:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default">
<transport lock-timeout="60000"/>
<global-state/>
<replicated-cache-configuration name="replicated-keycloak" mode="SYNC">
<locking acquire-timeout="3000" />
</replicated-cache-configuration>
<replicated-cache name="work" configuration="replicated-keycloak"/>
<replicated-cache name="sessions" configuration="replicated-keycloak"/>
<replicated-cache name="authenticationSessions" configuration="replicated-keycloak"/>
<replicated-cache name="clientSessions" configuration="replicated-keycloak"/>
<replicated-cache name="offlineSessions" configuration="replicated-keycloak"/>
<replicated-cache name="offlineClientSessions" configuration="replicated-keycloak"/>
<replicated-cache name="actionTokens" configuration="replicated-keycloak"/>
<replicated-cache name="loginFailures" configuration="replicated-keycloak"/>
</cache-container>
</subsystem>
I believe I have made some incorrect assumptions about how local caches with remote stores work, and I was hoping someone would be able to clear this up for me. My intention was to make the Infinispan cluster the source of truth for all of Keycloak's caches. By making every cache local, I assumed that data would be replicated to each Keycloak node through the Infinispan cluster, such that a write to the local authenticationSessions cache on keycloak-0 would be synchronously persisted to keycloak-1 through the Infinispan cluster.
What I believe is happening is that the write to a local cache on Keycloak is not synchronous with respect to persisting that value to the remote Infinispan cluster. In other words, when a write is performed to the authenticationSessions cache, it does not block while waiting for this value to be written to the Infinispan cluster, so an immediate read for this data on another Keycloak node results in a cache miss, locally and in the Infinispan cluster.
I'm looking for some help with identifying why my current configuration is causing this issue, and some clarification on the behavior of a remote-store - is there a way to get cache writes to a local cache backed by a remote-store to be synchronous? If not, is there a better way to do what I'm trying to accomplish here?
Some other potentially relevant details:
Both Keycloak and Infinispan are deployed to the same namespace in a Kubernetes cluster.
I am using KUBE_PING for JGroups discovery.
Using the Infinispan console, I am able to verify that all of the caches are replicated to all of the Infinispan nodes and have some entries in them - they aren't completely unused.
If I add a new realm to one Keycloak node, it successfully shows up on other Keycloak nodes, which leads me to believe that the work cache is being propagated across all Keycloak nodes.
If I log in to one Keycloak node, my session remains on other Keycloak nodes, which leads me to believe that the session related caches are being propagated across all Keycloak nodes.
I'm using sticky sessions for Keycloak as a temporary fix for this, but I believe fixing these underlying cache issues is a more permanent solution.
Thanks in advance!
I will try to clarify some points to keep in mind when you configure Keycloak in a cluster.
Regarding the subject of "infinite redirects", I experienced a similar problem in development environments years ago. While the Keycloak team has fixed several bugs related to infinite loops (e.g. KEYCLOAK-5856, KEYCLOAK-5022, KEYCLOAK-4717, KEYCLOAK-4552, KEYCLOAK-3878), it sometimes still happens because of configuration issues.
One thing to check if the site is served over HTTPS is that the Keycloak server is also accessed over HTTPS.
I remember suffering a similar infinite-loop problem when Keycloak was placed behind an HTTPS reverse proxy and the required headers (X-Forwarded-*) were not propagated to Keycloak. It was solved by setting up the environment properly (a sketch of the relevant Undertow listener setting is below). A similar problem can happen when node discovery in the cluster (JGroups) does not work correctly.
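As a side note, when a WildFly-based Keycloak sits behind a TLS-terminating reverse proxy, the usual fix is to let Undertow honor those headers. A minimal sketch, assuming the default listener name from standalone-ha.xml (adjust to whatever your distribution already has, and apply the same attribute to the https-listener if you use one):
<!-- inside the existing undertow subsystem of standalone-ha.xml -->
<server name="default-server">
    <!-- proxy-address-forwarding tells Undertow to trust the X-Forwarded-For /
         X-Forwarded-Proto headers sent by the reverse proxy -->
    <http-listener name="default" socket-binding="http" proxy-address-forwarding="true"/>
    <host name="default-host" alias="localhost"/>
</server>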
Regarding the "expired_code" error message, I would verify that the clocks of all nodes are synchronized, since clock drift can lead to this kind of expired token/code error.
Now that I understand your configuration better, it does not seem inappropriate to use "local-cache" mode with a remote-store pointing to the Infinispan cluster.
That said, a shared store (such as a remote-cache) is usually combined with an invalidation-cache, so that the complete data does not have to be replicated across the cluster (see this comment, which applies here: https://developer.jboss.org/message/986847#986847); in practice there may not be big differences compared to a distributed or invalidation cache.
I believe a distributed-cache with a remote-store would fit better (or an invalidation-cache, to avoid replicating heavy data to the owners); however, I cannot say for sure how a "local-cache" behaves with a shared remote store, since I have never tried this kind of configuration.
I would first test a distributed-cache or an invalidation-cache, given how they handle evicted/invalidated data. Normally, local caches do not synchronize with other remote nodes in the cluster; if the implementation keeps a local map in memory, it is likely that even when the data in the remote store is modified, those changes will not be reflected in some situations.
I can give you a JMeter test file so you can run your own tests with both configurations.
Returning to your configuration, you also have to take into account that replicated caches have certain limitations and are usually a little slower than distributed ones, which only replicate the data to the defined owners (replicated caches write to all the nodes). There is also a variant called scattered-cache that performs better but, for example, lacks transaction support (see the comparison chart at https://infinispan.org/docs/stable/user_guide/user_guide.html#which_cache_mode_should_i_use).
Replication usually only performs well in small clusters (under 8 or 10 servers), due to the number of replication messages that need to be sent. A distributed cache allows Infinispan to scale linearly by defining a number of replicas per entry.
The main reason to build a configuration like the one you are attempting, instead of one similar to what Keycloak proposes (standalone-ha.xml), is when you need to scale the Infinispan cluster independently of the application, or when you want to use Infinispan as a persistent store.
I will explain how Keycloak manages its caches and how it divides them into basically two or three groups, so you can better understand the configuration you need.
Usually, to configure Keycloak in a cluster, you simply bring up and configure Keycloak in HA mode just as you would with a traditional WildFly instance. If you compare the standalone.xml and standalone-ha.xml that ship with the Keycloak installation, you will notice that support is added for JGroups and mod_cluster, and the caches that were previously local become distributed across the WildFly/Keycloak (HA) nodes.
In detail:
the jgroups subsystem is added, which is responsible for connecting the cluster nodes and carrying out the messaging/communication in the cluster. JGroups provides network communication capabilities, reliable communication, and other features like node discovery, point-to-point communication, multicast communication, failure detection, and data transfer between cluster nodes.
the EJB3 cache goes from a SIMPLE cache (in local memory, without transaction handling) to a DISTRIBUTED one. However, in my experience extending this project, Keycloak does not really require EJB3.
the "realms", "users", "authorization", and "keys" caches are kept local, since they are only used to reduce the load on the database.
the "work" cache becomes REPLICATED, since it is the one Keycloak uses to notify the cluster nodes that a cache entry must be evicted/invalidated because its state has been modified.
the "sessions", "authenticationSessions", "offlineSessions", "clientSessions", "offlineClientSessions", "loginFailures", and "actionTokens" caches become DISTRIBUTED, because distributed caches perform better than replicated ones (see https://infinispan.org/docs/stable/user_guide/user_guide.html#which_cache_mode_should_i_use): the data only has to be replicated to the owners.
The other changes Keycloak proposes in its default HA configuration are distributing the "web" and "ejb" cache containers (shown in your configuration above) and changing the "hibernate" entity cache to an "invalidation-cache" (like a local cache, but with invalidation sync).
I think your cache configuration should define "sessions", "authenticationSessions", "offlineSessions", "clientSessions", "offlineClientSessions", "loginFailures", and "actionTokens" as "distributed-cache" (instead of "local"). However, because you use a remote shared store, you should test it to see how it works, as I said before.
Also, the cache named "work" should be a "replicated-cache", and the others ("keys", "authorization", "realms", and "users") should be defined as "local-cache".
On the Infinispan cluster side, you can define these caches as "distributed-cache" (or "replicated-cache").
Remember that:
In a replicated cache all nodes in a cluster hold all keys, i.e. if a key exists on one node, it will also exist on all other nodes. In a distributed cache, a number of copies are maintained to provide redundancy and fault tolerance; however, this is typically far fewer than the number of nodes in the cluster. A distributed cache provides a far greater degree of scalability than a replicated cache.
A distributed cache is also able to transparently locate keys across a cluster, and provides an L1 cache for fast local read access of state that is stored remotely. You can read more in the relevant User Guide chapter.
Infinispan doc. ref: cache mode
As the Keycloak (6.0) documentation says:
Keycloak has two types of caches. One type of cache sits in front of the database to decrease load on the DB and to decrease overall response times by keeping data in memory. Realm, client, role, and user metadata is kept in this type of cache. This cache is a local cache. Local caches do not use replication even if you are in the cluster with more Keycloak servers. Instead, they only keep copies locally and if the entry is updated an invalidation message is sent to the rest of the cluster and the entry is evicted. There is separate replicated cache work, which task is to send the invalidation messages to the whole cluster about what entries should be evicted from local caches. This greatly reduces network traffic, makes things efficient, and avoids transmitting sensitive metadata over the wire.
The second type of cache handles managing user sessions, offline tokens, and keeping track of login failures so that the server can detect password phishing and other attacks. The data held in these caches is temporary, in memory only, but is possibly replicated across the cluster.
Doc. Reference: cache configuration
If you want to read another good document, take a look at the "cross-dc" section (Cross-Datacenter mode), especially section "3.4.6 Infinispan cache".
I tried this with Keycloak 6.0.1 and Infinispan 9.4.11.Final; here is my test configuration (based on the standalone-ha.xml file).
Keycloak infinispan subsystem:
<subsystem xmlns="urn:jboss:domain:infinispan:8.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
<transport lock-timeout="60000"/>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<distributed-cache name="sessions" owners="1" remote-timeout="30000">
<remote-store cache="sessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="1" remote-timeout="30000">
<remote-store cache="authenticationSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="offlineSessions" owners="1" remote-timeout="30000">
<remote-store cache="offlineSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="clientSessions" owners="1" remote-timeout="30000">
<remote-store cache="clientSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="offlineClientSessions" owners="1" remote-timeout="30000">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<distributed-cache name="loginFailures" owners="1" remote-timeout="30000">
<remote-store cache="loginFailures" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
</distributed-cache>
<replicated-cache name="work"/>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<distributed-cache name="actionTokens" owners="1" remote-timeout="30000">
<remote-store cache="actionTokens" remote-servers="remote-cache" socket-timeout="60000" fetch-state="false" passivation="false" preload="false" purge="false" shared="true">
<property name="rawValues">
true
</property>
<property name="marshaller">
org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory
</property>
</remote-store>
<object-memory size="-1"/>
<expiration max-idle="-1" interval="300000"/>
</distributed-cache>
</cache-container>
Keycloak socket bindings:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
<socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
<socket-binding name="http" port="${jboss.http.port:8080}"/>
<socket-binding name="https" port="${jboss.https.port:8443}"/>
<socket-binding name="jgroups-mping" interface="private" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
<socket-binding name="jgroups-tcp" interface="private" port="7600"/>
<socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
<socket-binding name="modcluster" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
<outbound-socket-binding name="remote-cache">
<remote-destination host="my-server-domain.com" port="11222"/>
</outbound-socket-binding>
<outbound-socket-binding name="mail-smtp">
<remote-destination host="localhost" port="25"/>
</outbound-socket-binding>
</socket-binding-group>
Infinispan cluster configuration:
<subsystem xmlns="urn:infinispan:server:core:9.4" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional">
<transaction mode="NON_XA" locking="PESSIMISTIC"/>
</distributed-cache-configuration>
<distributed-cache-configuration name="async" mode="ASYNC"/>
<replicated-cache-configuration name="replicated"/>
<distributed-cache-configuration name="persistent-file-store">
<persistence>
<file-store shared="false" fetch-state="true"/>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="indexed">
<indexing index="LOCAL" auto-config="true"/>
</distributed-cache-configuration>
<distributed-cache-configuration name="memory-bounded">
<memory>
<binary size="10000000" eviction="MEMORY"/>
</memory>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-file-store-passivation">
<memory>
<object size="10000"/>
</memory>
<persistence passivation="true">
<file-store shared="false" fetch-state="true">
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</file-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-file-store-write-behind">
<persistence>
<file-store shared="false" fetch-state="true">
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</file-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-rocksdb-store">
<persistence>
<rocksdb-store shared="false" fetch-state="true"/>
</persistence>
</distributed-cache-configuration>
<distributed-cache-configuration name="persistent-jdbc-string-keyed">
<persistence>
<string-keyed-jdbc-store datasource="java:jboss/datasources/ExampleDS" fetch-state="true" preload="false" purge="false" shared="false">
<string-keyed-table prefix="ISPN">
<id-column name="id" type="VARCHAR"/>
<data-column name="datum" type="BINARY"/>
<timestamp-column name="version" type="BIGINT"/>
</string-keyed-table>
<write-behind modification-queue-size="1024" thread-pool-size="1"/>
</string-keyed-jdbc-store>
</persistence>
</distributed-cache-configuration>
<distributed-cache name="default"/>
<replicated-cache name="repl" configuration="replicated"/>
<replicated-cache name="work" configuration="replicated"/>
<replicated-cache name="sessions" configuration="replicated"/>
<replicated-cache name="authenticationSessions" configuration="replicated"/>
<replicated-cache name="clientSessions" configuration="replicated"/>
<replicated-cache name="offlineSessions" configuration="replicated"/>
<replicated-cache name="offlineClientSessions" configuration="replicated"/>
<replicated-cache name="actionTokens" configuration="replicated"/>
<replicated-cache name="loginFailures" configuration="replicated"/>
</cache-container>
</subsystem>
P.S. Change the "owners" attribute from 1 to your preferred value.
I hope this is helpful.
Great exchange here, guys. Incredibly, I had exactly the same assumptions as you, Michael: I configured my local-cache to use a remote-store and expected that the keys would always be read from and written to the remote-store, but apparently that is not how it works.
Sadly, from all the exchange done here, I couldn't find out why this is - why we can't configure the local Infinispan to serve only as a proxy to a remote Infinispan, which would keep these instances stateless and make redeployment easier.

How to save cached Keycloak data to a persistent data store?

We are running Keycloak (v4.4, standalone mode) inside of 2 Docker containers. We wish these containers to be stateless, so we must persist all cached data to a backing store (either database or other caching solution such as Redis). We can not allow cached data to only exist in-memory, because either of our containers may be destroyed at any time.
Ideally, we would like to persist cached data to our own Redis instance. Since Keycloak uses Infinispan, it seems like this is the way to configure Infinispan to use Redis: http://infinispan.org/docs/cachestores/redis/.
Naively, I tried to have Keycloak store session information in Redis by updating my standalone-4.4.0.xml file to look like this (notice the redis-store element inside the sessions cache):
<subsystem xmlns="urn:jboss:domain:infinispan:6.0">
<cache-container name="keycloak">
<local-cache name="sessions">
<persistence passivation="false">
<redis-store xmlns="urn:infinispan:config:store:redis:8.0"
topology="server" socket-timeout="10000" connection-timeout="10000">
<redis-server host="server1" />
<connection-pool min-idle="6" max-idle="10" max-total="20" min-evictable-idle-time="30000" time-between-eviction-runs="30000" />
</redis-store>
</persistence>
</local-cache>
<local-cache name="realms">
<object-memory size="10000"/>
</local-cache>
<local-cache name="users">
<object-memory size="10000"/>
</local-cache>
<local-cache name="authenticationSessions"/>
<local-cache name="offlineSessions"/>
<local-cache name="clientSessions"/>
<local-cache name="offlineClientSessions"/>
<local-cache name="loginFailures"/>
<local-cache name="work"/>
<local-cache name="authorization">
<object-memory size="10000"/>
</local-cache>
<local-cache name="keys">
<object-memory size="1000"/>
<expiration max-idle="3600000"/>
</local-cache>
<local-cache name="actionTokens">
<object-memory size="-1"/>
<expiration max-idle="-1" interval="300000"/>
</local-cache>
</cache-container>
<cache-container name="server" default-cache="default" module="org.wildfly.clustering.server">
<local-cache name="default">
<transaction mode="BATCH"/>
</local-cache>
</cache-container>
<cache-container name="web" default-cache="passivation" module="org.wildfly.clustering.web.infinispan">
<local-cache name="passivation">
<redis-store xmlns="urn:infinispan:config:store:redis:8.0"
topology="server" socket-timeout="10000" connection-timeout="10000">
<redis-server host="server1" />
<connection-pool min-idle="6" max-idle="10" max-total="20" min-evictable-idle-time="30000" time-between-eviction-runs="30000" />
</redis-store>
</persistence>
</local-cache>
</cache-container>
<cache-container name="ejb" aliases="sfsb" default-cache="passivation" module="org.wildfly.clustering.ejb.infinispan">
<local-cache name="passivation">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="true" purge="false"/>
</local-cache>
</cache-container>
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<local-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="timestamps"/>
</cache-container>
</subsystem>
But when I start Keycloak, I get this error:
'persistence' isn't an allowed element here.
Question: Is there a straightforward way to configure Keycloak to save cached data in Redis or another persistent data store?
The subsystem version urn:jboss:domain:infinispan:6.0 doesn't know about that part of your XML schema, so you would have to either update the subsystem or, if you are using the latest Keycloak image (6.0.1), it may be easier to just implement a new InfinispanConnectionProviderFactory, which basically involves doing this with the WildFly CLI:
/subsystem=keycloak-server/spi=connectionsInfinispan/:remove()
/subsystem=keycloak-server/spi=connectionsInfinispan/:add(default-provider=custom)
/subsystem=keycloak-server/spi=connectionsInfinispan/provider=custom/:add(properties={},enabled=true)
For that, of course, you would have to implement an extension and deploy it. But then at the code level, you can use the full power of the latest Infinispan.
I see that you want to use Redis; that is another big problem. Please read this answer, https://stackoverflow.com/a/57362238/571689, where I describe the problems you will run into.

Application Fails to Load after enabling jdbc based session persistence

I am doing a POC with the JBoss EAP 7.1 release in which I have enabled DB-based session persistence. I have tested with the default cache-manager persistence and it works well, but somehow no session data is stored in the database schema, even though the table gets created at server start, which I can see. For this I am starting with the sample counter.war from the Red Hat knowledge base. I am using an Oracle 12cR1 database.
One more thing: I am also not able to see the application in the console, and the same thing happens when I run the CLI command to read the resource. When I try to see the deployment under Deployments, it simply complains:
Unable to load deployments
Unexpected HTTP response: 500 Request { "operation" => "read-children-resources", "address" => undefined, "child-type" => "deployment", "include-runtime" => true, "recursive" => true } Response Internal Server Error { "outcome" => "failed", "rolled-back" => true }
My server configuration in the standalone-ha.xml for the jdbc store is as below:
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
<cache-container name="web" default-cache="jdbc" module="org.wildfly.clustering.web.infinispan">
<transport channel="ee" lock-timeout="60000"/>
<local-cache name="concurrent">
<file-store passivation="true" purge="false"/>
</local-cache>
<invalidation-cache name="jdbc">
<binary-keyed-jdbc-store data-source="Session" dialect="ORACLE" fetch-state="false" passivation="false" preload="false" purge="false" shared="true" singleton="false">
<!-- <transaction mode="BATCH"/>-->
<property name="database-Type">
oracle
</property>
<binary-keyed-table prefix="sess">
<id-column name="ID" type="VARCHAR2(500)"/>
<data-column name="DATUM" type="BINARY"/>
<timestamp-column name="MAXINACTIVE" type="NUMBER"/>
<timestamp-column name="LASTACCESS" type="NUMBER"/>
<timestamp-column name="VERSION" type="NUMBER"/>
</binary-keyed-table>
</binary-keyed-jdbc-store>
</invalidation-cache>
</cache-container>
<cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
The tables that get created are shown below:
TNAME                            TABTYPE   CLUSTERID
BIN$cLKr2H7+eQ3gU1J2QgonwQ==$0   TABLE
SESS_counter_war                 TABLE
sess_counter_war                 TABLE
FYI, just for my own satisfaction I tried changing the prefix in standalone-ha.xml, which is why you can see two tables.
Please guide me if I am doing something wrong.
This is quite a late reply, almost a year later, but as they say, "better late than never" :)
I managed to bring up the application successfully after some days of facing the error initially. I realized that there were some major issues in my configuration. Basically, I had the problems below:
Using a distributed cache instead of an invalidation cache.
Using a binary-keyed store instead of a string-keyed store.
Invalid column names and data types.
Refer to the original post and answer here: https://developer.jboss.org/thread/278374
In EAP 7.1, you should configure session persistence using string-keyed-jdbc-store instead of binary-keyed-jdbc-store, which is deprecated in this version; a rough sketch follows below.
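Purely as an illustrative sketch (not the verified configuration from the linked thread), the jdbc cache from the question might look roughly like this with a string-keyed store. The datasource name and table prefix are carried over from the question, and the column names and types are assumptions that should be validated against the schema recommended in the thread above:
<cache-container name="web" default-cache="jdbc" module="org.wildfly.clustering.web.infinispan">
    <transport channel="ee" lock-timeout="60000"/>
    <invalidation-cache name="jdbc">
        <!-- string-keyed-jdbc-store replaces the deprecated binary-keyed-jdbc-store -->
        <string-keyed-jdbc-store data-source="Session" dialect="ORACLE" passivation="false" preload="false" purge="false" shared="true">
            <string-keyed-table prefix="sess">
                <id-column name="ID" type="VARCHAR2(500)"/>
                <data-column name="DATUM" type="BLOB"/>
                <timestamp-column name="VERSION" type="NUMBER"/>
            </string-keyed-table>
        </string-keyed-jdbc-store>
    </invalidation-cache>
</cache-container>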

How to enable Wildfly 10 InMemorySessionManager?

I'm using Wildfly 10, but do not want to use the DistributableSessions that Wildfly uses out of the box (I am having some session handling issues and need to debug things at a basic level). I see that Undertow has an InMemorySessionManager which I would rather use instead, but I haven't been able to figure out how to specify a different SessionManager.
I've tried to configure my Wildfly cache as a local cache:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default" mode="SYNC">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
<cache-container name="web" default-cache="passivation" module="org.wildfly.clustering.web.infinispan">
<local-cache name="passivation">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="true" purge="false"/>
</local-cache>
<local-cache name="persistent">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="false" purge="false"/>
</local-cache>
</cache-container>
...
...
However, while debugging my application, I still see that Wildfly is using the DistributableSessionManager and DistributableSessions.
Is there any way to enable Undertow's InMemorySessionManager instead? Do I have to go through the effort of creating my own ServletExtension and factory and configuring it in META-INF/services/io.undertow.servlet.ServletExtension, or is there an out-of-the-box way of enabling functionality that already exists via the config file? Or do the required classes already exist as part of the Undertow/Wildfly packaging?
There are only two conditions that result in the use of the distributed session manager:
Marking the application as <distributable/> in web.xml (a minimal example is sketched at the end of this answer)
Using shared sessions across web applications within an EAR, via shared-session-config.xml
Given that you've already stated that #1 is not the case, I'll assume #2. To disable the use of the distributed session manager for shared sessions, remove the org.wildfly.clustering.web.undertow module from your distribution.
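For reference, a minimal sketch of condition #1; this is the standard Servlet 3.1 deployment descriptor element, not anything specific to this setup:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <!-- Presence of this element makes Wildfly use the distributed session manager;
         removing it falls back to non-distributable session handling. -->
    <distributable/>
</web-app>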