Infinispan cache table not auto-created with Wildfly 15+ using invalidation-cache and jdbc-store - wildfly

I am attempting to use the jdbc-store type for my session cache in Wildfly 15+.
I ran the following commands to configure my standalone-full-ha.xml configuration file:
/subsystem=infinispan/cache-container=web/invalidation-cache=jdbc/:add(mode=SYNC)
/subsystem=infinispan/cache-container=web/invalidation-cache=jdbc/store=none:remove(){allow-resource-service-restart=true}
/subsystem=infinispan/cache-container=web/invalidation-cache=jdbc/store=jdbc/:add(data-source="...",passivation=false,shared=true){allow-resource-service-restart=true}
/subsystem=infinispan/cache-container=web/invalidation-cache=jdbc/component=transaction/:add()
/subsystem=infinispan/cache-container=web/invalidation-cache=jdbc/component=transaction/:write-attribute(name=mode,value=BATCH)
/subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=jdbc)
... which produces the following in the configuration file:
<cache-container name="web" default-cache="jdbc" module="org.wildfly.clustering.web.infinispan">
<transport lock-timeout="60000"/>
<invalidation-cache name="jdbc">
<transaction mode="BATCH"/>
<jdbc-store data-source="..." passivation="false" shared="true">
<table/>
</jdbc-store>
</invalidation-cache>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
It looks like I've configured the cache correctly using the JBoss CLI, but when the cluster instances start up, the session store table is not created in the database, even though everything else starts up properly.
My question is, is there something that I should be setting in the <table/> element that I'm just overlooking? Looking at the documentation, I don't see any required attributes, or anything about auto-creation.
I've looked at previous examples of how to achieve this in Wildfly 11, but the string-keyed-jdbc-store element no longer seems to be valid. I know the Infinispan documentation mentions the create-on-start attribute on the string-keyed-table element, but this configuration is so wildly different in Wildfly that it's completely unhelpful.

Add
<property name="createTableOnStart">true</property>
to your jdbc-store element, and make sure your web.xml contains <distributable/>.
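For illustration, a sketch of how the store from the question might look with that property in place (the data-source name stays elided as in the original; whether the subsystem passes the property through to the Infinispan store is exactly what this answer claims):

<invalidation-cache name="jdbc">
    <transaction mode="BATCH"/>
    <jdbc-store data-source="..." passivation="false" shared="true">
        <!-- property suggested above: asks the store to create its table when the cache starts -->
        <property name="createTableOnStart">true</property>
        <table/>
    </jdbc-store>
</invalidation-cache>

The deployment also has to be marked distributable, otherwise its sessions are never handed to the "web" cache container at all:

<!-- WEB-INF/web.xml (other content omitted) -->
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee">
    <distributable/>
</web-app>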

Related

Infinispan Cache store - "class not found" Exception in Wildfly addon module

I am using the Infinispan 11.0.7 cache store, configured with an XML file. I have imported that cache module into a Wildfly server and am using the dependency in my application. But when I try to fetch elements from the cache, I get a ClassNotFoundException.
I have used this configuration:
<local-cache name="TaskStoreCache" statistics="false">
<locking acquire-timeout="60000" />
<persistence passivation="false">
<rocksdb-store path=" C:\CacheStore\Data\TaskStoreCache" preload="false" shared="false"
purge="false" read-only="false">
<expiration path="C: \CacheStore\Expired\TaskStoreCache"/>
</rocksdb-store>
</persistence>
<memory max-count="500"/>
<encoding media-type="application/x-java-object"/>
</local-cache>
And I used this serialization:
<serialization marshaller="org.infinispan.commons.marshall.JavaSerializationMarshaller">
    <white-list>
        <regex>com.xyz.cache.*</regex>
        <regex>java.util.*</regex>
        <regex>java.lang.*</regex>
    </white-list>
</serialization>
I am sure it is a class loading related issue. Please help me with that.
You should double-check how the dependency is packaged in your application (WAR/EAR) and make sure its scope is correct.
If you did not package it within your application, the module has to be present in your WildFly installation and made visible to your application (e.g. via jboss-deployment-structure.xml).
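If you go the module route, a minimal WEB-INF/jboss-deployment-structure.xml that exposes a server module to the deployment looks roughly like this (the module name below is an assumption; use the name of the module you actually created for the cache store):

<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <!-- hypothetical module name: point this at your Infinispan / cache-store module -->
            <module name="org.infinispan" export="true"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>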

Wildfly 12 - Does HttpSession replication work only if it is configured with transaction?

I'm trying to use session replication but without lock contention.
So I set up the infinispan "web" cache-container like this:
<cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan">
<transport lock-timeout="60000"/>
<replicated-cache name="repl">
<locking isolation="READ_COMMITTED"/>
<transaction locking="OPTIMISTIC" mode="NONE"/>
<file-store/>
</replicated-cache>
</cache-container>
But the session is not replicating across the cluster.
It replicates only if I use mode="BATCH" and the default transaction locking, PESSIMISTIC. But this strategy does not perform well with long requests (about 2~3 seconds) and a lot of concurrent access: one request blocks the others, because the first request owns the session lock.
Is there any way to replicate the session across the cluster without using transaction and consequently without session lock?
Thanks!

Monitoring the activemq queue with Jolokia

I am using ActiveMQ JMS as the queueing mechanism and would like to monitor my queue, for example its size. I am using Jolokia as the bridge to perform REST requests against JMX.
The queue is configured in WildFly and works fine:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
...
</http-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
...
</http-acceptor>
<jms-queue name="QueueName" entries="java:/jms/queue/QueueName"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
</subsystem>
I have deployed the Jolokia WAR file on WildFly under deployments, and the following URL brings me a list of attributes:
localhost:8080/jolokia/list
Now I would like to read information about my queue, so I use the following REST request:
localhost:8080/jolokia/read/org.apache.activemq.artemis:module=JMS,type=Queue,name=*QueueName*
However, this throws back the following exception:
"stacktrace": "javax.management.InstanceNotFoundException: No MBean with pattern org.apache.activemq.artemis:module=JMS,type=Queue,name=*QueueName* found for reading attributes\n\tat org.jolokia.handler.ReadHandler.searchMBeans(ReadHandler.java:160)\n\tat org.jolokia.handler.ReadHandler.fetchAttributesForMBeanPattern(ReadHandler.java:126)\n\tat org.jolokia.handler.ReadHandler.doHandleRequest(ReadHandler.java:116)\n\tat org.jolokia.handler.ReadHandler.doHandleRequest(ReadHandler.java:37)\n\tat org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:161)\n\tat org.jolokia.backend.MBeanServerHandler
I have tried to enable JMX in standalone.xml by adding the jmx subsystem as follows:
<subsystem xmlns="urn:jboss:domain:jmx:1.3">
<remoting-connector use-management-endpoint="false"/>
</subsystem>
<connector socket-binding="jmx-remote" name="jmx-remote-connector" security- realm="ApplicationRealm"/>
<socket-binding name="jmx-remote" port="${jboss.jmx.port:7909}" fixed-port="false"/>
But it still does not work. Any help regarding the corrections for my approach or an alternative approach would be appreciated.
If *QueueName* contains a / it needs to be escaped using !/. For example jms/inputq must be converted to jms!/inputq.
If you want to avoid the escaping, you can pass the path as the query parameter p. The URL then ends up looking like /jolokia?p=/read/....
For more information about escaping, see https://jolokia.org/reference/html/protocol.html
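Applied to the MBean name used in the question, and taking the hypothetical queue name jms/inputq from above to show the effect of the slash, the two request styles would look like:

localhost:8080/jolokia/read/org.apache.activemq.artemis:module=JMS,type=Queue,name=jms!/inputq
localhost:8080/jolokia?p=/read/org.apache.activemq.artemis:module=JMS,type=Queue,name=jms/inputq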

Wildfly 10 Infinispan TreeCache is not working

I'm migrating from Wildfly 8.2 to 10.1. Unfortunately, I'm encountering problems with Infinispan TreeCache.
Here are several issues:
Invocation batching is no longer supported in the Wildfly 10 configuration.
Here's my config:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
...
<cache-container name="my_container" default-cache="my_tree_cache" jndi-name="java:jboss/my_container">
<transport lock-timeout="60000"/>
<local-cache name="my_cache"/>
<local-cache name="my_tree_cache" batching="true"/>
</cache-container>
</subsystem>
Error on startup:
> Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[345,17]
> Message: WFLYCTL0197: Unexpected attribute 'batching' encountered
If I remove "batching" attribute. I get this error:
com.daiwacm.modjsf.dataaccess.DataException: getTreeCache has failed for jndi value (my_tree_cache)
Caused by: org.infinispan.commons.CacheConfigurationException: invocationBatching is not
enabled for cache 'my_tree_cache'. Make sure this is enabled by calling configurationBuilder.invocationBatching().enable()
If I set batching programmatically:
Context context = new InitialContext();
CacheContainer cacheContainer = (CacheContainer) context.lookup(jndiName);
TreeCacheFactory tcf = new TreeCacheFactory();
Cache cache = cacheContainer.getCache(cacheName);
cache.getCacheManager().defineConfiguration(cacheName,
new ConfigurationBuilder().read(cache.getCacheConfiguration()).invocationBatching().enable().build());
TreeCache treeCache = tcf.createTreeCache(cache);
I get this error:
> Caused by: org.infinispan.commons.CacheConfigurationException:
> ISPN000381: This configuration is not supported for simple cache
> at org.infinispan.configuration.cache.ConfigurationBuilder.validateSimpleCacheConfiguration(ConfigurationBuilder.java:219)
> ...
> at org.infinispan.configuration.cache.InvocationBatchingConfigurationBuilder.build(InvocationBatchingConfigurationBuilder.java:12)
> ...
Don't set the configuration programmatically; I am not sure that is a valid approach, even though it seems to more or less work.
The configuration option you're looking for is
<local-cache name="my_cache">
<transaction transaction-mode="BATCH" />
</local-cache>
(please consult the schema in docs/schema/jboss-as-infinispan_4_0.xsd should you have any doubts)
The last problem is that, for local caches, WF automatically enables certain optimizations when possible. When you redefine the cache programmatically, this optimization (simple cache) is already turned on, so you would have to set:
new ConfigurationBuilder().read(cache.getCacheConfiguration())
.simpleCache(false)
.invocationBatching().enable()

Hibernate Search with Infinispan: how to store the index in a persistent cache store

Hibernate Search's default Infinispan configuration stores indexes in memory, so you have to reindex everything once you shut down the application.
I read in the Infinispan documentation that there is a way to store the index in an Infinispan file store, but after googling around and around I still don't know how to configure it.
You can check the Infinispan user guide, chapters 5 (Persistence) and 16 (Infinispan as a storage for Lucene indexes); chapter numbers are from Infinispan 8.2. Hibernate Search also provides a "default-hibernatesearch-infinispan.xml" file to start with. You basically need to add persistence to the metadata and actual index caches. Here is the one I use for the index cache:
<distributed-cache name="LuceneIndexesData" mode="SYNC" remote-timeout="25000">
<transaction mode="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
<indexing index="NONE"/>
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence passivation="false">
<jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false">
<jdbc:data-source jndi-url="java:comp/env/jdbc/..."/>
<jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE">
<jdbc:id-column name="ID" type="VARCHAR(255)"/>
<jdbc:data-column name="DATA" type="MEDIUMBLOB"/>
<jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/>
</jdbc:string-keyed-table>
<property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property>
<write-behind/>
</jdbc:string-keyed-jdbc-store>
</persistence>
</distributed-cache>
This example uses JDBC because it works on a dynamic cluster. You need to replace the "jdbc:string-keyed-jdbc-store" with a "file-store" if you want to store the index in a file.
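As a sketch of that file-based variant (attribute names follow the Infinispan 8.x persistence schema; the path is an assumption you would adjust to your environment):

<persistence passivation="false">
    <!-- hypothetical path: each node keeps its piece of the index on local disk -->
    <file-store path="/var/lib/infinispan/lucene-index" preload="true" fetch-state="true" purge="false"/>
</persistence>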