Infinispan 11.0.9 RocksDB cache store - Java process exits - WildFly

I am using the Infinispan 11.0.9 RocksDB cache store in a WildFly module. I observed that the Java process exits after adding and removing 1K elements in the cache. When I checked the Java process dump logs, I found the culprit to be librocksdbjni14410366623956126638.dll. I am using the default configuration of the RocksDB cache store.
Cache configuration:
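Roughly, the store is wired up like this (a simplified sketch of the default setup with placeholder paths and cache name, not my exact setup; the builder API is Infinispan's documented embedded one):

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.persistence.rocksdb.configuration.RocksDBStoreConfigurationBuilder;

public class RocksDbStoreSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder paths - illustrative only, not the actual module configuration.
        Configuration config = new ConfigurationBuilder()
            .persistence()
                .addStore(RocksDBStoreConfigurationBuilder.class)
                    .location("/var/cache/rocksdb/data")           // live entries on disk
                    .expiredLocation("/var/cache/rocksdb/expired") // queue for expired entries
            .build();

        DefaultCacheManager manager = new DefaultCacheManager();
        manager.defineConfiguration("testCache", config);
        manager.getCache("testCache").put("key", "value");  // the add/remove loop runs against this cache
        manager.stop();
    }
}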
Is it a bug? Or am I doing something wrong?

Related

Zookeeper dataLogDir config invalid

I built a ZooKeeper cluster and it runs well, but I found that the log directory I set in zoo.cfg does not seem to be used. Below is my configuration for the log and snapshot directories.
dataDir=/var/lib/zookeeper
dataLogDir=/var/lib/zookeeper/logs
However, the file zookeeper.out is generated in /var/lib/zookeeper rather than in the log subdirectory /var/lib/zookeeper/logs.
I restarted ZooKeeper on every server many times, but it made no difference.
This happens because zookeeper.out holds a different type of log (the application log), not the one controlled by dataLogDir, which refers to the transaction log.
dataLogDir
This option will direct the machine to write the transaction log to
the dataLogDir rather than the dataDir. This allows a dedicated log
device to be used, and helps avoid competition between logging and
snapshots.
By checking zkServer.sh you'll see that zookeeper.out is tied to _ZOO_DAEMON_OUT, which depends on ZOO_LOG_DIR, which is set by default in zkEnv.sh. Depending on your environment and ZooKeeper (ZK) version, the zookeeper.out file might land in different places (according to this answer, even in the working directory from which ZK is started).
For application logging, you're better off configuring the log4j.properties file, since ZK uses log4j.
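For example, assuming a stock ZooKeeper 3.4.x layout (the variable and property names below are the defaults shipped in zkEnv.sh and conf/log4j.properties; the path is simply reused from the question), redirecting the application log could look like this:

# environment read by zkServer.sh via zkEnv.sh (export in the shell or in conf/zookeeper-env.sh)
export ZOO_LOG_DIR=/var/lib/zookeeper/logs

# conf/log4j.properties - log to a rolling file instead of just the console
zookeeper.root.logger=INFO, ROLLINGFILE
zookeeper.log.dir=/var/lib/zookeeper/logs
zookeeper.log.file=zookeeper.log
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

The transaction log, by contrast, stays wherever dataLogDir points.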

Huge amount of memory is required to deploy a WAR file using jboss-cli

I use the WildFly application server. When deploying a WAR file using the Command-Line Interface (CLI), the process requires a JVM heap size greater than 10 times the WAR file size.
How can I reduce the memory consumed by jboss-cli during deployment?
Problem detail:
I have to deploy 8 WAR files of 100 MB each. This is done in one transaction using "batch" and "batch.run", and the memory consumed by the process exceeds 8 GB.
I'm using the batch behavior because I have remote injections between WARs, and I don't know the deployment order.
My question is: how can I reduce the memory consumed by WildFly when using jboss-cli? And if there is no way to reduce it, how can I determine the deployment order of the WARs? (E.g. if app1 injects a remote session bean from app2, then app2 must be deployed before app1.)
You can define JVM options in the $JAVA_OPTS environment variable, which is picked up by WildFly.
For the default JVM settings, take a brief look at bin/standalone.conf or bin/domain.conf.
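A minimal sketch, assuming the standard bin/standalone.conf layout (the heap values are placeholders to tune for your 8 x 100 MB deployments):

# bin/standalone.conf - this block is only applied when JAVA_OPTS is not already set in the environment
if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms512m -Xmx4g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=512m"
fi

# or override from the shell before starting the server
export JAVA_OPTS="-Xms512m -Xmx4g"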

Jobs removed from registry when Spring XD is killed

I'm running Spring XD as single-node for my Sandbox environment with a MySQL DB for the batch tables. If I kill -15 the Spring XD process, then all the current definitions for my jobs and streams are lost (in the case of the jobs, the XD_JOB_REGISTRY is apparently deleted). Consequently, if I start up Spring XD again, I have lost all the previous jobs and streams definitions.
I would like to know whether this is intentional in Spring XD, or maybe due to the fact that I run in single-node mode? Or is it a bug?
EDITED TO ADD THE GIST OF SERVERS.YML:
https://gist.github.com/emedina/486b52f11bc146203534
The job and stream definitions are stored in ZooKeeper, while the stats for any executed jobs are stored in the database. The single-node server uses an embedded ZooKeeper instance by default, and that's my guess as to why your definitions are gone after a restart. Try setting up a separate ZooKeeper instance with a permanent data location.
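Roughly, and assuming I'm remembering the servers.yml keys correctly (double-check them against the reference guide for your XD version; the hosts and ports are placeholders), pointing the single-node server at an external ZooKeeper would look like:

# config/servers.yml - use an external ZooKeeper ensemble instead of the embedded one
zk:
  client:
    connect: zk-host1:2181,zk-host2:2181,zk-host3:2181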

AppFabric cache not clearing on cluster restart

I have an AppFabric Cache installation (the cluster has just one node, viz. my local machine). I just saw the strangest behavior: I put something in the cache, then restarted the cache cluster using the AppFabric Cache PowerShell command Restart-CacheCluster. When I then try to retrieve the value from the cache, it's still there.
Has anyone seen this behavior? Is it due to some configuration setting I'm missing? This is not causing me problems, but the fact that it is not behaving the way I expect scares me in case other issues arise later.
The cache is using a SQL Server database for configuration (as opposed to an XML file).
As specified here:
The Stop-CacheCluster or Restart-CacheCluster cache cluster commands
cause all data to be flushed from the memory of all cache hosts in the
cluster.
No, that should not be possible, and I've never seen this kind of issue. I suggest you check the whole process.
Are you using a read-through provider? In that scenario, the cache detects the missing item and calls a specific provider to perform the data load. The item is then seamlessly returned to the cache client and will never be null.
Other things you may have to check:
Check the result of the Restart-CacheCluster cmdlet (success/failure)
Maybe a background task is still running, putting data into the cache
By using the Get-CacheStatistics cmdlet, you can check how many items are really in the cache (see the example below)
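For example (cmdlet names as in the AppFabric 1.1 caching administration module; the cache name is a placeholder):

Import-Module DistributedCacheAdministration
Use-CacheCluster
Restart-CacheCluster                     # confirm the hosts come back up
Get-CacheStatistics -CacheName MyCache   # ItemCount should be back to 0 after the restart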

How do I set the maximum memory size that Redis can use?

To be specific, I only have 1 GB of free memory and would like to use only 300 MB for Redis. How can I configure it so that it only uses up to 300 MB of memory?
Out of curiosity, what happens when you try to insert new data and Redis has already used all of the allocated memory?
maxmemory is the correct configuration option to prevent Redis from using too much RAM.
If an insert causes maxmemory to be exceeded, the insert operation will sometimes fail.
Redis will do everything in its power to prevent the operation from failing, though. In newer versions of Redis, you can also configure the memory reclamation policy by setting the maxmemory-policy option.
Also, if you have virtual memory options turned on, Redis will begin to store stale data to the disk.
More info:
What does Redis do when it runs out of memory?
You can do that using the maxmemory option: maxmemory 314572800 means 300 MB.
Since the last answer is from 2011, here is some updated information for users reading in 2019 on Ubuntu 18.04.
The configuration file is located at /etc/redis/redis.conf. If you have installed Redis using the default/recommended method (apt install redis-server), the memory limit is set to "0", which practically means "no limit" and can be troublesome if you have a limited/small amount of RAM.
To set a custom memory limit, simply edit the configuration file and add "maxmemory 1gb" as the very first line. Restart the redis service for the changes to take effect. To verify the changes, use redis-cli config get maxmemory.
Ubuntu 18.04 users may read more here: How to install and configure REDIS on Ubuntu 18.04
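Putting the above together, you can set the limit either statically in redis.conf or at runtime through redis-cli (the values are just examples; note that config set alone is not persisted across restarts unless you also run config rewrite or edit the file):

# /etc/redis/redis.conf
# evict least-recently-used keys once the limit is reached
maxmemory 300mb
maxmemory-policy allkeys-lru

# or at runtime
redis-cli config set maxmemory 300mb
redis-cli config set maxmemory-policy allkeys-lru
redis-cli config get maxmemory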